It is probably still an open question, but on February 11, 2016, the Fordham Institute (an education policy think tank) released a report on the next-generation assessments (ACT Aspire, MCAS, SBAC, and PARCC) arguing that they meet the criteria for quality assessments. The Fordham Institute contracted with two principal investigators to answer the following questions:
- Do these tests reflect strong content?
- Are they rigorous?
- What are their strengths and areas for improvement?
Using the CCSSO Criteria for Procuring and Evaluating High-Quality Assessments, the investigators evaluated the summative assessments and concluded:
- Overall, PARCC and Smarter Balanced assessments had the strongest matches to the CCSSO Criteria.
- ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
- Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.
The investigators argue that these next-generation assessments are a significant improvement over previous assessments:

> For too many years, state assessments have generally focused on low-level skills and have given parents and the public false signals about students’ readiness for postsecondary education and the workforce. They often weren’t very helpful to educators or policymakers either. States’ adoption of college and career readiness standards has been a bold step in the right direction. Using high-quality assessments of these standards will require courage: these tests are tougher, sometimes cost more, and require more testing time than the previous generation of state tests. Will states be willing to make the tradeoffs? (Executive Summary, p. 24)
These assessments meet the criteria set forth by CCSSO. According to the report, there are still improvements that could be made, but, as the Fordham Institute press release notes, these are "the kind of tests that many teachers have asked state officials to build for years."
I think this report is a good first step toward ensuring that the conversation about quality assessments is ongoing.
Over at the Nonpartisan Education Blog, Richard Phelps has published an article sharply criticizing the Fordham report as lacking research rigor and as biased, because Fordham has advocated for the Common Core. The article is worth reading, but keep in mind that Phelps has partnered with opponents of the Common Core on several articles, including two of the most vocal critics of the standards, Sandra Stotsky and James Milgram. In addition, Phelps has co-authored a paper arguing that PARCC actually stunts student growth. So Phelps is not without his own bias.