The Common Core—
A Brief Orientation to the Hullabaloo
What are the Common Core State Standards?
The Common Core State Standards (CCSS) are a set of learning objectives in mathematics and English language arts (ELA) created to better prepare all US students for college and careers. They establish uniform grade-level goals for skills and knowledge—pre-kindergarten through high school—to encourage consistency among the states for what students learn and when. Influenced by international benchmarking, the effort also reflects a desire for the US to compete more favorably on measures such as the Programme for International Student Assessment (PISA). Initially forty-four of the fifty US states and the District of Columbia agreed to replace their individual state standards with those of the Common Core.
So what’s the fuss about?
Since their release in 2010, the CCSS have become highly politicized. Although the initiative came about through a bipartisan, states-led effort that included the National Governors Association (NGA), the Council of Chief State School Officers, and numerous education, business, and public policy leaders, the Common Core is now often cast as a federal takeover of education (due, in part, to federal incentives created through the Race to the Top grants competition). Today efforts to reject the Common Core exist in more than 30 states (read about yours). Skeptics also question the role and interests of industry in the creation of the CCSS. The development of the standards was funded by the governors and the chief state school officers, but also by the Bill and Melinda Gates Foundation (Microsoft) and Pearson Publishing Company (which produces and markets curricula and assessments), among other entities.
The quality of the CCSS has been trumpeted as a significant improvement over the standards that previously guided curricula and instruction in many states. They have also come under fire for establishing developmentally inappropriate expectations for the early grades, for being too rigorous in elementary-grade math and not rigorous enough in high school, for a dramatic shift to close reading and nonfiction, for cultural bias in the recommended literature, and for setting ambitious hurdles for English language learners (ELLs) and students with disabilities.
The CCSS are, however, largely a framework of learning objectives. How they influence curricula, instructional practice, the school day and—ultimately—achievement for students of all abilities remains to be determined (see CCSS myths vs facts).
Setting the stage for new assessments—
hopes and fears for special education
When No Child Left Behind (NCLB) was enacted in 2001, and the Individuals with Disabilities Education Act (IDEA) reauthorized in 2004, they set a new expectation: students with disabilities and other at-risk learners would be challenged to accomplish the same general curriculum standards as their grade-level peers. The Common Core State Standards, released in 2010, then sought to establish those grade-level standards for all students in any state, aligned with that expectation: no one would be presumed incapable of learning (see the CCSS “Application to Students with Disabilities”). Students reading below grade level, for example, would still be taught grade-level content standards (as targeted in their Individualized Education Programs [IEPs]). Skill deficits wouldn’t necessitate the creation of large knowledge gaps, and a “life skills” curriculum could no longer be considered adequate on its own. Many educators, parents, and others welcomed this shift in expectations for their students.
NCLB, however, also held schools accountable for making Adequate Yearly Progress (AYP) toward the proficiency of all students within the general curriculum, as determined by summative assessments. Additionally, Race to the Top grants created incentives for states to tie teacher evaluations to student assessments. Inevitably these accountability strategies meant schools concentrated attention on assessments and at-risk students, including students with disabilities, and at many schools assessments took center stage. After all, schools that did not meet AYP for several years would be deemed “failing” under NCLB and suffer corrective actions, restructuring, and sometimes closure.
IDEA’s requirement to include students with disabilities in summative assessments—while an accountability victory—therefore also became a source of anxiety due to the pressures NCLB applied. Indeed, at some schools, test performance by students with disabilities was deemed the tipping point for failure to make AYP, generating something of a backlash against those students and their teachers. Taken to the extreme, this pressure to raise NCLB’s “disability subgroup” achievement scores created an incentive for schools to identify more higher-achieving students as IDEA eligible (such as students receiving speech-related services only). In this way, NCLB could inadvertently penalize schools with higher rates of general education inclusion, hardly the intention of IDEA!
This assessment-focused culture is the education environment now receiving the Common Core. Education policy has made bringing all struggling learners to grade-level proficiency a high-stakes endeavor. To measure proficiency with the CCSS, states are now procuring new summative assessments (see below). Although NCLB awaits reauthorization and a reexamination of these policies (read Vermont’s resolution), how well students with disabilities will perform on the new tests matters—to schools, districts, teachers, and students.
Equally important, therefore, is how well the tests will do for students with disabilities.
Posted at TestingTalk.org
About the 2014 field tests:
Author: Anonymous, Administrator, Principal
Date: May 7 at 5:03 pm ET
Our district recently completed the Smarter Balanced Field Test. I was very disappointed with the accessibility features of the assessment. I have heard and read that the assessment has unprecedented accessibility features and provides avenues for students to participate. The accommodations and embedded features were incredibly confusing for students with disabilities and struggling learners. Students needed to click and drag or click and highlight. The language glossaries were found to be inaccurate on many occasions. I’m concerned about the quality checks in place for language translations and the manner in which students can locate a word to see the glossaries. […] Sad to develop a system that looks good on paper and creates a “good story” for accessibility but falls short with real-world application. […]
Author: Jen P., literacy facilitator
Test: PARCC – pilot
Date: April 15 at 12:24 pm ET
[…] BIGGEST CONCERNS: The test format is so unlike the learning format! Although our kids are using technology in the classroom, the ways they were expected to manipulate text while testing made it pretty difficult. In one 75-minute session (which some finished in 15) they had to highlight answers, click and drag, move text around, switch back and forth between passages, and type. We just don’t do this often in classrooms. One wonders if we are actually testing content or how well students can use the mouse and keyboard. Some will say they need to spend more time using technology, BUT if we are using the lab for computer-based testing for 10-12 weeks per year (including district and state assessments), how are we able to get our kids more access to technology? These tests are actually decreasing our access!
Read more and contribute to the conversation at TestingTalk.org
The “New Generation” of Assessments
Four federally-funded state consortia are currently creating new assessments aligned to the Common Core State Standards (CCSS). Two are developing general assessments and two are developing alternate assessments (administered to students with the most significant cognitive disabilities, roughly one percent of the student population). There are also states developing their own assessments and states still deciding what to do (including some that are a part of the consortia).
What makes them so new?
In general, the consortia’s assessments are referred to as a “new generation” because: 1) they reflect the education reform inherent to the CCSS initiative, 2) they are the first summative assessments that are digital and accessed over the Internet, 3) they build in various universal design for learning (“UDL”) digital tools and accommodations for users to select or administrators to turn on (such as text magnification and spell check), and 4) two consortia are further innovating by developing adaptive tests—not “adaptive” as the word is used in a disability context, but adaptive in being responsive to student input, incorporating a diagnostic intelligence designed to provide information useful for individualized instruction.
The consortia creating general assessments:
PARCC: Partnership for Assessment of Readiness for College and Careers. PARCC is creating general assessments (both formative and summative). 12 member states—AR, CO, IL, LA, MD, MA, MS, NJ, NM, NY, OH, RI—and the District of Columbia.
SBAC: Smarter Balanced Assessment Consortium. SBAC is also creating general assessments (both formative and summative). 20 member states—CA, CT, DE, HI, ID, ME, MI, MO, MT, NV, NH, NC, ND, OR, SD, VT, WA, WV, WI, WY (the US Virgin Islands is an affiliate member, and IA and PA are advisory members). SBAC’s tests will be computer adaptive (CAT); the assessments will select the difficulty of each question based on the student’s performance on prior questions.
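The item-selection loop behind a computer-adaptive test can be pictured with a toy sketch. Everything here is illustrative—the item bank, the simple up/down difficulty rule, and the function names are not SBAC's actual design; operational CATs estimate student ability with item response theory rather than a fixed step rule:

```python
# Toy sketch of computer-adaptive item selection (illustrative only).
# A real CAT uses item response theory to estimate ability; here a
# correct answer simply raises the difficulty level and a miss lowers it.

def run_adaptive_test(item_bank, answer_fn, num_items=5):
    """Pose num_items questions, adjusting difficulty after each response.

    item_bank maps a difficulty level (1-5) to a list of unused items;
    answer_fn(item) returns True if the student answers correctly.
    Returns a list of (item, difficulty, correct) tuples.
    """
    difficulty = 3  # start in the middle of the range
    results = []
    for _ in range(num_items):
        item = item_bank[difficulty].pop(0)  # next unused item at this level
        correct = answer_fn(item)
        results.append((item, difficulty, correct))
        # Step up after a correct answer, down after a miss, within 1-5.
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
    return results

# Example: a student who answers everything correctly climbs 3 -> 4 -> 5
# and then stays at the hardest level.
bank = {d: [f"q{d}-{i}" for i in range(5)] for d in range(1, 6)}
trace = run_adaptive_test(bank, answer_fn=lambda item: True)
print([difficulty for _, difficulty, _ in trace])  # [3, 4, 5, 5, 5]
```

The sketch shows only the "responsive to student input" behavior described above: consecutive correct answers push the student toward harder items, so different students see different question sequences.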
The consortia creating alternate assessments:
DLM: Dynamic Learning Maps Alternate Assessment System Consortium. DLM is creating the alternate assessment for students with the most significant cognitive disabilities (often referred to as the 1%). 19 member states—KS, IA, MI, MS, MO, NJ, NC, ND, OK, PA, UT, WV, WI, VT, VA, WA, AK, CO, and IL. (Note: AK and VA have not adopted the CCSS, but will still use DLM.) DLM’s assessments will use “dynamic adaptive delivery” to integrate instruction with assessment. Two testing options are in the works: 1) “testlets” with tasks embedded in instruction and an option to aggregate results for summative assessment, and 2) a summative assessment that forgoes the embedded tasks.
NCSC: The National Center and State Collaborative Partnership. Like DLM, NCSC is creating alternate assessments (formative and summative). 12 partners—AZ, CT, FL, IN, LA, PAC-6 (Pacific Assessment Consortium), RI, SC, SD, TN, WY, and the District of Columbia. Affiliates are AR, CA, DE, ID, ME, MD, MT, NM, NY, OR, and the US Virgin Islands. This test is not adaptive and will be delivered one-on-one as with prior alternate assessments.
PARCC and SBAC Accessibility:
the Promise vs. the Product
“The disability community has been told repeatedly that these new online assessments will revolutionize accessibility for students with disabilities. To do any less is a violation of trust.”
—from the Missouri Council of Administrators of Special Education and Missouri School Boards’ Association white paper, “Common Core State Standards Assessments Accessibility by Students with Disabilities” (hereafter “the MO white paper”)
For years students with disabilities who do not use pencil and paper—students who are blind or have another print disability, students with particular physical disabilities, all students covered under IDEA who do not effectively “show what they know” with this format—have been specially accommodated at test taking time. For many this has meant, and still means, taking assessments one-on-one with a “human reader” who poses questions and records responses. This accommodation has added a social-emotional layer of difficulty to test-taking as well as implications for test validity (since readers can inadvertently influence correct answers). The promise of a digital-based assessment, therefore, was to move beyond the limitations of the paper-based approach. With so many students now successfully accessing digital text using various forms of computer access and assistive technology, the opportunity for an independent test taking experience was within reach.
The PARCC and SBAC assessment developers, however, did not approach the task of creating the “new generation of assessments” with the expertise of assistive technology users and practitioners, a working legal definition of Universal Design, or the application of Web accessibility standards such as WCAG 2.0 and Section 508 of the Rehabilitation Act. This is the disheartening message consistently carried over the last two years by disability policy, advocacy, and assistive technology experts. They have been speaking out, seeking to draw attention to, and win resolution of, the accessibility deficits in this $1 billion test development undertaking.
Diane Cordry Golden, Project Coordinator for the Association of Assistive Technology Act Programs and Policy Coordinator for the Missouri Council of Administrators of Special Education, emphasizes there are also philosophical differences underpinning the consortia’s assessment and policy development process. “Disability is approached as something that needs fixing,” she reported at the RESNA Catalyst conference last July. “Technology-supported achievement is de-valued.”
To the layperson, that message seems odd. Both the PARCC and SBAC assessments appear innovative for their sensitivity to disability. Each boasts a host of built-in universal-design tools, designated supports and embedded accommodations—everything from text highlighting, color contrast and spell check to on-screen calculators, Braille and videos in American Sign Language.
Those with direct AT experience, however, often see the tests differently—something akin, perhaps, to a bathroom stall fully equipped with grab bars … and yet no room for the wheelchair.
One size rarely fits all
“I supported building in tools,” acknowledges Dave Edyburn, PhD. “I supported it, but not to the exclusion of a student’s own assistive technology.” Edyburn is a member of the PARCC Accessibility, Accommodations and Fairness Technical Working Group and a professor in the Department of Exceptional Education at the University of Wisconsin-Milwaukee. He has been presenting at AT conferences and through webinars to explain how the consortia approach accessibility and to raise awareness of coming challenges.
Excluding a user’s own AT, according to Edyburn, was the initial PARCC and SBAC strategy. Test developers sought to create their own closed all-inclusive tech platforms. Part of the motivation was grounded in test security. A student’s own AT may function in a way that temporarily stores test content. Besides, the use of built-in components can reduce stigma (no need to look different with your own AT).
Members of the AT community, however, highlighted several weaknesses of this approach:
- The built-in tools imply one size fits all—“And that is not true and it has never been true,” remarks David Dikter, CEO of the Assistive Technology Industry Association (ATIA) (in Education Week). Indeed, most AT is highly customizable for individualized needs.
- The built-in tools are unfamiliar—which makes the test yet another technology to learn. “It’s like handing someone a completely different cell phone or operating system,” notes Diane Cordry Golden, “and then testing them on how well they can complete tasks!” The assessments, she feels, will pressure schools to sacrifice academic instructional time to teach students how to use and access these features.
- The built-in tools are non-standard—“Features such as masking and answer elimination are not available with common AT tools,” observes Ruth Ziolkowski, President of Don Johnston, Inc. (makers of educational AT), in a recent interview. “So the only time a student can build skills in these tools is during test preparation.”
- The built-in tools are inferior—not all text-to-speech, word prediction, or on-screen calculators are created equal. “Students need access to the same tools they access for instruction, the tools they are going to carry with them into college and careers,” emphasizes Golden. “Without those tools their scores are invalid.”
Ziolkowski and Golden have been reaching out to PARCC and SBAC to raise awareness of the AT user experience and to advocate for the use of students’ own AT. “It started as a grassroots effort among a few members of the AT community,” Ziolkowski explains (though now, as an ATIA Board member, she says she’s acting more formally in that capacity).
The advocacy has had its impact. PARCC and SBAC will now allow some students to use their customary AT (in some situations). Magda Chia, SBAC’s Director of Support for Underrepresented Students, acknowledged SBAC’s learning curve in Education Week: “There has been a little bit of a paradigm shift to understand that having the same function embedded in a test does not mean the same test experience for every kid […].”
A question now is whether that paradigm shift comes too late.
AT compatibility—the retrofit problem
“The problem is the foundation was already poured. It’s like trying to build a ten story house on a foundation meant for two,” explains Ziolkowski.
An example Edyburn points to is TestNav, the online testing platform developed for PARCC. TestNav was programmed in Flash—software widely understood to be inherently inaccessible to assistive technology. As a retrofit, PARCC has had to create a completely different version for use with screen readers. In addition, field testing this past spring has likely alerted the test developers to a host of other issues, but no one knows how many retrofits the consortia can afford (Edyburn notes they are running out of money as the grant-funded period ends) or how well the retrofits will work. “Will Dragon [speech recognition software] get kicked off [for incompatibility]?” Ziolkowski poses. “We’ll have to wait and see.”
Test security locks out AT
Beyond compatibility is the ongoing issue of test security locking out AT. According to Ziolkowski, test developers are currently working on the assessment versions for Chromebooks. To make the platform secure, developers are working in “kiosk mode,” which locks out AT. They will need support from Google to overcome this barrier.
Policy differences—between consortia, between states
Policy differences between the consortia are equally concerning. In key instances the consortia have decidedly different positions on whether and when certain technologies are permissible (and don’t “violate the testing construct”). To Edyburn this is ironic. Part of the promise of the Common Core and the assessment consortia was to make research-based decisions. “If their policies were research based, they would have come to the same conclusions, and they haven’t.” Instead, Edyburn explains, “the consortia looked at the policies of their respective member states on assessment accommodations and tried to identify commonalities. However, this methodology is flawed because the previous policies were developed to provide accommodations for paper-based tests—not digital assessments that were designed for universal accessibility.”
The most dramatic difference between the consortia is who has final say on the fairness of a particular accommodation. In PARCC states, the test developers maintain they have final legal authority (which means the PARCC accommodations manual overrules a student’s IEP team as well as state policy). In SBAC states, the decision can ultimately be guided by each state’s own laws and policies.
Thorny AT conflicts in brief:
Text-to-Speech (TTS or “Read Aloud”)
PARCC: TTS is a built-in accommodation that may be turned on for students who have it documented in their IEP or Section 504 Plan. TTS is available for students of any grade for both math and English/language arts (ELA). However, PARCC will flag test results for students who use TTS with a notation stating “no claims should be inferred regarding the student’s ability to demonstrate foundational reading skills (i.e., decoding and fluency).”
SBAC: TTS is not built in. It is allowable for grades 6 and up for math and ELA if identified as a needed accommodation by the student’s IEP or 504 team. It is not permitted for students in grades 3-5 for ELA, even for students who are blind or visually impaired (however, SBAC will defer to each state’s own laws and policies).
Many special educators and others are disappointed with the consortia’s TTS policies, particularly SBAC’s refusal to permit TTS for ELA for elementary students who are blind and not yet Braille proficient, or for students who have no other way of demonstrating their comprehension skills (a separate skill from decoding). In addition, Ziolkowski notes there has been no acknowledgment by either consortium of the use of TTS for editing. “Many students use it for proofreading their writing. This is one of the most common uses of AT and it isn’t available to students.” (Read more about the Read Aloud controversy in Education Week.)
Word Prediction
PARCC: Word prediction is not built in, but is allowable for students of any grade (if documented in the student’s IEP or 504 plan). According to Ziolkowski, Co:Writer (a Don Johnston product) works in the publicly available versions of the test, and she is asking for assurances it will not be blocked in the secured version. Additional common word prediction tools include TextHelp’s Read&Write Gold and Quillsoft’s WordQ.
For the field tests, students using word prediction had to use a separate device and have all their work scribed. If this continues, students will be deterred from using these supports, as it requires the use of two screens (one for reading test passages and one for writing).
SBAC: Word prediction is not mentioned anywhere in the SBAC accommodations manual. Yet word prediction is one of the most common tools used by students with disabilities.
The word prediction issue highlights an additional responsibility that school-based AT specialists must now be willing to take on, argues Ziolkowski. “The AT community needs to get comfortable with these tests and learn what they require. Word prediction, for example, must be able to predict the tougher vocabulary, the high-level academic vocabulary of the Common Core. It’s a matter of the SETT framework—considering the task.”
Graphic Organizers
PARCC and SBAC: Graphic organizers are not mentioned in the accommodations manuals for either consortium. Yet graphic organizers are among the most common tools used by students with disabilities for writing.
For both word prediction and graphic organizers, Ziolkowski reports the tools can be requested, but they will be made available on a case-by-case basis. This means extra work by practitioners and states, and ongoing inconsistencies with accommodations policies.
Accessing AT Accommodations
PARCC: IEP and 504 teams identify access features and accommodations needed by their students to take the PARCC assessment. Requested accommodations must abide by the policies set forth in the PARCC Accessibility Features and Accommodations Manual and be documented in the student’s IEP or 504 plan. PARCC has created an optional form for planning for the PARCC assessment (used during field testing): the PARCC Accessibility Features and Accommodations Documentation Form. Student needs for accessibility features, tools, and AT become part of their PARCC Personal Needs Profile (PNP)—a collection of student information embedded into the technology platform of the assessment that is individualized for each student. (Read more about PARCC from MassMATCH.)
SBAC also has a manual of Usability, Accessibility, and Accommodations Guidelines for IEP and 504 teams. And SBAC creates a student needs profile. But SBAC goes beyond PARCC’s documentation form to recommend that an assessment accommodation team carry out a seven-step Individual Student Assessment Accessibility Profile (ISAAP) Process and complete SBAC’s ISAAP Tool. The MO white paper has a strong response to this proposal, calling it “fraught with potential legal and pragmatic problems” and “yet another layer of unnecessary paperwork.”
Edyburn highlights practical access challenges created by both PARCC and SBAC. Schools must ensure each student has their required accessibility features and accommodations available at test time. Both consortia take a tiered approach to their built-in tools; some are available to all students under the rubric of universal-design, and others require “turning on” by administrators (which may include universal tools deemed too distracting unless specifically requested, as well as tools that are legal accommodations). “Who will turn on these supports for each student when they need them? Realistically, do schools have adequate numbers of AT staff available to carry out this function on each day of testing?”
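The tiered scheme can be sketched as a simple lookup against a student profile. The tier names loosely follow the consortia's general categories (universal tools, designated supports, accommodations); the data shapes, feature names, and the resolve_features function are hypothetical illustrations, not the consortia's actual implementation:

```python
# Hypothetical sketch of resolving a student's test-time features from a
# tiered accessibility scheme (in the spirit of PARCC's PNP or SBAC's
# ISAAP). Tier names mirror the general categories described above;
# the feature names and data shapes are illustrative only.

UNIVERSAL_TOOLS = {"highlighter", "zoom"}            # on for every student
DESIGNATED_SUPPORTS = {"color_contrast", "masking"}  # administrator turns on
ACCOMMODATIONS = {"text_to_speech", "braille"}       # require IEP/504 documentation

def resolve_features(profile):
    """Return the set of features to enable for one student.

    profile is a dict like:
      {"designated": [...], "iep_accommodations": [...]}
    Universal tools are always on; the other tiers are enabled only if
    listed in the student's profile (and recognized by the platform).
    """
    enabled = set(UNIVERSAL_TOOLS)
    enabled |= DESIGNATED_SUPPORTS & set(profile.get("designated", []))
    enabled |= ACCOMMODATIONS & set(profile.get("iep_accommodations", []))
    return enabled

student = {"designated": ["masking"], "iep_accommodations": ["text_to_speech"]}
print(sorted(resolve_features(student)))
# ['highlighter', 'masking', 'text_to_speech', 'zoom']
```

The sketch makes Edyburn's staffing question concrete: every non-universal feature in the result depends on someone having entered the right profile data and turned the support on before the student sits down to test.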
Advice from the experts
Although the assessments now allow more than just built-in supports, it remains doubtful that all students will have access to their customary AT. Policies vary between the consortia and between states, including which accommodations “invalidate the testing construct.”
Yet these summative assessments are high stakes in some states, impacting grade promotion and the earning of a diploma. So what can we do for students who cannot adequately demonstrate their skills and knowledge with the computer-based test?
Consider the Paper Version
This is the accommodation option of last resort for the general assessment. It is provided by each consortium, complete with reader and scribe guidelines. Its provision, Edyburn emphasizes, brings us full circle—far from the promise of the “new generation of assessments.”
Remember the AA-GLAS
Ron Hager, Senior Staff Attorney with the National Disability Rights Network, reminds us that it is still a legal option under IDEA to advocate for an Alternate Assessment based on Grade-Level Achievement Standards (AA-GLAS). “If you can’t get a school district or a state to bend on an accommodation that your student will need to really be able to graduate from high school on a high-stakes test, then that alternate option is available,” he emphasized in a recent RESNA Catalyst webinar. What does an AA-GLAS look like? According to the National Center on Educational Outcomes, AA-GLAS are rarely an alternate-format test, but usually “performance assessments with both evidence of student achievement and jury review of the evidence, and a collection of evidence submitted to independent scorers.”
Advocate for State-level Guiding Principles
The Missouri Council of Administrators of Special Education and the Missouri School Boards’ Association have outlined six guiding principles for summative assessments to help ensure fairness for students with disabilities. The principles have since been adopted as MO state policy (which may eventually affect MO’s procurement of SBAC). AT programs with policy-maker relationships might consider the MO experience and adapt the MO white paper as a tool for state-level advocacy.
Missouri’s Guiding Principles (in brief):
- Digital assessment applications must conform to an accepted set of accessibility standards and students must be allowed to use their own assistive technology to demonstrate their true academic proficiency.
- Guidelines restricting the use of access features must be patently justified and cannot result in disability-based discrimination or cause invalid proficiency scores for students with disabilities.
- Mandating yet another “individual student plan” to authorize and activate the access features a student needs is unnecessary and will create burdensome compliance requirements in addition to those that already exist under disability laws.
- Technology-supported academic achievement must be valued equally with non-technology-supported achievement.
- Educators in collaboration with students and families should make decisions about when to implement skill deficit remediation, when to utilize compensatory strategies (such as AT) to mitigate skill deficits, and when to do both.
- Students with disabilities along with their families and educators will be irreparably harmed if the CCSS assessments are not fully accessible.
Reflections on version 1.0
“Moving from paper to digital assessments should mean we’re removing barriers for students with disabilities,” muses Edyburn on his experience advising PARCC. “However, I’ve come to see there are some inherent barriers in this process.” Edyburn says the assessment development community holds positions diametrically opposed to those held by the AT community, concerns that hinge on not violating the “test construct” (i.e., the skills targeted). The assessment community, he explains, believes standardization means everybody should get exactly the same thing, whereas the assistive technology community believes everyone should get what they need. “So if you have a learning disability and we know you can’t spell, providing you with word prediction and a spell checker is part of how we educate you. We don’t keep flogging you every day because you can’t spell. The impact of these disabilities is that you don’t learn your way out of it! These are fundamental philosophical differences about what it means to provide an appropriate level of access.”
Edyburn joined the working group excited to design the “next generation” of assessments, and for him that meant the capacity to finally give the AT field a vast cohort of students to study for their use of accessibility tools, to understand how the tools are used and to what benefit. He sees the future of digital assessments as making all tools available to everyone, like Siri on the iPad. He talks about psychometrics and dynamic norming… In effect, he realizes he came to the table with a set of assumptions about powerful opportunities made possible by assistive technologists teaming with assessment developers, opportunities that are often once in a lifetime. “When I get discouraged about the current state of affairs, it seems that all we have accomplished is putting paper on the screen. We’re not talking about usability; we’re still talking about accommodations. We’re stuck in these paper-based policies and we’re trying to apply them to a screen. Really, we are a bit beyond that. It’s just this isn’t the ‘next generation’ digital assessment. We know so much more about universal accessibility than what we have been able to implement here. As far as large-scale digital assessments go, this is the first generation.”
Tweet PARCC October 23rd!
...and SBAC anytime!
PARCC is holding official Twitter "office hours" on issues of accessibility and accommodations on October 23rd from 5:00 to 6:00 p.m. ET.
use #askPARCC. @PARCCPlace
(Otherwise contact PARCC)
SBAC's Twitter address is
@SmarterBalanced. For SBAC, Twitter appears to be the only direct contact info publicly available. So... time to start Tweeting! (Also: contact your SBAC member state agency directly.)
Some questions to ask:
When will students be able to use their own AT and voices to have the test read aloud?
If I test my AT with the publicly available versions, will that assure me that it will work in a secured version?
What if I find a problem in a secured version? How can I report this and what is the process and timeline to make this compatible?
When will graphic organizers, Text-to-Speech for editing and (with SBAC) Word Prediction be considered as accommodations?
Your specific concerns for your students!
Report your test experiences!
Up Next: AT Policy Advocacy!
ATPN is planning an AT Policy Advocacy edition for the winter. Tell us your organizational, state and/or national priorities; your strategies and stories; and what we can all do to help. Have an idea or an article to submit? Contact ATPN