Over-testing students is a real thing across the U.S., according to Scott Norton, strategic initiative director for Standards, Assessment and Accountability at the Council of Chief State School Officers (CCSSO), who discussed the purpose of state and local assessment programs at Tuesday’s meeting of the new state Mastery Examination Committee.
Norton called states "smart" for creating inventories of all the tests being administered to their students.
While the project extended beyond any single state, Norton shared with the committee news of the first comprehensive study ever undertaken to ascertain the true extent of mandatory testing in the nation’s schools. The study was conducted by the Council of the Great City Schools and included data from 66 urban school districts across the U.S.
Norton encouraged educators on the committee to examine the study.
There were about 400 test titles in use in the nation’s largest urban school systems in the 2014-15 school year, and students sat for tests more than 6,500 times across the 66 school systems studied, the research found. Some of these tests were administered to fulfill federal requirements under No Child Left Behind, NCLB waivers, or Race to the Top (RTT), while many others originated at the state and local levels, and still others were optional.
According to the study, there is redundancy in the tests that districts give. For example, multiple exams are sometimes given in the same subjects and grades to the same students because not all results yield data by item, grade, subject, student, or school—thereby prompting districts to give another exam in order to get data at the desired level of granularity.
Test redundancy is just one of the many concerns being expressed by CEA, which is represented on the new committee by Windsor teacher Marcia Ferreira and by Donald Williams, CEA director of Policy, Research, and Reform.
Williams told the committee, “Unfortunately, a testing system has been designed that encourages a proliferation of tests and distorts their purpose. For example, in addition to a state mastery exam (SBAC) being slated to be used for a teacher’s evaluation, a district can decide to use another standardized test in addition to the mastery test. This usually means a Progress Monitoring Test (PMT) as an indicator of student growth and development. That can be part of an additional 22.5 percent of a teacher’s evaluation.”
According to Williams, testing corporations are adapting certain PMTs to essentially provide a barometer for schools to assess, ‘Are we on track to do well on SBAC?’ “Rather than being an incentive to cut back on testing, superintendents, principals, and teachers are faced with an increasing amount of testing to evaluate teachers” and serve as test prep to increase SBAC scores, said Williams.
Williams added that those pressures are greatest in the schools of greatest need and in high-poverty areas where students actually need more instruction. “This turns into the equivalent of drill and kill. That’s what’s happening right now,” he said.
Dr. Stephen Hegedus, dean of the School of Education at Southern Connecticut State University and member of the committee, said a “truly fair test should not need test prep at the end of the day.” He posed the question, “What is going on (in the classroom) before the summative assessment? Is it truly formative?”
Norton, the featured speaker at yesterday’s meeting, said a good statewide test is aligned with high standards, assesses higher order cognitive skills and critical abilities, proves to be instructionally sensitive and educationally valuable, and is valid, reliable, and fair.
Representatives of the CT PTA on the committee raised concerns about the fairness and equity of Connecticut’s move from SBAC to the SAT as the state’s official high school assessment. They asked Norton what other states are doing. “It’s a fairly small number moving to the SAT,” Norton replied.
For his colleagues on the committee, Williams underscored how unfair SBAC has been to English language learners, special education students, and children from economically disadvantaged homes where technology is missing. (SBAC is a test that relies on technology and computer knowledge and skills.) “There are fairness and equity problems when it comes to using the SBAC as an evaluation of schools, administrators, and teachers. No way has this test been validated,” said Williams.
Norton shared with the committee that his boss at CCSSO just had a lengthy phone call about the equity issue in Montana. Norton said, “It was pretty hard for him to listen to examples of inequities on reservations. This is a fair and valid point. The same thing with disabled, disadvantaged, and special education students.”
Norton added that no measure is perfect, and the measurements we have today are better than the ones that preceded them.
The next meeting of the committee is Monday, November 23, at 9 am in Hartford.
There are few issues as important as testing. BlogCEA encourages you to read the new study, the biggest yet on testing. Let us know what resonates with you. Surely, you will find some of your students’ experiences mirrored in the new study.
Once again, the SDE calls upon an official from the pro-reform CCSSO, the quasi-education organization that was instrumental in organizing the development of the Common Core Standards, to explain the role of testing in educational policy today. Similar to Chris Minnich’s misleading presentation before the Legislature’s Education Committee (2.28.14) in defense of the Common Core, Mr. Norton uses all the unproven reform rhetoric in describing “a good statewide test” as one that is “aligned with high standards, assesses higher order cognitive skills and critical abilities, proves to be instructionally sensitive and educationally valuable, and is valid, reliable, and fair.” The implication is that, though “no measure is perfect, the measurements we have today are better than the ones that preceded them,” or so he told the members of the Mastery Examination Committee.

However, none of these claims can be verified by research independent of the companies that had a hand in the test development itself. And there is plenty of evidence available to raise serious questions as to whether the CCSS tests have met acceptable standards of assessment design. Dr. Roxana Marachi, a professor at San Jose State University, wrote in an Open Letter to the California BOE commenting on recent SBAC test results: “False data are false data. Period. And to compare future results with current 2015 scores as ‘baseline’ would be just as fraudulent as it would be to promote these 2015 scores as somehow valid.”

Perhaps the Mastery Examination Committee could hear from some highly regarded educational experts who do not have a financial interest in the continued promotion and selling of this flawed, destructive, and persistent “corporate education reform” agenda. Or will this committee suffer the same fate as the Governor’s Implementation Task Force, characterized by CT Newsjunkie reporter Sarah Darer Littman: “Hardly a well-rounded airing of views – but then that doesn’t appear to have been what the governor and his allies wanted.” Some names and faces may have changed, but the top-down effort to control the message remains the same. Sad for CT’s students, disheartening to CT’s teachers, and dishonest to CT’s parents.