Sunday, January 13, 2019

Data Was Supposed to Fix the U.S. Education System. Here’s Why It Hasn’t.

Simon Rodberg

For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked "Data Days," when teachers are expected to spend time not on teaching, but on analyzing data: end-of-year and mid-year exams, interim assessments, science and social studies tests, teacher-created and computer-adaptive tests, surveys, and attendance and behavior notes. It's been this way for more than 30 years, and it's time to try a different approach.

The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, testing, and analysis; and the psychology of leaders and policymakers means that the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease.

How We Got Here
In 2001, Congress adopted No Child Left Behind, key legislation that mandated annual testing and led to data-based decision making for schools. That was the same year I started teaching. When I joined a charter school in Washington, DC, the school had recently expanded. It had a fabulously charismatic CEO with an inspiring life story. All its students completed internships, and all the seniors wrote theses about public policy. The best of these made for great stories, to be told to donors and the charter oversight board. But the data — standardized tests required by the new law — revealed that our students, overall, struggled to read and do math anywhere near grade level. The graduation rate stank.

The new data meant that we could no longer ignore most students’ reality: Our teachers were failing. As Michelle Rhee, former chancellor of the District of Columbia Public Schools, said, “When we took control of this school district in 2007, 8 percent of the 8th graders were operating on grade level in mathematics—8 percent. And if you would have looked at the performance evaluations of the adults in the system at the same time, you would have seen that 95 percent of them were being rated as doing a good job. How can you possibly have a system where the vast majority of adults are running around thinking, ‘I’m doing an excellent job,’ when what we’re producing for kids is 8 percent success?”

One of Michelle Rhee’s core values for the public school system was “Our decisions at all levels must be guided by robust data.” (I worked for Rhee in 2009-10, and I was a total believer.) This gospel spread throughout K-12 education. Under Barack Obama, the federal Race to the Top program demanded measurement of teacher impact as part of evaluations. Teachers got used to setting SMART goals for their lessons (M for Measurable!) and putting up data walls in their classrooms. A guide for principals mandated goal-setting, with the proviso that “each target must be quantifiable…you and your school will be most successful if you can justify a goal and target with hard data.” Another popular book for principals is called, simply, Driven by Data.

By the time I became principal of a middle and high school, the data bug had so thoroughly infiltrated our methodology that we effectively shut down all non-test-related activities for six days in the spring for state testing. Earlier in the year, we had six other days of testing to judge where students began in reading and math, and how they were progressing according to nationwide norms. We spent the equivalent of a full day of teacher professional development teaching teachers how to give the tests and avoid the appearance of cheating. An assistant principal, along with an assessment manager, devoted the equivalent of almost two months to attending required trainings, creating testing plans, and completing forms and spreadsheets related to the state testing.

We’ve slid from a reasonable, necessary, straightforward question — are the students learning? — to the current state of education leadership: school leaders and policymakers who expect too much of data, over-test student learning to the detriment of learning itself, and get lost in their abundance of numbers.
