Research papers fly as value-added debate heats up
It has been hard to miss the roiling debate in recent months over the use of value-added techniques to measure teacher effectiveness.
First the Los Angeles Times drew national attention by conducting its own study of LA teachers, putting those deemed weak on the front page and earning the limited endorsement of U.S. Secretary of Education Arne Duncan. More recently, teachers unions protested the possible public release of teachers' value-added scores in New York.
What's it all about?
In essence, value-added uses statistical analysis to predict how individual students will grow on standardized tests. Once you've made those predictions, you can rate teachers on whether their students hit or surpassed them, that is, on whether the teacher "added value" for each student.
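In the simplest form, the prediction can come from a regression of students' current scores on their prior scores, and a teacher's "value added" is the average amount by which his or her students beat or missed that prediction. The sketch below uses entirely hypothetical scores and teacher labels; real value-added models are far more elaborate, controlling for student demographics, classroom composition, and measurement error.

```python
# A minimal, illustrative value-added sketch. All numbers and teacher
# names here are hypothetical; actual district formulas include many
# more controls than a single prior-score regression.

def predict_scores(prior, current):
    """Fit a least-squares line (current = a + b * prior) and return
    the predicted current score for each student."""
    n = len(prior)
    mean_p = sum(prior) / n
    mean_c = sum(current) / n
    b = (sum((p - mean_p) * (c - mean_c) for p, c in zip(prior, current))
         / sum((p - mean_p) ** 2 for p in prior))
    a = mean_c - b * mean_p
    return [a + b * p for p in prior]

def value_added(prior, current, teacher_of):
    """Average residual (actual minus predicted) per teacher."""
    predicted = predict_scores(prior, current)
    totals, counts = {}, {}
    for t, actual, pred in zip(teacher_of, current, predicted):
        totals[t] = totals.get(t, 0.0) + (actual - pred)
        counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Hypothetical prior- and current-year scores for six students,
# three per teacher.
prior    = [50, 60, 70, 55, 65, 75]
current  = [58, 66, 74, 54, 63, 71]
teachers = ["A", "A", "A", "B", "B", "B"]

scores = value_added(prior, current, teachers)
# Teacher A's students beat the prediction on average (positive score);
# Teacher B's fall short (negative score).
```

The tiny sample is the point of the critics' warning: with only a handful of students per teacher, a single unusual score can swing the estimate substantially.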
States and school districts from Louisiana to Tennessee and Washington have used the techniques, and Hillsborough County is in the midst of designing its own value-added formula (with backing from both national teachers' unions) as part of its seven-year, $202 million reform effort with the Bill & Melinda Gates Foundation.
Now we're seeing researchers choosing sides in the fray.
There was last summer's big report from the Economic Policy Institute in which some heavyweights like Stanford professor Edward Haertel, chairman of the National Research Council's Board on Testing and Assessment, cautioned against relying too heavily on value-added methods.
And last week brought a response of sorts from the Brown Center on Education Policy at Brookings. In a new report, researchers led by Steven Glazerman of Mathematica Policy Research say that value-added calculations add confidence and reliability to teacher evaluations. The key, they argue, is not to lean too heavily on such measures alone.
"One can, for example, be in favor of an evaluation system that includes value-added information without endorsing the release to the public of value-added data on individual teachers," they write.
To a large degree, researchers on both sides of the fence agree on that point. And no one seems to be arguing that you should base hiring and firing decisions solely on value-added measures.
Still, it takes a patient reader to figure out exactly where they disagree:
"We have a lot to learn about how to improve the reliability of value-added and other sources of information on teacher effectiveness, as well as how to build useful personnel policies around such information. However, too much of the debate about value-added assessment of teacher effectiveness has proceeded without consideration of the alternatives and by conflating objectionable personnel policies with value-added information itself." (Brookings' Brown Center)
"Used with caution, value-added modeling can add useful information to comprehensive analyses of student progress and can help support stronger inferences about the influences of teachers, schools, and programs on student growth." (Economic Policy Institute)
And even some who generally argue for tough accountability standards, like Frederick Hess of the conservative American Enterprise Institute, have voiced queasiness over the trend. In a blog post last summer, he pointed out how volatile value-added calculations can be, depending on which variables are used. And he worried that policymakers will reach too far and expect too much from such measures.
"When the shortcomings become clear, when reanalysis shows that some teachers were unfairly dinged, or when it becomes apparent that some teachers were scored using sample sizes too small to generate robust estimates, value-added will suffer a heated backlash," he predicted, decrying the "overcaffeinated enthusiasm that turns value-added from a smart tool into a public crusade."
In other words, stay tuned.