Gradebook

Education news and notes from Tampa Bay and Florida

How far is too far on value-added?

February 28, 2011

TAMPA -- On Sunday we told you about Hillsborough's plans to roll out a new value-added system for calculating just how much each teacher contributes to student learning.

Under value-added, statisticians use test scores to predict how individual students will do on future tests, correcting for at least some of the variables that might affect their performance. Then school districts evaluate teachers on their ability to meet or exceed those goals. (In other words: did Mrs. Brown "add value" for each of her students by helping them achieve that year's worth of predicted growth?) In Hillsborough, such scores will make up 40 percent of teachers' evaluations, while Florida legislators are considering 50 percent under teacher effectiveness bills like SB 736.
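For readers who want to see the arithmetic, here is a deliberately oversimplified sketch in Python of the basic idea: predict this year's score from last year's, then credit each teacher with the average gap between actual and predicted. This is our own illustration, with made-up scores and a hypothetical value_added function; real models, including Hillsborough's, control for far more variables.

import numpy as np

def value_added(prior_scores, actual_scores, teacher_ids):
    """Each teacher's mean residual: actual score minus predicted score."""
    prior = np.asarray(prior_scores, dtype=float)
    actual = np.asarray(actual_scores, dtype=float)
    ids = np.asarray(teacher_ids)
    # Predict this year's score from last year's with a least-squares line.
    slope, intercept = np.polyfit(prior, actual, deg=1)
    residuals = actual - (slope * prior + intercept)
    # A positive average residual means the teacher's students, taken
    # together, beat their predicted growth.
    return {t: residuals[ids == t].mean() for t in set(ids)}

# Made-up scores for two hypothetical teachers:
print(value_added(prior_scores=[300, 320, 310, 305],
                  actual_scores=[330, 335, 315, 300],
                  teacher_ids=["Brown", "Brown", "Smith", "Smith"]))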

Value-added has been nothing if not controversial. It prompted a boycott of the Los Angeles Times (and perhaps a teacher suicide) when that paper commissioned its own value-added analysis. New York City newspapers are all in a lather over the question of whether such scores should be made public.

And even well-respected education researchers and statisticians worry that school systems may be asking too much of a methodology that has been shown to yield high error rates. In one study commissioned last year by the federal Department of Education, researchers from the Mathematica consulting firm found that value-added can misclassify teachers up to 35 percent of the time using a single year's worth of scores; using three years of scores, as Hillsborough plans to do, the error rate falls to 25 percent.
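Where do numbers like 35 and 25 percent come from? Here is a toy simulation of the underlying statistics, our own construction rather than Mathematica's method, with a noise level chosen purely for illustration (it happens to produce rates in the same ballpark as those quoted above): averaging three years of noisy measurements shrinks the noise, so fewer teachers land on the wrong side of the line.

import numpy as np

rng = np.random.default_rng(0)
n_teachers = 100_000
# Each teacher's real impact on student growth, in arbitrary units.
true_effect = rng.normal(0.0, 1.0, n_teachers)

def misclassification_rate(n_years, noise_sd=2.0):
    # Each year's estimate is the true effect plus noise; averaging years
    # shrinks the noise by the square root of n_years.
    noise = rng.normal(0.0, noise_sd, (n_years, n_teachers)).mean(axis=0)
    measured = true_effect + noise
    # "Misclassified": the measurement puts an above-average teacher below
    # average, or vice versa (a crude stand-in for the study's definition).
    return np.mean((true_effect > 0) != (measured > 0))

print(f"one year of scores:    {misclassification_rate(1):.0%} misclassified")
print(f"three years of scores: {misclassification_rate(3):.0%} misclassified")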

But those same Mathematica researchers suggest another way to reduce error rates: use value-added scores to evaluate whole schools rather than individual teachers.

"A performance measurement system at the school level will likely yield error rates that are about 5 to 10 percentage points lower than at the teacher level," they wrote. "This is because school-level mean gain scores can be estimated more precisely due to larger student sample sizes."

Colorado has taken that approach, aggregating student value-added scores to draw conclusions about whole schools, rather than individual teachers.

"Does it raise a flag for certain schools?" asked Derek Briggs, an associate professor of education at the University of Colorado at Boulder. "If you get the number and then go in and see what’s going on, you're much more likely to see something that’s very revealing about the school."

Among other things, such an approach sidesteps the sticky questions that come up when you try to measure the contribution of each person in a co-teaching pair, of a teacher who works with a small number of special-needs students within a larger classroom group, or of the school principal.

But it appears that Colorado, too, may be headed toward using value-added calculations to measure individual teachers. Under legislation approved last spring, 50 percent of a teacher's evaluation must be based upon student performance measures that will likely be drawn from value-added calculations.

Rick Hess of the conservative American Enterprise Institute said Colorado legislators went too far.

"While the bill is a giant step forward, the impatient rush to 'fix' teacher quality in one furious burst of legislating amounts to troubling overreach; it is a case of putting the cart before the horse," he wrote. "The result: Hugely promising efforts to uproot outdated and stifling arrangements get enveloped in crudely drawn, sketchily considered, and potentially self-destructive efforts to mandate a heavy reliance upon value-added assessment."

What do you think? Are we going to see a thoughtful debate over the potential uses of value-added during Florida's upcoming legislative session, or are minds already made up?
