
Gradebook

Education news and notes from Tampa Bay and Florida

CJR looks at media response to New York teacher rankings

If you have time for a long read, this is a good one.

LynNell Hancock, a professor at the Columbia Graduate School of Journalism, offers an analysis of how the media responded to the February release of teacher rankings in New York. The piece, called "When Big Data is Bad Data," was published in the Columbia Journalism Review this month. In it, Hancock considers the good, the bad and the ugly of the stories that followed the city's controversial release of the rankings.

She writes:

"Just because you have data doesn’t mean it is always right to publish it—especially if you know the numbers are no good. And these numbers do have huge problems. Everyone from economists, to educators, to knowledgeable city education reporters know that the arcane algorithms that generated the teacher-rating numbers are as statistically flawed as they are politically fraught.

The complex formulas are meant to measure how much value a teacher contributes to a student’s learning growth (or lack of growth) over time. It would be useful if they actually did. But the data are riddled with mistakes, useless sample sizes, flawed measuring tools, and cavernous margins of error. The Department of Education says that a math teacher’s ranking could be off by 35 percent; an English teacher’s by 53 percent. That means a reading teacher with a ho-hum 35 could either be as horrid as a 1 or as awesome as an 86—take your pick. What election survey with these kinds of gaping margins would be published in the papers?

Most damning—and most often ignored in the coverage—is that the sole basis for these ratings are old student tests that have since been discredited by the New York State Board of Regents. The 2007-2010 scores used for these teacher rankings were inflated, the Regents determined. The Department of Education had lowered the pass score so far that the tests had become far too easy. So not only were the algorithms suspect, but the numbers fed into them were flawed. News organizations that publish them next to teachers’ names run the risk of not only knowingly misleading the public, but also of becoming entangled in the political web surrounding teacher evaluations, which extends from the mayor’s office, to the state house, to unions, philanthropy board rooms, and to the White House.

And yet, nearly every city news organization went ahead and printed them anyway."

Hancock writes about the varied approaches of the news organizations, from the New York Times to GothamSchools.org, and asks editors about the choices they made in handling the data. It's an interesting behind-the-scenes account.

She also makes the point that the release of the obviously flawed data had a strange, perhaps unintended effect:

"To my mind, all reasons not to publish still exist. They are still true. But in the last month, I’ve come around to an opposite, perhaps more cynical, conclusion about the virtues of making them public. Publishing them, it seems to me, has had an odd, clarifying effect. Releasing the data to public scrutiny, alongside context and caveats, has exposed just how flawed they really are."

 

[Last modified: Wednesday, May 30, 2012 5:33pm]
