Gradebook

Education news and notes from Tampa Bay and Florida

Why we didn't run lists ranking teachers

October 24
Tens of thousands of teachers. All listed by name. All ranked by the state.

What a gold mine.

What newspaper would obtain a list like that and not run it? Well, this one.

Today's story about teachers being rated by the FCAT noted in passing that the St. Petersburg Times had obtained rankings that the state Department of Education put together, for free, to satisfy a request from one of Jeb Bush's education foundations.

Publishing the rankings would have been potentially explosive. But we didn't do it. And we wanted to explain why.

Just to recap: The foundation asked the DOE for the names of the teachers whose students showed the most impressive gains last year on the reading and math FCAT tests. It also asked that the teachers be grouped in 30 categories that separated them by subject, school level and student type. The DOE used "value-neutral tables" so it could compare teachers who teach different types of students. And ultimately it gave the foundation the names of the 50 top teachers on each list.

The Times asked for, and got, all the lists, in their entirety. And we seriously considered spotlighting them for all the world to see. At a time when teacher quality is such a huge issue (despite what the lack of coverage in most news outlets would lead you to believe), we thought the lists might be useful, especially to parents.

But teacher quality experts we talked to made compelling arguments against publication. And when we looked at the data, we saw a ton of errors. That suggested the lists were far from reliable.

Teacher ratings based on student test scores are very "volatile," the experts told us. Many teachers who score near the top one year are likely to fall to the middle the next, they said. And some of the experts had studied the issue enough to offer examples.

It made us wonder: Were the teachers on top of a list based on one year's data really the best, even according to this single measure? Or did they look great because of a one-year spike?

One day, maybe even one day soon, there might be a way to analyze student test scores and tease out a teacher's contribution. But as Tim Sass of Florida State University said in the story, we're not there yet. Student quality and other, nonteacher factors remain part of the mix.

Using one year's test scores to measure teacher quality is "just inaccurate," said Kate Walsh, president of the National Council on Teacher Quality. "You're testing all the influences of that classroom, so there's all sorts of problems that are not accounted for, and they don't know how much that matters."

Walsh, who also argued against publicizing the lists for reasons of privacy and fairness, said the lists might be more reliable if they included three years' worth of data. If some teachers ranked near the bottom three years in a row, and school districts had not taken action to deal with them, "then you'd have a story to tell," she said. (Lawmakers, did you hear that?)

Another expert, Eric Hanushek at Stanford, said even though the rankings fluctuate to some extent, the lists might still be useful for more generalized conclusions. He suggested that instead of exact rankings, we publish the teachers' names according to who's in the top 25 percent, the bottom 25 percent and the middle 50 percent.

We considered that, too. But then the other, truly insurmountable problem came into play: the errors.

More than 1,000 teachers on the lists weren't named. Some teachers were listed as having hundreds of students. And school grade and demographic information was often wrong. Times researcher Connie Humburg found that more than 20 percent of the entries for Pasco teachers alone had inconsistencies.

As the story noted, the Bush foundation found problems, too, including cases where the wrong teachers had been given credit for student gains. The DOE acknowledged the lists did not go through the verification process they would have received had they been used for policy development or publication.

Because the foundation only dealt with a few names, it was able to call each teacher's school and check whether the DOE data was correct. With tens of thousands of names, we couldn't do that.

So, we decided, the world would be better served if we kept the lists in the dark.

Ron Matus, state education reporter
