Last week the Hillsborough County School District delivered its first complete teacher assessments under the Gates-funded Empowering Effective Teachers initiative. In what has become a model, the district is replacing the single-source evaluation with one made up of observations by the principal and a peer evaluator, plus a component that measures student improvement and other data. Earlier this year, teachers received the written portion, worth up to 60 points. The data-driven portion, worth up to 40 points, was delivered Tuesday. As the district prepared to give teachers their scores, the St. Petersburg Times spoke with project director David Steele. In addition to scores, teachers will get details on how each student performed.
So can teachers really look and say, "I helped Jimmy but I didn't help the other one …"
Yes. We emphasize validating their rosters, almost to death. … When they first came back to school, they got their roster that had the pretest and the post-test information on it so they had the opportunity to look at it and say, "Hey, none of my science kids have their pretests showing." …
We want to make it as easy to read as possible. We're thinking of doing it maybe the way they do movie ratings. Like, "This is a five-star kid for you" and, "This is a two-star kid," so now they can see the relative gain of each student.
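The "movie rating" idea described above amounts to binning each student's relative gain into quintiles. A minimal sketch of that binning, with invented names and gain figures (the district's actual report format is not specified here):

```python
# Hypothetical relative gains for one teacher's roster (invented numbers).
gains = {"Ava": 12.0, "Ben": -3.0, "Cam": 5.0, "Dee": 20.0, "Eli": 1.0,
         "Fay": 8.0, "Gus": -7.0, "Hal": 15.0, "Ida": 3.0, "Joy": 10.0}

# Rank students from lowest to highest gain, then map each quintile
# to a 1-5 star rating: bottom fifth gets one star, top fifth gets five.
ranked = sorted(gains, key=gains.get)
stars = {name: i * 5 // len(ranked) + 1 for i, name in enumerate(ranked)}
```

With ten students, each star level covers two students, so a teacher can scan the roster and see at a glance which students made the largest and smallest relative gains.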
Aren't you measuring the growth of some very different students?
One of the concerns you get from teachers is, "I have a lot of Level 1 students in my class." That's okay, because each student's expected growth is compared to that of students like them: special education students, English language learners, highly mobile students who maybe moved schools three times, or students with any of the other characteristics we take into account. Are they too old for their grade level? Or too young? We put all those things in to get the growth, and that's something we've never been able to do before.
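The adjustment Steele describes can be illustrated with a toy regression: predict each student's post-test score from their pretest and background flags, then treat the gap between actual and expected score as the growth measure. Everything below is invented for illustration; the district's actual value-added model, run by the University of Wisconsin, is far more elaborate.

```python
import numpy as np

# Hypothetical roster data (invented). Columns: pretest score,
# special-ed flag, ELL flag, number of school moves, over-age flag.
X = np.array([
    [310, 0, 0, 0, 0],
    [295, 1, 0, 2, 0],
    [330, 0, 1, 0, 1],
    [305, 0, 0, 3, 0],
    [320, 0, 0, 1, 0],
    [290, 1, 1, 0, 0],
    [315, 0, 0, 0, 1],
    [300, 0, 0, 2, 0],
], dtype=float)
y = np.array([335, 300, 340, 315, 338, 298, 330, 312], dtype=float)

# Fit expected post-test score as a linear function of the covariates
# (with an intercept), so each student is compared to "students like them."
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
expected = A @ coef

# A student's contribution to value-added is how far they land
# above or below their own expectation.
value_added = y - expected
```

Because the model sets each student's own baseline, a class full of Level 1 students is not penalized: growth is measured against what similar students typically achieve, not against the district average.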
The district worked with the University of Wisconsin on the student improvement data?
Right. … We always did the calculations ourselves. Once we got the grant, we had the means to afford some of the things we would have liked to do all along. And the way we do the calculations, the value-added, is just much more complicated and inclusive than what we were able to do in-house. We wanted somebody who had done it numerous times. The University of Wisconsin started working with the Milwaukee schools about 10 years ago. They also do the same calculation for Chicago schools, and they've done two years of New York, and since we started working with them they've taken on Los Angeles and Atlanta. … You want it to be as accurate, as fair and as consistent as you can.
Are results from FCAT the only measure used?
There are many more tests in the model. One of the things we do that, as far as I know, no other district in the country does yet is include every single teacher. So if you're the third-grade art teacher, your kids have had a third-grade art test. In high school we're kind of ahead of the curve because we've always had (end-of-course) exams.
Will you refine the instrument if, for example, second-grade teachers score much higher than seventh grade?
To a certain extent there's some judgment involved. When groups don't look exactly alike, then you say, "Would we expect them to look exactly alike, or do we think there's a problem?" For example, high school math teachers and high school English teachers. Should there really be much of an evaluation difference between those two groups? My experience as a principal is, probably no. And we'll break it down. How did they do compared to other groups on the student growth part, but then how did they do compared to other groups on this written part of the evaluation? We'll look for differences there. Then the other thing is, how good is the correlation between the written evaluation and the value added? We would not expect it to be perfect because if there was a perfect correlation, you wouldn't need to use both parts.
A longer version of this interview is on the Gradebook blog at tampabay.com/blogs/gradebook.