Saturday, August 28, 2010

Keeping Score, Take 2

My cousin Eric, a psychiatrist, posted the following comment on yesterday's blog post about teacher evaluations, and it's worth discussing for another day.

Charles, this is an issue that we struggle with in medicine too. There is a strong push to rate doctors, but there is huge disagreement as to how to do it fairly. Every doctor, when faced with being ranked on their outcomes, claims that their patients are sicker than their colleagues', and so any comparison would be apples to oranges. This is of course true of some of them, but logically it can't be true of all of them. So we need a way of ranking just how sick the patients are to begin with (or, for teachers, just how much "potential" each student might have?).

Another thorny problem is the fact that teachers and doctors and other professionals deal with human beings (as opposed to, say, widgets), and that humans have a way of doing what they want to do regardless of what their teachers or doctors try to persuade or "make" them do. We just don't have the same control over our "product" as many other workers do.

Thirdly, the measurement process cannot be more burdensome than the job itself; however, to really do it in a way that captures all the qualities you want in a teacher or doctor or whatever, it seems like it might have to be. Test scores are easy to measure, but they don't tell the whole story, just as blood glucose levels or cholesterol levels or scores on a depression questionnaire are easy to measure but don't tell the whole story about what makes a good doctor. But the data is easy to get, so this is what gets measured. It's like the old joke about the drunk looking for his keys under the streetlight when he lost them 50 feet away. Asked why he was looking there, he replied, "Because that's where the light is."
I agree with you that teachers, and doctors, should be evaluated to encourage improvement in quality. The measurement problem, however, is formidable. 

Excellent points on the measurement problem, but our products might not be that different.  There were "good ole days" (and there may still be some good ole employers) when a computer system made up the entire product.  Now, though, we deliver the ability for our customers to achieve their goals.  We can't just provide an Inventory Report - we validate the inventory data, allow users to correct bad numbers, and lock a final version.  Only then can the client confidently use the Inventory Report for budgeting and planning.  I've had clients who gave us terrible data files, modified them in every way imaginable, and then blamed us for decisions they made based on their own data (fortunately, we don't have any of these at S3).  Protecting clients from themselves takes far more time than building the reports, our official output.  People are messy.

I'm not minimizing the added difficulty of dealing with patients/students - my doctor's a saint for putting up with me, and teachers are the most patient people I know.  At some point, though, the outcome is all that matters, regardless of the subject or circumstance.  We may not have enough metadata about education to achieve that level of measurement rigor*, but teachers need to define and enforce their own evaluation, or others will arbitrarily impose easy-to-get metrics like the value-add score.

I know teachers who evaluate their own efforts and work to improve, but I've never seen a group, union, or department adopt outcome-based standards with any accountability.  I'm sure exceptions exist, but this needs to become a broad trend in education to improve teachers' public image and, most importantly, students' educations.

*Not that I'm an education expert, but I know so much less about medical practice that I have no idea how much this applies to your field.

1 comment:

  1. I didn't mean to imply that teachers and doctors are the only professionals whose outcomes depend on the actions of others, if that's what it sounded like. Obviously that's untrue. I sometimes think about football coaches (right up your alley, Charles) in this regard. How much of a team's success can be attributed to the coach's actions independently of what the players do, or would have done without him? There's obviously some influence, but it is very hard to measure precisely. Nevertheless coaches are still held responsible for the success or failure of the team, and are paid and fired based on what the players do on the field. Sometimes this is fair and sometimes it's not. In other fields it will be the same, sometimes fair and sometimes not. In the case of coaches, they at least have some ability to select the players they coach, unlike many other professionals.

    In football, the definition of success is pretty clear-cut: wins (or, more importantly in the NFL, playoff wins). In education, it might be test scores, maybe. If we think of test scores as ends in themselves, like wins on a football field, then the only debate we're left with is how much influence a teacher can have, as with the football coach, and the answer is certainly "at least some - but only some". But assuming that the purpose of education is to lay a foundation for people to succeed in a job or profession later, test scores are a less precise predictor, since they are merely correlated with future success rather than defining it. In some areas of medicine, say in management of diabetes, lower blood glucose levels are a pretty good predictor of better outcomes in the long run. In my field, scores on a questionnaire might be, but we're not as sure. So I think it depends. In education, if we want kids who will go on to have productive work lives, test scores are correlated (I'm not sure how highly), but other things that could influence success (confidence, motivation, discipline, creativity, maturity) are much harder to measure. So we don't measure them. Furthermore, we don't know how much teachers influence these variables either (though I remember my favorite teacher ever, who did more than anyone in my life to give me confidence, which was extremely important), so it may be equally unfair to hold them responsible for these variables as well. I don't really have an answer. As I said, I do fundamentally agree with measuring outcomes in some way. I just want to make sure that the limits are understood properly.

    Regarding your point that the professions should come up with their own productive proposals, rather than just complain about being measured, I couldn't agree more. The ain't-it-awful stuff without something to contribute irks me a lot. I think the reason there aren't many counter-proposals is that once you start trying to do it yourself, you realize just how hard it is. I don't think teachers or doctors have any better ideas, really. But the stakes can be high, which is where the complaining comes from. I'm sure certain football coaches complain just as much.