Setting the bar low

My new school district is voluntarily using the process outlined by Washington State’s Teacher & Principal Evaluation Pilot (TPEP). As of the 2013-14 school year, all districts in Washington will use the new evaluation process.

I’m already experiencing its negative effects…

At the beginning of the year, all teachers were required to create Professional Growth Plans (PGPs). We were required by our district to use improving student achievement as one of our two goal areas. The second area had to be aligned with the rest of our professional learning community.

There lies problem #1…

We were not allowed to choose our own goals in this process. Worse, we weren’t even encouraged to select goals aligned to a personal deficit or weakness. Shouldn’t a true improvement process be either (a) personal or (b) based on improving actual weaknesses?

Problem #2 relates to the requirement to assess student growth with data…

Of course, I see nothing wrong with assessing student growth. That is part of the basic core of teaching. The real problem lies in the requirement for growth data. We were essentially encouraged to give a pre-assessment as a baseline (one which we know students will score low on, since they haven’t yet encountered the content) and then measure growth from there. 70% of our students must show measurable growth. This shouldn’t be hard to do since we are comparing to a baseline pre-test and they just have to show some growth. This is the first example of setting the bar low that I see in this process.

Of course, administrator observation of the teacher in action is a big part of the evaluation. The other side of the TPEP is that principals are evaluated as well. I’m not sure of the exact details, but I believe principals are required to show measurable evidence of teacher improvement under their leadership… and this is where I just busted my shins on the low bar set for me…

Problem #3 is that the requirement for administrators to show measurable teacher growth causes them to set the bar low as well…

Every administrator I have taught under has been hesitant to rate teachers as exemplary during evaluations. More than one has told me flat-out that they don’t rate teachers too highly because they want to leave room to document “growth.” It’s much easier to show growth when you set a low bar in the first place.

Which brings me around to the observation that I received today. My administrator spent several minutes gushing to me about my content knowledge, pedagogical skill, rapport with students, professional leadership, etc. Then he handed me my very underwhelming evaluation. Don’t get me wrong, he didn’t score me as “Unsatisfactory” in any areas. However, he also didn’t score me as “Distinguished” in any areas. For most areas I was scored “Proficient” with a few as “Basic.” Yet, he gave no explanation as to why some areas were only “Basic.”

Now, you must understand that I am a perfectionist and would be the first to score myself very harshly. Of course, this perfectionism also makes me obsess over my “score.” My real point here is that all of the descriptive feedback was glowingly positive and yet I had a few areas with a score of “Basic” and none with a score of “Distinguished.”

Of course, I think I know what is really going on here. My evaluator is also being evaluated. Part of his evaluation is based upon the improvement of the teachers reporting to him. Thus, if he sets the bar low, he can then score me higher at the end of the year and show clear “evidence” of improvement. This sounds just like what we were encouraged to do with our student achievement “data.”

Nonetheless, I’m left feeling that this whole process is yet another shell game that has replaced our – admittedly less descriptive – evaluation process with one that masquerades as being more nuanced and specific but really just creates more hoops to jump through.

Color me underwhelmed…