Setting the bar low

My new school district is voluntarily using the process outlined by Washington State’s Teacher & Principal Evaluation Pilot (TPEP). As of the 2013-14 school year, all districts in Washington will use the new evaluation process.

I’m already experiencing its negative effects…

At the beginning of the year, all teachers were required to create Professional Growth Plans (PGPs). We were required by our district to use improving student achievement as one of our two goal areas. The second area had to be aligned with the rest of our professional learning community.

There lies problem #1…

We were not allowed to choose our own goals in this process. Worse, we weren't even encouraged to select goals aligned to a personal deficit or weakness. Shouldn't a true improvement process be either a) personal, or b) based on improving actual weaknesses?

Problem #2 relates to the requirement to assess student growth with data…

Of course, I see nothing wrong with assessing student growth. That is at the very core of teaching. The real problem lies in the requirement for growth data. We were essentially encouraged to give a pre-assessment as a baseline (one which we know students will score low on, since they haven't yet encountered the content) and then measure growth from there. 70% of our students must show measurable growth. This shouldn't be hard to do, since we are comparing to a baseline pre-test and they just have to show some growth. This is the first example of setting the bar low that I see in this process.

Of course, administrator observation of the teacher in action is a big part of the evaluation. The flipside of the TPEP is that principals are evaluated as well. I'm not sure of the exact details, but I believe principals are required to show measurable evidence of teacher improvement under their leadership… and this is where I just busted my shins on the low bar set for me…

Problem #3 is that the requirement for administrators to show measurable teacher growth causes them to set the bar low as well…

Every administrator I have taught under has been hesitant to rate teachers as exemplary during evaluations. More than one has told me flat-out that they don't rate teachers too highly because they want to leave room to document "growth." It's much easier to show growth when you set a low bar in the first place.

Which brings me around to the observation that I received today. My administrator spent several minutes gushing to me about my content knowledge, pedagogical skill, rapport with students, professional leadership, etc. Then he handed me my very underwhelming evaluation. Don’t get me wrong, he didn’t score me as “Unsatisfactory” in any areas. However, he also didn’t score me as “Distinguished” in any areas. For most areas I was scored “Proficient” with a few as “Basic.” Yet, he gave no explanation as to why some areas were only “Basic.”

Now, you must understand that I am a perfectionist and would be the first to score myself very harshly. Of course, this perfectionism also makes me obsess over my “score.” My real point here is that all of the descriptive feedback was glowingly positive and yet I had a few areas with a score of “Basic” and none with a score of “Distinguished.”

Of course, I think I know what is really going on here. My evaluator is also being evaluated. Part of his evaluation is based upon the improvement of the teachers reporting to him. Thus, if he sets the bar low, he can then score me higher at the end of the year and show clear “evidence” of improvement. This sounds just like what we were encouraged to do with our student achievement “data.”

Nonetheless, I’m left feeling that this whole process is yet another shell game that has replaced our – admittedly less descriptive – evaluation process with one that masquerades as being more nuanced and specific but really just creates more hoops to jump through.

Color me underwhelmed…

6 thoughts on “Setting the bar low”

  1. Thanks for the post! I am working on prepping for my first formal observation under the system this week and decided to search #tpep to see if there has been much dialog on the topic. I just came out of a 2-day training by Carol Sims on the Danielson Framework and then had the webinar slides from the OSPI webinar shared with me at the end of the day on Friday. The student growth goals, I believe, were pretty vague. More clarity coming the first of December, we’re told. I was really hopeful for this tool. Based on your post, it sounds like it can easily be turned into hoop jumping.

  2. Hi there, I am Melissa Willis. I am an elementary education major at the University of South Alabama. I’m enrolled in a class called EDM 310 and have been given your blog to read for the next month. I found your post to be very eye-opening. I believe you had a very strong point. As a future teacher, I want to know that I will be scored correctly. No one is ever done learning and improving, so let’s start doing that. If we are going to see real improvement in students, teachers, and administrators, the bar needs to be set higher.

  3. Hi again!
    I do think you have a problem in your school with this “documentation” of growth. I understand why they are telling you to set the bar low, but how could you go about changing this system? I hope by the time I am a teacher this will be different.
    Thanks again for keeping such a great and informative blog! I enjoy learning from you, as you have a different and more in-depth perspective on a lot of ideas than my teachers do.
    Kacey at The University of South Alabama

  4. Hello again!

    I was quite stunned that administrators would want to set the bar low at your school. I think the bar should be set higher. By setting the bar high, you can evaluate students, teachers, and administrators to see where they stand.

    Thanks for sharing your post!

    -Stephanie Tisdale, student in EDM 310

  5. Laura,

    Our district is using the tool based on Danielson’s framework. I think it is a solid rubric to define what makes for a good teacher. I found it thought provoking for self-assessment.

    When you try to turn it into a quantitative evaluation, that is where you get problems.

  6. Melissa, Kacey, Stephanie:

    I don’t think the problem lies in the evaluation tool. I also don’t think the problem really lies with my evaluator.

    This problem really speaks to the difficulty of evaluating teachers effectively when there are so many aspects of teaching. Many of these aspects do not lend themselves to accurate quantification.

    When you then try to cram this qualitative evaluation into a system that expects all teachers to show growth, the reality is that you almost HAVE to set the bar low.

    How else will I show growth year after year?
