By Teresa Littrell McDaniel
As school opened in August of 2011, teachers and administrators at Jackson Central-Merry Academy of Medical Technology in Tennessee experienced the same trepidation and angst felt across the state. With the implementation of a new comprehensive evaluation model, teachers pondered the equity of including student test scores as a part of their evaluation, and administrators questioned how they would carve out enough time in their day to complete as many as five formal observations for non-tenured teachers per year in three categories: instruction, planning, and environment. Now that our first year is behind us and we have all either celebrated our high composite educator score or made peace with our low score, we can reflect on the most important question: Does the new system improve instruction?
The 2012-13 school year marks the second year of Tennessee’s shift to a new evaluation model based on the state’s 2010 First to the Top legislation. A quick Google search for the Tennessee Educator Acceleration Model (TEAM) reveals 49,700 sites dedicated to various aspects of Tennessee’s implementation of the new model. Using US Department of Education Race to the Top funds, Tennessee is one of the first states to implement a comprehensive evaluation process that includes a student performance component. In Tennessee’s model, administrator observations based on the TEAM rubric count 50%, student assessment growth counts 35%, and student achievement data counts 15%. For the 35%, the state uses its Tennessee Value-Added Assessment System (TVAAS) data to determine a teacher’s or school’s “effect score,” based on a student’s predicted performance compared with observed performance. For the 15%, the state offers a choice of student achievement measures, including state assessments, “off the shelf” assessments such as AIMSweb or Discovery Ed/ThinkLink, graduation rate, ACT/SAT scores, AP exams, or the schoolwide TVAAS.
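The weighting above amounts to a simple weighted average of the three components. The sketch below illustrates the arithmetic only; the common 1-5 scale and the rounding are illustrative assumptions, not the state’s actual conversion between rubric scores, TVAAS effect scores, and the final composite.

```python
# Illustrative sketch of the TEAM composite weighting (50/35/15).
# The 1-5 scale and rounding here are assumptions for demonstration,
# not the state's actual scoring conversion.

WEIGHTS = {
    "observation": 0.50,   # administrator observations (TEAM rubric)
    "growth": 0.35,        # TVAAS student growth ("effect score")
    "achievement": 0.15,   # chosen student achievement measure
}

def composite(scores: dict) -> float:
    """Weighted average of the three components on a common scale."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

# Example: 4 on observations, 3 on growth, 5 on achievement
print(composite({"observation": 4, "growth": 3, "achievement": 5}))  # 3.8
```

The point of the sketch is simply that observation scores dominate the composite: a one-point change in the observation component moves the final score more than three times as much as a one-point change in the achievement component.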
The Tennessee State Department of Education web site http://team-tn.org states that the TEAM is “about principals and teachers working together to ensure that students benefit from the best possible instruction every day” and appeals to the professionalism of all educators with the claim that “like all professionals, teachers deserve regular feedback on their job performance” that includes “frequent, constructive conversations about what’s happening in the classroom.”
TEAM critics suggest that principals and assistant principals spend far more time observing teachers than in previous years. TEAM supporters argue that observing teachers and giving feedback is exactly what an administrator should spend most of his or her time doing. However, the point of this article is not to debate the pros and cons of the TEAM, but rather to examine the merits of its implementation in my Tennessee high school.
TEAM in Practice
As a new assistant principal in the middle of the semester in 2011, I began the observation process one month behind in completing the observations that had been assigned to me. Of all the duties assigned to me that first week, observing instruction was the one task I felt most confident in completing. I didn’t know if I would be a good leader or administrator (in fact, the jury is still out on that), but I knew that I had been a good teacher and I knew good instruction when I saw it. I completed the training, passed the required certification test, and studied the rubric with complete confidence. I scripted feverishly as I had seen administrators do in my own classroom in the past, and collected “evidence” of the various components of the rubric. But when the time came for me to score that first observation, my confidence waned as the reality hit me that the score I put on that observation form would be used, in part, to determine whether a teacher would receive tenure, and more importantly, whether that teacher would be viewed as an effective teacher. I pored over my script and notes for hours, comparing scripted lesson details to the components of the rubric, identifying evidence, and judging whether my observations constituted a score of 1, “significantly below expectation”; a score of 3, “at expectation”; or a score of 5, “significantly above expectation.”
Perhaps the most perplexing aspect of the new rubric was the state trainer’s insistence that a score of “3” was considered “rock solid teaching.” In a world of high accountability testing, what does “rock solid teaching” mean? Imagine trying to sell veteran teachers on the idea that “rock solid teaching” constitutes a score of “3” and not a “4” or “5.” As added pressure, the state trainers had informed us that each administrator’s scores would be compared with those of other administrators across the state, and that our scores would be compared with the actual student performance data for the individual teacher or school. In fact, the state had identified schools that were “misaligned” for targeted professional development to ensure that evaluations more closely aligned with the TVAAS student performance data. Fortunately, our high school was not one of those schools, but the data suggest that I still have some work to do to more closely align my scores with the individual teacher data.
One can argue the ethical question of measuring a teacher’s ability by student performance, the inequity across the state in administrator score consistency, or the reasonable practice of granting tenure based on what could be a few snapshots of a teacher’s instruction, but few would argue that the rubric does not represent evidence-based best practices for teachers. The seven-page TEAM rubric covers three broad categories: instruction, planning, and environment.
The instruction portion is subdivided into 12 specific focus areas: standards and objectives, motivating students, presenting instructional content, lesson structure and pacing, activities and materials, questioning, academic feedback, grouping students, teacher content knowledge, teacher knowledge of students, thinking, and problem solving. Who could argue that a good lesson should include all 12 of the specific focus areas?
A score of “5” includes observed evidence of at least 65 individual bullets at an exemplary level of mastery. For example, a “significantly above expectation” lesson must include “oral and written feedback that is consistently academically focused, frequent, and mostly high quality and regularly used to monitor and adjust instruction” with questions that are “varied and high quality providing for some of the question types: knowledge and comprehension, application and analysis, creation and evaluation and include wait time of 3-5 seconds for both volunteers and non-volunteers.”
The Presentation of Instructional Content should include “visuals that establish the purpose of the lesson...examples and analogies that label new concepts...modeling by the teacher to demonstrate his or her performance expectation” for which “most students demonstrate mastery of the standards and objectives.” For the planning observation, student work must include assignments that require students to “organize, interpret, analyze, synthesize, draw conclusions, make generalizations and produce arguments supported through extended writing, and connect what they are learning to experiences, observations, feelings, or situations significant to their daily lives.”
In the environment observation, “teachers must show evidence of setting high and demanding academic expectations for every student and use several techniques such as social approval, contingent activities, and consequences to maintain appropriate student behavior.” This provides a sampling of the language of the 65 bullets from the TEAM rubric, language which echoes educational gurus such as Madeline Hunter or Benjamin Bloom.
Perhaps the most radically different aspect of the new model is the post conference. As part of the post conference procedure, teachers use the post evaluation form to report their self-evaluation scores for their lesson. The form asks the administrator and the teacher to score each of the focus areas and identify one area for “reinforcement” and one area for “refinement.” I find this particularly helpful in the post conference for two reasons. First, it helps me to know if the teacher and I have a similar evaluation of the lesson. Second, it prompts intentional conversations about specific areas in the rubric. Using the rubric and evidence from the lesson, we discuss specific components that were or were not included in the lesson.
The post conference conversation is directed to very specific evidence from the lesson that demonstrates teaching mastery. For example, in one post conference the teacher did not agree with my score of “3” for assessment. However, when I pointed out that the lesson did not include “descriptions of how assessment results will be used to inform future instruction” or “require portfolio-based assessment aligned with state content standards,” the teacher could understand that his level of assessment did not warrant a score of 4 or 5 for “significantly above expectation” because the lesson did not include these components.
The detail of the rubric, the instructional focus of the post conference, and the increased number of observations for each teacher have shifted the role of Tennessee administrators toward a more instructional leadership focus. At JCM, the principal and three assistant principals divide the number of required observations and agree on a timeline for completion. We observe an entire 90-minute block class to give teachers every opportunity to include all components of the rubric. Since our school attracts new teachers (that’s the topic of another article), we are required to conduct more observations than a school with a higher number of tenured teachers or level “5” teachers.
Without question, the new model requires us to spend more time in classrooms observing teachers than in previous years, but in a teacher-centered school like ours, improving instruction is our primary focus. How can we do that if we are not in the classrooms giving teachers specific feedback for how they can improve instruction?
Does the TEAM Improve Instruction?
This leads us back to the question: Does the Tennessee Educator Acceleration Model improve instruction? I have no doubt that the new model has had a positive impact on instruction in our school. As I have completed observations for this semester, I have reviewed areas of refinement for last year to assess improvement, and I definitely see evidence that teachers have worked to improve their instructional skills. But more importantly, a survey of our teachers reveals how they feel about the model.
When asked if they believed the TEAM has improved their instruction, 68% of the JCM faculty responded “agree,” and 8% responded “strongly agree.” When asked if the post conferences with principals and assistant principals provide helpful feedback, 48% answered “agree,” and 28% answered “strongly agree.” In response to the qualitative request for additional comments, one teacher wrote, “It causes me to be more analytical about instruction, and makes me more conscious of the particular rubric lines. It makes me constantly think about whether or not I am hitting the points of the rubric.”
If student performance is an indicator of teacher ability, as suggested by the Tennessee model, then JCM teachers are performing at expectation or above. Five of our seven state assessment scores suggest that our students achieved at least one year of growth last year, so the evidence certainly suggests that TEAM does improve instruction at JCM.
Advice to Assistant Principals Who Want to Implement a Similar Model
One piece of advice that I would offer assistant principals in a state that does not have an existing comprehensive instructional rubric for observations would be to borrow Tennessee’s model <http://team-tn.org/>. Whether you agree or disagree with the philosophies and practices behind Tennessee’s valiant effort at revamping educator observations, few could argue that the TEAM rubric does not exemplify good teaching. An assistant principal who wants to improve instruction should identify a rubric and use that common language in post conferences to give teachers genuinely useful feedback.
The other piece of advice I would give would be to take steps to ensure consistency within your scoring. First, borrowing from the TEAM guidelines, score the evidence of the lesson, not the teacher as a person. It is difficult to score a lesson on evidence alone when, for instance, you know that a particular teacher regularly uses a variety of activities and materials even though he or she had only one or two activities on the day you dropped in to observe.
Consistency in your own scoring and among other evaluators is the key to reassuring teachers that the observations are fair and equitable, an area which JCM assistant principals continue to address. When the survey asked if JCM teachers believed that the model is a fair assessment of their teaching ability, only 52% agreed or strongly agreed, compared with the 76% who agreed or strongly agreed that the TEAM both improves instruction and provides helpful feedback. With regard to consistency, one teacher wrote, “there needs to be more of a unified consensus of what administrators are looking for.”
Finally, adopt the mission that all educators, including yourself, must constantly grow professionally to improve their craft. Shift your faculty’s mindset from the idea that an observation is a one-time snapshot of teaching ability to the idea that each observation is an opportunity for dialogue about an instructional skill set that is fluid, ever-changing, and always developing toward excellence.
Teresa Littrell McDaniel is an assistant principal at Jackson Central-Merry Academy of Medical Technology in Tennessee.