Feedback to teachers must be actionable, realistic, and specific

As a school leader, when was the last time you received written feedback about your practice? Did that feedback prompt you to make any changes? If so, did those changes make a difference in your subsequent performance?

Across the country, new educator evaluation policies require more rigorous evaluations of teachers. These systems typically include, among other requirements, classroom observation followed by verbal and written feedback, as well as action planning to improve practice. Researchers and practitioners have paid significant attention to developing and utilizing observation instruments to assess instruction and have crafted guidance about post-observation discussions. However, there has been less focus on the criteria for written feedback that will elicit actionable, realistic, and specific recommendations for teachers to improve their practice.

To address this gap in the criteria for written feedback, the Boston Public Schools (BPS) Office of Human Capital and researchers at the Education Development Center Inc. (EDC) developed the Evaluative Feedback Rubric to assess the written feedback that evaluators-typically school leaders such as secondary school principals-provide to teachers who have been identified as needing improvement.

A Prescription for Success

BPS implemented a new teacher evaluation system in 2012-13 that included a professional practice framework with four standards: 

  • Curriculum, Planning, and Instruction 
  • Teaching All Students 
  • Family and Community Engagement 
  • Professional Culture

Evaluators who are assessing teachers assign a rating of “exemplary,” “proficient,” “needs improvement,” or “unsatisfactory” for each standard and for overall summative performance. If a teacher receives a rating of “needs improvement” or “unsatisfactory” on any of the standards, the evaluator provides what the district refers to as a prescription. A prescription has four components:

  • The standard and indicator in which the teacher is expected to improve. For example, Curriculum, Planning, and Instruction (standard) and well-structured lessons (indicator).
  • A problem statement that outlines the problem that needs to be addressed. For example, “<Name> does not create lessons that appropriately meet the needs of the range of learners in his class, nor does he establish measurable objectives.”
  • Evidence to support the assignment of the rating. For example, “On <date>, <Name> conducted a lesson with the whole class answering in unison.”
  • A prescription statement that describes action items, such as professional development or recommended practices, required of the teacher to address the problem. For example, “<Name> must submit lesson plans every Thursday that include accommodations for students on IEPs, student engagement strategies, and differentiated groupings of students. We will address the lesson plans and differentiation strategies in our weekly meeting. <Name> should observe Ms. X’s classroom, take notes, and bring these to our discussions.”

Although prescriptions, as a way of providing written feedback, may be specific to BPS, 29 states require written improvement plans for teachers who receive less-than-proficient ratings.

Evaluative Feedback Rubric 

EDC and BPS developed the Evaluative Feedback Rubric based on a review of BPS’ teacher evaluation documentation and the literature suggesting that high-quality feedback is based on observable data, is specific and actionable, and promotes reflection. The Evaluative Feedback Rubric addresses the three written components of the prescription (problem statement, evidence, and prescription statement). These components are evaluated based on at least two of the following criteria: alignment, clarity, and specificity. Each prescription receives a rating of “meets,” “partially meets,” or “does not meet” each criterion (see Table 1).

Testing Reliability of Evaluative Feedback Rubric

To examine the reliability of the Evaluative Feedback Rubric and identify potential modifications, two researchers independently rated 57 prescriptions on each rubric criterion, and a third researcher calculated the percentage of prescriptions for which the two raters agreed on the ratings. The agreement rates ranged from 96 percent for alignment of the problem statement to the standard, down to 65 percent for alignment of the evidence to the problem statement. The two raters agreed on the ratings for at least five criteria on 77 percent of the prescriptions. (To put these numbers in perspective, classroom observers in the “Measures of Effective Teaching” study were required to demonstrate 50 to 70 percent agreement, depending on the observation instrument.)
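For readers interested in the mechanics, the percent-agreement statistic described above is straightforward to compute: it is simply the share of prescriptions on which two raters assigned the same rating for a given criterion. The sketch below illustrates this with hypothetical ratings (the function name and sample data are illustrative, not drawn from the BPS study).

```python
def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters gave the same rating."""
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical ratings of five prescriptions on one rubric criterion:
# "M" = meets, "P" = partially meets, "D" = does not meet.
rater_a = ["M", "P", "M", "D", "M"]
rater_b = ["M", "P", "D", "D", "M"]

print(percent_agreement(rater_a, rater_b))  # 0.8 (agreement on 4 of 5)
```

Note that simple percent agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa are a common alternative when rating categories are few.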

Questions to Guide Implementation

The development of, and research on, the Evaluative Feedback Rubric raised the following questions for BPS that may guide the implementation and further modification of the rubric:

  • Should all criteria have the same weight, or should specificity and clarity have greater weight, since they are more closely tied to the possible actions teachers may take to address the problem?
  • To ensure that the issues identified in the problem statement are manageable for the teacher to address and for the evaluator to track over time, should there be a limit on the number of issues included in the problem statement?
  • Should the evaluator be required to indicate how the identified problem impacts students?
  • Should the prescription statement include information about how teachers may demonstrate that they took the prescribed action?

Written feedback to teachers should clearly articulate an issue, provide specific evidence to support it, and describe clear and specific actions to address it. Developing this rubric helped build consensus about the nature of feedback that supports effective teaching and learning. Sharing the rubric, along with the issues identified in its early use, may give other school leaders insight into the criteria to consider and the questions to raise about the characteristics of feedback that will best support educators. 

Jacqueline Zweig, PhD; Karen Shakman, PhD; and Jessica Bailey, PhD, are researchers at the Education Development Center Inc. in Waltham, MA. Leah Levine and Jerome Doherty work in Boston Public Schools’ Office of Human Capital in Roxbury, MA.