Department of Education Waivers on Teacher-Evaluation Systems
The Department of Education updated its guidance to state chiefs on Friday, saying it will no longer require some states to have their teacher-evaluation systems fully in place for their waiver from the No Child Left Behind Act to be extended. This added flexibility gives states time to consider how they will ensure their teacher-evaluation systems are fit for purpose.
In a report released Monday (5/12/14), Morgan Polikoff, assistant professor of education at the University of Southern California, and Andrew Porter, dean of the University of Pennsylvania’s Graduate School of Education, analyzed the relationship between Value-Added Model (VAM) measures and teacher performance and found only a weak correlation. This calls into question whether VAMs are useful in evaluating teachers or improving classroom instruction. Polikoff commented that this reprieve for states will allow further research to be completed on the use of VAMs to link student outcomes and teacher effectiveness.
A paper from The Kingsbury Center at NWEA previously discussed the dangers of using student growth measures to inform teacher effectiveness and ultimately suggested more research is needed.
Thirty states are linking, or moving to link, teacher performance with student growth measures. Polikoff and Porter’s findings should give these states pause: is such a weak link a sound basis for judging the quality of teaching and for the subsequent evaluation of a teacher’s performance?
The appropriateness of VAMs is further called into question by the American Statistical Association (ASA), which released a position statement this April warning states to be cautious when using VAMs as part of education accountability systems. The report comments that “Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores... Ranking teachers by their VAM scores can have unintended consequences that reduce quality.”
While the ASA endorses the wise use of data to improve the quality of education, it recommends that VAMs, due to their complexity, be accompanied by measures of precision and a recognition of their limitations, notably when used for high-stakes purposes. The ASA also notes that VAMs, which are generally based on standardized test scores, measure correlation rather than causation and do not directly measure the potential contributions a teacher makes to student outcomes.
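To make the ASA’s point concrete, the sketch below simulates a toy dataset and fits a simple value-added-style regression: current score on prior score plus teacher indicators. All numbers (20 teachers, 25 students each, effect sizes) are invented for illustration and are not from the Polikoff–Porter study or the ASA report; the point is only that the teacher terms typically explain a small slice of total score variance, echoing the 1%–14% range quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: 20 teachers, 25 students each.
n_teachers, n_per = 20, 25
teacher = np.repeat(np.arange(n_teachers), n_per)
prior = rng.normal(0.0, 1.0, n_teachers * n_per)        # prior-year test score
teacher_effect = rng.normal(0.0, 0.3, n_teachers)       # modest true teacher effects
noise = rng.normal(0.0, 1.0, teacher.size)              # everything else
score = 0.8 * prior + teacher_effect[teacher] + noise   # current-year score

# Design matrix: intercept, prior score, and teacher dummies
# (first teacher dropped as the reference category to avoid collinearity).
dummies = (teacher[:, None] == np.arange(1, n_teachers)).astype(float)
X = np.column_stack([np.ones(teacher.size), prior, dummies])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# The teacher coefficients are the "value-added" estimates. Their fitted
# contribution usually accounts for only a small share of total score variance.
fitted_teacher_part = np.concatenate([[0.0], beta[2:]])[teacher]
share = np.var(fitted_teacher_part) / np.var(score)
print(f"Share of score variance attributable to teacher terms: {share:.1%}")
```

Even in this simulation, where teacher effects are built in by construction, prior achievement and unexplained noise dominate the variance, which is why the ASA cautions against ranking individual teachers on these estimates without reporting their precision.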