VAMboozled! A blog by Audrey Amrein-Beardsley
In November of 2013, I published a blog post about a “working paper” released by the National Bureau of Economic Research (NBER) and written by Thomas Dee, Professor of Economics and Educational Policy at Stanford, and James Wyckoff, Professor of Economics and Educational Policy at the University of Virginia.
In the study titled “Incentives, Selection, and Teacher Performance: Evidence from IMPACT,” Dee and Wyckoff (2013) analyzed the controversial IMPACT educator evaluation system that was put into place in Washington DC Public Schools (DCPS) under the then Chancellor, Michelle Rhee.
In this paper, Dee and Wyckoff (2013) presented what they termed to be “novel evidence” to suggest that the “uniquely high-powered incentives” linked to “teacher performance” via DC’s IMPACT initiative worked to improve the performance of high-performing teachers, and that dismissal threats worked to increase the voluntary attrition of low-performing teachers, as well as improve the performance of the students of the teachers who replaced them.
I critiqued this study in full (see both short and long versions of this critique here), and ultimately asserted that the study had “fatal flaws” which compromised the (exaggerated) claims Dee and Wyckoff (2013) advanced.
These flaws included, but were not limited to, the fact that only 17% of the teachers included in this study (i.e., teachers of reading and mathematics in grades 4 through 8) were actually evaluated under the value-added component of the IMPACT system.
Put inversely, 83% of the teachers included in this study about teachers’ “value-added” did not have student test scores available to determine if they were indeed of “added value.”
That is, 83% of the teachers evaluated were instead assessed on their overall levels of effectiveness, or on subsequent increases or decreases in effectiveness, using only the subjective observational and other self-report data included within the IMPACT system.
Hence, while the authors’ findings were presented as hard fact, given that only 17% of the sample was actually evaluated on value-added, their (exaggerated) conclusions did not generalize across teachers, despite what they claimed.
In short, the extent to which Dee and Wyckoff (2013) oversimplified very complex data, and with it a very complex context and policy situation, and then exaggerated questionable findings, was at issue; these problems should have been reconciled prior to the study’s release.
I should add that this study was published in 2015 in the (economics-oriented and not-educational-policy-specific) Journal of Policy Analysis and Management (see here), although I have not since revisited the piece to analyze, comparatively (e.g., via a content analysis), the original 2013 working paper against the final 2015 piece.
Anyhow, they are at it again. Just this past January (2017), they released another report, albeit alongside two additional authors: Melinda Adnot, a Visiting Assistant Professor at the University of Virginia, and Veronica Katz, an Educational Policy PhD student, also at the University of Virginia. This study, titled “Teacher Turnover, Teacher Quality, and Student Achievement in DCPS,” was also (prematurely) released as a “working paper” by the same NBER, again without any internal or external vetting, but (irresponsibly) released “for discussion and comment.”
Hence, I provide my “discussion and comments” below, all the while underscoring how this continues to be problematic, especially given that I was contacted by the media for comment.
Frankly, no media reports should be released about these (or, for that matter, any other) “working papers” until they are not only internally but also externally reviewed (e.g., in press or published, post vetting). Unfortunately, as it too commonly does, NBER released this report, clearly without such concern.
Now, we as the public are responsible for consuming this study with much critical caution, while also advocating for, and helping, others to do the same. Hence, I write into this post my critiques of this particular study.
First, the primary assumption (i.e., the “conceptual model”) driving this Adnot, Dee, Katz, & Wyckoff (2016) piece is that low-performing teachers should be identified and replaced with more effective teachers. This is akin to the assumption noted in the first Dee and Wyckoff (2013) piece.
It should be noted here that in DCPS, teachers rated “Ineffective,” or rated “Minimally Effective” in consecutive years, are “separated” from the district; hence, DCPS has adopted educational policies that align with this “conceptual model” as well. Interesting to note is how researchers, purportedly external to DCPS, entered into this study with the same a priori “conceptual model.” This, in and of itself, is an indicator of researcher bias (see also below).
Nonetheless, Adnot et al.’s (2016) highlighted finding was that “on average, DCPS replaced teachers who left with teachers who increased student achievement by 0.08 SD [standard deviations] in math.”
Buried further into the report, they also found that DCPS replaced teachers who left with teachers who increased student achievement by 0.05 SD in reading (significant at the 10%, not the 5%, level).
These findings, in simpler but also more realistic terms, mean that (if actually precise and correct, given all of the problems with how teacher classifications were determined at the DCPS level) “effective” mathematics teachers who replaced “ineffective” mathematics teachers increased student achievement by approximately 2.7%, and “effective” reading teachers who replaced “ineffective” reading teachers increased student achievement by approximately 1.7% (again, significant at the 10%, not the 5%, level).
These are hardly groundbreaking results, as these proportional movements likely represent one or maybe two total test items on the large-scale standardized tests used to assess DCPS’s policy impacts.
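For context, one common (though not the only) way to gauge effect sizes of this magnitude is to convert them into percentile-rank movements under an assumed normal distribution of scores. This is a benchmark interpretation only, and not necessarily the conversion the report’s authors, or this post, used to arrive at the percentage figures above. A minimal sketch:

```python
from statistics import NormalDist

def percentile_shift(effect_sd: float) -> float:
    """Percentile-point gain for an average (50th-percentile) student
    whose score moves up by `effect_sd` standard deviations, assuming
    scores are approximately normally distributed."""
    return (NormalDist().cdf(effect_sd) - 0.5) * 100

# Reported effect sizes from the study:
math_shift = percentile_shift(0.08)     # roughly 3.2 percentile points
reading_shift = percentile_shift(0.05)  # roughly 2.0 percentile points
```

Under this benchmark, a 0.08 SD effect moves an average student only a few percentile points, which is consistent with the broader point that these gains are small in practical terms.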
Interesting to also note is that not only… READ THE FULL BLOG POST HERE: VAMboozled!