BOULDER, CO (April 13, 2017) – A new report by the American Enterprise Institute (AEI) compares differences in approaches and demographics between and among charter school models and local “traditional public schools.” The report links varied models to stratified parental choices and then to correspondingly stratified student composition, concluding that these differences and stratification are either beneficial or benign.
T. Jameson Brewer of the University of North Georgia and Christopher Lubienski of Indiana University reviewed Differences by Design? Student Composition in Charter Schools with Different Academic Models for the Think Twice Think Tank Review Project at the National Education Policy Center, housed at the University of Colorado Boulder’s School of Education.
Using three national data sets, the report effectively captures the universe of charter schools. It takes a separate look at enrollment demographics for different models: arts, no-excuses, progressive, credit-recovery, classical, single-sex, STEM, vocational, and international. It empirically demonstrates that different demographic groups attend different types of charter schools. The report documents this de facto segregation with regard to, among other categories, race and ethnicity, family income, and special education status.
Charter schools, the authors contend, provide differentiated and “innovative schooling options” through varied academic models that cater to, and ultimately reflect, parental choices for their children. The resulting stratification is presented as a benign byproduct of beneficial choices differentially associated with, e.g., different racial and ethnic groups. They contend this is “consistent with the theory behind charters” and “in line with a properly functioning charter sector.”
Unfortunately, the reviewers conclude, the report does not demonstrate familiarity with the research on parent decision-making, or with the extensive research suggesting that charter schools are not particularly innovative in the curricular or instructional options they offer. Despite what the report claims, traditional public schools do, in fact, offer academic model specializations like the ones offered by charter schools.
Finally, the reviewers express concern about, and disagreement with, the report's dismissive characterization of charters' de facto segregation and stratification of students along demographic lines, which they contend is at odds with the purpose and aims of equitable public education.
Find the review by T. Jameson Brewer and Christopher Lubienski at:
Find Differences by Design? Student Composition in Charter Schools with Different Academic Models, by Jenn Hatfield & Nat Malkus, published by the American Enterprise Institute, at:
The slightly-cranky voice navigating the world of educational “reform” while trying to still pursue the mission of providing quality education.
We’re all going to be hearing about a piece of research, a working paper that suggests that teacher merit pay works. Sort of. Depending on what you mean by “works.”
Matthew G. Springer, an assistant professor of public policy and education at Vanderbilt University, has produced a meta-analysis (that's research of the research) entitled "Teacher Merit Pay and Student Test Scores: A Meta-Analysis," in which he concludes that merit pay is connected to increased student test scores. Springer is also the director of the National Center on Performance Incentives, "a national research and development center for state and local policy" housed at Vanderbilt (he's actually had that job longer than his professor position).
During the past several decades, policymakers have grown increasingly interested in innovative compensation plans, including performance-based pay for K-12 educators. Yet, efforts to reform pay have lacked grounding in a scholarly base of knowledge regarding the effectiveness of such plans.
So I'm not sure whether the center's mission is "see if this stuff works" so much as it is "prove this stuff works," which is a somewhat less objective mission. And Springer does some work outside of Vanderbilt as well, like his post on the advisory board of Texas Aspires, where he sits with Rick Hess (AEI), Mike Petrilli (Fordham), Eric Hanushek (Hoover Institution), Chris Barbic (Reformster-at-Large, now apparently with the Arnold Foundation) and other reformy types.
Springer certainly has some ideas about teacher pay:
“The bottom line is the single-salary pay schedule does not allow systems to reward the highest performing teachers,” Springer said. “These teachers deserve a six-figure salary, but we’ll never get there with a single-salary schedule that would require all teachers of equal experience and degree attainment to get paid the same amount. It’s just impossible.”
That quote, via EdWeek, suggests that Springer and I do not agree on what a "high-performing teacher" looks like. And here's the line from EdWeek that suggests to me that Springer doesn't entirely understand what he's studying:
The findings suggest that merit pay is having a pretty significant impact on student learning.
Only if you believe that Big Standardized Tests actually measure student learning, a finding that remains unfound, an assumption that remains unproven, and an assertion that remains unsupported. My faith in their understanding of the real nature of BS Tests is further damaged by their reference to "weeks of learning." Researchers' fondness for describing learning in units of years, weeks, or days is a great example of how far removed this stuff is from the actual experience of actual live humans in actual classrooms, where learning is not a featureless tofu-like slab from which we slice an equal, qualitatively identical serving every day. In short, measuring "learning" in days, weeks, or months is absurd. As absurd as applying the same measure to researchers and claiming, for instance, that I can see that Springer's paper represents three more weeks of research than less-accomplished papers.
Springer et al. note some things they don't know in the "for further study" part of the paper.
EdWeek missed one of the big implications in the conclusion:
Teacher recruitment and retention, however, is another theoretically supported pathway through which merit pay can affect student test scores. Our qualitative review of the emerging literature on this pathway suggests that the positive effect reported in our primary studies may partly be the result of lower levels of teacher turnover.
In other words, burning and churning doesn't help with your test scores. You know what doesn't encourage teachers to stay? Tying their pay (and job security) to the results of bad tests whose results are more clearly tied to student background than to teacher effort. You know what does encourage teachers to stay? The knowledge that they are looking at a pay structure that at least helps them keep pace with increases in the cost of living, and not a pay structure that will swing about wildly from year to year depending on which students they end up teaching.
Springer also parenthetically acknowledges a caveat that really deserves to be in the headline:
our evidence supports the notion that opportunities to earn pay incentives can lead to improved test scores, perhaps through some increased teacher effort (or, nefariously, gaming of the performance measure system).
Yes, that nefarious gaming of the system, which in fact remains the best, and often the only, truly effective method of raising BS Test scores. This is a huge caveat, a giant caveat, the equivalent of saying "Our research has proven that this really works, or that if you offer people money, some will cheat in order to get it." This research might prove something kind of interesting, or it might prove absolutely nothing at all. That deserves more than a parenthetical comment or two.
Springer's research suffers from the same giant, gaping, ridiculous hole as the research that he meta-analyzed: he assumes that his central measure measures what it claims to measure. This is like a meta-analysis of a bunch of research from eight-year-olds who all used homemade rulers to measure their own feet and "found" that their feet are twice as big as the feet of eight-year-olds in another country. If you never check their homemade rulers for accuracy, you are wasting everyone's time.
At a minimum, this study shows that the toxic testing that is already narrowing and damaging education in this country can be given an extra jolt of destructive power when backed with money. The best this study can hope to say is that incentives encourage teachers to aim more carefully for the wrong target. As one of the EdWeek commenters put it, "Why on earth would you want to reward teachers with cash for getting higher test scores?" What Springer may have proven is not that merit pay works, but that Campbell's Law does.