Teachers are being told to "accelerate" learning, but does research back up this approach?
Leading up to the mass return to in-person learning this fall, media outlets and state governments painted doomsday pictures of student learning loss. The solution, according to some education interest groups? Learning “acceleration.” The research this fad is based on, however, does not actually demonstrate its claimed results, yet schools across the country are still promoting it as truth.
The appeal of acceleration is obvious: if teachers try to go over all of the material students missed during the pandemic (that is, to remediate), students will never “catch up” to where they are supposed to be. Acceleration, according to The New Teacher Project (TNTP), instead promises to “start with the current grade’s content and provide ‘just-in-time’ supports when necessary.”
Education reform groups like TNTP, The 74 Million, and Carnegie Learning have been eager to sell this narrative—and products, software, and services that will help schools “accelerate” student learning. New Jersey and Michigan directly referenced a TNTP report highlighting the benefits of accelerating in their white papers on pandemic learning loss; executives with Zearn and TNTP wrote an op-ed in the Baltimore Sun; and well-known education commentators like ChalkBeat, EdWeek, and Edmentum all preached the gospel of acceleration.
Do the findings of TNTP’s study, though, actually warrant the outsized impact it is having on education policy across the country this year? The report’s claims are based on comparisons of two groups of students: those who received remediation on the software Zearn and those who received “acceleration.” The report states: “some teachers ultimately followed these new scope and sequences [acceleration], while others opted for traditional remediation by starting with other, less-connected below-grade-level content—essentially beginning their instruction wherever students left off when schools closed in the 2019-20 school year. This created a natural experiment to compare the effectiveness of remediation to that of learning acceleration.”
It is not clear, though, that the remediated and accelerated groups of students are actually comparable. What if teachers whose classes were more disengaged during the pandemic chose remediation? If so, we have a classic selection problem: the remediated students didn’t struggle more because of the remediation; they received remediation because they were already struggling.
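To see how this kind of selection could produce the report’s headline result even if neither approach works better, consider a minimal simulation sketch in Python (the engagement variable, the outcome model, and every number here are invented for illustration):

```python
import random

random.seed(0)

# Toy model: each class has an underlying engagement level, and teachers of
# less-engaged classes opt for remediation. The approach itself has NO effect
# on how many lessons get completed.
classes = []
for _ in range(10_000):
    engagement = random.gauss(0, 1)
    approach = "remediation" if engagement < 0 else "acceleration"
    # Lessons completed depend only on engagement, not on the approach chosen.
    lessons = max(0.0, 10 + 4 * engagement + random.gauss(0, 1))
    classes.append((approach, lessons))

for approach in ("remediation", "acceleration"):
    group = [lessons for a, lessons in classes if a == approach]
    print(f"{approach}: {sum(group) / len(group):.1f} lessons on average")

# Remediated classes look far worse even though remediation did nothing:
# struggling classes simply selected into it.
```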
The second problem with the study is the level of observation: TNTP’s researchers try to make inferences about individual children and whether they benefited from acceleration, but the report uses only classroom- and school-level data. The report claims that the two groups of students struggled at similar levels pre-pandemic. But classrooms don’t struggle on lessons; individual students do. This is what is called an “ecological fallacy”: making inferences about individual students based on data observed at a different level (e.g., a classroom or a school). This is not to say that acceleration isn’t better, per se, just that the data provided do not show it. The authors of this report are doing the equivalent of reading tea leaves: looking at data and making it say what they want it to say.
For example, in Figure 4, TNTP shows what looks like an impressive result: classrooms using acceleration completed 27% more grade-level lessons (with even bigger gains in predominantly non-white and Title I schools). But do these data actually mean anything? A 27% increase at the classroom level could be achieved in many different ways. Take the two hypothetical classrooms below:
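As a concrete sketch in Python (the per-student lesson counts are invented, chosen only so the classroom totals match the roughly 29% gap discussed next):

```python
# Two hypothetical four-student classrooms. The per-student lesson counts are
# invented, chosen only so the classroom totals match the ~29% gap below.
remediated = [2, 2, 2, 1]    # every student completes at least one lesson
accelerated = [9, 0, 0, 0]   # one student does everything; three do nothing

pct_more = (sum(accelerated) / sum(remediated) - 1) * 100
print(f"Accelerated classroom completed {pct_more:.0f}% more lessons")   # ~29%

zero_count = sum(1 for n in accelerated if n == 0)
print(f"Accelerated students with zero grade-level lessons: {zero_count} of 4")
```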
In this case, the accelerated classroom completed about 29% more grade-level lessons. But is that classroom actually doing better if three of its four students have not completed any grade-level lessons? Without individual student data, it is impossible to know whether something like this is actually occurring.
Similarly, the study relies on school-level data to make its arguments about equity. The researchers look at “classrooms in schools with a majority students of color.” But that tells us nothing about the actual students: just because a school with a majority of students of color is performing well doesn’t mean that the individual students of color are performing well. Going back to the previous example:
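Extending the sketch above with hypothetical group labels (the labels, like the lesson counts, are invented for illustration; in this version, both classrooms would sit in a “majority students of color” school):

```python
# Same hypothetical classrooms as above, now with invented group labels.
# Three of four students in each classroom are students of color ("SOC").
remediated = [("SOC", 2), ("SOC", 2), ("SOC", 2), ("white", 1)]
accelerated = [("white", 9), ("SOC", 0), ("SOC", 0), ("SOC", 0)]

def lessons_for(classroom, group):
    return sum(n for g, n in classroom if g == group)

print("Students of color, remediated classroom:", lessons_for(remediated, "SOC"))    # 6
print("Students of color, accelerated classroom:", lessons_for(accelerated, "SOC"))  # 0

# School-level data would credit the accelerated classroom (9 lessons vs. 7)
# even though every student of color in it completed nothing.
```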
In this hypothetical, the students of color did much better in the remediated classroom, and nothing in the data the researchers provide rules this scenario out. Without individual-level data, we cannot know the true effect of remediation versus acceleration.
In addition to misusing classroom-level data to make inferences about individual students, the report also manipulates how it presents data, overstating the severity of differences. One of the authors’ claims is that schools ought to accelerate because students of color and students in Title I schools are more likely to be remediated than their wealthier, whiter peers: a clear case of inequity. But to illustrate this, the authors truncate the y-axis to make the differences seem far more dire than they really are. Notice that the y-axis in Figure 2 runs from 6% to 20%, as opposed to a more normal 0% to 100%. So what is only a 6-percentage-point difference between these kinds of schools in their use of remediation looks like a giant chasm when the entire y-axis spans 14 percentage points. Below, I have corrected the y-axis; while the difference is certainly still there, it looks much less dire when put into its proper context.
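Here is a minimal matplotlib sketch of that comparison, assuming two hypothetical remediation rates with the 6-percentage-point gap described above; the bars are identical in both panels, and only the y-axis limits change:

```python
import matplotlib.pyplot as plt

# Hypothetical remediation rates; illustrative values only, chosen to show
# the ~6-percentage-point gap described in the text.
groups = ["Wealthier, whiter\nschools", "Title I / majority\nstudents of color"]
rates = [8, 14]  # percent

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
panels = [((6, 20), "Truncated y-axis (as in Figure 2)"),
          ((0, 100), "Full 0-100% y-axis")]
for ax, ((lo, hi), title) in zip(axes, panels):
    ax.bar(groups, rates)
    ax.set_ylim(lo, hi)
    ax.set_ylabel("Classrooms using remediation (%)")
    ax.set_title(title)

fig.tight_layout()
plt.show()
```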
TNTP’s report makes lofty claims based on shaky evidence, and this report is having an outsized impact on students and teachers across the country this year. Rather than accelerate student learning, policymakers would be wise to hit the brakes on this fad.