The Summer Slide: Fact or Fiction?

One of the things that “everyone knows” from educational research is that while advantaged students gain in achievement over the summer, disadvantaged students decline. However, the rate of gain during school time, from fall to spring, is about the same for advantaged and disadvantaged students. This pattern has led researchers such as Alexander, Entwisle, and Olson (2007) and Allington & McGill-Franzen (2018) to conclude that differential gain/loss over the summer completely explains the gap in achievement between advantaged and disadvantaged students. Middle-class students are reading, going to the zoo, and going to the library, while disadvantaged students are less likely to do these school-like things.

The “summer slide,” as it’s called, has come up a lot lately, because it is being used to predict the amount of loss disadvantaged students will experience as a result of Covid-19 school closures. If disadvantaged students lose so much ground over 2 ½ months of summer vacation, imagine how much they will lose after five or seven or nine months (to January 2021)! Remarkably precise-looking estimates of how far behind students will be when school finally re-opens for all are circulating widely. These projections are based on estimates of the losses due to “summer slide,” so they are naturally called “Covid slide.”

I am certain that most students, and especially disadvantaged students, are in fact losing substantial ground due to the long school closures. The months of school not attended, coupled with the apparent ineffectiveness of remote teaching for most students, do not bode well for a whole generation of children. But this is abnormal. Ordinary summer vacation is normal. Does ordinary summer vacation lead to enough “summer slide” to explain substantial gaps in achievement between advantaged and disadvantaged students?

 I’m pretty sure it does not. In fact, let me put this in caps:

SUMMER SLIDE IS PROBABLY A MYTH.

Recent studies of summer slide, mostly using NWEA MAP data from millions of children, are finding results that call summer slide into question (Kuhfeld, 2019; Quinn et al., 2016) or agree that it happens but that summer losses are similar for advantaged and disadvantaged students (Atteberry & McEachin, 2020). However, hiding in plain sight is the most conclusive evidence of all: NWEA’s table of norms for the MAP, a benchmark assessment widely used to monitor student achievement. The MAP is usually given three times a year. In the chart below, calculated from raw data on the NWEA website (teach.mapnwea.org), I compute the gains from fall to winter, winter to spring, and spring to fall (the last being “summer”). These are for grades 1 to 5 reading.

| Grade | Fall to winter | Winter to spring | Spring to fall (summer) |
|-------|----------------|------------------|-------------------------|
| 1     | 9.92           | 5.55             | 0.95                    |
| 2     | 8.85           | 4.37             | 1.05                    |
| 3     | 7.28           | 3.22             | -0.47                   |
| 4     | 5.83           | 2.33             | -0.35                   |
| 5     | 4.64           | 1.86             | -0.81                   |
| Mean  | 7.30           | 3.47             | 0.07                    |
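
If you want to reproduce this kind of chart yourself, the calculation is nothing more than subtracting consecutive norm means. Below is a minimal sketch; the RIT values in it are hypothetical placeholders, not the actual NWEA norms (those are at teach.mapnwea.org).

```python
# Minimal sketch of how seasonal gains can be computed from benchmark norm means.
# The RIT values below are hypothetical placeholders, NOT the actual NWEA norms.

norms = {
    # grade: (fall mean, winter mean, spring mean) -- illustrative values only
    1: (160.0, 170.0, 175.5),
    2: (176.5, 185.0, 189.5),
    3: (189.0, 196.5, 199.5),
}

def seasonal_gains(norms):
    """Fall-to-winter, winter-to-spring, and spring-to-fall (summer) gains by grade.

    The "summer" gain for a grade is the next grade's fall mean minus this
    grade's spring mean, so the highest grade in the table has no summer value.
    """
    gains = {}
    for grade in sorted(norms):
        fall, winter, spring = norms[grade]
        summer = norms[grade + 1][0] - spring if grade + 1 in norms else None
        gains[grade] = (winter - fall, spring - winter, summer)
    return gains

for grade, (fw, ws, summer) in seasonal_gains(norms).items():
    summer_text = f"{summer:+.2f}" if summer is not None else "n/a"
    print(f"Grade {grade}: fall-to-winter {fw:+.2f}, "
          f"winter-to-spring {ws:+.2f}, summer {summer_text}")
```

Nothing more elaborate is going on in the chart above; the striking pattern comes straight from the published norms.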

NWEA’s chart is probably accurate. But it suggests something that cannot possibly be true. No, it’s not that students gain less in reading each year. That’s true. It is that students gain more than twice as much from fall to winter as they do from winter to spring. That cannot be true. Why would students gain so much more in the first semester than the second? One might argue that they are fresher in the fall, or something like that. But double the gain, in every elementary grade? That cannot be right.

 Here is my explanation. The fall score is depressed.

The only logical explanation for the extraordinary fall-to-winter gain is that many students score poorly on the September test, but rapidly recover.

I think most elementary teachers already know this. Their experience is that students score very low when they return from summer vacation, but this is not their true reading level. For three decades, we have noticed this in our Success for All program, and we routinely recommend that teachers place students in our reading sequence not where they score in September, but no lower than they scored last spring. (If students score higher in September than they did on the spring test, we do use the September score.)

What is happening, I believe, is that students do not forget how to read, they just momentarily forget how to take tests. Or perhaps teachers do not invest time in preparing students to take a pretest, which has few if any consequences, but they do prepare them for winter and spring tests. I do not know for sure how it happens, but I do know for sure, from experience, that fall scores tend to understate students’ capabilities, often by quite a lot. And if the fall score is artificially or temporarily low, then the whole summer loss story is wrong.

Another indicator that fall scores are, shall we say, a bit squirrelly is the finding by both Kuhfeld (2019) and Atteberry & McEachin (2020) that there is a consistent negative correlation between school-year gain and summer loss. That is, the students who gain the most from fall to spring lose the most from spring to fall. How can that be? What must be going on is that students who get fall scores far below their actual ability quickly recover, and then make what appear to be fabulous gains from fall to spring. But that same temporarily low fall score also gives them an apparent summer loss. So of course there is a negative correlation, but it does not have any practical meaning.
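
To see why a temporarily depressed fall score is enough to produce this negative correlation, here is a small simulation. All of the numbers in it (the amount of true growth, the size of the fall dip) are assumptions chosen only for illustration, but the logic is the point: even when no student truly loses anything over the summer, the correlation between school-year gain and summer change comes out strongly negative.

```python
# Illustrative simulation (assumed parameters, not real data): a temporary dip in
# fall scores mechanically creates a negative correlation between school-year gain
# and apparent summer loss, even though no learning is truly lost over the summer.
# Requires Python 3.10+ for statistics.correlation.

import random
import statistics

random.seed(1)

def simulate(n_students=10_000, yearly_growth=12.0, dip_mean=4.0, dip_sd=3.0):
    school_year_gains = []   # observed fall-to-spring gain
    summer_changes = []      # observed spring-to-fall change (negative = "loss")
    for _ in range(n_students):
        spring_year1 = random.gauss(200.0, 10.0)       # true level at end of year 1
        true_fall = spring_year1                       # nothing truly lost over summer
        true_spring_year2 = true_fall + yearly_growth  # steady growth during year 2

        dip = max(0.0, random.gauss(dip_mean, dip_sd))
        observed_fall = true_fall - dip                # temporarily depressed fall score

        school_year_gains.append(true_spring_year2 - observed_fall)
        summer_changes.append(observed_fall - spring_year1)
    return statistics.correlation(school_year_gains, summer_changes)

print(f"Correlation of school-year gain with summer change: {simulate():.2f}")
# Strongly negative: students with the biggest fall dip show both the biggest
# apparent school-year gain and the biggest apparent summer loss.
```

The “loss” and the “gain” are two sides of the same measurement artifact, which is exactly the point.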

So far, I’ve only been talking about whether there is a summer slide at all, for all students taken together. It may still be true, as found in the Heyns (1978) and Alexander, Entwisle, and Olson (2007) studies, that disadvantaged students are not gaining as much as advantaged students do over the summer. Recent studies by Atteberry & McEachin (2020) and Kuhfeld (2019) do not find much differential summer gain/loss according to social class. On the other hand, it could be that disadvantaged students are more susceptible to forgetting how to take tests. Or perhaps disadvantaged students are more likely to attend schools that put little emphasis on doing well on a September test that has no consequences for the students or the school. But it is unlikely they are truly forgetting how to read. The key point is that if fall tests are unreliable indicators of students’ actual skills, if they reflect temporary dips that do not indicate what students can do, then taking them seriously in determining whether or not “summer slide” exists is not sensible.

By the way, if you are thinking that summer slide may not happen in reading but must surely exist in math or other subjects, prepare to be disappointed again. The NWEA MAP scores for math, science, and language usage follow very similar patterns to those in reading.

Perhaps I’m wrong, but if I am, then we’d better start finding out about the amazing fall-to-winter surge, and see how we can make winter-to-spring gains that large! But if you don’t have a powerful substantive explanation for the fall-to-winter surge, you’re going to have to accept that summer slide isn’t a major factor in student achievement.

References

Alexander, K. L., Entwisle, D. R., & Olson, L. S. (2007). Lasting consequences of the summer learning gap. American Sociological Review, 72(2), 167-180. https://doi.org/10.1177/000312240707200202

Allington, R. L., & McGill-Franzen, A. (Eds.). (2018). Summer reading: Closing the rich/poor reading achievement gap. New York, NY: Teachers College Press.

Atteberry, A., & McEachin, A. (2020). School’s out: The role of summers in understanding achievement disparities. American Educational Research Journal. https://doi.org/10.3102/0002831220937285

Heyns, B. (1978). Summer learning and the effects of schooling. New York, NY: Academic Press.

Kuhfeld, M. (2019). Surprising new evidence on summer learning loss. Phi Delta Kappan, 101(1), 25-29.

Quinn, D., Cook, N., McIntyre, J., & Gomez, C. J. (2016). Seasonal dynamics of academic achievement inequality by socioeconomic status and race/ethnicity: Updating and extending past research with new national data. Educational Researcher, 45(8), 443-453.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

 Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org
