The Summer Slide: Fact or Fiction?

One of the things that “everyone knows” from educational research is that while advantaged students gain in achievement over the summer, disadvantaged students decline. However, the rate of gain during school time, from fall to spring, is about the same for advantaged and disadvantaged students. This pattern has led researchers such as Alexander, Entwisle, and Olson (2007) and Allington & McGill-Franzen (2018) to conclude that differential gain/loss over the summer completely explains the gap in achievement between advantaged and disadvantaged students. Middle class students are reading, going to the zoo, and going to the library, while disadvantaged students are less likely to do these school-like things.

The “summer slide,” as it’s called, has come up a lot lately, because it is being used to predict the amount of loss disadvantaged students will experience as a result of Covid-19 school closures. If disadvantaged students lose so much ground over 2 ½ months of summer vacation, imagine how much they will lose after five or seven or nine months (to January, 2021)!  Remarkably precise-looking estimates of how far behind students will be when school finally re-opens for all are circulating widely. These estimates are based on estimates of the losses due to “summer slide,” so they are naturally called “Covid slide.”

I am certain that most students, and especially disadvantaged students, are in fact losing substantial ground due to the long school closures. The months of school not attended, coupled with the apparent ineffectiveness of remote teaching for most students, do not bode well for a whole generation of children. But this is abnormal. Ordinary summer vacation is normal. Does ordinary summer vacation lead to enough “summer slide” to explain substantial gaps in achievement between advantaged and disadvantaged students?

I’m pretty sure it does not. In fact, let me put this in caps:

SUMMER SLIDE IS PROBABLY A MYTH.

Recent studies of summer slide, mostly using NWEA MAP data from millions of children, are finding results that call its existence into question (Kuhfeld, 2019; Quinn et al., 2016), or find that it happens but that summer losses are similar for advantaged and disadvantaged students (Atteberry & McEachin, 2020). However, hiding in plain sight is the most conclusive evidence of all: NWEA’s table of norms for the MAP, a benchmark assessment widely used to monitor student achievement. The MAP is usually given three times a year. In the chart below, calculated from raw data on the NWEA website (teach.mapnwea.org), I compute the gains from fall to winter, winter to spring, and spring to fall (the last being “summer”). The figures are for reading, grades 1 to 5.

Grade   Fall to winter   Winter to spring   Spring to fall (summer)
1       9.92             5.55                0.95
2       8.85             4.37                1.05
3       7.28             3.22               -0.47
4       5.83             2.33               -0.35
5       4.64             1.86               -0.81
Mean    7.30             3.47                0.07
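To make the arithmetic concrete, here is a minimal Python sketch that recomputes the term averages from the table above (values transcribed by hand from the chart; the variable names are mine, not NWEA’s):

```python
# Per-grade MAP reading gains: (fall-to-winter, winter-to-spring, spring-to-fall)
gains = {
    1: (9.92, 5.55, 0.95),
    2: (8.85, 4.37, 1.05),
    3: (7.28, 3.22, -0.47),
    4: (5.83, 2.33, -0.35),
    5: (4.64, 1.86, -0.81),
}

n = len(gains)
fall_winter = sum(v[0] for v in gains.values()) / n
winter_spring = sum(v[1] for v in gains.values()) / n
summer = sum(v[2] for v in gains.values()) / n

print(round(fall_winter, 2), round(winter_spring, 2), round(summer, 2))
# The fall-to-winter average is more than double the winter-to-spring average.
print(round(fall_winter / winter_spring, 2))
```

Averaged across grades, the fall-to-winter gain (7.30) is roughly 2.1 times the winter-to-spring gain (3.47), while the summer change is essentially zero (0.07).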

NWEA’s chart is probably accurate. But it suggests something that cannot possibly be true. No, it’s not that students gain less in reading each year; that is true. It is that students appear to gain more than twice as much from fall to winter as they do from winter to spring. Why would students gain so much more in the first semester than in the second? One might argue that they are fresher in the fall, or something like that. But double the gain, in every elementary grade? That cannot be right.

Here is my explanation: the fall score is depressed.

The only logical explanation for extraordinary fall-to-winter gain is that many students score poorly on the September test, but rapidly recover.

I think most elementary teachers already know this. Their experience is that students score very low when they return from summer vacation, but this is not their true reading level. For three decades, we have noticed this in our Success for All program, and we routinely recommend that teachers place students in our reading sequence not where they score in September, but no lower than they scored last spring. (If students score higher in September than they did on a spring test, we do use the September score).

What is happening, I believe, is that students do not forget how to read; they just momentarily forget how to take tests. Or perhaps teachers do not invest time in preparing students to take a pretest, which has few if any consequences, but they do prepare them for winter and spring tests. I do not know for sure how it happens, but I do know for sure, from experience, that fall scores tend to understate students’ capabilities, often by quite a lot. And if the fall score is artificially or temporarily low, then the whole summer loss story is wrong.

Another indicator that fall scores are, shall we say, a bit squirrely, is the finding by both Kuhfeld (2019) and Atteberry & McEachin (2020) that there is a consistent negative correlation between school-year gain and summer loss. That is, the students who gain the most from fall to spring lose the most from spring to fall. How can that be? What must be going on is just that students who get fall scores far below their actual ability quickly recover, and then make what appear to be fabulous gains from fall to spring. But that same temporarily low fall score gives them a summer loss. So of course there is a negative correlation, but it does not have any practical meaning.
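This statistical artifact is easy to demonstrate with a toy simulation (this is my own illustrative sketch, not the method used in the cited studies, and all the parameter values are invented). Suppose students lose nothing real over the summer, but each student’s observed fall score includes a random temporary dip. Because the same fall score sits at the end of one "summer" interval and the start of the next school-year interval, the dip simultaneously creates an apparent summer loss and inflates the apparent school-year gain, so the two are negatively correlated by construction:

```python
import random
import statistics

random.seed(42)  # reproducible

def pearson(xs, ys):
    """Pearson correlation, computed by hand to keep the sketch dependency-free."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = statistics.fmean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

summer_changes, year_gains = [], []
for _ in range(10_000):
    true_spring = 200 + random.gauss(0, 10)            # student's true level last spring
    dip = abs(random.gauss(0, 4))                      # temporary fall-test dip (invented scale)
    fall = true_spring - dip + random.gauss(0, 2)      # observed fall: NO real summer loss
    spring_prev = true_spring + random.gauss(0, 2)     # observed score last spring
    spring_next = true_spring + 7 + random.gauss(0, 2) # real school-year growth of ~7 points
    summer_changes.append(fall - spring_prev)          # apparent "summer slide"
    year_gains.append(spring_next - fall)              # apparent school-year gain

r = pearson(summer_changes, year_gains)
mean_summer = statistics.fmean(summer_changes)
print(f"mean apparent summer change: {mean_summer:.2f}")  # negative, despite no real loss
print(f"correlation of summer change with year gain: {r:.2f}")  # strongly negative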

So far, I’ve only been talking about whether there is a summer slide at all, for all students taken together. It may still be true, as found in the Heyns (1978) and Alexander, Entwisle, and Olson (2007) studies, that disadvantaged students are not gaining as much as advantaged students do over the summer. Recent studies by Atteberry & McEachin (2020) and Kuhfeld (2019) do not find much differential summer gain/loss according to social class. On the other hand, it could be that disadvantaged students are more susceptible to forgetting how to take tests. Or perhaps disadvantaged students are more likely to attend schools that put little emphasis on doing well on a September test that has no consequences for the students or the school. But it is unlikely they are truly forgetting how to read. The key point is this: if fall tests are unreliable indicators of students’ actual skills, just temporary dips that do not show what students can do, then it is not sensible to take them seriously in deciding whether “summer slide” exists.

By the way, before you conclude that while summer slide may not happen in reading, it must exist in math or other subjects, prepare to be disappointed again. The NWEA MAP scores for math, science, and language usage follow very similar patterns to those in reading.

Perhaps I’m wrong, but if I am, then we’d better start finding out about the amazing fall-to-winter surge, and see how we can make winter-to-spring gains that large! But if you don’t have a powerful substantive explanation for the fall-to-winter surge, you’re going to have to accept that summer slide isn’t a major factor in student achievement.

References

Alexander, K. L., Entwisle, D. R., & Olson, L. S. (2007). Lasting consequences of the summer learning gap. American Sociological Review, 72(2), 167-180. https://doi.org/10.1177/000312240707200202

Allington, R. L., & McGill-Franzen, A. (Eds.). (2018). Summer reading: Closing the rich/poor reading achievement gap. New York, NY: Teachers College Press.

Atteberry, A., & McEachin, A. (2020). School’s out: The role of summers in understanding achievement disparities. American Educational Research Journal. https://doi.org/10.3102/0002831220937285

Heyns, B. (1978). Summer learning and the effect of schooling. New York, NY: Academic Press.

Kuhfeld, M. (2019). Surprising new evidence on summer learning loss. Phi Delta Kappan, 101(11), 25-29.

Quinn, D., Cook, N., McIntyre, J., & Gomez, C. J. (2016). Seasonal dynamics of academic achievement inequality by socioeconomic status and race/ethnicity: Updating and extending past research with new national data. Educational Researcher, 45(8), 443-453.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

Spend Smart to Achieve Equity in Education

Politics, it is said, is all about who gets what. In practice, “what” usually means money. Good people of all parties generally want to use government funding to improve people’s lives. But is giving people more money the same as improving their lives?

In education, money is important. Improving education usually costs money. You can’t make chicken soup out of chicken feathers, as we say in Baltimore. Further, inequalities in education funding between wealthy and disadvantaged districts within the same regions remain substantial. The children who need the most get the least, because education funding is usually tied to property taxes. Obviously, areas high in wealth can raise a lot more money with the same tax rate than can neighboring districts low in wealth. This is understood by all Americans as just the way of the world, but it is in fact anything but the way of the world. In fact, our system is so unfair and so unlike what happens in our peer nations that when I talk about it abroad, I have to explain it three or four times before my foreign friends can understand how any advanced country could do such a thing. In all other countries I know about, all schools receive equal funding, most often supplemented to help impoverished schools catch up. This has been true for decades, under right wing and left wing governments throughout the developed world.

I believe that equalizing school funding, and supplementing it for disadvantaged schools, is a moral responsibility, well worth fighting for. But will it solve the inequities in outcomes we see among our schools serving wealthy and disadvantaged neighborhoods?

This is a more complex question. But the simple answer is that for improving outcomes, there are better and worse ways to use money. We know many proven strategies for turning money into achievement, and with modest continuing investment in the resources to help schools adopt and implement proven programs, and in national R&D to create and evaluate new ones, we could make substantial progress in reducing gaps and gaining on our international competitors. But if we expect that simply adding money to the current system will be sufficient, we are likely to be disappointed.

Spending is always a contentious issue. Spending smart should not be. Whatever we have decided to spend on education, we can all agree that every penny should count and the best way to ensure that money makes a difference is to use it on proven approaches.

Equalizing or even supplementing funding for high-poverty schools is the right thing to do, but we cannot just spend. We have to spend smart.

Money and Evidence

Many years ago, I spent a few days testifying in a funding equity case in Alabama. At the end of my testimony, the main lawyer for the plaintiffs drove me to the airport. “I think we’re going to win this case,” he said, “But will it help my clients?”

The lawyer’s question has haunted me ever since. In Alabama, then and now, there are enormous inequities in education funding between rich and poor districts, due to differences in property tax receipts. There are corresponding differences in student outcomes. The same is true in most states. To a greater or lesser degree, most states and the federal government provide some funding to reduce inequalities, but in most places it is still the case that poor districts have to tax themselves at a higher rate to produce education funding that is significantly lower than that of their wealthier neighbors.

Funding inequities are worse than wrong; they are repugnant. When I travel in other countries and try to describe our system, it usually takes me a while to get people outside the U.S. to even understand what I am saying. “So schools in poor areas get less than those in wealthy ones? Surely that cannot be true.” In fact, it is true in the U.S., but in all of our peer countries, national or at least regional funding policies ensure basic equality in school funding, and in most cases I know about they then add additional funding on top of equalized funding for schools serving many children in poverty. For example, England has long had equal funding, and the Conservative government added “Pupil Premium” funding, in which each disadvantaged child brings additional funds to his or her school. Pupil Premium is sort of like Title I in the U.S., if you can imagine Title I adding resources on top of equal funding, which it does in only a few U.S. states.

So let’s accept the idea that funding inequity is a BAD THING. Now consider this: Would eliminating funding inequities eliminate achievement gaps in U.S. schools? This gets back to the lawyer’s question. If we somehow won a national “case” that required equalizing school funding, would the “clients” benefit?

More money for disadvantaged schools would certainly be welcome, and it would certainly create the possibility of major advances. But the impact of significant additional funding depends on what schools do with the added dollars. Of course you’d have to increase teachers’ salaries and reduce class sizes to draw highly qualified teachers into disadvantaged schools. But you’d also have to spend a significant portion of new funds to help schools implement proven programs with fidelity and verve.

Again, England offers an interesting model. Twenty years ago, achievement in England was very unequal, despite equal funding. Children of immigrants from Pakistan and Bangladesh, Africans, Afro-Caribbeans, and other minorities performed well below White British children. The Labour government implemented a massive effort to change this, starting with the London Challenge and continuing with a Manchester Challenge and a Black Country Challenge in the post-industrial Midlands. Each “challenge” provided substantial professional development to school staffs, as well as organizing achievement data to show school leaders that other schools with exactly the same demographic challenges were achieving far better results.

Today, children of Pakistani and Bangladeshi immigrants are scoring at the English mean. Children of African and Afro-Caribbean immigrants are just below the English mean. Policy makers in England are now turning their attention to White working-class boys. But the persistent and substantial gaps we see as so resistant to change in the U.S. are essentially gone in England.

Today, we are getting even smarter about how to turn dollars into enhanced achievement, due to investments by the Institute of Education Sciences (IES) and the Investing in Innovation (i3) program in the U.S., and the Education Endowment Foundation (EEF) in England. In both countries, however, we lack the funding to put into place what we know how to do on a large enough scale to matter, but this need not always be the case.

Funding matters. No one can make chicken soup out of chicken feathers, as we say in Baltimore. But funding in itself will not solve our achievement gap. Funding needs to be spent on specific, high-impact investments to make a big difference.

Restoring Opportunity


I just read a fascinating book, Restoring Opportunity, by Greg Duncan and Richard Murnane. It describes the now-familiar problems of growing inequality in America between the educational haves and have-nots, but then goes on to describe some outstanding preschool, elementary, and high school programs that may offer models of how to help disadvantaged children close the gap. Refreshingly, Duncan and Murnane do not stop with heartwarming tales of successful schools, but also present data from randomized experiments showing the impacts on children, especially for the small high school initiative in New York City and the University of Chicago Charter Network.

From these and other examples, Duncan and Murnane derive some factors common to outstanding schools: Accountability for outcomes within schools, extensive professional development and support for teachers, and experimentation and evaluation to identify effective models.

Readers of this blog won’t be surprised to learn that I support each of these recommendations. So let’s start there. How do we get more than 100,000 schools to become markedly more effective? Or to make the problem a little easier, how about just the 55,000 Title I schools?

Duncan and Murnane are forthright about many of the solutions that aren’t likely to make a widescale difference. They note that while there are a few promising charter management organizations, charters overall are not generally being found to improve learning outcomes, and some of the most celebrated charters achieve good results by burning out young, talented teachers, a strategy that is hard to sustain and harder to scale. Popular solutions such as ratcheting up accountability, providing vouchers, and changing governance have also been disappointing in evaluations.

There is a strategy that puts all of Duncan and Murnane’s principles into practice on a very large scale: comprehensive school reform. They note the strong evaluation results and widespread impact of two CSR models, one of which is our Success for All program for elementary and middle schools. CSR models exemplify the principles Duncan and Murnane arrive at, but they can do so at a substantial scale. That is, they invariably provide a great deal of professional development and support to teachers, accountability for outcomes within schools, links to parents, provisions for struggling students, and so on. Unlike charters, CSR models do not require radical changes in governance, which explains why they have been able to go to substantial scale far more rapidly.

The only problem with comprehensive school reform models is that at present, there are too few to choose from. Yet with support from government and foundations, this could rapidly change.

Imagine a situation in which Title I school staffs could choose among proven, whole-school reform models the one they thought best for their needs. The schools themselves and the leaders of their CSR reform networks would be responsible for the progress of children on state standards, but otherwise these schools would be free to implement their proven models with fidelity, without having to juggle district and CSR requirements.

Such a strategy could accomplish the goals Duncan and Murnane outline in thousands of schools, enough to make real inroads in the problems of inequity they so articulately identify.