Large-Scale Tutoring in England: Countering Effects of School Closures

The government of England recently announced an investment of £1 billion to provide tutoring and other services to help the many students whose educational progress has been interrupted by Covid-19 school closures. This is the equivalent of $1.24 billion, and adjusting for the difference in populations, it is comparable to a U.S. investment of $7.44 billion, even larger, proportionally, than the similar Dutch investment recently announced.
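The population adjustment behind that figure can be checked with rough 2020 population estimates (about 55 million for England and 330 million for the U.S.; these round numbers are assumptions for illustration, not figures from the original announcement):

```python
# Back-of-the-envelope check of the population-adjusted comparison.
# Population figures are rough 2020 estimates (assumptions for illustration).
GBP_TO_USD = 1.24          # exchange rate implied by the text
ENGLAND_POP = 55_000_000   # approximate population of England
US_POP = 330_000_000       # approximate population of the U.S.

uk_investment_gbp = 1_000_000_000
uk_investment_usd = uk_investment_gbp * GBP_TO_USD          # about $1.24 billion
us_equivalent = uk_investment_usd * (US_POP / ENGLAND_POP)  # about $7.44 billion

print(f"U.S.-equivalent investment: ${us_equivalent / 1e9:.2f} billion")
```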

Both England and the Netherlands have Covid-19 disease and death rates like those of the U.S., and all three countries are unsure when schools might open in the fall, and whether they will open fully or partially when they do. All three countries have made extensive use of online learning to help students keep up with core content. However, participation rates in online learning have been low, especially for disadvantaged students, who often lack access to equipment and assistance at home. For this reason, education leaders in all of these countries are very concerned that academic achievement will be greatly harmed, and that gaps between middle-class and disadvantaged students will grow. The difference is that Dutch and English schools are taking resolute action to remedy this problem, primarily by providing one-to-one and one-to-small-group tutoring nationwide. The U.S. has not yet done this, except for an initiative in Tennessee.

The English initiative has two distinct parts. £650 million will go directly to schools, with an expectation that they will spend most of it on one-to-four tutoring for the students who most need it. The schools will mostly use the money to hire and train tutors, mainly student teachers and teaching assistants.

The remaining £350 million will go to fund an initiative led by the Education Endowment Foundation. In this National Tutoring Programme (NTP), 75% of the cost of tutoring struggling students will be subsidized. The tutoring may be either one-to-one or one-to-small-group, and will be provided by organizations with proven programs and proven capacity to deliver tutoring at scale in primary and secondary schools. EEF is also carrying out evaluations of promising tutoring programs in various parts of England.

What Do the English and Dutch Tutoring Initiatives Mean for the U.S.?

The English and Dutch tutoring initiatives serve as an example of what wealthy nations can do to combat the learning losses of their students in the Covid-19 emergency. By putting these programs in place now, these countries have allowed time to organize their ambitious plans for fall implementation, and to ensure that the money will be wisely spent. In particular, the English National Tutoring Programme has a strong emphasis on the use of tutoring programs with evidence of effectiveness. In fact, the £350 million NTP could turn out to be the largest education investment ever made anywhere that is explicitly designed to put proven programs into widespread use, and if all goes well, this aspect of the NTP could have important implications for evidence-based reform more broadly.

The U.S. is only now beginning to seriously consider tutoring as a means of accelerating the learning of students whose progress has been harmed by school closures. There have been proposals to invest in tutoring in both houses of Congress, but these are not expected to pass. Unless our leaders soon embrace the idea of intensive services to help struggling students, schools will partially or fully open in the fall into a very serious crisis. The economy will be in recession, and schools will be struggling just to keep qualified teachers in every classroom. The extent of students' learning losses will become apparent. Yet there will not be well-worked-out or well-funded means of enabling schools to remedy the severe losses sure to exist, especially for disadvantaged students. These losses could have long-term negative effects on students' progress, as poor basic skills reduce students' abilities to learn advanced content and undermine their confidence and motivation. Tutoring and other solutions would still be effective if applied later next school year, but by then the problems will be even more difficult to solve.

Perhaps national or state governments or large private foundations could at least begin to pilot and evaluate tutoring programs capable of going to scale. This would be immediately beneficial to the students involved and would facilitate effective implementation and scale-up when government makes the needed resources available. But action is needed now. Gaps in achievement between middle-class and disadvantaged students were already the most important problem in American education, and the problem has certainly worsened. This is the time to see that all students receive whatever it takes to get back on a track to success.

 This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.


How Biased Measures Lead to False Conclusions


One hopeful development in evidence-based reform in education is the improvement in the quality of evaluations of educational programs. Because of policies and funding provided by the Institute of Education Sciences (IES) and Investing in Innovation (i3) in the U.S. and by the Education Endowment Foundation (EEF) in the U.K., most evaluations of educational programs today use far better procedures than was true as recently as five years ago. Experiments are likely to be large, to use random assignment or careful matching, and to be carried out by third-party evaluators, all of which give (or should give) educators and policy makers greater confidence that evaluations are unbiased and that their findings are meaningful.

Despite these positive developments, there remain serious problems in some evaluations. One of these relates to measures that give the experimental group an unfair advantage.

There are several ways in which measures can unfairly favor the experimental group. The most common is where measures are made by the creator of the program and are precisely aligned with the curriculum taught in the experimental group but not the control group. For example, a developer might reason that a new curriculum represents what students should be taught in, say, science or math, so it’s all right to use a measure aligned with the experimental program. However, use of such measures gives a huge advantage to the experimental group. In an article published in the Journal of Research on Educational Effectiveness, Nancy Madden and I looked at effect sizes for such over-aligned measures among studies accepted by the What Works Clearinghouse (WWC). In reading, we found an average effect size of +0.51 for over-aligned measures, compared to an average of +0.06 for measures that were fair to the content taught in experimental and control groups. In math, the difference was +0.45 for over-aligned measures, -0.03 for fair ones. These are huge differences.
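For readers who want the mechanics behind these numbers: an effect size here is a standardized mean difference (Cohen's d), the difference between the experimental and control group means divided by their pooled standard deviation. A minimal sketch, with invented scores purely for illustration:

```python
import statistics
from math import sqrt

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented test scores for illustration only.
experimental = [78, 82, 85, 88, 91]
control = [74, 77, 80, 83, 86]
print(round(cohens_d(experimental, control), 2))
```

By this yardstick, the gap between +0.51 on over-aligned measures and +0.06 on fair ones is enormous by the standards of education research.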

A special case of over-aligned measures takes place when content is introduced to the experimental group, but not the control group, earlier than usual in students' progression through school. For example, if students are taught first-grade math skills in kindergarten, they will of course do better on a first-grade test (taken in kindergarten) than will students not taught these skills in kindergarten. But will the students still be better off by the end of first grade, when all have been taught first-grade skills? It's unlikely.

One more special case of over-alignment takes place in relatively brief studies when students are pre-tested, taught a given topic, and then post-tested, say, eight weeks later. The control group, however, might have been taught that topic earlier or later than that eight-week period, or might have spent much less than eight weeks on it. In a recent review of elementary science programs, we found many examples of this, including situations in which experimental groups were taught a topic such as electricity during an experiment, while the control group was not taught about electricity at all during that period. Not surprisingly, such studies produce very large but meaningless effect sizes.

As evidence becomes more important in educational policy and practice, we researchers need to get our own house in order. Insisting on the use of measures that are not biased in favor of experimental groups is a major necessity in building a body of evidence that educators can rely on.

Making Effective Use of Paraprofessionals


The Education Endowment Foundation (EEF) in England has just released its first six reports of studies evaluating various interventions. In each case, rigorous, randomized evaluations were done by third parties. As is typical in such studies, most found that treatments did not have significant positive outcomes, but two of them did. Both evaluated different uses of paraprofessionals. In England, as in the U.S., paraprofessionals usually assist teachers in classrooms, helping individual students with problems, helping the teacher with classroom management, and “other duties as assigned.” As in the U.S., teachers, parents, and politicians like paraprofessionals, because they are usually nice, helpful people from the community who free teachers from mundane tasks so the teachers can do what they do best. Unfortunately, research in both countries finds that paraprofessionals make no difference in student learning. The famous Tennessee Class Size study, for example, compared larger and smaller classes, but also had a large-class-with-paraprofessional condition, in which student achievement was precisely the same as it was in the large classes without paraprofessionals.

In one of the recent EEF-funded evaluations, teaching assistants taught struggling secondary readers one-to-one for 20 minutes a day for 10 weeks. The study involved 308 middle schoolers in 19 schools, randomly assigned to tutoring or ordinary teaching. The tutored students gained significantly more in reading than did controls. Similarly, a study in which 324 elementary students in 54 schools were randomly assigned to one-to-one tutoring in math or to regular teaching found that the tutored students gained significantly more.

The EEF reports add to a considerable body of research in the U.S. showing that well-trained paraprofessionals can obtain substantial gains with struggling readers in one-to-one and small-group tutoring.

What these findings tell us is crystal clear. Already in our schools we have a powerful but underutilized resource: paraprofessionals who, with training and assistance, could be making a substantial difference in the lives of struggling students. This resource is costing us a lot. Most of the $15 billion in Title I funds we spend every year goes to paraprofessionals, as does a great deal of state and local funding. In my experience, paraprofessionals are caring and capable people who want to make a difference. Why not use the evidence to help them do just that?