Whenever I speak to skeptical audiences about the enormous potential of evidence-based reform in education, three of the top complaints I always hear are as follows.
- In high-quality, randomized experiments, nothing works.
- Since educational outcomes depend so much on context, even programs that do work somewhere cannot be assumed to work elsewhere.
- Even if a given approach is found to be effective in many contexts, it is unlikely to be scalable to serve large numbers of students and schools.
In light of these criticisms, I was delighted to see a recent blog by Jonathan Sharples (a former colleague at the University of York) at the Education Endowment Foundation (EEF), the main funder of randomized evaluations of educational programs in England. The blog summarizes results from six experiments in England that used what they call teaching assistants (we call them paraprofessionals or aides) to tutor struggling students one-to-one or in small groups, in reading or math, at various grade levels.
Sharples included a table summarizing the results, which I have adapted here:
What is interesting about this table is that although every study was a third-party randomized experiment, the effect sizes all fall within a range from moderately positive to very positive (+0.12 to +0.51).
Another interesting thing about the table is that it resembles findings in U.S. studies of tutoring by paraprofessionals. Here is a chart of such studies:
The contents of Tables 1 and 2 are heartening, showing relatively consistent positive effects in rigorous studies for replicable, pragmatic interventions aimed at struggling students, a population of great substantive importance. Because paraprofessionals are relatively inexpensive and often poorly utilized in their current roles, providing them with good training, materials, and software to work with individual students and small groups in dire need of help in reading and math just makes good sense.
However, think back to the criticisms so often thrown at evidence-based reform in general. The findings from these tutoring and small-group teaching studies devastate those criticisms:
- Nothing works. Really? Not everything works, and it would be nice to have a larger set of positive examples. But tutoring by paraprofessionals (and also by teachers and well-supervised and trained volunteers) definitely works, over and over. There are numerous other programs also proven to work in rigorous studies.
- Nothing replicates. Really? Context matters, but here we have relatively consistent findings across the U.S. and England, two very different systems. The effects vary for one-to-one and small-group tutoring, reading and math, and other factors, and we can learn from this variation. But it is clear that across very different contexts, positive effects do replicate.
- Nothing scales. Really? Various proven forms of tutoring – by teachers, paraprofessionals, and volunteers – are working right now in schools across the U.S., U.K., and many other countries. Reading Recovery alone, a one-to-one model that uses certified teachers as tutors, works with thousands of teachers worldwide. With the slightest encouragement, proven tutoring models could be expanded to serve many more schools and students, at modest cost.
Proven tutoring models of all types should be a standard offering for every school. More research is always needed to find more effective and cost-effective strategies. But there is no reason whatsoever not to use what we have now. And I hope this example will help critics of evidence-based reform move on to better arguments.