Tutoring Could Change Everything

Starting in the 1990s, futurists and technology fans began to say, “The Internet changes everything.” And eventually, it did. The Internet has certainly changed education, although it is unclear whether these changes have improved educational effectiveness.

Unlike the Internet, tutoring has been around since hunters and gatherers taught their children to hunt and gather. Yet ancient as it is, making one-to-one or small group tutoring widely available in Title I schools could have profound impacts on the most nettlesome problems of education.

If the National Tutoring Corps proposal I’ve been discussing in recent blogs (here, here, and here) is widely implemented and successful, it could have both obvious and not-so-obvious impacts on many critical aspects of educational policy and practice. In this blog, I’ll discuss these revolutionary and far-reaching impacts.

Direct and Most Likely Impacts

Struggling Students

Most obviously, if the National Tutoring Corps is successful, it will be because it has had an important positive impact on the achievement of students who are struggling in reading and/or mathematics. With 100,000 tutors, we expect that as many as four million low-achieving students in Title I schools will benefit. That is about 10% of all U.S. students in grades 1-9, but perhaps 50% of the students in the lowest 20% of their grades.

Title I

            In a December 20 tweet, former Houston superintendent Terry Grier suggested: “Schools should utilize all or most of their Title I money to implement tutoring programs…to help K-2 students catch up on lost literacy skills.”

            I’d agree, except that I’d include later grades and math as well as reading if there is sufficient funding. The purpose of Title I is to accelerate the achievement of low-achieving, disadvantaged students. If schools were experienced with implementing proven tutoring programs, and knew them from their own experience to be effective and feasible, why would such programs not become the main focus of Title I funding, as Grier suggests?

Special Education

Students with specific learning disabilities and other “high-incidence” disabilities (about half of all students in special education) are likely to benefit from structured tutoring in reading or math. If we had proven, reliable, replicable tutoring models with which many schools had direct experience, then schools might be able to greatly reduce the need for special education for students whose only problem is difficulty in learning reading or mathematics. For students already in special education, special education teachers might adopt proven tutoring methods themselves, enabling students with specific learning disabilities to succeed in reading and math, and hopefully to exit special education.

Increasing the Effectiveness of Other Tutoring and Supportive Services

            Schools already have various tutoring programs, including volunteer programs. In schools involved in the National Tutoring Corps, we recommend that tutoring by paid, well-trained tutors go to the lowest achievers in each grade. If schools also have other tutoring resources, they should be concentrated on students who are below grade level, but not struggling as much as the lowest achievers. These additional tutors might use the proven effective programs provided by the National Tutoring Corps, offering a consistent and effective approach to all students who need tutoring. The same might apply to other supportive services offered by the school.

Less Obvious But Critical Impacts

A Model for Evidence-to-Practice

            The success of evidence-based tutoring could contribute to the growth of evidence-based reform more broadly. If the National Tutoring Corps is seen to be effective because of its use of already-proven instructional approaches, this same idea could be used in every part of education in which robust evidence exists. For example, education leaders might reason that if use of evidence-based tutoring approaches had a big effect on students struggling in reading and math, perhaps similar outcomes could be achieved in algebra, or creative writing, or science, or programs for English learners.

Increasing the Amount and Quality of Development and Research on Replicable Solutions to Key Problems in Education

            If the widespread application of proven tutoring models broadly improves student outcomes, then it seems likely that government, private foundations, and perhaps creators of educational materials and software might invest far more in development and research than they do now, to discover new, more effective educational programs.

Reductions in Achievement Gaps

If it were widely accepted that there were proven and practical means of significantly improving the achievement of low achievers, then there would be no excuse for allowing achievement gaps to continue. Any student performing below the mean could be given proven tutoring and should gain in achievement, reducing gaps between low and high achievers.

Improvements in Behavior and Attendance

            Many of the students who engage in disruptive behavior are those who struggle academically, and therefore see little value in appropriate behavior. The same is true of students who skip school. Tutoring may help prevent behavior and attendance problems, not just by increasing the achievement of struggling students, but also by giving them caring, personalized teaching with a tutor who forms positive relationships with them and encourages attendance and good behavior.

Enhancing the Learning Environment for Students Who Do Not Need Tutoring

            It is likely that a highly successful tutoring initiative for struggling students could enhance the learning environment for the schoolmates of these students who do not need tutoring. This would happen if the tutored students were better behaved and more at peace with themselves, and if teachers did not have to struggle to accommodate a great deal of diversity in achievement levels within each class.

Of course, all of these predictions depend on Congress funding a national tutoring plan based on the use of proven programs, and on implementation at scale actually producing the positive impacts these programs have so often shown in research. But I hope these predictions will help policy makers and educational leaders realize the potential positive impacts a tutoring initiative could have, and then do what they can to make sure that the tutoring programs are effectively implemented and produce their desired impact. Then, and only then, will tutoring truly change everything.

Clarification:

Last week’s blog, on the affordability of tutoring, stated that a study of Saga Math, in which there was a per-pupil cost of $3,600, was intended as a demonstration, and was not intended to be broadly replicable.  However, all I meant to say is that Saga was never intended to be replicated AT THAT PRICE PER STUDENT.  In fact, a much lower-cost version of Saga Math is currently being replicated.  I apologize if I caused any confusion.

Photo credit: Deeper Learning 4 All, (CC BY-NC 4.0)

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

Is a National Tutoring Corps Affordable?

Tutoring is certainly in the news these days. The December 30 Washington Post asked its journalists to predict the top policy issues for the coming year. In education, Laura Meckler focused her entire prediction on just one issue: tutoring. In an NPR interview (Kelly, 2020) with John King, U.S. Secretary of Education at the end of the Obama Administration and now President of Education Trust, the topic was how to overcome the losses students are certain to have sustained due to Covid-19 school closures. Dr. King emphasized tutoring, based on its strong evidence base. McKinsey (Dorn et al., 2020) released a report on early evidence about how much students have lost due to the school closures and what to do about it. “What to do” primarily boiled down to tutoring. Earlier articles in Education Week (e.g., Sawchuk, 2020) have also emphasized tutoring as the leading solution. Two bills introduced in the Senate by Senator Coons (D-Delaware) proposed a major expansion of AmeriCorps, mostly to provide tutoring and school health aides to schools suffering from Covid-19 school closures.

All of this is heartening, but many of these same sources warn that all this tutoring is going to be horrifically expensive and may not happen because we cannot afford it. However, most of these estimates are based on a single, highly atypical example. A Chicago study (Cook et al., 2015) of Saga (or Match Education) math tutoring for ninth graders estimated a per-pupil cost of $3,600 for one-to-two tutoring all year, with an estimate that at scale the cost could be as low as $2,500 per student. Yet these estimates are unique to this single program in this single study. The McKinsey report applied the lower figure ($2,500 per student) to cost out tutoring for half of all 55 million students in grades K-12. They estimated an annual cost of $66 billion, just for math tutoring!

Our estimate is that the cost of a robust national tutoring plan would be more like $7.0 billion in 2021-2022. How could these estimates be so different? First, the Saga study was designed as a one-off demonstration that disadvantaged students in high school could still succeed in math. No one expected that Saga Math could be replicated at a per-pupil cost of $3,600 (or $2,500); indeed, a much less expensive form of Saga Math is currently being disseminated. Second, there are dozens of cost-effective tutoring programs widely used and evaluated since the 1980s in elementary reading and math. One is our own Tutoring With the Lightning Squad (Madden & Slavin, 2017), which provides reading tutoring to groups of four students and costs about $700 per student per year. There are many proven small-group tutoring programs known to make a substantial difference in reading or math performance (see Neitzel et al., in press; Nickow et al., 2020; Pellegrini et al., in press). These programs, most of which use teaching assistants as tutors, cost more like $1,500 per student, on average, based on the average cost of five tutoring programs used in Baltimore elementary schools (Tutoring With the Lightning Squad, Reading Partners, mClass Tutoring, Literacy Lab, and Springboard).

Further, it is preposterous to expect to serve 27.5 million students (half of all students in K-12) all in one year. At 40 students per tutor, this would require hiring 687,500 tutors!

Our proposal (Slavin et al., 2020) for a National Tutoring Corps calls for hiring 100,000 tutors by September 2021 to provide proven one-to-one or (mostly) one-to-small-group tutoring programs to about 4 million students in grades 1 to 9 in Title I schools. This number of tutors would serve about 21% of Title I students in these grades in 2021-2022, at a cost of roughly $7.0 billion (including administrative costs, development, evaluation, and so on). This is less than what the government of England is spending right now on a national tutoring program: a total of £1 billion, equivalent to about $7.8 billion after adjusting for the difference in population.
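For readers who want to check these figures, here is a quick back-of-envelope sketch in Python. The inputs are the numbers discussed above; the per-student breakdown it prints is my own illustration, not part of the proposal itself:

```python
# Back-of-envelope check of the tutoring cost figures discussed above.
# All inputs come from the blog; the per-student figure is illustrative.

TUTORS = 100_000          # proposed National Tutoring Corps hires
STUDENTS_PER_TUTOR = 40   # assumed small-group caseload per tutor
TOTAL_BUDGET = 7.0e9      # rough 2021-2022 cost, including overhead

students_served = TUTORS * STUDENTS_PER_TUTOR
cost_per_student = TOTAL_BUDGET / students_served

print(f"Students served:  {students_served:,}")        # 4,000,000
print(f"Cost per student: ${cost_per_student:,.0f}")   # $1,750

# For contrast, the McKinsey-style scenario: half of 55 million K-12
# students in one year, which at 40 students per tutor would require:
mckinsey_students = 55_000_000 // 2
print(f"Tutors needed:    {mckinsey_students // STUDENTS_PER_TUTOR:,}")  # 687,500
```

The ~$1,750 per student implied by the proposal is consistent with the ~$1,500 average program cost cited above, plus administrative and evaluation overhead.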

            Our plan would gradually increase the numbers of tutors over time, so in later years costs could grow, but they would never surpass $10 billion, much less $66 billion just for math, as estimated by McKinsey.

In fact, even with all the money in the world, it would not be possible to hire, train, and deploy 687,500 tutors any time soon, at least not tutors using programs proven to work. The task before us is not to just throw tutors into schools to serve lots of kids. Instead, it should be to provide carefully selected tutors with extensive professional development and coaching to enable them to implement tutoring programs that have been proven effective in rigorous, usually randomized experiments. No purpose is served by deploying tutors in such large numbers so quickly that we’d have to make serious compromises in the amount and quality of training. Poorly implemented tutoring would produce minimal outcomes, at best.

            I think anyone would agree that insisting on high quality at substantial scale, and then growing from success to success as tutoring organizations build capacity, is a better use of taxpayers’ money than starting too large and too fast, with unproven approaches.

            The apparent enthusiasm for tutoring is wonderful. But misplaced dollars will not ensure the outcomes we so desperately need for so many students harmed by Covid-19 school closures. Let’s invest in a plan based on high-quality implementation of proven programs and then grow it as we learn more about what works and what scales in sustainable forms of tutoring.

References

Cook, P. J., et al. (2015). Not too late: Improving academic outcomes for disadvantaged youth. Available at https://www.ipr.northwestern.edu/documents/working-papers/2015/IPR-WP-15-01.pdf

Dorn, E., et al. (2020). Covid-19 and learning loss: Disparities grow and students need help. New York: McKinsey & Co.

Kelly, M. L. (2020, December 28). Schools face a massive challenge to make up for learning lost during the pandemic. National Public Radio.

Madden, N. A., & Slavin, R. E. (2017). Evaluations of technology-assisted small-group tutoring for struggling readers. Reading & Writing Quarterly: Overcoming Learning Difficulties, 33(4), 327–334. https://doi.org/10.1080/10573569.2016.1255577

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Nickow, A. J., Oreopoulos, P., & Quan, V. (2020). The transformative potential of tutoring for pre-k to 12 learning outcomes: Lessons from randomized evaluations. Cambridge, MA: Abdul Latif Jameel Poverty Action Lab (J-PAL).

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (in press). Effective programs in elementary mathematics: A best-evidence synthesis. AERA Open.

Sawchuk, S. (2020, August 26). Overcoming Covid-19 learning loss. Education Week 40(2), 6.

Slavin, R. E., Madden, N. A., Neitzel, A., & Lake, C. (2020). The National Tutoring Corps: Scaling up proven tutoring for struggling students. Baltimore: Johns Hopkins University, Center for Research and Reform in Education.

Large-Scale Tutoring Could Fail. Here’s How to Ensure It Does Not.

I’m delighted to see that the idea of large-scale tutoring to combat Covid-19 losses has become prominent enough in the policy world to attract scoffers and doubters. Michael Goldstein and Bowen Paulle (2020) recently published five brief commentaries in The Gadfly, warning about how tutoring could fail, both questioning the underlying research on tutoring outcomes (maybe just publication bias?) and noting the difficulties of rapid scale-up. They also quote, without citation, a comment by Andy Rotherham, who quite correctly notes past disasters when government has tried and failed to scale up promising strategies: “Ed tech, class size reduction, teacher evaluations, some reading initiatives, and charter schools.” To these, I would add many others, but perhaps most importantly Supplementary Educational Services (SES), a massive attempt to implement all sorts of after-school and summer-school programs in high-poverty, low-achieving schools, which had near-zero impact, on average.

So if you were feeling complacent that the next hot thing, tutoring, was sure to work, no matter how it’s done, then you have not been paying attention for the past 30 years.

But rather than argue with these observations, I’d like to explain that the plan I’ve proposed, which you will find here, is fundamentally different from any of these past efforts, and if implemented as designed, with adequate funding, is highly likely to work at scale.

1.  Unlike all of the initiatives Rotherham dismisses, unlike SES, unlike just about everything ever used at scale in educational policy, the evidence base for certain specific, well-evaluated programs is solid.  And in our plan, only the proven programs would be scaled.

A little-known but crucial fact: Not all tutoring programs work. The details matter. Our recent reviews of research on programs for struggling readers (Neitzel et al., in press) and math (Pellegrini et al., in press) identify individual tutoring programs that do and do not work, as well as types of tutoring that work well and those that do not.

Our scale-up plan would begin with programs that already have solid evidence of effectiveness, but it would also provide funding and rigorous third-party evaluations for scaled-up programs without sufficient evidence, as well as for new programs designed to give schools additional options. New and insufficiently evaluated programs would be piloted and implemented for evaluation, but they would not be scaled up unless they showed solid evidence of effectiveness in randomized evaluations.

If possible, in fact, we would hope to re-evaluate even the most successful evaluated programs, to make sure they work.

If we stick to repeatedly-proven programs, rigorously evaluated in large randomized experiments, then who cares whether other programs have failed in the past? We will know that the programs being used at scale do work. Also, all this research would add greatly to knowledge about effective and ineffective program components and applications to particular groups of students, so over time, we’d expect the individual programs, and the field as a whole, to gain in the ability to provide proven tutoring approaches at scale.

2.  Scale-up of proven programs can work if we take it seriously. It is true that scale-up has many pitfalls, but I would argue that when scale-up fails, it is for one of two reasons. First, the programs being scaled were not adequately proven in the first place. Second, the funding provided for scale-up was not sufficient to allow the program developers to scale up under the conditions they know full well are necessary. As an example of the latter, programs that provided well-trained and experienced trainers in their initial studies are often forced by insufficient funding to use trainer-of-trainers models with greatly diminished amounts of training in scale-up. As a result, programs that worked at small scale have failed in large-scale replication. This happens all the time, and it is what makes policy experts conclude that nothing works at scale.

However, the lesson they should have learned instead is just that programs proven to work at small scale can succeed if the key factors that made them work at small scale are implemented with fidelity at large scale. If anything less is done in scale-up, you’re taking big risks.

If well-trained trainers are essential, then it is critical to insist on well-trained trainers. If a certain amount or quality of training is essential, it is critical to insist on it, and make sure it happens in every school using a given program. And so on. There is no reason to skimp on the proven recipe.

But aren’t all these trainers and training days and other elements unsustainable?  This is the wrong question. The right one is, how can we make tutoring as effective as possible, to justify its cost?

Tutoring is expensive, but most of the cost is in the salaries of the tutors themselves. As an analogy, consider horse racing. Horse owners pay millions for horses with great potential. Having done so, do you think they skimp on trainers or training? Of course not. In the same way, a hundred teaching assistant tutors cost roughly $4 million per year in salaries and benefits alone. Let’s say top-quality training for this group costs $500,000 per year, while crummy training costs $50,000. If these figures are in the ballpark, would it be wiser to spend $4,500,000 on a terrific tutoring program, or $4,050,000 on a crummy one?
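To make the comparison concrete, here is a quick sketch of the arithmetic. The $40,000 per-tutor salary figure is an assumption chosen to match the $4 million total above; the training figures are the blog's own illustrative numbers:

```python
# Comparing total program cost with high- vs. low-quality training,
# using the illustrative figures from the paragraph above.

N_TUTORS = 100
SALARY_AND_BENEFITS = 40_000   # per tutor per year (assumed, to match $4M total)
GOOD_TRAINING = 500_000        # top-quality training, per year
CHEAP_TRAINING = 50_000        # "crummy" training, per year

base = N_TUTORS * SALARY_AND_BENEFITS          # $4,000,000 in salaries
good_total = base + GOOD_TRAINING              # $4,500,000
cheap_total = base + CHEAP_TRAINING            # $4,050,000

premium = (good_total - cheap_total) / cheap_total
print(f"Good-training total:  ${good_total:,}")   # $4,500,000
print(f"Cheap-training total: ${cheap_total:,}")  # $4,050,000
print(f"Premium for quality:  {premium:.1%}")     # 11.1%
```

In other words, top-quality training raises the total cost by only about 11%, because tutor salaries dominate the budget either way.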

Successful scale-up takes place all the time in business. How does Starbucks make sure your experience in every single store is excellent? Simple: it has well-researched, well-specified, obsessively monitored standards and quality metrics for every part of its operation. Scale-up in education can work the same way, and in comparison to the cost of front-line personnel, the cost of great training is only trivially greater than the cost of crummy training.

3.  Ongoing research will, in our proposal, formatively evaluate the entire tutoring effort over time, and development and evaluation will continually add new proven programs.  

Ordinarily, big federal education programs start with all kinds of rules and regulations and funding schemes, and these are announced with a lot of hoopla and local and national meetings to explain the new programs to local educators and leaders. Some sort of monitoring and compliance mechanism is put in place, but otherwise the program steams ahead. Several years later, some big research firm gets a huge contract to evaluate the program. On average, the result is almost always disappointing. Then there’s a political fight about just how disappointing the results are, and life goes on.

 The program we have proposed is completely different. First, as noted earlier, the individual programs that are operating at large scale will all be proven effective to begin with, and may be evaluated and proven effective again, using the same methods as those used to validate new programs. Second, new proven programs would be identified and scaled up all the time. Third, numerous studies combining observations, correlational studies, and mini-experiments would be evaluating program variations and impacts with different populations and circumstances, adding knowledge of what is happening at the chalkface and of how and why outcomes vary. This explanatory research would not be designed to decide which programs work and which do not (that would be done in the big randomized studies), but to learn from practice how to improve outcomes for each type of school and application. The idea is to get smarter over time about how to make tutoring as effective as it can be, so when the huge summative evaluation takes place, there will be no surprises. We would already know what is working, and how, and why.

Our National Tutoring Corps proposal is not a big research project, or a jobs program for researchers. The overwhelming focus is on providing struggling students the best tutoring we know how to provide. But using a small proportion of the total allocation would enable us to find out what works, rapidly enough to inform practice. If this were all to happen, we would know more and be able to do more every year, serving more and more struggling students with better and better programs.

So rather than spending a lot of taxpayer money and hoping for the best, we’d make scale-up successful by using evidence at the beginning, middle, and end of the process, to make sure that this time, we really know what we are doing. We would make sure that effective programs remain successful at scale, rather than merely hoping they will.

References

Goldstein, M., & Paulle, B. (2020, Dec. 8) Vaccine-making’s lessons for high-dosage tutoring, Part 1. The Gadfly.

Goldstein, M., & Paulle, B. (2020, Dec. 11). Vaccine-making’s lessons for high-dosage tutoring, Part IV. The Gadfly.

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (in press). Effective programs in elementary mathematics: A best-evidence synthesis. AERA Open.

Original photo by Catherine Carusso, Presidio of Monterey Public Affairs

The Details Matter. That’s Why Proven Tutoring Programs Work Better than General Guidelines.

When I was in first grade, my beloved teacher, Mrs. Adelson, introduced a new activity. She called it “phonics.”  In “phonics,” we were given tiny pieces of paper with letters on them to paste onto a piece of paper, to make words. It was a nightmare. Being a boy, I could sooner sprout wings and fly than do this activity without smearing paste and ink all over the place. The little slips of paper stuck to my thumb rather than to the paper. This activity taught me no phonics or reading whatsoever, but did engender a longtime hatred of “phonics,” as I understood it.

Much, much later I learned that phonics was essential in beginning reading, so I got over my phonics phobia. And I learned an important lesson. Even if an activity focuses on an essential skill, this does not mean that just any activity with that focus will work. The details matter.

I’ve had reason to reflect on this early lesson many times recently, as I’ve spoken to various audiences about our National Tutoring Corps plan. Often, people will ask why it is important to use specific proven programs. Why not figure out the characteristics of proven programs, and encourage tutors to use those consensus strategies?

The answer is that because the details matter, tutoring according to agreed-upon practices is not going to be as effective as specific proven programs, on average. Mrs. Adelson had a correct understanding of the importance of phonics in beginning reading, but in the classroom, where the paste hits the page, her phonics strategy was awful. In tutoring, we might come to agreement about factors such as group size, qualifications of tutors, amount of PD, and so on, but dozens of details also have to be right. An effective tutoring program has to get crucial features right, such as the nature and quality of tutor training and coaching, student materials and software, instructional strategies, feedback and correction strategies when students make errors, the frequency and nature of assessments, means of motivating and recognizing student progress, means of handling student absences, links between tutors and teachers and between tutors and parents, and much more. Getting any of these strategies wrong could greatly diminish the effectiveness of tutoring.

The fact that a proven program has shown positive outcomes in rigorous experiments supports confidence that the program’s particular constellation of strategies is effective. During any program’s development and piloting, developers have had to experiment with solutions to each of the key elements. They have had many opportunities to observe tutoring sessions, to speak with tutors, to look at formative data, and to decide on specific strategies for each of the problems that must be solved. A teacher or local professional developer has not had the opportunity to try out and evaluate specific components, so even if they have an excellent understanding of the main elements of tutoring, they could use or promote key components that are not effective or may even be counterproductive. There are now many practical, ready-to-implement, rigorously evaluated tutoring programs with positive impacts (Neitzel et al., in press). Why should we be using programs whose effects are unknown, when there are many proven alternatives?

Specificity is of particular importance in small-group tutoring, because very effective small-group methods superficially resemble much less effective methods (see Borman et al., 2001; Neitzel et al., in press; Pellegrini et al., 2020). For example, one-to-four tutoring might look like traditional Title I pullouts, which are far less effective. Some “tutors” teach a class of four no differently than they would teach a class of thirty. Tutoring methods that incorporate computers may also superficially resemble computer-assisted instruction, which is also far less effective. Tutoring derives its unique effectiveness from the ability of the tutor to personalize instruction for each child, to provide unique feedback to the specific problems each student faces. It also depends on close relationships between tutors and students. If the specifics are not carefully trained and implemented with understanding and spirit, small-group tutoring can descend into business-as-usual. Not that ordinary teaching and CAI are ineffective, but to successfully combat the effects of Covid-19 school closures and learning gaps in general, tutoring must be much more effective than similar-looking methods. And it can be, but only if tutors are trained and equipped to provide tutoring that has been proven to be effective.

Individual tutors can and do adapt tutoring strategies to meet the needs of particular students or subgroups, and this is fine if the tutor is starting from a well-specified and proven, comprehensive tutoring program and making modifications for well-justified reasons. But when tutors are expected to substantially invent or interpret general strategies, they may make changes that diminish program effectiveness. All too often, local educators seek to modify proven programs to make them easier to implement, less expensive, or more appealing to various stakeholders, but these modifications may leave out elements essential to program effectiveness.

The national experience of Supplementary Educational Services illustrates how good ideas without an evidence base can go wrong. SES provided mostly after-school programs of all sorts, including various forms of tutoring. But hardly any of these programs had evidence of effectiveness. A review of outcomes of almost 400 local SES grants found reading and math effect sizes near zero, on average (Chappell et al., 2011).

In tutoring, it is essential that every student receiving tutoring gets a program highly likely to measurably improve the student’s reading or mathematics skills. Tutoring is expensive, and tutoring is mostly used with students who are very much at risk. It is critical that we give every tutor and every student the highest possible probability of life-altering improvement. Proven, replicable, well-specified programs are the best way to ensure positive outcomes.

Mrs. Adelson was right about phonics, but wrong about how to teach it. Let’s not make the same mistake with tutoring.

References

Borman, G., Stringfield, S., & Slavin, R.E. (Eds.) (2001).  Title I: Compensatory education at the crossroads.  Mahwah, NJ: Erlbaum.

Chappell, S., Nunnery, J., Pribesh, S., & Hager, J. (2011). A meta-analysis of Supplemental Educational Services (SES) provider effects on student achievement. Journal of Education for Students Placed at Risk, 16(1), 1-23.

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (2020). Effective programs in elementary mathematics: A best-evidence synthesis. Available at www.bestevidence.com. Manuscript submitted for publication.

Photo by Austrian National Library on Unsplash
