Large-Scale Tutoring Could Fail. Here’s How to Ensure It Does Not.

I’m delighted to see that the idea of large-scale tutoring to combat Covid-19 losses has gained enough prominence in the policy world that it is attracting scoffers and doubters. Michael Goldstein and Bowen Paulle (2020) recently published five brief commentaries in The Gadfly warning about how tutoring could fail, both questioning the underlying research on tutoring outcomes (maybe just publication bias?) and noting the difficulties of rapid scale-up. They also quote, without citation, a comment by Andy Rotherham, who quite correctly notes past disasters when government has tried and failed to scale up promising strategies: “Ed tech, class size reduction, teacher evaluations, some reading initiatives, and charter schools.” To these I would add many others, but perhaps most importantly Supplemental Educational Services (SES), a massive attempt to implement all sorts of after-school and summer-school programs in high-poverty, low-achieving schools, which had near-zero impact, on average.

So if you were feeling complacent that the next hot thing, tutoring, was sure to work, no matter how it’s done, then you have not been paying attention for the past 30 years.

But rather than argue with these observations, I’d like to explain that the plan I’ve proposed, which you will find here, is fundamentally different from any of these past efforts, and if implemented as designed, with adequate funding, is highly likely to work at scale.

1.  Unlike all of the initiatives Rotherham dismisses, unlike SES, unlike just about everything ever used at scale in educational policy, the evidence base for certain specific, well-evaluated programs is solid.  And in our plan, only the proven programs would be scaled.

A little-known but crucial fact: Not all tutoring programs work. The details matter. Our recent reviews of research on programs for struggling readers (Neitzel et al., in press) and math (Pellegrini et al., in press) identify individual tutoring programs that do and do not work, as well as types of tutoring that work well and those that do not.

Our scale-up plan would begin with programs that already have solid evidence of effectiveness, but it would also provide funding and rigorous third-party evaluations of scaled-up programs without sufficient evidence, as well as new programs designed to add options for schools. New and insufficiently evaluated programs would be piloted and implemented for evaluation, but they would not be scaled up unless they show solid evidence of effectiveness in randomized evaluations.

If possible, in fact, we would hope to re-evaluate even the most successful evaluated programs, to make sure they work.

If we stick to repeatedly-proven programs, rigorously evaluated in large randomized experiments, then who cares whether other programs have failed in the past? We will know that the programs being used at scale do work. Also, all this research would add greatly to knowledge about effective and ineffective program components and applications to particular groups of students, so over time, we’d expect the individual programs, and the field as a whole, to gain in the ability to provide proven tutoring approaches at scale.

2.  Scale-up of proven programs can work if we take it seriously. It is true that scale-up has many pitfalls, but I would argue that when scale-up fails, it is for one of two reasons. First, the programs being scaled were not adequately proven in the first place. Second, the funding provided for scale-up was not sufficient to allow the program developers to scale up under the conditions they know full well are necessary. As an example of the latter, programs that provided well-trained and experienced trainers in their initial studies are often forced by insufficient funding to use trainer-of-trainers models with greatly diminished amounts of training in scale-up. As a result, programs that worked at small scale failed in large-scale replication. This happens all the time, and this is what makes policy experts conclude that nothing works at scale.

However, the lesson they should have learned instead is just that programs proven to work at small scale can succeed if the key factors that made them work at small scale are implemented with fidelity at large scale. If anything less is done in scale-up, you’re taking big risks.

If well-trained trainers are essential, then it is critical to insist on well-trained trainers. If a certain amount or quality of training is essential, it is critical to insist on it, and make sure it happens in every school using a given program. And so on. There is no reason to skimp on the proven recipe.

But aren’t all these trainers and training days and other elements unsustainable?  This is the wrong question. The right one is, how can we make tutoring as effective as possible, to justify its cost?

Tutoring is expensive, but most of the cost is in the salaries of the tutors themselves. As an analogy, consider horse racing. Horse owners pay millions for horses with great potential. Having done so, do you think they skimp on trainers or training? Of course not. In the same way, a hundred teaching assistants serving as tutors cost roughly $4 million per year in salaries and benefits alone. Let’s say top-quality training for this group costs $500,000 per year, while crummy training costs $50,000. If these figures are in the ballpark, would it be wise to spend $4,500,000 on a terrific tutoring program, or $4,050,000 on a crummy one?
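For readers who like to see the arithmetic laid out, the comparison above can be sketched in a few lines. All figures are the ballpark assumptions from the text, not real budget data:

```python
# Illustrative cost comparison using the ballpark figures from the text.
# Every number here is an assumption for the sake of the example.

TUTORS = 100
COST_PER_TUTOR = 40_000        # assumed salary + benefits per tutor per year
GOOD_TRAINING = 500_000        # assumed cost of top-quality training per year
CRUMMY_TRAINING = 50_000       # assumed cost of minimal training per year

salaries = TUTORS * COST_PER_TUTOR            # 4,000,000

good_total = salaries + GOOD_TRAINING         # 4,500,000
crummy_total = salaries + CRUMMY_TRAINING     # 4,050,000

# Training quality is the decisive ingredient, yet it adds only about
# 11% to the total budget, because tutor salaries dominate the cost.
premium = (good_total - crummy_total) / crummy_total
print(f"Proven-quality program: ${good_total:,}")
print(f"Cut-rate program:       ${crummy_total:,}")
print(f"Premium for quality:    {premium:.1%}")
```

The point of the sketch is simply that once the fixed cost of tutor salaries is sunk, the marginal cost of doing training right is small relative to the whole enterprise.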

Successful scale-up takes place all the time in business. How does Starbucks make sure your experience in every single store is excellent? Simple. They have well-researched, well-specified, obsessively monitored standards and quality metrics for every part of their operation. Scale-up in education can work just the same way, and in comparison to the costs of front-line personnel, the cost of great training is only trivially greater than the cost of crummy training.

3.  Ongoing research will, in our proposal, formatively evaluate the entire tutoring effort over time, and development and evaluation will continually add new proven programs.  

Ordinarily, big federal education programs start with all kinds of rules and regulations and funding schemes, and these are announced with a lot of hoopla and local and national meetings to explain the new programs to local educators and leaders. Some sort of monitoring and compliance mechanism is put in place, but otherwise the program steams ahead. Several years later, some big research firm gets a huge contract to evaluate the program. The result is almost always disappointing. Then there’s a political fight about just how disappointing the results are, and life goes on.

 The program we have proposed is completely different. First, as noted earlier, the individual programs that are operating at large scale will all be proven effective to begin with, and may be evaluated and proven effective again, using the same methods as those used to validate new programs. Second, new proven programs would be identified and scaled up all the time. Third, numerous studies combining observations, correlational studies, and mini-experiments would be evaluating program variations and impacts with different populations and circumstances, adding knowledge of what is happening at the chalkface and of how and why outcomes vary. This explanatory research would not be designed to decide which programs work and which do not (that would be done in the big randomized studies), but to learn from practice how to improve outcomes for each type of school and application. The idea is to get smarter over time about how to make tutoring as effective as it can be, so when the huge summative evaluation takes place, there will be no surprises. We would already know what is working, and how, and why.

Our National Tutoring Corps proposal is not a big research project, or a jobs program for researchers. The overwhelming focus is on providing struggling students the best tutoring we know how to provide. But using a small proportion of the total allocation would enable us to find out what works, rapidly enough to inform practice. If this were all to happen, we would know more and be able to do more every year, serving more and more struggling students with better and better programs.

So rather than spending a lot of taxpayer money and hoping for the best, we’d make scale-up successful by using evidence at the beginning, middle, and end of the process, to make sure that this time, we really know what we are doing. We would make sure that effective programs remain successful at scale, rather than merely hoping they will.

References

Goldstein, M., & Paulle, B. (2020, December 8). Vaccine-making’s lessons for high-dosage tutoring, Part 1. The Gadfly.

Goldstein, M., & Paulle, B. (2020, December 11). Vaccine-making’s lessons for high-dosage tutoring, Part IV. The Gadfly.

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (in press). Effective programs in elementary mathematics: A best-evidence synthesis. AERA Open.

Original photo by Catherine Carusso, Presidio of Monterey Public Affairs

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

The Details Matter. That’s Why Proven Tutoring Programs Work Better than General Guidelines.

When I was in first grade, my beloved teacher, Mrs. Adelson, introduced a new activity. She called it “phonics.”  In “phonics,” we were given tiny pieces of paper with letters on them to paste onto a piece of paper, to make words. It was a nightmare. Being a boy, I could sooner sprout wings and fly than do this activity without smearing paste and ink all over the place. The little slips of paper stuck to my thumb rather than to the paper. This activity taught me no phonics or reading whatsoever, but did engender a longtime hatred of “phonics,” as I understood it.

Much, much later I learned that phonics was essential in beginning reading, so I got over my phonics phobia. And I learned an important lesson. Even if an activity focuses on an essential skill, this does not mean that just any activity with that focus will work. The details matter.

I’ve had reason to reflect on this early lesson many times recently, as I’ve spoken to various audiences about our National Tutoring Corps plan. Often, people will ask why it is important to use specific proven programs. Why not figure out the characteristics of proven programs, and encourage tutors to use those consensus strategies?

The answer is that because the details matter, tutoring according to agreed-upon practices is not going to be as effective as specific proven programs, on average. Mrs. Adelson had a correct understanding of the importance of phonics in beginning reading, but in the classroom, where the paste hits the page, her phonics strategy was awful. In tutoring, we might come to agreement about factors such as group size, qualifications of tutors, amount of professional development, and so on, but dozens of details also have to be right. An effective tutoring program has to get crucial features right, such as the nature and quality of tutor training and coaching, student materials and software, instructional strategies, feedback and correction strategies when students make errors, frequency and nature of assessments, means of motivating and recognizing student progress, means of handling student absences, links between tutors and teachers and between tutors and parents, and much more. Getting any of these strategies wrong could greatly diminish the effectiveness of tutoring.

The fact that a proven program has shown positive outcomes in rigorous experiments supports confidence that the program’s particular constellation of strategies is effective. During any program’s development and piloting, developers have had to experiment with solutions to each of the key elements. They have had many opportunities to observe tutoring sessions, to speak with tutors, to look at formative data, and to decide on specific strategies for each of the problems that must be solved. A teacher or local professional developer has not had the opportunity to try out and evaluate specific components, so even if they have an excellent understanding of the main elements of tutoring, they could use or promote key components that are not effective or may even be counterproductive. There are now many practical, ready-to-implement, rigorously evaluated tutoring programs with positive impacts (Neitzel et al., in press). Why should we be using programs whose effects are unknown, when there are many proven alternatives?

Specificity is of particular importance in small-group tutoring, because very effective small-group methods superficially resemble much less effective methods (see Borman et al., 2001; Neitzel et al., in press; Pellegrini et al., 2020). For example, one-to-four tutoring might look like traditional Title I pullouts, which are far less effective. Some “tutors” teach a class of four no differently than they would teach a class of thirty. Tutoring methods that incorporate computers may also superficially resemble computer-assisted instruction (CAI), which is likewise far less effective. Tutoring derives its unique effectiveness from the ability of the tutor to personalize instruction for each child, to provide feedback tailored to the specific problems each student faces. It also depends on close relationships between tutors and students. If the specifics are not carefully trained and implemented with understanding and spirit, small-group tutoring can descend into business-as-usual. Not that ordinary teaching and CAI are ineffective, but to successfully combat the effects of Covid-19 school closures and learning gaps in general, tutoring must be much more effective than similar-looking methods. And it can be, but only if tutors are trained and equipped to provide tutoring that has been proven to be effective.

Individual tutors can and do adapt tutoring strategies to meet the needs of particular students or subgroups, and this is fine if the tutor is starting from a well-specified and proven, comprehensive tutoring program and making modifications for well-justified reasons. But when tutors are expected to substantially invent or interpret general strategies, they may make changes that diminish program effectiveness. All too often, local educators seek to modify proven programs to make them easier to implement, less expensive, or more appealing to various stakeholders, but these modifications may leave out elements essential to program effectiveness.

The national experience of Supplemental Educational Services illustrates how good ideas without an evidence base can go wrong. SES provided mostly after-school programs of all sorts, including various forms of tutoring. But hardly any of these programs had evidence of effectiveness. A review of outcomes of almost 400 local SES grants found reading and math effect sizes near zero, on average (Chappell et al., 2011).

In tutoring, it is essential that every student receiving tutoring gets a program highly likely to measurably improve the student’s reading or mathematics skills. Tutoring is expensive, and tutoring is mostly used with students who are very much at risk. It is critical that we give every tutor and every student the highest possible probability of life-altering improvement. Proven, replicable, well-specified programs are the best way to ensure positive outcomes.

Mrs. Adelson was right about phonics, but wrong about how to teach it. Let’s not make the same mistake with tutoring.

References

Borman, G., Stringfield, S., & Slavin, R.E. (Eds.) (2001).  Title I: Compensatory education at the crossroads.  Mahwah, NJ: Erlbaum.

Chappell, S., Nunnery, J., Pribesh, S., & Hager, J. (2011). A meta-analysis of Supplemental Educational Services (SES) provider effects on student achievement. Journal of Education for Students Placed at Risk, 16(1), 1-23.

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (2020). Effective programs in elementary mathematics: A best-evidence synthesis. Available at www.bestevidence.org. Manuscript submitted for publication.

Photo by Austrian National Library on Unsplash


A “Called Shot” for Educational Research and Impact

In the 1932 World Series, Babe Ruth stepped up to the plate and pointed to the center field fence. Everyone there understood: He was promising to hit the next pitch over the fence.

And then he did.

That one home run established Babe Ruth as the greatest baseball player ever. Even though several others have long since beaten his single-season record of 60 home runs, no one else ever promised to hit a home run and then did it.

Educational research needs to execute a “called shot” of its own. We need to identify a clear problem, one that must be solved with some urgency, one that every citizen understands and cares about, one that government is willing and able to spend serious money to solve. And then we need to solve it, in a way that is obvious to all. I think the clear need for intensive services for students whose educations have suffered due to Covid-19 school closures provides an opportunity for our own “called shot.”

In my recent Open Letter to President-Elect Biden, I described a plan to provide up to 300,000 well-trained college-graduate tutors to work with up to 12 million students whose learning has been devastated by the Covid-19 school closures, or who are far below grade level for any reason. There are excellent reasons to do this, including making a rapid difference in the reading and mathematics achievement of vulnerable children, providing jobs to hundreds of thousands of college graduates who may otherwise be unemployed, and starting the best of these non-certified tutors on a path to teacher certification. These reasons more than justify the effort. But in today’s blog, I wanted to explain a fourth rationale, one that in the long run may be the most important of all.

A major tutoring enterprise, focused entirely on high-quality implementation of proven programs, could be the “called shot” evidence-based education needs to establish its value to the American public.

Of course, the response to the Covid-19 pandemic is already supporting a “called shot” in medicine: the rush to produce a vaccine. At this time we do not know what the outcome will be, but throughout the world, people are closely following the progress of dozens of prominent attempts to create a safe and effective vaccine to prevent Covid-19. If this effort works as hoped, it will provide enormous benefits for entire populations and economies worldwide. It could also raise the possibility that we can solve many crucial medical problems much faster than we have in the past, without compromising on strict research standards. The funding of many promising alternatives, and rigorous testing of each before dissemination, is very similar to what my colleagues and I have proposed for various approaches to tutoring. In both the medical case and the educational case, the size of the problem justifies this intensive, all-in approach. If all goes well with the vaccines, that will be a “called shot” for medicine, but medicine has long since proven its capability to use science to solve big problems: conquering polio, eliminating smallpox, and preventing measles come to mind. In education, we need to earn this confidence with a “called shot” of our own.

Think of it. Education researchers and leaders who support them would describe a detailed and plausible plan to solve a pressing problem of education. Then we announce that given X amount of money and Y amount of time, we will demonstrate that struggling students can perform substantially better than they would have without tutoring.

We’d know this would work, because part of the process would be identifying a) programs already proven to be effective, b) programs that already exist at some scale that would be successfully evaluated, and c) newly-designed programs that would successfully be evaluated. In each case, programs would have to meet rigorous evaluation standards before qualifying for substantial scale-up. In addition, in order to obtain funding to hire tutors, schools would have to agree to ensure that tutors use the programs with an amount and quality of training, coaching, and support at least as good as what was provided in the successful studies.

Researchers and policy makers who believe in evidence-based reform could confidently predict substantial gains, and then make good on their promises. No intervention in all of education is as effective as tutoring. Tutoring can be expensive, but it does not require a lengthy, uncertain transformation of the entire school. No sensible researcher or reformer would think that tutoring is all schools should do to improve student outcomes, but tutoring should be one element of any comprehensive plan to improve schools, and it happens to respond to the needs of post-Covid education for something that can have a dramatic, relatively quick, and relatively reliable impact.

If all went well in a large-scale tutoring intervention, the entire field of research could gain new respect, a belief among educators and the public that outcomes could be made much better than they are now by systematic applications of research, development, evaluation, and dissemination.

It is important to note that in order to be perceived to work, the tutoring “called shot” need not be proven effective across the board. By my count, there are 18 elementary reading tutoring programs with positive outcomes in randomized evaluations (see below). Let’s say 12 of them are ready for prime time and are put to the test, and 5 of those work very well at scale. That would be a tremendous success, because if we know which five approaches worked, we could make substantial progress on the problem of elementary reading failure. Just as with Covid-19 vaccines, we shouldn’t care how many vaccines failed. All that matters is that one or more of them succeeds, and can then be widely replicated.

I think it is time to do something bold to capture people’s imaginations. Let’s (figuratively) point to the center field fence, and (figuratively) hit the next pitch over it. The conditions today for such an effort are as good as they will ever be, because of universal understanding that the Covid-19 school closures deserve extraordinary investments in proven strategies. Researchers working closely with educators and political leaders can make a huge difference. We just have to make our case and insist on nothing less than whatever it takes. If a “called shot” works for tutoring, perhaps we could use similar approaches to solve other enduring problems of education.

It worked for the Babe. It should work for us, too, with much greater consequences for our children and our society than a mere home run.

*  *  *

Note: A reader of my previous blog asked what specific tutoring programs are proven effective, according to our standards. I’ve listed below reading and math tutoring programs that meet our standards of evidence. I cannot guarantee that all of these programs would be able to go to scale. We are communicating with program providers to try to assess each program’s capacity and interest in going to scale. But these programs are a good place to start in understanding where things stand today.


An Open Letter To President-Elect Biden: A Tutoring Marshall Plan To Heal Our Students

Dear President-Elect Biden:

            Congratulations on your victory in the recent election. Your task is daunting; so much needs to be set right. I am writing to you about what I believe needs to be done in education to heal the damage done to so many children who missed school due to Covid-19 closures.

            I am aware that there are many basic things that must be done to improve schools, which have to continue to make their facilities safe for students and cope with the physical and emotional trauma that so many have experienced. Schools will be opening into a recession, so just providing ordinary services will be a challenge. Funding to enable schools to fulfill their core functions is essential, but it is not sufficient.

            Returning schools to the way they were when they closed last spring will not heal the damage students have sustained to their educational progress. This damage will be greatest to disadvantaged students in high-poverty schools, most of whom were unable to take advantage of the remote learning most schools provided. Some of these students were struggling even before schools closed, but when they re-open, millions of students will be far behind.

            Our research center at Johns Hopkins University studies the evidence on programs of all kinds for students who are at risk, especially in reading (Neitzel et al., 2020) and mathematics (Pellegrini et al., 2020). What we and many other researchers have found is that the most effective strategy for struggling students, especially in elementary schools, is one-to-one or one-to-small group tutoring. Structured tutoring programs can make a large difference in a short time, exactly what is needed to help students quickly catch up with grade level expectations.

A Tutoring Marshall Plan

            My colleagues and I have proposed a massive effort designed to provide proven tutoring services to the millions of students who desperately need it. Our proposal, based on a similar idea by Senator Coons (D-Del), would ultimately provide funding to enable as many as 300,000 tutors to be recruited, trained in proven tutoring models, and coached to ensure their effectiveness. These tutors would be required to have a college degree, but not necessarily a teaching certificate. Research has found that such tutors, using proven tutoring models with excellent professional development, can improve the achievement of students struggling in reading or mathematics as much as can teachers serving as tutors.

            The plan we are proposing is a bit like the Marshall Plan after World War II, which provided substantial funding to Western European nations devastated by the war. The idea was to put these countries on their feet quickly and effectively so that within a brief period of years, they could support themselves. In a similar fashion, a Tutoring Marshall Plan would provide intensive funding to enable Title I schools nationwide to substantially advance the achievement of their students who suffered mightily from Covid-19 school closures and related trauma. Effective tutoring is likely to enable these children to advance to the point where they can profit from ordinary grade-level instruction. We fear that without this assistance, millions of children will never catch up, and will show the negative effects of the school closures throughout their time in school and beyond.

            The Tutoring Marshall Plan will also provide employment to 300,000 college graduates, who will otherwise have difficulty entering the job market in a time of recession. These people are eager to contribute to society and to establish professional careers, but will need a first step on that ladder. Ideally, the best of the tutors will experience the joys of teaching, and might be offered accelerated certification, opening a new source of teacher candidates who will have had an opportunity to build and demonstrate their skills in school settings. Like the Civilian Conservation Corps (CCC) and Works Progress Administration (WPA) programs in the Great Depression, these tutors will not only be helped to survive the financial crisis, but will perform essential services to the nation while building skills and confidence.

            The Tutoring Marshall Plan needs to start as soon as possible. The need is obvious, both to provide essential jobs to college graduates and to provide proven assistance to struggling students.

            Our proposal, in brief, is to ask the U.S. Congress to fund the following activities:

Spring, 2021

  • Fund existing tutoring programs to build capacity to scale up their programs to serve thousands of struggling students. This would include funds for installing proven tutoring programs in about 2,000 schools nationwide.
  • Fund rigorous evaluations of programs that show promise, but have not been evaluated in rigorous, randomized experiments.
  • Fund the development of new programs, especially in areas in which there are few proven models, such as programs for struggling students in secondary schools.

Fall, 2021 to Spring, 2022

  • Provide restricted funds to Title I schools throughout the United States to enable them to hire up to 150,000 tutors to implement proven programs across grades 1-9, in both reading and mathematics. This many tutors, mostly using small-group methods, should be able to provide tutoring services to about 6 million students each year. Schools should be asked to agree to select from among proven, effective programs. Schools would implement their chosen programs using tutors who have college degrees and experience with tutoring, teaching, or mentoring children (such as AmeriCorps graduates who were tutors, camp counselors, or Sunday school teachers).
  • As new programs are completed and piloted, third-party evaluators should be funded to evaluate them in randomized experiments, adding to capacity to serve students in grades 1-9. Those programs that produce positive outcomes would then be added to the list of programs available for tutor funding, and their organizations would need to be funded to facilitate preparation for scale-up.
  • Teacher training institutions and school districts should be funded to work together to design accelerated certification programs for outstanding tutors.

Fall, 2022-Spring, 2023

  • Title I schools should be funded to enable them to hire a total of 300,000 tutors. Again, schools will select among proven tutoring programs, which will train, coach, and evaluate tutors across the U.S. We expect these tutors to be able to work with about 12 million struggling students each year.
  • Development, evaluation, and scale-up of proven programs should continue to enrich the number and quality of proven programs adapted to the needs of all kinds of Title I schools.

            The Tutoring Marshall Plan would provide direct benefits to millions of struggling students harmed by Covid-19 school closures, in all parts of the U.S. It would provide meaningful work with a future to college graduates who might otherwise be unemployed. At the same time, it could establish a model of dramatic educational improvement based on rigorous research, contributing to knowledge and use of effective practice. If all goes well, the Tutoring Marshall Plan could demonstrate the power of scaling up proven programs and using research and development to improve the lives of children.

References

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (2020). A synthesis of quantitative research on programs for struggling readers in elementary schools. Available at www.bestevidence.org. Manuscript submitted for publication.

Pellegrini, M., Inns, A., Lake, C., & Slavin, R. (2020). Effective programs in elementary mathematics: A best-evidence synthesis. Available at www.bestevidence.org. Manuscript submitted for publication.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

The Case for Optimism

In the July 16 New York Times, columnist Nicholas Kristof wrote an article with a provocative title: “We Interrupt This Gloom to Offer…Hope.”

Kristof’s basic point is that things have gotten so awful in the U.S. that, with any luck, we could soon make progress on many issues that we could never address in normal times. He gives the examples of the Great Depression, which made possible Social Security, rural electrification, and much more, and of the assassination of John F. Kennedy, which led to the Elementary and Secondary Education Act and the Civil Rights Act.

Could the crises we are going through right now have even more profound and long-lasting consequences? The Covid-19 pandemic is exposing the lack of preparedness and the profound inequities in our health systems that everyone knew about, but that our political systems could not fix. The Black Lives Matter movement is not new, but George Floyd’s killing and many other outrages caught on video are fueling substantial changes in attitudes among people of all races, making genuine progress possible. The shockingly unequal health and economic impacts of Covid are tearing away complacency about the different lives that are possible for rich and poor. The attacks by federal troops on peaceful demonstrators in Washington and Portland are likely to drive Americans to get back to the core principles in our Constitution, ones we too often take for granted. When this is all over, how can we just return to the way things were?

What is happening in education is appalling. Our inept response to the Covid pandemic makes it literally murder to open schools in many parts of the country. Some districts are already announcing that they will not open until January. With schools closed, or only partially open, students will be expected to learn through remote, online lessons, an arrangement author Doug Lemov aptly describes as “like teaching through a keyhole.”

The statistics say that a tenth or a quarter or a half of students, depending on where they are, are not logging into online learning even once. For disadvantaged students and students in rural areas, this is due in part to a lack of access to equipment or broadband, and school districts are collectively spending billions to increase access to computers. But talk to just about any teacher, parent, or student, including the most conscientious students with the best technology and the most supportive parents, and you will hear the same thing: students are barely going through the motions. The utter failure of online education in this crisis is a crisis in itself.

The ultimate result of the school closures and the apparent implosion of online teaching is that when schools do open, students will have fallen far behind. Gaps between middle class and disadvantaged students, awful in the best of times, will grow even larger.

So how can I possibly be optimistic?


There are several things that I believe are highly likely to occur in the coming months in our country. First, once students are back in school, we will find out how far behind they have fallen, and we will have to declare an educational emergency, with adequate funding to match the seriousness of the problems. Then the following will have to happen.

  1. Using federal money, states and districts will contract with local agencies to hire an army of tutors to work individually or in small groups with struggling students, especially in elementary reading and mathematics, where there are many proven programs ready to go. Frankly, this is no longer optional. There is nothing nearly as effective as one-to-one or one-to-small group tutoring. Nothing else can be put in place as quickly with as high a likelihood of working. As I’ve reported in previous blogs, England and the Netherlands have already announced national tutoring programs to combat the achievement gaps being caused by school closures. My own state, Maryland, has recently announced a $100 million program to provide tutoring statewide. Millions of recent college graduates will be without jobs in the recession that is certain to come. The best of them will be ideal candidates to serve as tutors.
  2. America is paying a heavy price for ignoring its scientists, and science itself. Although there has been rapid growth in the evidence base and in the availability of proven programs, both still receive little attention in school policies and practices. In the education crisis we face, perhaps this will change. Might it be possible that schools could receive incentive funding to enable them to adopt proven programs known to make substantial differences in learning from Pre-K to 12th grade and beyond? In normal times, people can ignore evidence about what works in reading or mathematics or science or social-emotional learning. But these are not normal times. No school should be forced to use any particular program, but government can use targeted funding and encouragement to enable schools to select and effectively implement programs of their choice.
  3. In emergencies, government often accelerates funding for research and development to quickly find solutions for pressing national problems. This is happening now as labs nationwide are racing to develop Covid vaccines and cures, for example. As we declare an education emergency, we should be investing in research and development to respond to high-priority needs. For example, there are several proven programs for elementary students struggling in reading or mathematics. Yet we have few if any proven tutoring programs for middle or high schools. Middle school tutoring methods have been proven effective in England, so we know this can work, but we need to adapt and evaluate English models for the U.S., or evaluate existing U.S. programs that are promising but unevaluated, or develop new models for the U.S. If we are wise, we will do all three of these things. In the education emergency we face, it is not the time to fiddle around the edges. It is time to use our national innovative capacity to identify and solve big problems.

If America does declare a national education emergency, if it does mobilize an army of tutors using proven programs, if it invests in creating and evaluating new, ever more effective programs to solve educational problems and incentivizes schools to use them, an amazing thing will happen. In addition to solving our immediate problems, we will have learned how to make our schools much more effective, even in normal times.

Yes, things will someday get back to normal. But if we do the right things to solve our crises, we will not just be returning to normal. We will be returning to better. Maybe a lot better.


Are the Dutch Solving the Covid Slide with Tutoring?

For a small country, the Netherlands has produced a remarkable number of inventions. The Dutch invented the telescope, the microscope, the eye test, Wi-Fi, DVD/Blu-ray, Bluetooth, the stock market, golf, and major improvements in sailboats, windmills, and water management. And now, as they (like every other country) are facing major educational damage due to school closures in the Covid-19 pandemic, it is the Dutch who are the first to apply tutoring on a large scale to help students who are furthest behind. The Dutch government recently announced a plan to allocate the equivalent of $278 million to provide support to all students in elementary, secondary, and vocational schools who need it. Schools can provide the support in different ways (e.g., summer schools, extended school days), but it is likely that a significant amount of the money will be spent on tutoring. The Ministry of Education has proposed recruiting student teachers, who would be specially trained for this role, to provide the tutoring.

The Dutch investment would be equivalent to a U.S. investment of about $5.3 billion, because of our much larger population. That’s a lot of tutors. Including salaries, materials, and training, I’d estimate this much money would support about 150,000 tutors. If each could work in small groups with 50 students a year, they might serve about 7,500,000 students each year, roughly one in every seven American children. That would be a pretty good start.
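As a sketch, the scaling estimates above can be reproduced in a few lines. The population ratio and per-tutor cost below are assumptions chosen to be consistent with the figures in the text, not numbers stated there:

```python
# A sketch reproducing the Dutch-to-U.S. scaling estimates. The population
# ratio and per-tutor cost are assumptions, chosen to match the text's figures.
DUTCH_INVESTMENT = 278_000_000   # dollars
US_POP_RATIO = 19                # U.S. population is roughly 19x the Netherlands'
COST_PER_TUTOR = 35_000          # salary, materials, and training (assumed)
STUDENTS_PER_TUTOR = 50          # annual small-group caseload per tutor

us_equivalent = DUTCH_INVESTMENT * US_POP_RATIO   # ~$5.3 billion
tutors = us_equivalent // COST_PER_TUTOR          # ~150,000 tutors
students = tutors * STUDENTS_PER_TUTOR            # ~7.5 million students
print(f"${us_equivalent:,} -> {tutors:,} tutors -> {students:,} students/year")
```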

Where would we get all this money? Because of the recession we are in now, millions of recent college graduates will not be able to find work. Many of these would make great tutors. As in any recession, the federal government will seek to restart the economy by investing in people. In this particular recession, it would be wise to devote part of such investments to support enthusiastic young people to learn and apply proven tutoring approaches coast to coast.

Imagine that we created an American equivalent of the Dutch tutoring program. How could such a huge effort be fielded in time to help the millions of students who need substantial help? The answer would be to build on organizations that already exist and know how to recruit, train, mentor, and manage large numbers of people. The many state-based AmeriCorps agencies would be a great place to begin, and in fact there has already been discussion in the U.S. Congress about a rapid expansion of AmeriCorps for work in health and education roles to heal the damage of Covid-19. The former governor of Tennessee, Bill Haslam, is funding a statewide tutoring plan in collaboration with Boys and Girls Clubs. Other national non-profit organizations such as Big Brothers Big Sisters, City Year, and Communities in Schools could each manage recruitment, training, and management of tutors in particular states and regions.

It would be critical to make certain that the tutoring programs used under such a program are proven to be effective, and are ready to be scaled up nationally, in collaboration with local agencies with proven track records.

All of this could be done. Considering the amounts of money recently spent in the U.S. to shore up the economy, and the essential need both to keep people employed and to make a substantial difference in student learning, $5.3 billion targeted to proven approaches seems entirely reasonable.

If the Dutch can mount such an effort, there is no reason we could not do the same. It would be wonderful to help both unemployed new entrants to the labor force and students struggling in reading or mathematics. A double Dutch treat!


Thorough Implementation Saves Lives

In an article in the May 23 Washington Post, Dr. John Barry, a professor of Public Health and Tropical Medicine at Tulane, wrote about lessons from the 1918 influenza epidemic.  Dr. Barry is the author of a book about that long-ago precursor to the epidemic we face today.  I found the article chilling, in light of what is happening right now in the Covid-19 pandemic.

In particular, he wrote about a study of Army training camps in 1918.  Army leaders prescribed strict isolation and quarantine measures, and most camps followed this guidance.  However, some did not.  Most camps that did follow the guidance did so rigorously for a few weeks, but then gradually loosened up.  The study compared the camps that never did anything to the camps that followed the guidelines for a while.  There were no differences in the rates of sickness or death.  However, a third set of camps continued to follow the guidance for a much longer time. These camps saw greatly reduced rates of sickness and death.

Camp Funston at Fort Riley, Kansas, during the 1918 flu pandemic, Armed Forces Institute of Pathology/National Museum of Health and Medicine, distributed via the Associated Press / Public domain

Dr. Barry also gave an example from the SARS epidemic in the early 2000’s.  President George W. Bush wanted to honor the one hospital in the world with the lowest rate of SARS infection among staff.  A study of that hospital found that they were doing exactly what all hospitals were doing, making sure that staff maintained sterile procedures.  The difference was that in this hospital, the hospital administration made sure that these rules were being rigorously followed.  This reminds me of a story by Atul Gawande about the most successful hospital in the world for treating cystic fibrosis. Researchers studied this hospital, an ordinary, non-research hospital in a Minneapolis suburb.  The physician in charge of cystic fibrosis was found to be using the very same procedures and equipment that every other hospital used.  The difference was that he frequently called all of his patients to make sure they were using the equipment and procedures properly.  His patients had markedly higher survival rates than did patients in similar hospitals doing exactly the same (medical) things with less attention to fidelity.

Now consider what is happening in the U.S. in our current pandemic.  Given our late start, we have done a pretty good job reducing rates of disease and death, compared to what might have been.  However, all fifty states are now opening up, to one degree or another.  The basic message: “We have been careful long enough.  Now let’s get sloppy.”

Epidemiologists are watching all of this with horror.  They know full well what is coming.  Leana Wen, Baltimore’s former Health Commissioner, explained the consequences of the choices we are making in a deeply disturbing article in the May 13 Post.

The entire story of what has happened in the Covid-19 crisis, and what is likely to happen now, has a substantial resonance with problems we experience in educational reform.  Our field is full of really good ideas for improving educational outcomes.  However, we have relatively few examples of programs that have been successful even in one-year evaluations, much less over extended time periods at large scale.  The problem is not usually that the ideas turn out not to be so good after all, but that they are rarely implemented with consistency or rigor. Or they are implemented well for a while, but get sloppy over time, or stop altogether.  I am often asked how long innovators must stay connected with schools using their research-proven programs with success.  My answer is, “forever.”  The connection need not be frequent in successful implementers, but someone who knows what the program is supposed to look like needs to check in from time to time to see how things are going, to cheer the school on in its efforts to maintain and constantly improve their implementation, and to help the school identify and solve any problems that have cropped up.

Another thing I am frequently asked is how I can base my argument for evidence-based education on the examples of medicine and other evidence-based fields.  “Taking a pill is not like changing a school.”  This is true.  However, the examples of epidemiology, cystic fibrosis (before the recent cure was announced), dealing with obesity and drug abuse, and many other problems of medicine and public health, actually look quite a bit like the problems of education reform.  In medicine, there is a robust interest in “implementation science,” focused on, among other things, getting people to take their medicine or follow a proven protocol (e.g., “eat more veggies”).  There is growing interest in implementation science in education, too.  Similar problems, similar solutions, in many cases.

Education, public health, and medicine have a lot to learn from each other.  In each case, we are trying to make important differences in whole populations.  It is never easy, but in each of our fields, we are learning how to cost-effectively increase health and education outcomes at scale.  In the current pandemic, I hope science will prevail in both reducing the impact of the disease and in using proven practices, with consistency and rigor, to help schools repair the educational damage children have suffered.

References

Barry, J.M. (2020, May 23).  How to avoid a second wave of infections.  Washington Post.

Wen, L.S. (2020, May 13).  We are retreating to a new strategy on covid-19.  Let’s call it what it is.  Washington Post.


After the Pandemic: Can We Welcome Students Back to Better Schools?

I am writing in March, 2020, at what may be the scariest point in the COVID-19 pandemic in the U.S. We are just now beginning to understand the potential catastrophe, and also to begin taking actions most likely to reduce the incidence of the disease.

One of the most important preventive measures is school closure. At this writing, thirty entire states have closed their schools, as have many individual districts, including Los Angeles. It is clear that school closures will go far beyond this, both in the U.S. and elsewhere.

I am not an expert on epidemiology, but I did want to make some observations about how widespread school closure could affect education, and (ever the optimist) how this disaster could provide a basis for major improvements in the long run.

Right now, schools are closing for a few weeks, with an expectation that after spring break, all will be well again, and schools might re-open. From what I read, this is unlikely. The virus will continue to spread until it runs out of vulnerable people. The purpose of school closures is to reduce the rate of transmission. Children themselves tend not to get the disease, for some reason, but they do transmit it, mostly at school (and then to adults). Only when there are few new cases to transmit can schools be responsibly re-opened. No one knows for sure, but a recent article in Education Week predicted that schools will probably not re-open this school year (Will, 2020). Kansas is the first state to announce that schools will be closed for the rest of the school year, but others will surely follow.

Will students suffer from school closure? There will be lasting damage if students lose parents, grandparents, and other relatives, of course. Their achievement may take a dip, but a remarkable study reported by Ceci (1991) examined the impact of two or more years of school closures in the Netherlands in World War II, and found an initial loss in IQ scores that quickly rebounded once schools re-opened after the war. From an educational perspective, the long-term impact of closure itself may not be so bad. A colleague, Nancy Karweit (1989), studied achievement in districts with long teacher strikes, and did not find much of a lasting impact.

In fact, there is a way in which wise state and local governments might use an opportunity presented by school closures. If schools closing now stay closed through the end of the school year, that could leave large numbers of teachers and administrators with not much to do (assuming they are not furloughed, which could happen). Imagine that, where feasible, this time were used for school leaders to consider how they could welcome students back to much improved schools, and to provide teachers with (electronic) professional development to implement proven programs. This might involve local, regional, or national conversations focused on what strategies are known to be effective for each of the key objectives of schooling. For example, a national series of conversations could take place on proven strategies for beginning reading, for middle school mathematics, for high school science, and so on. By design, the conversations would be focused not just on opinions, but on rigorous evidence of what works. A focus on improving health and disease prevention would be particularly relevant to the current crisis, along with implementing proven academic solutions.

Particular districts might decide to implement proven programs, and then use school closure to provide time for high-quality professional development on instructional strategies that meet the ESSA evidence standards.

Of course, all of the discussion and professional development would have to be done using electronic communications, for obvious reasons of public health. But might it be possible to make wise use of school closure to improve the outcomes of schooling using professional development in proven strategies? With rapid rollout of existing proven programs and dedicated funding, it certainly seems possible.

States and districts are making a wide variety of decisions about what to do during the time that schools are closed. Many are moving to e-learning, but this may be of little help in areas where many students lack computers or access to the internet at home. In some places, a focus on professional development for next school year may be the best way to make the best of a difficult situation.

There have been many times in the past when disasters have led to lasting improvements in health and education. This could be one of these opportunities, if we seize the moment.

Photo credit: Liam Griesacker

References

Ceci, S. J. (1991). How much does schooling influence general intelligence and its cognitive components? A reassessment of the evidence. Developmental Psychology, 27(5), 703–722. https://doi.org/10.1037/0012-1649.27.5.703

Karweit, N. (1989). Time and learning: A review. In R. E. Slavin (Ed.), School and Classroom Organization. Hillsdale, NJ: Erlbaum.

Will, M. (2020, March 15). School closure for the coronavirus could extend to the end of the school year, some say. Education Week.

 This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Getting Schools Excited About Participating in Research

If America’s school leaders are ever going to get excited about evidence, they need to participate in it. It’s not enough to just make school leaders aware of programs and practices. Instead, they need to serve as sites for experiments evaluating programs that they are eager to implement, or at least have friends or peers nearby who are doing so.

The U.S. Department of Education has funded quite a lot of research on attractive programs. Many of the studies it has funded have not shown positive impacts, but many programs have been found to be effective. Those effective programs could provide a means of engaging many schools in rigorous research, while at the same time serving as examples of how evidence can help schools improve their results.

Here is my proposal. It quite often happens that some part of the U.S. Department of Education wants to expand the use of proven programs on a given topic. For example, imagine that they wanted to expand use of proven reading programs for struggling readers in elementary schools, or proven mathematics programs in Title I middle schools.

Rather than putting out the usual request for proposals, the Department might announce that schools could qualify for funding to implement a qualifying proven program, but that in order to participate they would have to agree to take part in an evaluation of the program. They would have to identify two similar schools from a district, or from neighboring districts, that would agree to participate if their proposal were successful. One school in each pair would be assigned at random to use a given program in the first year or two, and the second school could start after the one- or two-year evaluation period was over. Schools would select from a list of proven programs and choose one that seems appropriate to their needs.
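The pair-randomization design described above (one school per matched pair starts immediately, the other serves as a delayed-start control) can be sketched in a few lines. The function name, school names, and output format are ours, for illustration only:

```python
import random

def assign_pairs(school_pairs, seed=None):
    """Randomly pick one school per matched pair to start the program
    immediately; its partner serves as a delayed-start control."""
    rng = random.Random(seed)
    assignments = []
    for school_a, school_b in school_pairs:
        if rng.random() < 0.5:
            assignments.append({"treatment": school_a, "delayed": school_b})
        else:
            assignments.append({"treatment": school_b, "delayed": school_a})
    return assignments

# Hypothetical school pairs, for illustration only.
pairs = [("Lincoln ES", "Washington ES"), ("Hilltop MS", "Riverside MS")]
for a in assign_pairs(pairs, seed=1):
    print(a["treatment"], "starts now;", a["delayed"], "starts after the evaluation period")
```

Because assignment is random within each matched pair, the delayed-start schools provide a fair comparison group while still being guaranteed the program eventually.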

Many pairs of schools would be funded to use each proven program, so across all schools involved, this would create many large, randomized experiments. Independent evaluation groups would carry out the experiments. Students in participating schools would be pretested at the beginning of the evaluation period (one or two years), and posttested at the end, using tests independent of the developers or researchers.

There are many attractions to this plan. First, large randomized evaluations on promising programs could be carried out nationwide in real schools under normal conditions. Second, since the Department was going to fund expansion of promising programs anyway, the additional cost might be minimal, just the evaluation cost. Third, the experiment would provide a side-by-side comparison of many programs focusing on high-priority topics in very diverse locations. Fourth, the school leaders would have the opportunity to select the program they want, and would be motivated, presumably, to put energy into high-quality implementation. At the end of such a study, we would know a great deal about which programs really work in ordinary circumstances with many types of students and schools. But just as importantly, the many schools that participated would have had a positive experience, implementing a program they believe in and finding out in their own schools what outcomes the program can bring them. Their friends and peers would be envious and eager to get into the next study.

A few sets of studies of this kind could build a constituency of educators that might support the very idea of evidence. And this could transform the evidence movement, providing it with a national, enthusiastic audience for research.

Wouldn’t that be great?


New Sections on Social Emotional Learning and Attendance in Evidence for ESSA!

We are proud to announce the launch of two new sections of our Evidence for ESSA website (www.evidenceforessa.org): K-12 social-emotional learning and attendance. Funded by a grant from the Bill and Melinda Gates Foundation, the new sections represent our first foray beyond academic achievement.


The social-emotional learning section represents the greatest departure from our prior work. This is due to the nature of SEL, which combines many quite diverse measures. We identified 17 distinct measures, which we grouped into four overarching categories, as follows:

Academic Competence

  • Academic performance
  • Academic engagement

Problem Behaviors

  • Aggression/misconduct
  • Bullying
  • Disruptive behavior
  • Drug/alcohol abuse
  • Sexual/racial harassment or aggression
  • Early/risky sexual behavior

Social Relationships

  • Empathy
  • Interpersonal relationships
  • Pro-social behavior
  • Social skills
  • School climate

Emotional Well-Being

  • Reduction of anxiety/depression
  • Coping skills/stress management
  • Emotional regulation
  • Self-esteem/self-efficacy

Evidence for ESSA reports overall effect sizes and ratings for each of the four categories, as well as the 17 individual measures (which are themselves composed of many measures used by various qualifying studies). So in contrast to reading and math, where programs are rated based on the average of all qualifying  reading or math measures, an SEL program could be rated “strong” in one category, “promising” in another, and “no qualifying evidence” or “qualifying studies found no significant positive effects” on others.

Social-Emotional Learning

The SEL review, led by Sooyeon Byun, Amanda Inns, Cynthia Lake, and Liz Kim at Johns Hopkins University, located 24 SEL programs that both met our inclusion standards and had at least one study that met strong, moderate, or promising standards on at least one of the four categories of outcomes.

There is much more evidence at the elementary and middle school levels than at the high school level. Recognizing that some programs had qualifying outcomes at multiple levels, there were 7 programs with positive evidence for pre-K/K, 10 for grades 1-2, 13 for grades 3-6, and 9 for middle school. In contrast, there were only 4 programs with positive effects in senior high schools. Of the studies, 14 took place in urban locations, 5 in suburbs, and 5 in rural districts.

The outcome variables most often showing positive impacts include social skills (12), school climate (10), academic performance (10), pro-social behavior (8), aggression/misconduct (7), disruptive behavior (7), academic engagement (7), interpersonal relationships (7), anxiety/depression (6), bullying (6), and empathy (5). Fifteen of the programs targeted whole classes or schools, and 9 targeted individual students.

Several programs stood out in terms of the size of the impacts. Take the Lead found effect sizes of +0.88 for social relationships and +0.51 for problem behaviors. Check, Connect, and Expect found effect sizes of +0.51 for emotional well-being, +0.29 for problem behaviors, and +0.28 for academic competence. I Can Problem Solve found an effect size of +0.57 on school climate. The Incredible Years Classroom and Parent Training Approach reported effect sizes of +0.57 for emotional regulation, +0.35 for pro-social behavior, and +0.21 for aggression/misconduct. The related Dinosaur School classroom management model reported an effect size of +0.31 for aggression/misconduct. Class-Wide Function-Related Intervention Teams (CW-FIT), an intervention for elementary students with emotional and behavioral disorders, had effect sizes of +0.47 and +0.30 across two studies for academic engagement and +0.38 and +0.21 for disruptive behavior. It also reported effect sizes of +0.37 for interpersonal relationships, +0.28 for social skills, and +0.26 for empathy. Student Success Skills reported effect sizes of +0.30 for problem behaviors, +0.23 for academic competence, and +0.16 for social relationships.

In addition to the 24 highlighted programs, Evidence for ESSA lists 145 programs that were no longer available, had no qualifying studies (e.g., no control group), or had one or more qualifying studies but none that met the ESSA Strong, Moderate, or Promising criteria. These programs can be found by clicking on the “search” bar.

There are many problems inherent to interpreting research on social-emotional skills. One is that some programs may appear more effective than others because they use measures such as self-report, or behavior ratings by the teachers who taught the program. In contrast, studies that used more objective measures, such as independent observations or routinely collected data, may obtain smaller impacts. Also, SEL studies typically measure many outcomes and only a few may have positive impacts.

In the coming months, we will be doing analyses and looking for patterns in the data, and will have more to say about overall generalizations. For now, the new SEL section provides a guide to what we know now about individual programs, but there is much more to learn about this important topic.

Attendance

Our attendance review was led by Chenchen Shi, Cynthia Lake, and Amanda Inns. It located ten attendance programs that met our standards. Only three of these reported on chronic absenteeism, which refers to students missing more than 10% of school days; the rest focused on average daily attendance (ADA). Among programs focused on ADA, a Milwaukee elementary school program called SPARK had the largest impact (ES = +0.25). SPARK is not an attendance program per se; it uses AmeriCorps members to provide tutoring services across the school, as well as involving families. SPARK has been shown to have strong effects on reading, in addition to its impressive effects on attendance. Positive Action is another schoolwide approach, in this case focused on SEL. In two major studies in grades K-8, it was found to improve student reading and math achievement, as well as overall attendance, with a mean effect size of +0.20.

The one program to report data on both ADA and chronic absenteeism is Attendance and Truancy Intervention and Universal Procedures (ATI-UP). It reported effect sizes in grades K-6 of +0.19 for ADA and +0.08 for chronic absenteeism. Talent Development High School (TDHS) is a ninth grade intervention that provides interdisciplinary learning communities and “double dose” English and math classes for students who need them. TDHS reported an effect size of +0.17.

An interesting approach with a modest effect size but very low cost is now called EveryDay Labs (formerly InClass Today). This program helps schools organize and implement a system of postcards to parents reminding them of the importance of student attendance; if students begin missing school, the postcards include that information as well. The effect size across two studies was a respectable +0.16.

As with SEL, we will be doing further work in the coming months to draw broader lessons from research on attendance. One pattern that seems clear already is that effective attendance improvement models work by building close relationships between at-risk students and concerned adults. None of the effective programs relies primarily on punishment to improve attendance; instead, they focus on providing information to parents and students and on making it clear to students that they are welcome in school and missed when they are gone.

Both SEL and attendance are topics of much discussion right now, and we hope these new sections will be useful and timely in helping schools make informed choices about how to improve social-emotional and attendance outcomes for all students.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.