Lessons for Educational Research from the COVID-19 Vaccines

Since the beginning of the COVID-19 pandemic, more than 130 biotech companies have launched major efforts to develop and test vaccines. Only four have been approved so far (Pfizer, Moderna, Johnson & Johnson, and AstraZeneca). Among the others, many have outright failed, and others are considered highly unlikely to succeed. Some of the failed efforts came from small, fringe companies, but they also came from some of the largest and most successful drug companies in the world: Merck (U.S.), GlaxoSmithKline (U.K.), and Sanofi (France).

Kamala Harris gets her vaccine. (Photo courtesy of NIH)

If no further companies succeed, the score is something like 4 successes and 126 failures. Based on this, are the COVID vaccines a triumph of science, or a failure? Obviously, if you believe that even one of the successful programs is truly effective, you would have to agree that this is one of the most extraordinary successes in the history of medicine. In less than one year, companies were able to create, evaluate, and roll out successful vaccines, already saving hundreds of thousands of lives worldwide.

Meanwhile, Back in Education . . .

The example of COVID vaccines contrasts sharply with the way research findings are treated in education. As one example, Borman et al. (2003) reviewed research on 33 comprehensive school reform programs. Only three of these had solid evidence of effectiveness, according to the authors (one of these was our program, Success for All; see Cheung et al., in press). Actually, few of the programs had failed; most had just not been evaluated adequately. Yet the response from government and educational leaders was “comprehensive school reform doesn’t work” rather than “How wonderful! Let’s use the programs proven to work.” As a result, a federal program supporting comprehensive school reform was canceled, use of comprehensive school reform plummeted, and most CSR programs went out of operation (we survived, just barely, but the other two successful programs soon disappeared).

Similarly, the What Works Clearinghouse, and our Evidence for ESSA website (www.evidenceforessa.org), are often criticized because so few of the programs we review turn out to have significant positive outcomes in rigorous studies.

The reality is that in any field in which rigorous experiments are used to evaluate innovations, most of the innovations fail. Mature science-focused fields, like medicine and agriculture, expect this and honor it, because the only way to prevent failures is to do no experiments at all, or only flawed experiments. Without rigorous experiments, we would have no reliable successes.  Also, we learn from failures, as scientists are learning from the findings of the evaluations of all 130 of the COVID vaccines.

Unfortunately, education is not a mature science-focused field, and in our field, failure to show positive effects in rigorous experiments leads to cover-ups, despair, abandonment of proven and promising approaches, or abandonment of rigorous research itself. About 20 years ago, a popular federally-funded education program was found to be ineffective in a large, randomized experiment. Supporters of this program actually got Congress to enact legislation that forbade the use of randomized experiments to evaluate this program!

Research has improved in the past two decades, and acceptance of research has improved as well. Yet we are a long way from medicine, for example, which accepts both success and failure as part of a process of using science to improve health. In our field, we need to commit to broad scale, rigorous evaluations of promising approaches, wide dissemination of programs that work, and learning from experiments that do not (yet) show positive outcomes. In this way, we could achieve the astonishing gains that take place in medicine, and learn how to produce these gains even faster using all the knowledge acquired in experiments, successful or not.

References

Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73(2), 125-230.

Cheung, A., Xie, C., Zhang, T. & Slavin, R. E. (in press). Success for All: A quantitative synthesis of evaluations. Journal of Research on Educational Effectiveness.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

Avoiding the Errors of Supplemental Educational Services (SES)

“The definition of insanity is doing the same thing over and over again, and expecting different results.” –Albert Einstein

Last Friday, the U.S. Senate and House of Representatives passed a $1.9 trillion recovery bill. Within it is the Learning Recovery Act (LRA). Both the overall bill and the Learning Recovery Act are timely and wonderful. In particular, the LRA emphasizes the importance of using research-based tutoring to help students who are struggling in reading or math. The linking of evidence to large-scale federal education funding began with the 2015 ESSA definition of proven educational programs, and the LRA would greatly increase the importance of evidence-based practices.

But if you sensed a “however” coming, you were right. The “however” is that the LRA requires investments of substantial funding in “school extension programs,” such as “summer school, extended day, or extended school year programs” for vulnerable students.

This is where the Einstein quote comes in. “School extension programs” sound a lot like Supplemental Educational Services (SES), part of No Child Left Behind that offered parents and children an array of services that had to be provided after school or in summer school.

The problem is, SES was a disaster. A meta-analysis of 28 studies of SES by Chappell et al. (2011) found a mean effect size of +0.04 for math and +0.02 for reading. A sophisticated study by Deke et al. (2014) found an effect size of +0.05 for math and -0.03 for reading. These effect sizes are just different flavors of zero. Zero was the outcome whichever way you looked at the evidence, with one awful exception: the lowest achievers, and special education students, actually performed significantly less well in the Deke et al. (2014) study if they were in SES than if they qualified but did not sign up. The effect sizes for these students were around -0.20 for reading and math. Heinrich et al. (2010) also reported that the lowest achievers were least likely to sign up for SES, and least likely to attend regularly if they did. All three major studies found that outcomes did not vary much depending on which type of provider or program students received. Considering that the per-pupil cost was estimated at $1,725 in 2021 dollars, these outcomes are distressing. But more important than the cost is the fact that, despite the federal government’s willingness to spend quite a lot, millions of struggling students in desperate need of effective assistance did not benefit.
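To see why these effect sizes are “different flavors of zero,” it helps to translate them into percentile terms. An effect size expresses the treatment-control difference in standard deviation units, so (assuming normal distributions) it can be converted to the expected percentile rank of the average treated student. The sketch below is my own illustration, not a computation from the original studies:

```python
# Convert an effect size (standardized mean difference) into the expected
# percentile rank of the average treated student, assuming normality.
from math import erf, sqrt

def percentile_from_effect_size(d: float) -> float:
    """Standard normal CDF evaluated at d, expressed as a percentile (0-100)."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

# SES effect sizes cited above. A student at the 50th percentile without
# SES would, on average, end up at roughly these percentiles with it.
for label, d in [("math (Chappell et al.)", 0.04),
                 ("reading (Chappell et al.)", 0.02),
                 ("lowest achievers (Deke et al.)", -0.20)]:
    print(f"{label}: ES {d:+.2f} -> {percentile_from_effect_size(d):.1f}th percentile")
```

The positive effects move the average student only a point or two above the 50th percentile, while the -0.20 effect for the lowest achievers pushes the average student down to roughly the 42nd percentile, which is why that exception is the one result that is not a flavor of zero.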

Why did SES fail? I have two major explanations. Heinrich et al. (2010), who added questionnaires and observations to find out what was going on, discovered that at least in Milwaukee, attendance in SES after-school programs was appalling (as I reported in my previous blog). In the final year studied, only 16% of eligible students were attending (less than half signed up at all, and of those, average attendance in the remedial program was only 34%). Worse, the students in greatest need were least likely to attend.

From their data and other studies they cite, Heinrich et al. (2010) paint a picture of students doing boring, repetitive worksheets unrelated to what they were doing in their school-day classes. Providers attracted students to sign up for SES services with incentives such as iPods, gift cards, or movie passes. Students often attended just enough to get their incentives, but then stopped coming. In 2006-2007, a new policy limited incentives to educationally-related items, such as books and museum trips, and attendance dropped further. Restricting SES services to after-school and summertime, when attendance is not mandated and far from universal, meant that students who did attend were in school while their friends were out playing. This is hardly a way to engage students’ motivation to attend or to exert effort. Low-achieving students see after school and summertime as their free time, which they are unlikely to give up willingly.

Beyond the problems of attendance and motivation in extended time, there was another key problem with SES. This was that none of the hundreds of programs offered to students in SES were proven to be effective beforehand (or ever) in rigorous evaluations. And there was no mechanism to find out which of them were working well, until very late in the program’s history. As a result, neither schools nor parents had any particular basis for selecting programs according to their likely impact. Program providers probably did their best, but there was no pressure on them to make certain that students benefited from SES services.

As I noted in my previous blog, evaluations of SES do not provide the only evidence that after school and summer school programs rarely work for struggling students. Reviews of summer school programs (Xie et al., 2020) and of after school programs (Dynarski et al., 2003; Kidron & Lindsay, 2014) have found similar outcomes, always for the same reasons: poor attendance and poor motivation among students required to be in school when they would otherwise have free time.

Designing an Effective System of Services for Struggling Students

There are two policies that are needed to provide a system of services capable of substantially improving student achievement. One is to provide services during the ordinary school day and year, not in after school or summer school. The second is to strongly emphasize the use of programs proven to be highly effective in rigorous research.

Educational services provided during the school day are far more likely to be effective than those provided after school or in the summer. During the day, everyone expects students to be in school, including the students themselves. There are attendance problems during the regular school day, of course, especially in secondary schools, but these problems are much smaller than those in non-school time, and perhaps if students are receiving effective, personalized services in school and therefore succeeding, they might attend more regularly. Further, services during the school day are far easier to integrate with other educational services. Principals, for example, are far more likely to observe tutoring or other services if they take place during the day, and to take ownership for ensuring their effectiveness. School day services also entail far fewer non-educational costs, as they do not require changing bus schedules, cleaning and securing schools more hours each day, and so on.

The problem with in-school services is that they can disrupt the basic schedule. However, this need not be a problem. Schools could designate service periods for each grade level spread over the school day, so that tutors or other service providers can be continuously busy all day. Students should not be taken out of reading or math classes, but there is a strong argument that a student who is far below grade level in reading or math needs a reading or math tutor using a proven tutoring model far more than other classes, at least for a semester (the usual length of a tutoring sequence).

If schools are deeply reluctant to interrupt any of the ordinary curriculum, then they might extend their day to offer art, music, or other subjects during the after-school session. These popular subjects might attract students without incentives, especially if students have a choice of which to attend. This could create space for tutoring or other services during the school day. A schedule like this is virtually universal in Germany, which provides all sports, art, music, theater, and other activities after school, so all in-school time is available for academic instruction.

Use of proven programs makes sense throughout the school day. Tutoring should be the main focus of the Learning Recovery Act, because in this time of emergency need to help students recover from Covid school closures, nothing less will do. But in the longer term, adoption of proven classroom programs in reading, math, science, writing, and other subjects should provide a means of helping students succeed in all parts of the curriculum (see www.evidenceforessa.org).

In summer, 2021, there may be a particularly strong rationale for summer school, assuming schools are otherwise able to open. The evidence is clear that doing ordinary instruction during the summer will not make much of a difference, but summer could be helpful if it is used as an opportunity to provide as many struggling students as possible with in-person, one-to-one or one-to-small group tutoring in reading or math. In the summer, students might receive tutoring more than once a day, every day, for as long as six weeks. This could make a particularly big difference for students who basically missed in-person kindergarten, first, or second grade, a crucial time for learning to read. Tutoring is especially effective in reading in those grades, because phonics is relatively easy for tutors to teach. Also, there is a large number of effective tutoring programs for grades K-2. Early reading failure is very important to prevent, and can be prevented with tutoring, so the summer months may be just the right time to help these students get a leg up on reading.

The Learning Recovery Act can make life-changing differences for millions of children in serious difficulties. If the LRA changes its emphasis to the implementation of proven tutoring programs during ordinary school times, it is likely to accomplish its mission.

SES served a useful purpose in showing us what not to do. Let’s take advantage of these expensive lessons and avoid repeating the same errors. Einstein would be so proud if we heed his advice.

Correction

My recent blog, “Avoiding the Errors of Supplemental Educational Services,” started with a summary of the progress of the Learning Recovery Act.  It was brought to my attention that my summary was not correct.  In fact, the Learning Recovery Act has been introduced in Congress, but is not part of the current reconciliation proposal moving through Congress and has not become law. The Congressional action cited in my last blog was referring to a non-binding budget resolution, the recent passage of which facilitated the creation of the $1.9 trillion reconciliation bill that is currently moving through Congress. Finally, while there is expected to be some amount of funding within that current reconciliation bill to address the issues discussed within my blog, reconciliation rules will prevent the Learning Recovery Act from being included in the current legislation as introduced.

References

Chappell, S., Nunnery, J., Pribesh, S., & Hager, J. (2011). A meta-analysis of Supplemental Education Services (SES) provider effects on student achievement. Journal of Education for Students Placed at Risk, 16(1), 1-23.

Deke, J., Gill, B., Dragoset, L., & Bogen, K. (2014). Effectiveness of supplemental educational services. Journal of Research on Educational Effectiveness, 7, 137-165.

Dynarski, M. et al. (2003). When schools stay open late: The national evaluation of the 21st Century Community Learning Centers Programs (First year findings). Washington, DC: U.S. Department of Education.

Heinrich, C. J., Meyer, R. H., & Whitten, G. W. (2010). Supplemental Education Services under No Child Left Behind: Who signs up and what do they gain? Educational Evaluation and Policy Analysis, 32, 273-298.

Kidron, Y., & Lindsay, J. (2014). The effects of increased learning time on student academic and nonacademic outcomes: Findings from a meta‑analytic review (REL 2014-015). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Appalachia.

Xie, C., Neitzel, A., Cheung, A., & Slavin, R. E. (2020). The effects of summer programs on K-12 students’ reading and mathematics achievement: A meta-analysis. Manuscript submitted for publication.


Highlight Tutoring Among Post-Covid Solutions

I recently saw a summary of the education section of the giant, $1.9 trillion proposed relief bill now before Congress. Like all educators, I was delighted to see the plan to provide $130 billion to help schools re-open safely, and to fund efforts to remedy the learning losses so many students have experienced due to school closures.

However, I was disappointed to see that the draft bill suggests that educators can use whatever approaches they like, and it specifically mentioned summer school and after school programs as examples.

Clearly, the drafters of this legislation have not been reading my blogs! On September 10th I wrote a blog reviewing research on summer school and after school programs as well as tutoring and other approaches. More recently, I’ve been doing further research on these recommendations for schools to help struggling students. I put my latest findings into two tables, one for reading and one for math. These appear below.

As you can see, not all supplemental interventions for struggling students are created equal. Proven tutoring models (ones that were successfully evaluated in rigorous experiments) are far more effective than other strategies. The additional successful strategy is our own Success for All whole-school reform approach (Cheung et al., in press), but Success for All incorporates tutoring as a major component.

However, it is important to note that not all tutoring programs are proven to be effective. Programs that do not provide tutors with structured materials, extensive professional development, and in-class coaching, or that use unpaid tutors whose attendance may be sporadic, have not produced the remarkable outcomes typical of other tutoring programs.

Tutoring

As Tables 1 and 2 show, proven tutoring programs produce substantial positive effects on reading and math achievement, and nothing else comes close (see Gersten et al., 2020; Neitzel et al., in press; Nickow et al. 2020; Pellegrini et al., 2021; Wanzek et al., 2016).

Tables 1 and 2 only include results from programs that use teaching assistants, AmeriCorps members (who receive stipends), and unpaid volunteer tutors. I did not include programs that use teachers as tutors, because in the current post-Covid crisis, there is a teacher shortage, so it is unlikely that many certified teachers will serve as tutors. Also, research in both reading and math finds little difference in student outcomes between teachers and teaching assistants or AmeriCorps members, so there is little necessity to hire certified teachers as tutors. Unpaid tutors have not been as effective as paid tutors.

Both one-to-one and one-to-small group tutoring by teaching assistants can be effective. One-to-one is somewhat more effective in reading, on average (Neitzel et al., in press), but in math there is no difference in outcomes between one-to-one and one-to-small group (Pellegrini et al., 2021).

Success for All

Success for All is a whole-school reform approach. A recent review of 17 rigorous studies of Success for All found an effect size of +0.51 for students in the lowest 25% of their grades (Cheung et al., in press). However, such students typically receive one-to-one or one-to-small group tutoring for some time period during grades 1 to 3. Success for All also provides all teachers professional development and materials focusing on phonics in grades K-2 and comprehension in grades 2-6, as well as cooperative learning in all grades, parent support, social-emotional learning instruction, and many other elements. So Success for All is not just a tutoring approach, but tutoring plays a central role for the lowest-achieving students.

Summer School

A recent review of research on summer school by Xie et al. (2020) found few positive effects on reading or math achievement. In reading, there were two major exceptions, but in both cases the students were in grades K to 1, and the instruction involved one-to-small group tutoring in phonics. In math, none of the summer school studies involving low-achieving students found positive effects.

After School

A review of research on after-school instruction in reading and math found near-zero impacts in both subjects (Kidron & Lindsay, 2014).

Extended Day

A remarkable study of extended-day instruction was carried out by Figlio et al. (2018). Based on a test-score cutoff (a regression discontinuity design), low-performing Florida schools received one hour of additional reading instruction per day for a year, while schools just above the cutoff served as the comparison group. The outcomes were positive but quite modest (ES = +0.09), given the considerable expense.

Technology

Studies of computer-assisted instruction and other digital approaches have found minimal impacts for struggling students (Neitzel et al., in press; Pellegrini et al., 2021).

Policy Consequences

The evidence is clear that any effort intended to improve the achievement of students struggling in reading or mathematics should make extensive use of proven tutoring programs. Students who have fallen far behind in reading or math need programs known to make a great deal of difference in a modest time period, so struggling students can move toward grade level, where they can profit from ordinary teaching. In our current crisis, it is essential that we follow the evidence to give struggling students the best possible chance of success.

References

Cheung, A., Xie, C., Zhang, T., Neitzel, A., & Slavin, R. E. (in press). Success for All: A quantitative synthesis of evaluations. Journal of Research on Educational Effectiveness.

Figlio, D., Holden, K., & Ozek, U. (2018). Do students benefit from longer school days? Regression discontinuity evidence from Florida’s additional hour of literacy instruction. Economics of Education Review, 67, 171-183.

Gersten, R., Haymond, K., Newman-Gonchar, R., Dimino, J., & Jayanthi, M. (2020). Meta-analysis of the impact of reading interventions for students in the primary grades. Journal of Research on Educational Effectiveness, 13(2), 401–427.

Kidron, Y., & Lindsay, J. (2014). The effects of increased learning time on student academic and nonacademic outcomes: Findings from a meta‑analytic review (REL 2014-015). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Appalachia.

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (2021). Effective programs in elementary mathematics: A best-evidence synthesis. AERA Open, 7(1), 1-29.

Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M. A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28(3), 551–576. doi:10.1007/s10648-015-9321-7

Xie, C., Neitzel, A., Cheung, A., & Slavin, R. E. (2020). The effects of summer programs on K-12 students’ reading and mathematics achievement: A meta-analysis. Manuscript submitted for publication.


Building Back Better

Yesterday, President Joe Biden took his oath of office. He is taking office at one of the lowest points in all of American history. Every American, whatever their political beliefs, should be wishing him well, because his success is essential for the recovery of our nation.

In education, most schools remain closed or partially open, and students are struggling with remote learning. My oldest granddaughter is in kindergarten. Every school day, she receives instruction from a teacher she has never met. She has never seen the inside of “her school.” She is lucky, of course, because she has educators as grandparents (us), but it is easy to imagine the millions of kindergartners who do not even have access to computers, or do not have help in learning to read and learning mathematics. These children will enter first grade with very little of the background they need, in language and school skills as well as in content.

Of course, the problem is not just kindergarten. All students have missed a lot of school, and they will vary widely in their experiences during that time. Think of second graders who essentially missed first grade. Students who missed the year when they are taught biology. Students who missed the fundamentals of creative writing. Students who should be in Algebra 2, except that they missed Algebra 1.

Hopefully, providing vaccines as quickly as possible to school staffs will enable most schools to open this spring. But we have a long, long way to go to get back to normal, especially with disadvantaged students. We cannot just ask students on their first day back to open their math books to the page they were on in March, 2020, when school closed.

Students need to be assessed when they return, and if they are far behind in reading or math, given daily tutoring, one-to-one or one-to-small group. If you follow this blog, you’ve heard me carry on at length about this.

Tutoring services, using tutoring programs proven to be effective, will be of enormous help to students who are far behind grade level, as I have argued in several previous blogs. But the recovery from Covid-19 school closures should not be limited to repairing the losses. Instead, I hope the Covid-19 crisis can be an opportunity to reconsider how to rebuild our school system to enhance the school success of all students.

If we are honest with ourselves, we know that schooling in America was ailing long before Covid-19. It wasn’t doing so badly for middle class children, but it was failing disadvantaged students. These very same students have suffered disproportionately from Covid-19. So in the process of bringing these children back into school, let’s not stop with getting back to normal. Let’s figure out how to create schools that use the knowledge we have gained over the past 20 years, and knowledge we can develop in the coming years, to transform learning for our most vulnerable children.

Building Back Better

Obviously, the first thing we have to do this spring is reopen schools and make them as healthy, happy, welcoming, and upbeat as possible. We need to make sure that schools are fully staffed and fully equipped. We do need to “build back” before we can “build back better.” But we cannot stop there. Below, I discuss several things that would greatly transform education for disadvantaged students.

1.  Tutoring

Yes, tutoring is the first thing we have to do to build better. Every child who is significantly below grade level needs daily one-to-small group or one-to-one tutoring, until they reach a pre-established level of performance, depending on grade level, in reading and math.

However, I am not talking about just any tutoring. Not all tutoring works. But there are many programs that have been proven to work, many times. These are the tutoring programs we need to start with as soon as possible, with adequate training resources to ensure student success.

Implementing proven tutoring programs on a massive scale is an excellent “build back” strategy, the most effective and cost-effective strategy we have. However, tutoring should also be the basis for a key “build better” strategy.

2.  Establishing success as a birthright and ensuring it using proven programs of all kinds.

We need to establish adequate reading and mathematics achievement as the birthright of every child. We can debate about what that level might be, but we must hold ourselves accountable for the success of every child. And we need to accomplish this not just by using accountability assessments and hoping for the best, but by providing proven programs to all students who need them for as long as they need them.

As I’ve pointed out in many previous blogs, we now have many programs proven effective in rigorous experiments and known to improve student achievement (see www.evidenceforessa.org). Every child who is performing below grade level, and every school serving many children below grade level, should have the resources and knowledge to adopt proven programs. Teachers and tutors need to be guaranteed sufficient professional development and in-class coaching to enable them to successfully implement proven programs. Years ago, we did not have sufficient proven programs, so policy makers kept coming up with evidence-free policies, which have just not worked as intended. But now, we have many programs ready for widespread dissemination. To build better, we have to use these tools, not return to near-universal use of instructional strategies, materials, and technology that have never been successfully evaluated. Instead, we need to use what works, and to facilitate adoption and effective implementation of proven programs.

3.  Invest in development and evaluation of promising programs.

How is it that in a remarkably short time, scientists were able to develop vaccines for Covid-19, vaccines that promise to save millions of lives? Simple. We invested billions in research, development, and evaluations of alternative vaccines. Effective vaccines are very difficult to make, and the great majority failed.  But at this writing, two U.S. vaccines have succeeded, and this is a mighty good start. Now, government is investing massively in rigorous dissemination of these vaccines.

Total spending on all of education research dedicated to creating and evaluating educational innovations is a tiny fraction of what has been spent and will be spent on vaccines. But does anyone believe it is impossible to improve reading, math, science, and other outcomes, given clear goals and serious resources? Of course it could be done. A key element of “building better” could be to substantially scale up use of proven programs we have now, and to invest in new development and evaluation to make today’s best obsolete, replaced by better and better approaches. The research and evaluation of tutoring proves this could happen, and perhaps a successful rollout of tutoring will demonstrate what proven programs can do in education.

4.  Commit to Success

Education goes from fad to fad, mandate to mandate, without making much progress. In order to “build better,” we all need to commit to finding what works, disseminating it broadly, and then finding even better solutions, until all children are succeeding. This must be a long-term commitment, but if we are investing adequately and see that we are improving outcomes each year, then it is clear we can do it.            

With a change of administrations, we are going to hear a lot about hope. Hope is a good start, but it is not a plan. Let’s plan to build back better, and then for the first time in the history of education, make sure our solutions work, for all of our children.


Is a National Tutoring Corps Affordable?

Tutoring is certainly in the news these days. The December 30 Washington Post asked its journalists to predict the top policy issues for the coming year. In education, Laura Meckler focused her entire prediction on just one issue: tutoring. In an NPR interview (Kelly, 2020) with John King, U.S. Secretary of Education at the end of the Obama Administration and now President of Education Trust, the topic was how to overcome the losses students are certain to have sustained due to Covid-19 school closures. Dr. King emphasized tutoring, based on its strong evidence base. McKinsey (Dorn et al., 2020) issued a report on early evidence of how much students have lost due to the school closures and what to do about it. “What to do” primarily boiled down to tutoring. Earlier articles in Education Week (e.g., Sawchuk, 2020) have also emphasized tutoring as the leading solution. Two bills introduced in the Senate by Senator Coons (D-Delaware) proposed a major expansion of AmeriCorps, mostly to provide tutoring and school health aides to schools suffering from Covid-19 school closures.

All of this is heartening, but many of these same sources warn that tutoring on this scale will be horrifically expensive and may not happen because we cannot afford it. However, most of these estimates are based on a single, highly atypical example. A Chicago study (Cook et al., 2015) of a Saga (or Match Education) math tutoring program for ninth graders estimated a per-pupil cost of $3,600 for a full year of one-to-two tutoring, with an estimate that at scale, the cost could be as low as $2,500 per student. Yet these estimates are unique to this single program in this single study. The McKinsey report applied the lower figure ($2,500 per student) to cost out tutoring for half of all 55 million students in grades K-12, arriving at an annual cost of $66 billion, just for math tutoring!

Our estimate is that the cost of a robust national tutoring plan would be more like $7.0 billion in 2021-2022. How could these estimates be so different? First, the Saga study was designed as a one-off demonstration that disadvantaged students in high school could still succeed in math. No one expected that Saga Math could be replicated at a per-pupil cost of $3,600 (or $2,500); in fact, a much less expensive form of Saga Math is currently being disseminated. Moreover, there are dozens of cost-effective tutoring programs in elementary reading and math, widely used and evaluated since the 1980s. One is our own Tutoring With the Lightning Squad (Madden & Slavin, 2017), which provides reading tutors for groups of four students and costs about $700 per student per year. There are many proven small-group tutoring programs known to make a substantial difference in reading or math performance (see Neitzel et al., in press; Nickow et al., 2020; Pellegrini et al., in press). These programs, most of which use teaching assistants as tutors, cost more like $1,500 per student, on average, based on the average cost of five tutoring programs used in Baltimore elementary schools (Tutoring With the Lightning Squad, Reading Partners, mClass Tutoring, Literacy Lab, and Springboard).

Further, it is preposterous to expect to serve 27.5 million students (half of all students in K-12) all in one year. At 40 students per tutor, this would require hiring 687,500 tutors!

Our proposal (Slavin et al., 2020) for a National Tutoring Corps calls for hiring 100,000 tutors by September 2021, to provide proven one-to-one or (mostly) one-to-small-group tutoring programs to about 4 million students in grades 1 to 9 in Title I schools. This number of tutors would serve about 21% of Title I students in these grades in 2021-2022, at a cost of roughly $7.0 billion (including administrative costs, development, evaluation, and so on). This is less than what the government of England is spending right now on a national tutoring program: a total of £1 billion, which translates to about $7.8 billion when adjusted for the difference in population.

Our plan would gradually increase the number of tutors over time, so costs could grow in later years, but they would never surpass $10 billion, much less the $66 billion for math alone estimated by McKinsey.
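The competing cost estimates above are simple arithmetic and can be checked directly. The sketch below (in Python) uses round figures from the text; the per-pupil costs and enrollment numbers are approximations, not exact figures from the underlying reports.

```python
# Rough check of the tutoring cost arithmetic discussed above.
# All figures are round approximations taken from the text.

students_k12 = 55_000_000                # total U.S. K-12 enrollment
mckinsey_served = students_k12 // 2      # half of all students
mckinsey_cost = mckinsey_served * 2_500  # low-end Saga per-pupil estimate
print(f"McKinsey-style total: ${mckinsey_cost / 1e9:.1f} billion")  # ≈ $68.8B; the report cites $66B

tutors_needed = mckinsey_served // 40    # at 40 students per tutor
print(f"Tutors required: {tutors_needed:,}")  # 687,500

# National Tutoring Corps proposal: 100,000 tutors, ~40 students each
ntc_tutors = 100_000
ntc_served = ntc_tutors * 40             # ~4 million students
ntc_cost = 7_000_000_000                 # incl. administration, development, evaluation
print(f"NTC cost per student: ${ntc_cost / ntc_served:,.0f}")  # $1,750
```

With these round numbers, the half-of-all-students scenario lands near McKinsey’s figure and requires 687,500 tutors, while the 100,000-tutor plan implies roughly $1,750 per student, in line with the small-group costs cited above once administrative costs are included.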

In fact, even with all the money in the world, it would not be possible to hire, train, and deploy 687,500 tutors any time soon, at least not tutors using programs proven to work. The task before us is not just to throw tutors into schools to serve lots of kids. Instead, it should be to provide carefully selected tutors with extensive professional development and coaching to enable them to implement tutoring programs that have been proven effective in rigorous, usually randomized experiments. No purpose is served by deploying tutors in such large numbers so quickly that we’d have to make serious compromises in the amount and quality of training. Poorly implemented tutoring would have minimal outcomes, at best.

I think anyone would agree that insisting on high quality at substantial scale, and then growing from success to success as tutoring organizations build capacity, is a better use of taxpayers’ money than starting too large and too fast, with unproven approaches.

The apparent enthusiasm for tutoring is wonderful. But misplaced dollars will not ensure the outcomes we so desperately need for so many students harmed by Covid-19 school closures. Let’s invest in a plan based on high-quality implementation of proven programs and then grow it as we learn more about what works and what scales in sustainable forms of tutoring.

Photo credit: Deeper Learning 4 All (CC BY-NC 4.0)

References

Cook, P. J., et al. (2015). Not too late: Improving academic outcomes for disadvantaged youth. Available at https://www.ipr.northwestern.edu/documents/working-papers/2015/IPR-WP-15-01.pdf

Dorn, E., et al. (2020). Covid-19 and learning loss: Disparities grow and students need help. New York: McKinsey & Co.

Kelly, M. L. (2020, December 28). Schools face a massive challenge to make up for learning lost during the pandemic. National Public Radio.

Madden, N. A., & Slavin, R. E. (2017). Evaluations of technology-assisted small-group tutoring for struggling readers. Reading & Writing Quarterly: Overcoming Learning Difficulties, 33(4), 327–334. https://doi.org/10.1080/10573569.2016.1255577

Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (in press). A synthesis of quantitative research on programs for struggling readers in elementary schools. Reading Research Quarterly.

Nickow, A. J., Oreopoulos, P., & Quan, V. (2020). The transformative potential of tutoring for pre-k to 12 learning outcomes: Lessons from randomized evaluations. Cambridge, MA: Abdul Latif Jameel Poverty Action Lab (J-PAL).

Pellegrini, M., Neitzel, A., Lake, C., & Slavin, R. (in press). Effective programs in elementary mathematics: A best-evidence synthesis. AERA Open.

Sawchuk, S. (2020, August 26). Overcoming Covid-19 learning loss. Education Week, 40(2), 6.

Slavin, R. E., Madden, N. A., Neitzel, A., & Lake, C. (2020). The National Tutoring Corps: Scaling up proven tutoring for struggling students. Baltimore: Johns Hopkins University, Center for Research and Reform in Education.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.


How to Make Evidence in Education Make a Difference

By Robert Slavin

I have a vision of how education in the U.S. and the world will begin to make solid, irreversible progress in student achievement. In this vision, school leaders will constantly be looking for the most effective programs, proven in rigorous research to accelerate student achievement. This process of informed selection will be aided by government, which will provide special incentive funds to help schools implement proven programs.

In this imagined future, the fact that schools are selecting programs based on good evidence means that publishers, software companies, professional development companies, researchers, and program developers, as well as government, will be engaged in a constant process of creating, evaluating, and disseminating new approaches to every subject and grade level. As in medicine, developers and researchers will be held to strict standards of evidence, but if they develop programs that meet these high standards, they can be confident that their programs will be widely adopted, and will truly make a difference in student learning.

Discovering and disseminating effective classroom programs is not all we have to get right in education. For example, we also need great teachers, principals, and other staff who are well prepared and effectively deployed. A focus on evidence could help at every step of that process, of course, but improving programs and improving staff are not an either-or proposition. We can and must do both. If medicine, for example, focused only on getting the best doctors, nurses, technicians, and other staff, but medical research and dissemination of proven therapies were underfunded and little heeded, then we’d have great staff prescribing ineffective or possibly harmful medicines and procedures. In agriculture, we could try to attract farmers who are outstanding in their fields, but that alone would not have created the agricultural revolution that has largely solved the problem of hunger in most parts of the world. Instead, decades of research created or identified improvements in seeds, stock, fertilizers, veterinary practices, farming methods, and so on, for all of those outstanding farmers to put into practice.

Back to education, my vision of evidence-based reform depends on many actions. Because of the central role government plays in public education, government must take the lead. Some of this will cost money, but it would be a tiny proportion of the roughly $600 billion we spend on K-12 education annually, at all levels (federal, state, and local). Other actions would cost little or nothing, focusing only on standards for how existing funds are used. Key actions to establish evidence of impact as central to educational decisions are as follows:

1. Invest substantially in practical, replicable approaches to improving outcomes for students, especially achievement outcomes.

Rigorous, high-quality evidence of effectiveness for educational programs has been appearing since about 2006 at a faster rate than ever before, due in particular to investments by the Institute of Education Sciences (IES), Investing in Innovation/Education Innovation Research (i3/EIR), and the National Science Foundation (NSF) in the U.S., and the Education Endowment Foundation in England, but also other parts of government and private foundations. All have embraced rigorous evaluations involving random assignment to conditions, appropriate measures independent of developers or researchers, and, at the higher funding levels, third-party evaluators. These are very important developments, and they have given the research field, educators, and policy makers excellent reasons for confidence that the findings of such research have direct meaning for practice. One problem is that, as is true in every applied field that embraces rigorous research, most experiments do not find positive impacts; only about 20% of such experiments do. The solution is to learn from successes and failures, so that our success rate improves over time. We also need to support a much larger enterprise of development of new solutions to enduring problems of education, in all subjects and grade levels, and to continue to support rigorous evaluations of the most promising of these innovations. In other words, we should not be daunted by the fact that most evaluations do not find positive impacts; instead, we need to increase the success rate by learning from our own evidence, and to carry out many more experiments. Even 20% of a very big number is a big number.

2. Improve communications of research findings to researchers, educators, policy makers, and the general public.

Evidence will not make a substantial difference in education until key stakeholders see it as a key to improving students’ success. Improving communications certainly includes making it easy for various audiences to find out which programs and practices are truly effective. But we also need to build excitement about evidence. To do this, government might establish large-scale, widely publicized, certain-to-work demonstrations of the use and outcomes of proven approaches, so that all will see how evidence can lead to meaningful change.

I will be writing in more depth on this topic in future blogs.

3. Set specific standards of evidence, and provide incentive funding for schools to adopt and implement proven practices.

The Every Student Succeeds Act (ESSA) boldly defined “strong,” “moderate,” “promising,” and lower levels of evidence of effectiveness for educational programs, and required use of programs meeting one of these top categories for certain federal funding, especially school improvement funding for low-achieving schools. This certainly increased educators’ interest in evidence, but in practice, it is unclear how much this changed practice or outcomes. These standards need to be made more specific. In addition, the standards need to be applied to funding that is clearly discretionary, to help schools adopt new programs, not to add new evidence requirements to traditional funding sources. The ESSA evidence standards have had less impact than hoped for because they mainly apply to school improvement, a longstanding source of federal funding. As a result, many districts and states have fought hard to have the programs they already have declared “effective,” regardless of their actual evidence base. To make evidence popular, it is important to make proven programs available as something extra, a gift to schools and children rather than a hurdle to continuing existing programs. In coming blogs I’ll write further about how government could greatly accelerate and intensify the process of development, evaluation, communication, and dissemination, so that the entire process can begin to make undeniable improvements in areas of critical importance, demonstrating how evidence can make a difference for students.

Photo credit: Deeper Learning 4 All (CC BY-NC 4.0)

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.


Are the Dutch Solving the Covid Slide with Tutoring?

For a small country, the Netherlands has produced a remarkable number of inventions. The Dutch invented the telescope, the microscope, the eye test, Wi-Fi, DVD/Blu-ray, Bluetooth, the stock market, golf, and major improvements in sailboats, windmills, and water management. And now, as they (like every other country) face major educational damage from school closures in the Covid-19 pandemic, it is the Dutch who are the first to apply tutoring on a large scale to help the students who are furthest behind. The Dutch government recently announced a plan to allocate the equivalent of $278 million to provide support to all students in elementary, secondary, and vocational schools who need it. Schools can provide the support in different ways (e.g., summer schools, extended school days), but it is likely that a significant amount of the money will be spent on tutoring. The Ministry of Education proposed to recruit student teachers, who would be specially trained for this role, to provide the tutoring.

The Dutch investment would be equivalent to a U.S. investment of about $5.3 billion, because of our much larger population. That’s a lot of tutors. Including salaries, materials, and training, I’d estimate this much money would support about 150,000 tutors. If each worked with small groups totaling 50 students a year, they might serve about 7,500,000 students each year, roughly one in every seven American children. That would be a pretty good start.
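The scaling from the Dutch plan to a U.S. equivalent works out as follows. This is a minimal sketch in Python; the population figures are round approximations I am supplying, not numbers from the original announcement.

```python
# Scaling the Dutch tutoring investment to the U.S. population,
# as described above. Population figures are round approximations.

nl_population = 17_000_000
us_population = 330_000_000
nl_investment = 278_000_000             # dollar equivalent of the Dutch plan

us_equivalent = nl_investment * us_population / nl_population
print(f"U.S. equivalent: ${us_equivalent / 1e9:.1f} billion")  # ≈ $5.4B with these round figures

tutors = 150_000                        # tutors supported (salaries, materials, training)
students_served = tutors * 50           # 50 students per tutor per year
share = students_served / 55_000_000    # of ~55 million K-12 students
print(f"Students served: {students_served:,}, about 1 in {round(1 / share)}")
```

With these round population figures the total comes to about $5.4 billion, consistent with the $5.3 billion above, and 150,000 tutors serving 50 students each would reach roughly one student in seven.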

Where would we get all this money? Because of the recession we are in now, millions of recent college graduates will not be able to find work. Many of these would make great tutors. As in any recession, the federal government will seek to restart the economy by investing in people. In this particular recession, it would be wise to devote part of such investments to support enthusiastic young people to learn and apply proven tutoring approaches coast to coast.

Imagine that we created an American equivalent of the Dutch tutoring program. How could such a huge effort be fielded in time to help the millions of students who need substantial help? The answer would be to build on organizations that already exist and know how to recruit, train, mentor, and manage large numbers of people. The many state-based AmeriCorps agencies would be a great place to begin, and in fact there has already been discussion in the U.S. Congress about a rapid expansion of AmeriCorps for work in health and education roles to heal the damage of Covid-19. The former governor of Tennessee, Bill Haslam, is funding a statewide tutoring plan in collaboration with Boys and Girls Clubs. Other national non-profit organizations such as Big Brothers Big Sisters, City Year, and Communities in Schools could each manage recruitment, training, and management of tutors in particular states and regions.

It would be critical to make certain that the tutoring programs used under such a program are proven to be effective, and are ready to be scaled up nationally, in collaboration with local agencies with proven track records.

All of this could be done. Considering the amounts of money recently spent in the U.S. to shore up the economy, and the essential need both to keep people employed and to make a substantial difference in student learning, $5.3 billion targeted to proven approaches seems entirely reasonable.

If the Dutch can mount such an effort, there is no reason we could not do the same. It would be wonderful to help both unemployed new entrants to the labor force and students struggling in reading or mathematics. A double Dutch treat!

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.


Florence Nightingale, Statistician

Everyone knows about Florence Nightingale, whose 200th birthday is this year. You probably know of her courageous reform of hospitals and aid stations in the Crimean War, and her insistence on sanitary conditions for wounded soldiers that saved thousands of lives. You may know that she founded the world’s first school for nurses, and of her lifelong fight for the professionalization of nursing, formerly a refuge for uneducated, often alcoholic young women who had no other way to support themselves. You may know her as a bold feminist, who taught by example what women could accomplish.

But did you know that she was also a statistician? In fact, she was the first woman ever to be admitted to Britain’s Royal Statistical Society, in 1858.

Nightingale was not only a statistician, she was an innovator among statisticians. Her life’s goal was to improve medical care, public health, and nursing for all, but especially for people in poverty. In her time, landless people were pouring into large, filthy industrial cities. Death rates from unclean water and air, and unsafe working conditions, were appalling. Women suffered most, and deaths from childbirth in unsanitary hospitals were all too common. This was the sentimental Victorian age, and there were people who wanted to help. But how could they link particular conditions to particular outcomes? Opponents of investments in prevention and health care argued that the poor brought the problems on themselves, through alcoholism or slovenly behavior, or that these problems had always existed, or even that they were God’s will. The numbers of people and variables involved were enormous. How could these numbers be summarized in a way that would stand up to scrutiny, but also communicate the essence of the process leading from cause to effect?

As children, Nightingale and her sister were taught by their brilliant and liberal father. He gave his daughters a mathematics education that few (male) students in the very finest schools could match. She put these skills to use in her campaign for hospital reform, demonstrating, for example, that when her hospital in the Crimean War ordered reforms such as cleaning out latrines and cesspools, the mortality rate dropped from 42.7 percent to 2.2 percent in a few months. She invented a circular graph, now known as the polar area diagram or “coxcomb,” that showed changes month by month as the reforms were implemented. She also made it immediately clear to anyone that deaths due to disease far outnumbered those due to war wounds. No numbers, just colors and patterns, made the situation obvious to the least mathematical of readers.

When she returned from Crimea, Nightingale had a disease, probably spondylitis, that forced her to be bedridden much of the time for the rest of her life. Yet this did not dim her commitment to health reform. In fact, it gave her a lot of time to focus on her statistical work, often published in the top newspapers of the day. From her bedroom, she had a profound effect on the reform of Britain’s Poor Laws, and the repeal of the Contagious Diseases Act, which her statistics showed to be counterproductive.

Note that so far, I haven’t said a word about education. In many ways, the analogy is obvious. But I’d like to emphasize one contribution of Nightingale’s work that has particular importance to our field.

Everyone who works in education cares deeply for all children, and especially for disadvantaged, underserved children. As a consequence of our profound concern, we advocate fiercely for policies and solutions that we believe to be good for children. Each of us comes down on one side or another of controversial policies, and then advocates for our positions, certain that our favored position would be hugely beneficial if it prevails, and disastrous if it does not. The same was true in Victorian Britain, where people had heated, interminable arguments about all sorts of public policy.

What Florence Nightingale did, more than a century ago, was to subject various policies affecting the health and welfare of poor people to statistical analysis. She worked hard to be sure that her findings were correct and that they communicated to readers. Then she advocated in the public arena for the policies that were beneficial, and against those that were counterproductive.

In education, we have loads of statistics that bear on various policies, but we do not often commit ourselves to advocate for the ones that actually work. As one example, there have been arguments for decades about charter schools. Yet a national CREDO (2013) study found that, on average, charter schools made no difference at all on reading or math performance. A later CREDO (2015) study found that effects were slightly more positive in urban settings, but these effects were tiny. Other studies have had similar outcomes, although there are more positive outcomes for “no-excuses” charters such as KIPP, a small percentage of all charter schools.

If charters make no major difference in student learning, I suppose one might conclude that they could be maintained or not based on other factors. Yet neither side can plausibly argue, based on evidence of achievement outcomes, that charters should be an important policy focus in the quest for higher achievement. In contrast, there are many programs with impacts on achievement far greater than those of charters. Yet use of such programs is not particularly controversial, and is not part of anyone’s political agenda.

The principle that Florence Nightingale established in public health was simple: Follow the data. This principle now dominates policy and practice in medicine. Yet more than a hundred years after Nightingale’s death, have we arrived at that common-sense conclusion in educational policy and practice? We’re moving in that direction, but at the current rate, I’m afraid it will be a very long time before this becomes the core of educational policy or practice.

Photo credit: Florence Nightingale, Illustrated London News (February 24, 1855)

References

CREDO (2013). National charter school study. At http://credo.stanford.edu

CREDO (2015). Urban charter school study. At http://credo.stanford.edu

 This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Why Can’t Education Progress Like Medicine Does?

I recently saw an end-of-year article in The Washington Post called “19 Good Things That Happened in 2019.” Four of them were medical or public health breakthroughs. Scientists announced a new therapy for cystic fibrosis likely to benefit 90% of people with this terrible disease, incurable for most patients before now. The World Health Organization announced a new vaccine to prevent Ebola. The Bill and Melinda Gates Foundation announced that deaths of children before their fifth birthday have now dropped from 82 per thousand births in 1990 to 37 in 2019. The Centers for Disease Control reported a decline of 5.1 percent in deaths from drug overdoses in just one year, from 2017 to 2018.

Needless to say, breakthroughs in education did not make the list. In fact, I’ll bet there has never been an education breakthrough mentioned on such lists.

I get a lot of criticism from all sides for comparing education to medicine and public health. Most commonly, I’m told that it’s ever so much easier to give someone a pill than to change complex systems of education. That’s true enough, but not one of the 2019 medical or public health breakthroughs was anything like “taking a pill.” The cystic fibrosis cure involves a series of three treatments personalized to the genetic background of patients. It took decades to find and test this treatment. A vaccine for Ebola may be simple in concept, but it also took decades to develop. Also, Ebola occurs in very poor countries, where ensuring universal coverage with a vaccine is very complex. Reducing deaths of infants and toddlers took massive coordinated efforts of national governments, international organizations, and ongoing research and development. There is still much to do, of course, but the progress made so far is astonishing. Similarly, the drop in deaths due to overdoses required, and still requires, huge investments, cooperation between government agencies of all sorts, and constant research, development, and dissemination. In fact, I would argue that reducing infant deaths and overdose deaths strongly resemble what education would have to do to, for example, eliminate reading failure or enable all students to succeed at middle school mathematics. No one distinct intervention, no one miracle pill has by itself improved infant mortality or overdose mortality, and solutions for reading and math failure will similarly involve many elements and coordinated efforts among many government agencies, private foundations, and educators, as well as researchers and developers.

The difference between evidence-based reform in medicine/public health and education is, I believe, a difference in societal commitment to solving the problems. The general public, especially political leaders, tend to be rather complacent about educational failures. One of our past presidents said he wanted to help, but said, “We have more will than wallet” to solve educational problems. Another focused his education plans on recruiting volunteers to help with reading. These policies hardly communicate seriousness. In contrast, if medicine or public health can significantly reduce death or disease, it’s hard to be complacent.

Perhaps part of the motivational difference is due to the situations of powerful people. Anyone can get a disease, so powerful individuals are likely to have children or other relatives or friends who suffer from a given disease. In contrast, they may assume that children failing in school have inadequate parents or parents who need improved job opportunities or economic security or decent housing, which will take decades, and massive investments to solve. As a result, governments allocate little money for research, development, or dissemination of proven programs.

There is no doubt in my mind that we could, for example, eliminate early reading failure, using the same techniques used to eliminate diseases: research, development, practical experiments, and planful, rapid scale-up. It’s all a question of resources, political leadership, collaboration among many critical agencies and individuals, and a total commitment to getting the job done. The year reading failure drops to near zero nationwide, perhaps education will make the Washington Post list of “50 Good Things That Happened in 2050.”

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

On Replicability: Why We Don’t Celebrate Viking Day

I was recently in Oslo, Norway’s capital, and visited a wonderful museum displaying three Viking ships that had been buried with important people. The museum had all sorts of displays focused on the amazing exploits of Viking ships, always including the Viking landings in Newfoundland, about 500 years before Columbus. Since the 1960s, most people have known that Vikings, not Columbus, were the first Europeans to land in America. So why do we celebrate Columbus Day, not Viking Day?

Given the bloodthirsty actions of Columbus, easily rivaling those of the Vikings, we surely don’t prefer one to the other based on their charming personalities. Instead, we celebrate Columbus Day because what Columbus did was far more important. The Vikings knew how to get back to Newfoundland, but they were secretive about it. Columbus was eager to publicize and repeat his discovery. It was this focus on replication that opened the door to regular exchanges. The Vikings brought back salted cod. Columbus brought back a new world.

In educational research, academics often imagine that if they establish new theories or demonstrate new methods on a small scale, and then publish their results in reputable journals, their job is done. Call this the Viking model: they got what they wanted (promotions or salt cod), and who cares if ordinary people found out about it? Even if the Vikings had published their findings in the Viking Journal of Exploration, this would have had roughly the same effect as educational researchers publishing in their own research journals.

Columbus, in contrast, told everyone about his voyages, and very publicly repeated and extended them. His brutal leadership ended with him being sent back to Spain in chains, but his discoveries had resounding impacts that long outlived him.


Educational researchers only want to do good, but they are unlikely to have any impact at all unless they can make their ideas useful to educators. Many educational researchers would love to make their ideas into replicable programs, evaluate these programs in schools, and if they are found to be effective, disseminate them broadly. However, resources for the early stages of development and research are scarce. Yes, the Institute of Education Sciences (IES) and Education Innovation Research (EIR) fund a lot of development projects, and Small Business Innovation Research (SBIR) provides small grants for this purpose to for-profit companies. Yet these funders support only a tiny proportion of the proposals they receive. In England, the Education Endowment Foundation (EEF) spends a lot on randomized evaluations of promising programs, but very little on development or early-stage research.

Innovations funded by government or other sources very rarely end up being evaluated in large experiments; fewer still are found to be effective, and vanishingly few eventually enter widespread use. The exceptions are generally programs created by large for-profit companies, large and entrepreneurial non-profits, or other entities with proven capacity to develop, evaluate, support, and disseminate programs at scale. Even the most brilliant developers and researchers rarely have the interest, time, capital, business expertise, or infrastructure to nurture effective programs through all the steps necessary to bring a practical and effective program to market. As a result, most educational products introduced at scale to schools come from commercial publishers or software companies, which have the capital and expertise to create and disseminate educational programs, but serve a market that primarily wants attractive, inexpensive, easy-to-use materials, software, and professional development, and is not (yet) willing to pay for programs proven to be effective.

I discussed this problem in a recent blog on technology, but the same dynamics apply to all innovations, tech and non-tech alike.

How Government Can Promote Proven, Replicable Programs

There is an old saying that Columbus personified the spirit of research. He didn’t know where he was going, he didn’t know where he was when he got there, and he did it all on government funding. The relevant part of this is the government funding. In Columbus’ time, only royalty could afford to support his voyage, and his grant from Queen Isabella was essential to his success. Yet Isabella was not interested in pure research. She was hoping that Columbus might open rich trade routes to the (east) Indies or China, or might find gold or silver, or might acquire valuable new lands for the crown (all of these things did eventually happen). Educational research, development, and dissemination face a similar situation. Because education is virtually a government monopoly, only government is capable of sustained, sizable funding of research, development, and dissemination, and only the U.S. government has the acknowledged responsibility to improve outcomes for the 50 million American children ages 4-18 in its care. So what can government do to accelerate the research-development-dissemination process?

  1. Contract with “seed bed” organizations capable of identifying and supporting innovators with ideas likely to make a difference in student learning. These organizations might be rewarded, in part, based on the number of proven programs they are able to help create, support, and (if effective) ultimately disseminate.
  2. Contract with independent third-party evaluators capable of doing rigorous evaluations of promising programs. These organizations would evaluate promising programs from any source, not just from seed bed companies, as they do now in IES, EIR, and EEF grants.
  3. Provide funding to innovators with demonstrated capacity to create programs likely to be effective, and additional funding to disseminate those programs if they are proven effective. Developers might also work with “seed bed” organizations to help them succeed with development and dissemination.
  4. Provide information and incentive funding to schools to encourage them to adopt proven programs, as described in a recent blog on technology.  Incentives should be available on a competitive basis to a broad set of schools, such as all Title I schools, to engage many schools in adoption of proven programs.

Evidence-based reform in education has made considerable progress in the past 15 years, both in finding positive examples that are in use today and in finding out what is not likely to make substantial differences. It is time for this movement to go beyond its early achievements to enter a new phase of professionalism, in which collaborations among developers, researchers, and disseminators can sustain a much faster and more reliable process of research, development, and dissemination. It’s time to move beyond the Viking stage of exploration to embrace the good parts of the collaboration between Columbus and Queen Isabella that made a substantial and lasting change in the whole world.