Cherry Picking? Or Making Better Trees?

Everyone knows that cherry picking is bad. Bad, bad, bad, bad, bad, bad, bad. Cherry picking means showing non-representative examples to give a false impression of the quality of a product, an argument, or a scientific finding. In educational research, for example, cherry picking might mean a publisher or software developer showing off a school using their product that is getting great results, without saying anything about all the other schools using their product that are getting mediocre results, or worse. Very bad, and very common. The existence of cherry picking is one major reason that educational leaders should always look at valid research involving many schools to evaluate the likely impact of a program. The existence of cherry picking also explains why preregistration of experiments is so important, to make it difficult for developers to do many experiments and then publicize only the ones that got the best results, ignoring the others.

However, something that looks a bit like cherry picking can be entirely appropriate, and is in fact an important way to improve educational programs and outcomes. This is when there are variations in outcomes among various programs of a given type. The average across all programs of that type is unimpressive, but some individual programs have done very well, and have replicated their findings in multiple studies.

As an analogy, let’s move from cherries to apples. The first Delicious apple was grown by a farmer in Iowa in 1880. He happened to notice that fruit from one particular branch of one tree had a beautiful shape and a terrific flavor. The Stark Seed Company was looking for new apple varieties, and it bought his tree. The company grafted the branch onto an ordinary rootstock, and (as apples are wont to do) every apple on the grafted tree looked and tasted like the ones from that one unusual branch.

Had the farmer been hoping to sell his whole orchard, had he taken potential buyers to see this one tree, and had he offered them apples picked from this particular branch, that would be gross cherry picking. However, he knew (and the Stark Seed Company knew) all about grafting, so instead of using his exceptional branch to fool anyone (note that I am resisting the urge to mention “graft and corruption”), the farmer and Stark could replicate that amazing branch. The key here is the word “replicate.” If it were impossible to replicate the amazing branch, the farmer would have had a local curiosity at most, or perhaps just a delicious occasional snack. But with replication, this one branch transformed the eating apple for a century.

Now let’s get back to education. Imagine that there were a category of educational programs that generally had mediocre results in rigorous experiments. There is always variation in educational outcomes, so the developers of each program would know of individual schools using their program and getting fantastic results. This would be useful for marketing, but if the program developers are honest, they would make all studies of their program available, rather than claiming that the unusual super-duper schools represent what an average school that adopts their program is likely to obtain.

However, imagine that there is a program that resembles others in its category in most ways, yet time and again gets results far beyond those obtained by similar programs of the same type. Perhaps there is a “secret sauce,” some specific factor that explains the exceptional outcomes, or perhaps the organization that created and/or disseminates the program is exceptionally capable. Either way, any potential user would be missing something if they selected a program based on the mediocre average achievement outcomes for its category. If the outcomes for one or more programs are outstanding (and assuming costs and implementation characteristics are similar), then the average achievement effects for the category are no longer particularly relevant. Any educator who cares about evidence should be looking for the most effective programs; no one would want to implement an entire category.

I was thinking about apples and cherries because of our group’s work reviewing research on various tutoring programs (Neitzel et al., 2020). As is typical of reviews, we were computing average effect sizes for achievement impacts of categories. Yet these average impacts were much less than the replicated impacts for particular programs. For example, the mean effect size for one-to-small-group tutoring was +0.20. Yet various individual programs had mean effect sizes of +0.31, +0.39, +0.42, +0.43, +0.46, and +0.64. In light of these findings, is the practical impact of small-group tutoring truly +0.20, or is it somewhere in the range of +0.31 to +0.64? If educators chose programs based on evidence, they would be looking a lot harder at the programs with the larger impacts, not at the mean of all small-group tutoring approaches.
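To make the contrast concrete, here is a minimal Python sketch using the effect sizes quoted above. It is illustrative only: it takes a simple unweighted mean of the listed programs, whereas a real meta-analysis would typically weight each program by its sample size.

```python
# Illustrative sketch: the category mean vs. individual program effects
# for one-to-small-group tutoring, using the numbers quoted above.
category_mean = 0.20                                     # mean for the whole category
program_effects = [0.31, 0.39, 0.42, 0.43, 0.46, 0.64]  # individual programs

# Simple unweighted mean of the listed programs (a real review would
# weight by sample size).
mean_of_programs = sum(program_effects) / len(program_effects)

print(f"Category mean:                +{category_mean:.2f}")
print(f"Mean of the listed programs:  +{mean_of_programs:.2f}")      # → +0.44
print(f"Strongest listed program:     +{max(program_effects):.2f}")  # → +0.64
```

The point is not the arithmetic but the decision rule: choosing by the category mean (+0.20) understates what the best replicated programs within the category actually achieve.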

Educational programs cannot be replicated (grafted) as easily as apple trees can. But just as the value to the Stark Seed Company of the Iowa farmer’s orchard could not be determined by averaging ratings of a sampling of all of his apples, the value of a category of educational programs cannot be determined by its average effects on achievement. Rather, the value of the category should depend on the effectiveness of its best, replicated, and replicable examples.

At least, you have to admit it’s a delicious idea!


Neitzel, A., Lake, C., Pellegrini, M., & Slavin, R. (2020). A synthesis of quantitative research on programs for struggling readers in elementary schools. Manuscript submitted for publication.


This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to


Preschool: A Step, Not a Journey

“A journey of a thousand miles begins with a single step.”

So said Laozi (or Lao Tzu), the great Chinese philosopher who lived in the 6th century BC.

For many years, especially since the extraordinary long-term outcomes of the Perry Preschool became known, many educators have seen high-quality preschool as an essential “first step” in a quality education. Truly, a first step in a journey of a thousand miles. Further, due to the Perry Preschool findings, educators, researchers, and policy makers have maintained that quality preschool is not only the first step in a quality education, but also the most important one, capable of making substantial differences in the lives of disadvantaged students.

I believe, based on the evidence, that high-quality preschool helps students enter kindergarten and, perhaps, first grade, with important advantages in academic and social skills. It is clear that quality preschool can provide a good start, and for this reason, I’d support investments in providing the best preschool experiences we can afford.

But the claims of most preschool advocates go far beyond benefits through kindergarten. We have been led to expect benefits that last throughout children’s lives.

Would that this were so, but it is not. The problem is that randomized studies rarely find long-term impacts. In such studies, children are randomly assigned to receive specific, high-quality preschool services or to serve in a control group, in which children may remain at home or may receive daycare or preschool experiences of varying quality. In long-term randomized studies of this kind, the usual pattern of findings shows positive effects on many measures at the end of the preschool year, fading effects at the end of kindergarten, and no differences in later years. One outstanding example is the Tennessee Voluntary Prekindergarten Program (Lipsey, Farran, & Durkin, 2018). A national study of Head Start by Puma, Bell, Cook, & Heid (2010) found the same pattern, as did randomized studies in England (Melhuish et al., 2010) and Australia (Claessens & Garrett, 2014). Reviews of research routinely identify this consistent pattern (Camilli et al., 2009; Chambers, Cheung, & Slavin, 2016; Melhuish et al., 2010).

So why do so many researchers and educators believe that there are long-term positive effects of preschool? There are two answers. One is the Perry Preschool, and the other is the use of matched rather than randomized study designs.


The Perry Preschool study (Schweinhart & Weikart, 1997) did use a randomized design, but it had many features that made it an interesting pilot rather than a conclusive demonstration of powerful and scalable impacts. First, the Perry Preschool study had a very small sample (initially, 123 students in a single school in Ypsilanti, Michigan). It allowed deviations from random assignment, such as assigning children whose mothers worked to the control group. It provided an extraordinary level of services, never intended to be broadly replicable. Further, the long-term effects were never seen on elementary achievement; they only appeared when students were in secondary school. It seems unlikely that powerful impacts would emerge in secondary school when none were detectable at any point in elementary school. No one can fully explain what happened, but it is important to note that no one has replicated anything like what the Perry Preschool did in all the years since the program was implemented in 1962-1967.

With respect to matched study designs, which do sometimes find positive longitudinal effects, a likely explanation is that with preschool children, matching fails to adequately control for initial differences. Families that enroll their four-year-olds in preschool tend, on average, to be more positively oriented toward learning and more eager to promote their children’s academic success. Well-implemented matched designs in the elementary and secondary grades invariably control for prior achievement, and this usually does a good job of equalizing matched samples. With four-year-olds, however, early achievement or IQ tests are not very reliable or well-correlated with outcomes, so it is impossible to know how much matching has equalized the groups on key variables.

Preparing for a Journey

Laozi’s observation reminds us that any great accomplishment is composed of many small, simple activities. If we think of a student’s educational career as a journey, the saying fits. One grand intervention at one point in that journey may be necessary, but it is not sufficient to ensure the success of the journey. In the journey of education, it is surely important to begin with a positive experience, one that provides children with a positive orientation toward school, skills needed to get along with teachers and classmates, knowledge about how the world works, a love for books, stories, and drama, early mathematical ideas, and much more. This is the importance of preschool. Yet it is not enough. Major make-or-break objectives lie in the future. In the years after preschool, students must learn to read proficiently, they must learn basic concepts of mathematics, and they must continue to build social-emotional skills for the formal classroom setting. In the upper elementary grades, they must learn to use their reading and math skills to write effectively, and to learn science and social studies. Then they must make a successful transition to master the challenges of secondary school, leading to successful graduation and entry into valued careers or post-secondary education. Each of these accomplishments, along with many others, requires the best teaching possible, and each is as important and as difficult to achieve for every child as is success in preschool.

A journey of a thousand miles may begin with a single step, but what matters is how the traveler negotiates all the challenges between the first step and the last one. This is true of education. We need to find effective and replicable methods to maximize the possibility that every student will succeed at every stage of the learning process. This can be done, and every year our profession finds more and better ways to improve outcomes at every grade level, in every subject. Preschool is only the first of a series of opportunities to enable all children to reach challenging goals. An important step, to be sure, but not the whole journey.

Photo courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action.


Camilli, G., Vargas, S., Ryan, S., & Barnett, S. (2009). Meta-analysis of the effects of early education interventions on cognitive and social development. Teachers College Record, 112 (3), 579-620.

Chambers, B., Cheung, A., & Slavin, R.E. (2016). Literacy and language outcomes of comprehensive and developmental-constructivist approaches to early childhood education: A systematic review. Educational Research Review, 18, 88-111.

Claessens, A., & Garrett, R. (2014). The role of early childhood settings for 4-5 year old children in early academic skills and later achievement in Australia. Early Childhood Research Quarterly, 29 (4), 550-561.

Lipsey, M., Farran, D., & Durkin, K. (2018). Effects of the Tennessee Prekindergarten Program on children’s achievement and behavior through third grade. Early Childhood Research Quarterly, 45 (4), 155-176.

Melhuish, E., Belsky, J., & Leyland, R. (2010). The impact of Sure Start local programmes on five year olds and their families. London: Department for Education.

Puma, M., Bell, S., Cook, R., & Heid, C. (2010). Head Start impact study: Final report. Washington, DC: U.S. Department of Health and Human Services.

Schweinhart, L. J., & Weikart, D. P. (1997). Lasting differences: The High/Scope Preschool curriculum comparison study through age 23 (Monographs of the High/Scope Educational Research Foundation No. 12). Ypsilanti, MI: High/Scope Press.



Evidence-Based Reform and the Multi-Academy Trust

Recently, I was in England to visit Success for All (SFA) schools there. I saw two of the best SFA schools I’ve ever seen anywhere, Applegarth Primary School in Croydon, south of London, and Houldsworth Primary School in Sussex, southeast of London. Both are very high-poverty schools with histories of poor achievement, violence, and high staff turnover. Applegarth mostly serves the children of African immigrants, and Houldsworth mostly serves White students from very poor homes. Yet I saw every class in each school, and in each one children were highly engaged, excited, and learning like crazy. Both schools were once in the lowest one percent of achievement in England, yet both are now performing at or above national norms.

In my travels, I often see outstanding Success for All schools. However, in this case I learned about an important set of policies that goes beyond Success for All, but could have implications for evidence-based reform more broadly.


Both Applegarth and Houldsworth are in multi-academy trusts (MATs), the STEP Trust and the Unity Trust, respectively. Academies are much like charter schools in the U.S., and multi-academy trusts are organizations that run more than one academy. Academies are far more common in the U.K. than in the U.S., constituting 22% of primary (i.e., elementary) schools and 68% of secondary schools. There are 1,170 multi-academy trusts, managing more than 5,000 of Britain’s 32,000 schools, or 16%. A multi-academy trust may operate within a single local authority (school district), much as Success Academies does in New York City, or across many local authorities. Quite commonly, poorly performing schools in a local authority, or stand-alone academies, are offered to a successful and capable multi-academy trust, and these hand-overs explain much of the growth in multi-academy trusts in recent years.

What I saw in the STEP and Unity Trusts was something extraordinary. In each case, the exceptional schools I saw were serving as lead schools for the dissemination of Success for All. Staff in these schools had an explicit responsibility to train and mentor future principals, facilitators, and teachers, who spend a year at the lead school learning about SFA and their role in it, and then taking on their roles in a new SFA school elsewhere in the multi-academy trust. Over time, there are multiple lead schools, each of which takes responsibility to mentor new SFA schools other than their own. This cascading dissemination strategy, carried out in close partnership with the national SFA-UK non-profit organization, is likely to produce exceptional implementations.

I’m sure there must be problems with multi-academy trusts that I don’t know about, and in the absence of data on MATs throughout Britain, I would not take a position on them in general. But based on my limited experience with the STEP and Unity Trusts, this policy has particular potential as a means of disseminating very effective forms of programs proven effective in rigorous research.

First, multi-academy trusts have the opportunity and motivation to establish themselves as effective. Ordinary U.S. districts want to do well, of course, but they do not grow (or shrink) because of their success (or lack of it). In contrast, a multi-academy trust in the U.K. is more likely to seek out proven programs and implement them with care and competence, both to increase student success and to establish a “brand” based on effective use of proven programs. Both the STEP and Unity Trusts are building reputations for succeeding with difficult schools using methods known to be effective. Using cascading professional development and mentoring from established schools to new ones, a multi-academy trust can build both effectiveness and reputation.

Although the schools I saw were using Success for All, any multi-academy trust could use any proven program or programs to create positive outcomes and expand its reach and influence. As other multi-academy trusts see what the pioneers are accomplishing, they may decide to emulate them. One major advantage possessed by multi-academy trusts is that much in contrast to U.S. school districts, especially large, urban ones, multi-academy trusts are likely to remain under consistent leadership for many years. Leaders of multi-academy trusts, and their staff and supporters, are likely to have time to transform practices gradually over time, knowing that they have the stable leadership needed for long-term change.

There is no magic in school governance arrangements, and no guarantee that many multi-academy trusts will use the available opportunities to implement and perfect proven strategies. Yet by their nature, multi-academy trusts have the opportunity to make a substantial difference in the education provided to all students, especially those serving disadvantaged students. I look forward to watching plans unfold in the STEP and Unity Trusts, and to learn more about how the academy movement in the U.K. might provide a path toward widespread and thoughtful use of proven programs, benefiting very large numbers of students. And I’d love to see more U.S. charter networks and traditional school districts use cascading replication to scale up proven, whole-school approaches likely to improve outcomes in disadvantaged schools.

Photo credit: Kindermel [CC BY-SA 4.0]


Achieving Audacious Goals in Education: Amundsen and the Fram

On a recent trip to Norway, I visited the Fram Museum in Oslo. The Fram was Roald Amundsen’s ship, used to transport a small crew toward the South Pole in 1911. The museum is built around the Fram itself, and visitors can go aboard this amazing ship, surrounded by information and displays about polar exploration. What was most impressive about the Fram is the meticulous attention to detail in every aspect of the expedition. Amundsen had undertaken other trips to the polar seas to prepare for this one, and had carefully studied the experiences of other polar explorers. The ship’s hull was specially built to withstand crushing by shifting polar ice. He carried many huskies to pull sleds over the ice, and trained them to work in teams. Every possible problem was carefully anticipated in light of experience, and exact amounts of food for men and dogs were allocated and stored. Amundsen said that forgetting “a single trouser button” could doom the effort. As it unfolded, everything worked as anticipated, and all the men and dogs returned safely after reaching the South Pole.

From At the South Pole by Roald Amundsen, 1913 [Public domain]
The story of Amundsen and the Fram is an illustration of how to overcome major obstacles to achieve audacious goals. I’d like to build on it to return to a topic I’ve touched on in two previous blogs. The audacious goal: Overcoming the substantial gap in elementary reading achievement between students who qualify for free lunch and those who do not, between African American and White students, and between Hispanic and non-Hispanic students. According to the National Assessment of Educational Progress (NAEP), each of these gaps is about one half of a standard deviation, also known as an effect size of +0.50. This is a very large gap, but it has been overcome in a very small number of intensive programs. These programs were able to increase the achievement of disadvantaged students by an effect size of more than +0.50, but few were able to reproduce these gains under normal circumstances. Our goal is to enable thousands of ordinary schools serving disadvantaged students to achieve such outcomes, at a cost of no more than 5% beyond ordinary per-pupil costs.
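For readers unfamiliar with the metric: an effect size here is a standardized mean difference (often called Cohen’s d), the gap between two group means divided by the pooled standard deviation. Here is a minimal sketch, with made-up scores chosen so the gap works out to exactly +0.50:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    def mean(xs):
        return sum(xs) / len(xs)
    def svar(xs):  # sample variance (n - 1 denominator)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    nt, nc = len(treatment), len(control)
    pooled_sd = math.sqrt(((nt - 1) * svar(treatment) + (nc - 1) * svar(control))
                          / (nt + nc - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical reading scores: each group has SD 10, and the means differ by 5.
treatment = [95, 105, 115]
control = [90, 100, 110]
print(f"d = {cohens_d(treatment, control):+.2f}")  # → d = +0.50
```

On this scale, the NAEP gaps described above amount to roughly half a standard deviation separating the average scores of the two groups being compared.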

Educational Reform and Audacious Goals

Researchers have long been creating and evaluating many different approaches to improving reading achievement. This is necessary in the research and development process to find “what works” and build up from there. However, each individual program or practice has a modest effect on key outcomes, and we rarely combine proven programs to achieve an effect large enough to, for example, overcome the achievement gap. This is not what Amundsen, or the Wright Brothers, or the worldwide team that eradicated smallpox did. Instead, they set audacious goals and kept at them systematically, using what worked, until the goals were achieved.

I would argue that we should and could do the same in education. The reading achievement gap is the largest problem of educational practice and policy in the U.S. We need to use everything we know how to do to solve it. This means stating in advance that our goal is to find strategies capable of eliminating reading gaps at scale, and refusing to declare victory until this goal is achieved. We need to establish that the goal can be achieved, by ordinary teachers and principals in ordinary schools serving disadvantaged students.

Tutoring Our Way to the Goal

In a previous blog I proposed that the goal of +0.50 could be reached by providing disadvantaged, low-achieving students tutoring in small groups or, when necessary, one-to-one. As I argued there and elsewhere, there is no reading intervention as effective as tutoring. Recent reviews of research have found that well-qualified teaching assistants using proven methods can achieve outcomes as good as those achieved by certified teachers working as tutors, thereby making tutoring much less expensive and more replicable (Inns et al., 2019). Providing schools with significant numbers of well-trained tutors is one likely means of reaching ES=+0.50 for disadvantaged students. Inns et al. (2019) found an average effect size of +0.38 for tutoring by teaching assistants, but several programs had effect sizes of +0.40 to +0.47. This is not +0.50, but it is within striking distance of the goal. However, each school would need multiple tutors in order to provide high-quality tutoring to most students, to extend the known positive effects of tutoring to the whole school.

Combining Intensive Tutoring With Success for All

Tutoring may be sufficient by itself, but research on tutoring has rarely used tutoring schoolwide, to benefit all students in high-poverty schools. It may be more effective to combine widespread tutoring for students who most need it with other proven strategies designed for the whole school, rather than simply extending a program designed for individuals and small groups. One logical strategy to reach the goal of +0.50 in reading might be to combine intensive tutoring with our Success for All whole-school reform model.

Success for All adds to intensive tutoring in several ways. It provides teachers with professional development on proven reading strategies, as well as cooperative learning and classroom management strategies at all levels. Strengthening core reading instruction reduces the number of children at great risk, and even for students who are receiving tutoring, it provides a setting in which students can apply and extend their skills. For students who do not need tutoring, Success for All provides acceleration. In high-poverty schools, students who are meeting reading standards are likely to still be performing below their potential, and improving instruction for all is likely to help these students excel.

Success for All was created in the late 1980s in an attempt to achieve a goal similar to the +0.50 challenge. In its first major evaluation, a matched study in six high-poverty Baltimore elementary schools, Success for All achieved a schoolwide reading effect size of at least +0.50 in grades 1-5 on individually administered reading measures. For students in the lowest 25% of the sample at pretest, the effect size averaged +0.75 (Madden et al., 1993). That experiment provided two to six certified teacher tutors per school, who worked one to one with the lowest-achieving first and second graders. The tutors supplemented a detailed reading program, which used cooperative learning, phonics, proven classroom management methods, parent involvement, frequent assessment, distributed leadership, and other elements (as Success for All still does).

An independent follow-up assessment found that the effect maintained to the eighth grade, and also showed a halving of retentions in grade and a halving of assignments to special education, compared to the control group (Borman & Hewes, 2002). Schools using Success for All since that time have rarely been able to afford so many tutors, instead averaging one or two tutors. Many schools using SFA have not been able to afford even one tutor. Still, across 28 qualifying studies, mostly by third parties, the Success for All effect size has averaged +0.27 (Cheung et al., in press). This is impressive, but it is not +0.50. For the lowest achievers, the mean effect size was +0.62, but again, our goal is +0.50 for all disadvantaged students, not just the lowest achievers.

Over a period of years, could schools using Success for All with five or more teaching assistant tutors reach the +0.50 goal? I’m certain of it. Could we go even further, perhaps creating a similar approach for secondary schools or adding in an emphasis on mathematics? That would be the next frontier.

The Policy Importance of +0.50

If we can routinely achieve an effect size of +0.50 in reading in most Title I schools, this would provide a real challenge for policy makers. Many policy makers argue that money does not make much difference in education, or that housing, employment, and other basic economic improvements are needed before major improvements in the education of disadvantaged students will be possible. But what if it became widely known that outcomes in high-poverty schools could be reliably and substantially improved at a modest cost relative to the benefits? Policy makers would then, one hopes, focus on finding ways to provide the needed resources, confident in the outcomes.

As Amundsen knew, difficult goals can be attained with meticulous planning and high-quality implementation. Every element of his expedition had been tested extensively in real polar conditions, and had been found to be effective and practical. We would propose taking a similar path to universal success in reading. Each component of a practical plan to reach an effect size of +0.50 or more must be proven effective in schools serving many disadvantaged students. By combining proven approaches, we can add enough to the reading achievement of students in disadvantaged schools to enable them to perform as well as their middle-class peers. It just takes an audacious goal and the commitment and resources to accomplish it.


Borman, G., & Hewes, G. (2002). Long-term effects and cost effectiveness of Success for All. Educational Evaluation and Policy Analysis, 24 (2), 243-266.

Cheung, A., Xie, C., Zhang, T., & Slavin, R. E. (in press). Success for All: A quantitative synthesis of evaluations. Education Research Review.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2019). A synthesis of quantitative research on programs for struggling readers in elementary schools. Manuscript submitted for publication.

Madden, N. A., Slavin, R. E., Karweit, N. L., Dolan, L., & Wasik, B. (1993). Success for All: Longitudinal effects of a schoolwide elementary restructuring program. American Educational Research Journal, 30, 123-148.

Madden, N. A., & Slavin, R. E. (2017). Evaluations of technology-assisted small-group tutoring for struggling readers. Reading & Writing Quarterly, 1-8.


On Replicability: Why We Don’t Celebrate Viking Day

I was recently in Oslo, Norway’s capital, and visited a wonderful museum displaying three Viking ships that had been buried with important people. The museum had all sorts of displays focused on the amazing exploits of Viking ships, always including the Viking landings in Newfoundland, about 500 years before Columbus. Since the 1960s, most people have known that Vikings, not Columbus, were the first Europeans to land in America. So why do we celebrate Columbus Day, not Viking Day?

Given the bloodthirsty actions of Columbus, easily rivaling those of the Vikings, we surely don’t prefer one to the other based on their charming personalities. Instead, we celebrate Columbus Day because what Columbus did was far more important. The Vikings knew how to get back to Newfoundland, but they were secretive about it. Columbus was eager to publicize and repeat his discovery. It was this focus on replication that opened the door to regular exchanges. The Vikings brought back salted cod. Columbus brought back a new world.

In educational research, academics often imagine that if they establish new theories or demonstrate new methods on a small scale, and then publish their results in reputable journals, their job is done. Call this the Viking model: they got what they wanted (promotions or salt cod), and who cares if ordinary people found out about it? Even if the Vikings had published their findings in the Viking Journal of Exploration, this would have had roughly the same effect as educational researchers publishing in their own research journals.

Columbus, in contrast, told everyone about his voyages, and very publicly repeated and extended them. His brutal leadership ended with him being sent back to Spain in chains, but his discoveries had resounding impacts that long outlived him.


Educational researchers only want to do good, but they are unlikely to have any impact at all unless they can make their ideas useful to educators. Many educational researchers would love to turn their ideas into replicable programs, evaluate these programs in schools, and, if they are found to be effective, disseminate them broadly. However, resources for the early stages of development and research are scarce. Yes, the Institute of Education Sciences (IES) and Education Innovation Research (EIR) fund a lot of development projects, and Small Business Innovation Research (SBIR) provides small grants for this purpose to for-profit companies. Yet these funders support only a tiny proportion of the proposals they receive. In England, the Education Endowment Foundation (EEF) spends a lot on randomized evaluations of promising programs, but very little on development or early-stage research.

Innovations funded by government or other sources very rarely end up being evaluated in large experiments; fewer still are found to be effective, and vanishingly few eventually enter widespread use. The exceptions are generally programs created by large for-profit companies, large and entrepreneurial non-profits, or other entities with proven capacity to develop, evaluate, support, and disseminate programs at scale. Even the most brilliant developers and researchers rarely have the interest, time, capital, business expertise, or infrastructure to nurture a practical and effective program through all the steps necessary to bring it to market. As a result, most educational products introduced at scale come from commercial publishers or software companies, which have the capital and expertise to create and disseminate educational programs, but which serve a market that primarily wants attractive, inexpensive, easy-to-use materials, software, and professional development, and is not (yet) willing to pay for programs proven to be effective.
I discussed this problem in a recent blog on technology, but the same dynamics apply to all innovations, tech and non-tech alike.

How Government Can Promote Proven, Replicable Programs

There is an old saying that Columbus personified the spirit of research. He didn’t know where he was going, he didn’t know where he was when he got there, and he did it all on government funding. The relevant part of this is the government funding. In Columbus’ time, only royalty could afford to support his voyage, and his grant from Queen Isabella was essential to his success. Yet Isabella was not interested in pure research. She was hoping that Columbus might open rich trade routes to the (east) Indies or China, or might find gold or silver, or might acquire valuable new lands for the crown (all of these things did eventually happen). Educational research, development, and dissemination face a similar situation. Because education is virtually a government monopoly, only government is capable of sustained, sizable funding of research, development, and dissemination, and only the U.S. government has the acknowledged responsibility to improve outcomes for the 50 million American children ages 4-18 in its care. So what can government do to accelerate the research-development-dissemination process?

  1. Contract with “seed bed” organizations capable of identifying and supporting innovators with ideas likely to make a difference in student learning. These organizations might be rewarded, in part, based on the number of proven programs they are able to help create, support, and (if effective) ultimately disseminate.
  2. Contract with independent third-party evaluators capable of doing rigorous evaluations of promising programs. These organizations would evaluate promising programs from any source, not just from seed bed companies, as they do now in IES, EIR, and EEF grants.
  3. Provide funding for innovators with demonstrated capacity to create programs likely to be effective, and funding to disseminate those programs if they are proven effective. Innovators may also contract with “seed bed” organizations for help with development and dissemination.
  4. Provide information and incentive funding to schools to encourage them to adopt proven programs, as described in a recent blog on technology.  Incentives should be available on a competitive basis to a broad set of schools, such as all Title I schools, to engage many schools in adoption of proven programs.

Evidence-based reform in education has made considerable progress in the past 15 years, both in finding positive examples that are in use today and in finding out what is not likely to make substantial differences. It is time for this movement to go beyond its early achievements to enter a new phase of professionalism, in which collaborations among developers, researchers, and disseminators can sustain a much faster and more reliable process of research, development, and dissemination. It’s time to move beyond the Viking stage of exploration to embrace the good parts of the collaboration between Columbus and Queen Isabella that made a substantial and lasting change in the whole world.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Proven Programs Can’t Replicate, Just Like Bees Can’t Fly

In the 1930s, scientists in France announced that, based on principles of aerodynamics, bees could not fly. The only evidence to the contrary was observational, atheoretical, quasi-scientific reports that bees do in fact fly.

The widely known story about bees’ ability to fly came up in a discussion about the dissemination of proven programs in education. Many education researchers and policy makers maintain that the research-development-evaluation-dissemination sequence relied upon for decades to create better ways to educate children has failed. Many observers note that few practitioners seek out research when they consider selecting programs intended to improve student learning or other important outcomes. Research-practice partnerships, in which researchers work with local educators to solve problems of importance to those educators, are largely based on the idea that educators are unlikely to use programs or practices unless they personally were involved in creating them. Opponents of evidence-based education policies invariably complain that because schools are so diverse, they are unlikely to adopt programs developed and researched elsewhere, and that this is why few research-based programs are widely disseminated.

Dissemination of proven programs is in fact difficult, and there is little evidence on how proven programs might best be disseminated. Recognizing these and many other problems, however, it is important to note one small fact in all this doom and gloom: Proven programs are disseminated. Of the 113 reading and mathematics programs that have met the stringent standards of Evidence for ESSA, most have been disseminated to dozens, hundreds, or thousands of schools. In fact, we do not accept programs that are not in active dissemination (because it is not terribly useful for educators, our target audience, to find out that a proven program is no longer available, or never was). Some (generally newer) programs may operate in only a few schools, but they intend to grow. But most programs, supported by non-profit or commercial organizations, are widely disseminated.

Examples of elementary reading programs with strong, moderate, or promising evidence of effectiveness (by ESSA standards) and wide dissemination include Reading Recovery, Success for All, Sound Partners, Lindamood, Targeted Reading Intervention, QuickReads, SMART, Reading Plus, Spell Read, Acuity, Corrective Reading, Reading Rescue, SuperKids, and REACH. For middle and high schools, effective and disseminated reading programs include SIM, Read180, Reading Apprenticeship, Comprehension Circuit Training, BARR, ITSS, Passport Reading Journeys, Expository Reading and Writing Course, Talent Development, Collaborative Strategic Reading, Every Classroom Every Day, and Word Generation.

In elementary math, effective and disseminated programs include Math in Focus, Math Expressions, Acuity, FocusMath, Math Recovery, Time to Know, Jump Math, ST Math, and Saxon Math. Middle/high school programs include ASSISTments, Every Classroom Every Day, eMINTS, Carnegie Learning, Core-Plus, and Larson Pre-Algebra.

These are programs that I know have strong, moderate, or promising evidence and are widely disseminated. There may be others I do not know about.

I hope this list convinces any doubters that proven programs can be disseminated. In light of this list, how can it be that so many educators, researchers, and policy makers think that proven educational programs cannot be disseminated?

One answer may be that dissemination of educational programs and practices almost never happens the way many educational researchers wish it did. Researchers put enormous energy into doing research and publishing their results in top journals. Then they are disappointed to find out that publishing in a research journal usually has no impact whatever on practice. They then often try to make their findings more accessible by writing them in plain English in more practitioner-oriented journals. Still, this usually has little or no impact on dissemination.

But writing in journals is rarely how serious dissemination happens. The way it does happen is that the developer or an expert partner (such as a publisher or software company) takes the research ideas and makes them into a program, one that solves a problem that is important to educators, is attractive, professional, and complete, and is not too expensive. Effective programs almost always provide extensive professional development, materials, and software. Programs that provide excellent, appealing, effective professional development, materials, and software become likely candidates for dissemination. I’d guess that virtually every one of the programs I listed earlier took a great idea and made it into an appealing program.

A depressing part of this process is that programs that have no evidence of effectiveness, or even have evidence of ineffectiveness, follow the same dissemination process as proven programs do. Until the 2015 ESSA evidence standards appeared, evidence had a very limited role in the whole development-dissemination process. So far, ESSA has pointed more of a spotlight on evidence of effectiveness, but it is still the case that having strong evidence of effectiveness does not give a program a decisive advantage over programs lacking positive evidence. Regardless of their actual evidence bases, most programs today claim to be “evidence-based,” or at least “evidence-informed,” so users can easily be fooled.

However, this situation is changing. First, the government itself is identifying programs with evidence of effectiveness, and may publicize them. Government initiatives such as Investing in Innovation (i3; now called EIR) actually provide funding to proven programs to enable them to begin to scale up. The What Works Clearinghouse, Evidence for ESSA, and other sources provide easy access to information on proven programs. In other words, government is starting to intervene to nudge the longstanding dissemination process toward programs proven to work.

Back to the bees: the 1930s conclusion that bees should not be able to fly was overturned in 2005, when American researchers observed what bees actually do when they fly and discovered that bees do not flap their wings like birds. Instead, they push air forward and back with their wings, creating a low-pressure zone above them. This pressure difference keeps them in the air.

In the same way, educational researchers might stop theorizing about why disseminating proven programs is impossible and instead observe the many programs that have actually done it. Then we can design government policies that further assist proven programs to build the capital and organizational capacity to disseminate effectively, and that provide incentives and assistance to help schools in need of proven programs learn about and adopt them.

Perhaps we could call this Plan Bee.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

The Fabulous 20%: Programs Proven Effective in Rigorous Research

Photo courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action

Over the past 15 years, governments in the U.S. and U.K. have put quite a lot of money (by education standards) into rigorous research on promising programs in PK-12 instruction. Rigorous research usually means studies in which schools, teachers, or students are assigned at random to experimental or control conditions and then pre- and posttested on valid measures independent of the developers. In the U.S., the Institute of Education Sciences (IES) and Investing in Innovation (i3), now called Education Innovation Research (EIR), have led this strategy, and in the U.K., it’s the Education Endowment Foundation (EEF). Enough research has now been done to enable us to begin to see important patterns in the findings.

One finding that is causing some distress is that the number of studies showing significant positive effects is modest. Across all funding programs, the proportion of studies reporting positive, significant findings averages around 20%. It is important to note that most funded projects evaluate programs that have been newly developed and not previously evaluated. The “early phase” or “development” category of i3/EIR is a good example; it provides small grants intended to fund the creation or refinement of new programs, so it is not so surprising that these studies are less likely to find positive outcomes. However, even programs that have been successfully evaluated in the past often do not replicate their positive findings in the large, rigorous evaluations required at the higher levels of i3/EIR and IES, and in all full-scale EEF studies. The problem is that the earlier positive outcomes may have been found in smaller studies that permitted hard-to-replicate levels of training or monitoring by program developers, or that used measures made by developers or researchers, or whose other features made it easier to find positive outcomes.

The modest percentage of positive findings has caused some observers to question the value of all these rigorous studies. They wonder if this is a worthwhile investment of tax dollars.

One answer to this concern is to point out that while the percentage of all studies finding positive outcomes is modest, so many have been funded that the number of proven programs is growing rapidly. In our Evidence for ESSA website, we have found 111 programs that meet ESSA’s Strong, Moderate, or Promising standards in elementary and secondary reading or math. That’s a lot of proven programs, especially in elementary reading, where there are 62.

The situation is a bit like that in medicine. A very small percentage of rigorous studies of medicines or other treatments show positive effects. Yet so many are done that each year, new proven treatments for all sorts of diseases enter widespread use in medical practice. This dynamic is one explanation for the steady increases in life expectancy taking place throughout the world.

Further, high quality studies that fail to find positive outcomes also contribute to the science and practice of education. Some programs do not meet standards for statistical significance, but nevertheless they show promise overall or with particular subgroups. Programs that do not find clear positive outcomes but closely resemble other programs that do are another category worth further attention. Funders can take this into account in deciding whether to fund another study of programs that “just missed.”

On the other hand, there are programs that show profoundly zero impact, in categories that never or almost never find positive outcomes. I reported recently on benchmark assessments, with an overall effect size of -0.01 across 10 studies. This might be a good candidate for giving up, unless someone has a markedly different approach unlike those that have failed so often. Another unpromising category is textbooks. Textbooks may be necessary, but the idea that replacing one textbook with another will improve outcomes has failed many, many times. This set of negative results can be helpful to schools, enabling them to focus their resources on programs that do work. And giving up on categories of studies that hardly ever work would significantly reduce the 80% failure rate, and save money better spent on evaluating more promising approaches.

The findings of many studies of replicable programs can also reveal patterns that should help current or future developers create programs that meet modern standards of evidence. There are a few patterns I’ve seen across many programs and studies:

  1. I think developers (and funders) vastly underestimate the amount and quality of professional development needed to bring about significant change in teacher behaviors and student outcomes. Strong professional development requires top-quality initial training, including simulations and/or videos to show teachers how a program works, not just tell them. Effective PD almost always includes coaching visits to classrooms to give teachers feedback and new ideas. If teachers fall back into their usual routines due to insufficient training and follow-up coaching, why would anyone expect their students’ learning to improve in comparison to the outcomes they’ve always gotten? Adequate professional development can be expensive, but this cost is highly worthwhile if it improves outcomes.
  2. In successful programs, professional development focuses on classroom practices, not solely on improving teachers’ knowledge of curriculum or curriculum-specific pedagogy. Teachers standing at the front of the class using the same forms of teaching they’ve always used but doing it with more up-to-date or better-aligned content are not likely to significantly improve student learning. In contrast, professional development focused on tutoring, cooperative learning, and classroom management has a far better track record.
  3. Programs that focus on motivation and relationships between teachers and students and among students are more likely to enhance achievement than programs that focus on cognitive growth alone. Successful teaching focuses on students’ hearts and spirits, not just their minds.
  4. You can’t beat tutoring. Few approaches other than one-to-one or one-to-small group tutoring have consistent powerful impacts. There is much to learn about how to make tutoring maximally effective and cost-effective, but let’s start with the most effective and cost-effective tutoring models we have now and build out from there.
  5. Many, perhaps most failed program evaluations involve approaches with great potential (or great success) in commercial applications. This is one reason that so many evaluations fail; they assess textbooks or benchmark assessments or ordinary computer assisted instruction approaches. These often involve little professional development or follow-up, and they may not make important changes in what teachers do. Real progress in evidence-based reform will begin when publishers and software developers come to believe that only proven programs will succeed in the marketplace. When that happens, vast non-governmental resources will be devoted to development, evaluation, and dissemination of well-implemented forms of proven programs. Medicine was once dominated by the equivalent of Dr. Good’s Universal Elixir (mostly good-tasting alcohol and sugar). Very cheap, widely marketed, and popular, but utterly useless. However, as government began to demand evidence for medical claims, Dr. Good gave way to Dr. Proven.

Because of long-established policies and practices that have transformed medicine, agriculture, technology, and other fields, we know exactly what has to be done. IES, i3/EIR, and EEF are doing it, and showing great progress. This is not the time to get cold feet over the 80% failure rate. Instead, it is time to celebrate the fabulous 20% – programs that have succeeded in rigorous evaluations. Then we need to increase investments in evaluations of the most promising approaches.



This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


The holy grail of science is replication. If a finding cannot be repeated, then it did not happen in the first place. There is a reason that the humor journal in the hard sciences is called the Journal of Irreproducible Results. For scientists, results that are irreproducible are inherently laughable, therefore funny. In many hard science experiments, replication is pretty much guaranteed. If you heat an iron bar, it gets longer. If you cross two parents who each carry one copy of the same recessive gene, one quarter of their offspring will express the recessive trait (think blue eyes).
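That recessive-trait arithmetic can be checked by enumerating the Punnett square; here is a minimal sketch (the allele letters are just illustrative labels):

```python
from itertools import product

# Two carrier parents, each with one dominant (B) and one recessive (b) allele
parent1 = parent2 = ["B", "b"]

# Every equally likely combination of one allele from each parent
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]

# Only the bb genotype expresses the recessive trait (e.g., blue eyes)
recessive = [g for g in offspring if g == "bb"]
print(offspring, len(recessive) / len(offspring))
# -> ['BB', 'Bb', 'Bb', 'bb'] 0.25
```

The four combinations are equally probable, so the recessive genotype appears in exactly one quarter of the offspring, which is why the lab-science replication is "pretty much guaranteed."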


In educational research, we care about replication just as much as our colleagues in the lab coats across campus. However, when we’re talking about evaluating instructional programs and practices, replication is a lot harder, because students and schools differ. Positive outcomes obtained in one experiment may or may not replicate in a second trial. Sometimes this is because the first experiment had features known to contribute to bias: small sample sizes, brief study durations, extraordinary amounts of resources or expert time for the experimental schools or classes, measures made by the developers or researchers (or otherwise overaligned with the experimental group but not the control group), or matched rather than randomized assignment to conditions. All of these can contribute to successful-appearing outcomes in a first experiment. Second or third experiments are more likely to be larger, longer, and more stringent than the first study, and therefore may not replicate it. Even when the first study has none of these problems, it may not replicate because of differences in the samples of schools, teachers, or students, or for other, perhaps unknowable reasons.

A change in the conditions of education may also cause a failure to replicate. Our Success for All whole-school reform model has been found to be effective many times, mostly by third-party evaluators. However, Success for All has always specified a full-time facilitator and at least one tutor for each school. An MDRC i3 evaluation happened to fall in the middle of the recession, and schools, which were struggling to afford classroom teachers, could not afford facilitators or tutors. The results were still positive on some measures, especially for low achievers, but the effect sizes were less than half of what others had found in many studies. Stuff happens.

Replication has taken on more importance recently because the ESSA evidence standards require only a single positive study. To meet the strong, moderate, or promising standards, programs must have at least one “well-designed and well-implemented” study using a randomized (strong), matched (moderate), or correlational (promising) design and finding significantly positive outcomes. Based on the “well-designed and well-implemented” language, our Evidence for ESSA website requires features of experiments similar to those required by the What Works Clearinghouse (WWC). These requirements make it difficult to be approved, but they remove many of the experimental design features that typically cause first studies to greatly overstate program impacts: small size, brief duration, overinvolved experimenters, and developer-made measures. They put (less rigorous) matched and correlational studies in lower categories. So one study that meets ESSA or Evidence for ESSA requirements is at least likely to be a very good study. But many researchers have expressed discomfort with the idea that a single study could qualify a program for one of the top ESSA categories, especially if (as sometimes happens) there is one study with a positive outcome and many with zero or at least nonsignificant outcomes.

The pragmatic problem is that if ESSA had required even two studies showing positive outcomes, this would have wiped out a very large proportion of current programs. If research continues to identify effective programs, it should be only a matter of time before ESSA (or its successors) requires more than one study with a positive outcome.

However, in the current circumstance, there is a way researchers and educators might at least estimate the replicability of a given program when it has only a single study with a significant positive outcome. This would involve looking at the findings for the entire genre of programs it belongs to. The logic here is that if a program has only one ESSA-qualifying study, but it closely resembles other programs that also have positive outcomes, that program should be taken a lot more seriously than a program whose one positive outcome differs considerably from the outcomes of very similar programs.

As one example, there is much evidence from many studies by many researchers indicating positive effects of one-to-one and one-to-small group tutoring, in reading and mathematics. If a tutoring program has only one study, but this one study has significant positive findings, I’d say thumbs up. I’d say the same about cooperative learning approaches, classroom management strategies using behavioral principles, and many others, where a whole category of programs has had positive outcomes.

In contrast, if a program has a single positive outcome and there are few if any similar approaches that obtained positive outcomes, I’d be much more cautious. An example might be textbooks in mathematics, which rarely make any difference because control groups are also likely to be using textbooks, and textbooks considerably resemble each other. In our recent elementary mathematics review (Pellegrini, Lake, Inns, & Slavin, 2018), only one textbook program available in the U.S. had positive outcomes (out of 16 studies). As another example, there have been several large randomized evaluations of the use of interim assessments. Only one of them found positive outcomes. I’d be very cautious about putting much faith in benchmark assessments based on this single anomalous finding.
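Part of the caution about a single anomalous positive result is simple arithmetic: even when a program truly does nothing, each independent evaluation has roughly a 5% chance of a "significant" false positive at the conventional threshold, so across several evaluations one positive finding is unremarkable. A quick sketch (the study counts are illustrative, not drawn from any specific review):

```python
# Probability that at least one of k truly null, independent studies
# comes up "significant" at the conventional alpha = 0.05 level
alpha = 0.05
for k in (1, 3, 5, 10):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(k, round(p_any_false_positive, 2))
# With 10 null studies, the chance of at least one false positive is about 40%.
```

This is why one positive result among several otherwise null evaluations of similar programs deserves skepticism rather than celebration.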

Looking for findings from similar studies is facilitated by the reviews of research we make available, which are organized by categories of programs. Looking for findings from similar programs won’t help with the ESSA law, which often determines its ratings based on the findings of a single study, regardless of other findings on the same program or similar programs. However, for educators and researchers who really want to find out what works, checking similar programs is not quite as good as finding direct replication of positive findings on the same program, but it is perhaps, as we like to say, close enough for social science.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

“But It Worked in the Lab!” How Lab Research Misleads Educators

In researching John Hattie’s meta-meta-analyses, and digging into the original studies, I discovered one underlying factor that more than anything explains why he consistently comes up with greatly inflated effect sizes: most studies in the meta-analyses that he synthesizes are brief, small, artificial lab studies. And lab studies produce very large effect sizes that have little if any relevance to classroom practice.

This discovery reminds me of one of the oldest science jokes in existence: (One scientist to another): “Your treatment worked very well in practice, but how will it work in the lab?” (Or “…in theory?”)


The point of the joke, of course, is to poke fun at scientists more interested in theory than in practical impacts on real problems. Personally, I have great respect for theory and lab studies. My very first publication as a psychology undergraduate involved an experiment on rats.

Now, however, I work in a rapidly growing field that applies scientific methods to the study and improvement of classroom practice.  In our field, theory also has an important role. But lab studies?  Not so much.

A lab study in education is, in my view, any experiment that tests a treatment so brief, so small, or so artificial that it could never be used all year. Also, an evaluation of any treatment that could never be replicated, such as a technology program in which a graduate student is standing by every four students every day of the experiment, or a tutoring program in which the study author or his or her students provide the tutoring, might be considered a lab study, even if it went on for several months.

Our field exists to try to find practical solutions to practical problems in an applied discipline.  Lab studies have little importance in this process, because they are designed to eliminate all factors other than the variables of interest. A one-hour study in which children are asked to do some task under very constrained circumstances may produce very interesting findings, but cannot recommend practices for real teachers in real classrooms.  Findings of lab studies may suggest practical treatments, but by themselves they never, ever validate practices for classroom use.

Lab studies are almost invariably doomed to success. Their conditions are carefully set up to support a given theory. Because they are small, brief, and highly controlled, they produce huge effect sizes. (Because they are relatively easy and inexpensive to do, it is also very easy to discard them if they do not work out, contributing to the universally reported tendency of studies appearing in published sources to report much higher effects than reports in unpublished sources).  Lab studies are so common not only because researchers believe in them, but also because they are easy and inexpensive to do, while meaningful field experiments are difficult and expensive.   Need a publication?  Randomly assign your college sophomores to two artificial treatments and set up an experiment that cannot fail to show significant differences.  Need a dissertation topic?  Do the same in your third-grade class, or in your friend’s tenth grade English class.  Working with some undergraduates, we once did three lab studies in a single day. All were published. As with my own sophomore rat study, lab experiments are a good opportunity to learn to do research.  But that does not make them relevant to practice, even if they happen to take place in a school building.
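The "doomed to success" dynamic can be illustrated with a quick simulation (the sample sizes and cutoff below are illustrative, not taken from any real meta-analysis): run many tiny studies of a treatment with zero true effect, write up only the ones that look significant, and the "published" average effect size comes out large.

```python
import random
import statistics

random.seed(1)

def effect_size(treat, ctrl):
    """Standardized mean difference (a rough Cohen's d)."""
    sd = statistics.pstdev(treat + ctrl)  # simple stand-in for the pooled SD
    return (statistics.mean(treat) - statistics.mean(ctrl)) / sd

published = []
for _ in range(2000):  # many small lab studies; the true effect is exactly zero
    treat = [random.gauss(0, 1) for _ in range(10)]  # n = 10 per group
    ctrl = [random.gauss(0, 1) for _ in range(10)]
    d = effect_size(treat, ctrl)
    if d > 0.9:  # roughly the cutoff for significance at this tiny sample size
        published.append(d)  # only the "successes" get written up

# True effect: 0.00. Published average: large and positive.
print(len(published), round(statistics.mean(published), 2))
```

With no true effect at all, a small fraction of the 2,000 tiny studies clear the bar by chance, and averaging only those yields an effect size above 1.0. A meta-analysis built on such studies inherits that inflation.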

By doing meta-analyses, or meta-meta-analyses, Hattie and others who do similar reviews obscure the fact that many and usually most of the studies they include are very brief, very small, and very artificial, and therefore produce very inflated effect sizes.  They do this by covering over the relevant information with numbers and statistics rather than information on individual studies, and by including such large numbers of studies that no one wants to dig deeper into them.  In Hattie’s case, he claims that Visible Learning meta-meta-analyses contain 52,637 individual studies.  Who wants to read 52,637 individual studies, only to find out that most are lab studies and have no direct bearing on classroom practice?  It is difficult for readers to do anything but assume that the 52,637 studies must have taken place in real classrooms, and achieved real outcomes over meaningful periods of time.  But in fact, the few that did this are overwhelmed by the thousands of lab studies that did not.

Educators have a right to data that are meaningful for the practice of education. Anyone who recommends practices or programs for educators to use needs to be open about where that evidence comes from, so educators can judge for themselves whether one-hour or one-week studies under artificial conditions tell them anything about how they should teach. I think the question answers itself.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

What Kinds of Studies Are Likely to Replicate?


In the hard sciences, there is a publication called the Journal of Irreproducible Results.  It really has nothing to do with replication of experiments, but is a humor journal by and for scientists.  The reason I bring it up is that to chemists and biologists and astronomers and physicists, for example, an inability to replicate an experiment is a sure indication that the original experiment was wrong.  To the scientific mind, a Journal of Irreproducible Results is inherently funny, because it is a journal of nonsense.

Replication, the ability to repeat an experiment and get a similar result, is the hallmark of a mature science. Sad to say, replication is rare in educational research, which says a lot about our immaturity as a science. For example, in the What Works Clearinghouse, about half of programs across all topics are represented by a single evaluation. When there are two or more, the results are often very different. Relatively recent funding initiatives, especially studies supported by Investing in Innovation (i3) and the Institute of Education Sciences (IES), and targeted initiatives such as Striving Readers (secondary reading) and the Preschool Curriculum Evaluation Research (PCER), have added a great deal in this regard. They have funded many large-scale, randomized, very high-quality studies of all sorts of programs, and many of these are themselves replications, or they provide a good basis for replications later. As my colleagues and I have done many reviews of research in every area of education, pre-kindergarten to grade 12, we have gained a good intuition about what kinds of studies are likely to replicate and what kinds are less likely to.

First, let me define in more detail what I mean by “replication.” There is no value in replicating biased studies, which may well consistently find the same biased results, as when both the original studies and the replication studies use the same researcher- or developer-made outcome measures that are slanted toward the content the experimental group experienced but not what the control group experienced.

Instead, I’d consider a successful replication one that shows positive outcomes both in the original studies and in at least one large-scale, rigorous replication. One obvious way to increase the chances that a program producing a positive outcome in one or more initial studies will succeed in such a rigorous replication is to use a similar, equally rigorous evaluation design in the first place. I think many treatments that fail to replicate are ones that used weak methods in the original studies. In particular, small studies tend to produce greatly inflated effect sizes, which are unlikely to replicate in larger evaluations.
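The link between small samples and inflated effect sizes is simple sampling arithmetic, and a toy simulation makes it concrete. This sketch is my own illustration, not from any study cited here: the true effect (0.20 standard deviations), the “worth publicizing” cutoff, and the sample sizes are all assumptions chosen for the example.

```python
import random
import statistics

random.seed(1)

def simulated_study(n, true_effect=0.2):
    """Simulate one two-group study with n students per group.

    Returns the observed standardized effect size (roughly Cohen's d):
    the treatment-control mean difference divided by the overall SD.
    """
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    sd = statistics.pstdev(treatment + control)  # simple pooled SD for illustration
    return diff / sd

def mean_published_effect(n, studies=2000, cutoff=0.25):
    """Mean effect size among studies clearing a 'worth publicizing' bar.

    The cutoff stands in for selective publication: disappointing
    results are quietly discarded, impressive ones are reported.
    """
    effects = [simulated_study(n) for _ in range(studies)]
    published = [d for d in effects if d > cutoff]
    return statistics.mean(published)

# Small lab-scale studies scatter widely around the true effect of 0.20,
# so the ones that clear the bar overstate it badly; large field-scale
# studies scatter narrowly, so selection inflates them much less.
print("Published mean, n=20 per group: ", round(mean_published_effect(20), 2))
print("Published mean, n=500 per group:", round(mean_published_effect(500), 2))
```

The design point: nothing here is dishonest study-by-study; inflation comes purely from the combination of noisy small samples and selective reporting, which is why preregistration and large samples attack the problem from both ends.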

Another factor likely to contribute to replicability is use in the earlier studies of methods or conditions that can be repeated in later studies, or in schools in general. For example, providing teachers with specific manuals, videos demonstrating the methods, and specific student materials all add to the chances that a successful program can be successfully replicated. Avoiding unusual pilot sites (such as schools known to have outstanding principals or staff) may contribute to replication, as these conditions are unlikely to be found in larger-scale studies. Having experimenters or their colleagues or graduate students extensively involved in the early studies diminishes replicability, of course, because those conditions will not exist in replications.

Replications are entirely possible, and I wish there were a lot more of them in our field. Showing that a program is effective in two rigorous evaluations is far more convincing than one. As evidence becomes more and more important, I hope and expect that replications, perhaps carried out by states or districts, will become more common.

The Journal of Irreproducible Results is fun, but it isn’t science. I’d love to see a Journal of Replications in Education to tell us what really works for kids.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.