Thorough Implementation Saves Lives

In an article in the May 23 Washington Post, Dr. John Barry, a professor of Public Health and Tropical Medicine at Tulane, wrote about lessons from the 1918 influenza pandemic. Dr. Barry is the author of The Great Influenza, a book about that long-ago precursor to the pandemic we face today. I found the article chilling, in light of what is happening right now in the Covid-19 pandemic.

In particular, he wrote about a study of Army training camps in 1918.  Army leaders prescribed strict isolation and quarantine measures, and most camps followed this guidance.  However, some did not.  Most camps that did follow the guidance did so rigorously for a few weeks, but then gradually loosened up.  The study compared the camps that never did anything to the camps that followed the guidelines for a while.  There were no differences in the rates of sickness or death.  However, a third set of camps continued to follow the guidance for a much longer time. These camps saw greatly reduced rates of sickness and death.

Camp Funston at Fort Riley, Kansas, during the 1918 flu pandemic, Armed Forces Institute of Pathology/National Museum of Health and Medicine, distributed via the Associated Press / Public domain

Dr. Barry also gave an example from the SARS epidemic of the early 2000s. President George W. Bush wanted to honor the one hospital in the world with the lowest rate of SARS infection among staff. A study of that hospital found that it was doing exactly what all hospitals were doing: making sure that staff maintained sterile procedures. The difference was that in this hospital, the administration made sure that these rules were rigorously followed. This reminds me of a story by Atul Gawande about the most successful hospital in the world for treating cystic fibrosis. Researchers studied this hospital, an ordinary, non-research hospital in a Minneapolis suburb. The physician in charge of cystic fibrosis turned out to be using the very same procedures and equipment as every other hospital. The difference was that he frequently called all of his patients to make sure they were using the equipment and procedures properly. His patients had markedly higher survival rates than patients in similar hospitals doing exactly the same (medical) things with less attention to fidelity.

Now consider what is happening in the U.S. in our current pandemic.  Given our late start, we have done a pretty good job reducing rates of disease and death, compared to what might have been.  However, all fifty states are now opening up, to one degree or another.  The basic message: “We have been careful long enough.  Now let’s get sloppy.”

Epidemiologists are watching all of this with horror.  They know full well what is coming.  Leana Wen, Baltimore’s former Health Commissioner, explained the consequences of the choices we are making in a deeply disturbing article in the May 13 Post.

The entire story of what has happened in the Covid-19 crisis, and what is likely to happen now, resonates strongly with problems we experience in educational reform. Our field is full of really good ideas for improving educational outcomes. However, we have relatively few examples of programs that have been successful even in one-year evaluations, much less over extended time periods at large scale. The problem is usually not that the ideas turn out not to be so good after all, but that they are rarely implemented with consistency or rigor, or that they are implemented well for a while but get sloppy over time, or stop altogether. I am often asked how long innovators must stay connected with schools using their research-proven programs with success. My answer is, "forever." The connection need not be frequent with successful implementers, but someone who knows what the program is supposed to look like needs to check in from time to time to see how things are going, to cheer the school on in its efforts to maintain and constantly improve its implementation, and to help the school identify and solve any problems that have cropped up.

Another thing I am frequently asked is how I can base my argument for evidence-based education on the examples of medicine and other evidence-based fields. "Taking a pill is not like changing a school." This is true. However, the examples of epidemiology, cystic fibrosis (before the recent breakthrough treatments were announced), dealing with obesity and drug abuse, and many other problems of medicine and public health actually look quite a bit like the problems of education reform. In medicine, there is a robust interest in "implementation science," focused on, among other things, getting people to take their medicine or follow a proven protocol (e.g., "eat more veggies"). There is growing interest in implementation science in education, too. Similar problems, similar solutions, in many cases.

Education, public health, and medicine have a lot to learn from each other.  In each case, we are trying to make important differences in whole populations.  It is never easy, but in each of our fields, we are learning how to cost-effectively increase health and education outcomes at scale.  In the current pandemic, I hope science will prevail in both reducing the impact of the disease and in using proven practices, with consistency and rigor, to help schools repair the educational damage children have suffered.

References

Barry, J.M. (2020, May 23).  How to avoid a second wave of infections.  Washington Post.

Wen, L.S. (2020, May 13).  We are retreating to a new strategy on covid-19.  Let’s call it what it is.  Washington Post.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.


Ensuring That Proven Programs Stay Effective in Practice

On a recent trip to Scotland, I visited a ruined abbey. There, in what remained of its ancient cloister, was a sign containing a rule from the 1459 Statute of the Strasbourg Stonecutters’ Guild:

If a master mason has agreed to build a work and has made a drawing of the work as it is to be executed, he must not change this original. But he must carry out the work according to the plan that he has presented to the lords, towns, or villages, in such a way that the work will not be diminished or lessened in value.

Although the Stonecutters’ Guild was writing more than five centuries ago, it touched on an issue we face right now in evidence-based reform in education. Providers of educational programs may have excellent evidence that meets ESSA standards and demonstrates positive effects on educational outcomes. That’s terrific, of course. But the problem is that after a program has gone into dissemination, its developers may find that schools are not willing or able to pay for all of the professional development or software or materials used in the experiments that validated the program. So they may provide less, sometimes much less, to make the program cheaper or easier to adopt. This is the problem that concerned the Stonecutters of Strasbourg: Grand plans followed by inadequate construction.


In our work on Evidence for ESSA, we see this problem all the time. A study or studies show positive effects for a program. In writing up information on costs, personnel, and other factors, we usually look at the program’s website. All too often, we find that the program on the website provides much less than the program that was evaluated.  The studies might have provided weekly coaching, but the website promises two visits a year. A study of a tutoring program might have involved one-to-two tutoring, but the website sells or licenses the materials in sets of 20 for use with groups of that size. A study of a technology program may have provided laptops to every child and a full-time technology coordinator, while the website recommends one device for every four students and never mentions a technology coordinator.

Whenever we see this, we take on the role of the Stonecutters' Guild, and we have to be as solid as a rock. We tell developers that we are planning to describe their program as it was implemented in their successful studies. This sometimes causes a ruckus, with vendors arguing that providing what they did in the study would make the program too expensive. "So would you like us to list your program (as it is on your website) as unevaluated?" we say. We are not unreasonable, but we are tough, because we see ourselves as helping schools make wise and informed choices, not helping vendors sell programs that may have little resemblance to the programs that were evaluated.

This is hard work, and I'm sure we do not get it right 100% of the time. And a developer may agree to an honest description but then quietly give discounts and provide less than what our descriptions say. All we can do is state the truth on our website, as best we can, about what was provided in the successful studies, and the schools have to insist that they receive the program as described.

The Stonecutters’ Guild, and many other medieval guilds, represented the craftsmen, not the customers. Yet part of their function was to uphold high standards of quality. It was in the collective interest of all members of the guild to create and maintain a “brand,” indicating that any product of the guild’s members met the very highest standards. Someday, we hope publishers, software developers, professional development providers, and others who work with schools will themselves insist on an evidence base for their products, and then demand that providers ensure that their programs continue to be implemented in ways that maximize the probability that they will produce positive outcomes for children.

Stonecutters only build buildings. Educators affect the lives of children, which in turn affect families, communities, and societies. Long after a stonecutter’s work has fallen into ruin, well-educated people and their descendants and communities will still be making a difference. As researchers, developers, and educators, we have to take this responsibility at least as seriously as did the stone masons of long ago.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Implementing Proven Programs

There is an old joke that goes like this. A door-to-door salesman is showing a housewife the latest, fanciest, most technologically advanced vacuum cleaner. “Ma’am,” says the salesman, “this machine will do half your work!”

“Great!” says the housewife. “I’ll take two!”

All too often, when school leaders decide to adopt proven programs, they act like the foolish housewife. The program is going to take care of everything, they think. Or if it doesn’t, it’s the program’s fault, not theirs.

I wish I could tell you that you could just pick a program from our Evidence for ESSA site (launching on February 28! Next week!), wind it up, and let it teach all your kids, sort of the way a Roomba is supposed to clean your carpets. But I can’t.

Clearly, any program, no matter how good the evidence behind it is, has to be implemented with the buy-in and participation of all involved, planning, thoughtfulness, coordination, adequate professional development, interim assessment and data-based adjustments, and final assessment of program outcomes. In reality, implementing proven programs is difficult, but so is implementing ordinary unproven programs. All teachers and administrators go home every day dead tired, no matter what programs they use. The advantage of proven programs is that they hold out promise that this time, teachers’ and administrators’ efforts will pay off. Also, almost all effective programs provide extensive, high-quality professional development, and most teachers and administrators are energized and enthusiastic about engaging professional development. Finally, whole-school innovations, done right, engage the whole staff in common activities, exchanging ideas, strategies, successes, challenges, and insights.

So how can schools implement proven programs with the greatest possible chance of success? Here are a few pointers (from 43 years of experience!).

Get Buy-In. No one likes to be forced to do anything and no one puts in their best effort or imagination for an activity they did not choose.

When introducing a proven program to a school staff, have someone from the program provider come to explain it to the staff, and then have staff members vote by secret ballot. Require an 80% majority.

This does several things. First, it ensures that the school staff is on board, willing to give the program their best shot. Second, it effectively silences the small minority in every school that opposes everything. After the first year, additional schools that did not select the program in the first round should be given another opportunity, but by then they will have seen how well the program works in neighboring schools.
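To make the supermajority rule concrete, here is a minimal sketch in Python. The function name and the simple yes/no ballot format are illustrative assumptions; the 80% threshold is the figure suggested above (Success for All, for comparison, requires 75%).

    def meets_buyin_threshold(yes_votes, total_votes, threshold=0.80):
        """Return True if a secret-ballot vote clears the supermajority bar."""
        if total_votes == 0:
            return False  # no ballots cast, no mandate
        return yes_votes / total_votes >= threshold

    # Example: 34 of 40 staff members vote yes (85%), so the school adopts.
    print(meets_buyin_threshold(34, 40))  # True

The point of the hard numeric cutoff is exactly what the text describes: it turns a vague sense of support into a clear, staff-owned decision.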

Plan, Plan, Plan. Did you ever see the Far Side cartoon in which there is a random pile of horses and cowboys and a sheriff says, “You don’t just throw a posse together, dadgummit!” (or something like that). School staffs should work with program providers to carefully plan every step of program introduction. The planning should focus on how the program needs to be adapted to the specific requirements of this particular school or district, and make best use of human, physical, technological, and financial resources.

Professional Development. Perhaps the most common mistake in implementing proven programs is providing too little on-site, up-front training, and too little on-site, ongoing coaching. Professional development is expensive, especially if travel is involved, and users of proven programs often try to minimize costs by doing less professional development, doing all or most of it electronically, or using "trainer-of-trainers" models (in which someone from the school or district learns the model and then teaches it to colleagues).

Here's a dark secret. Developers of proven programs almost never use any of these training models in their own research. Quite the contrary: they are likely to have top-quality coaches swarming all over schools, visiting classes and ensuring high-quality implementation any way they can. Yet when it comes time for dissemination, they keep costs down by providing much, much less than what was needed (and what they themselves provided in their studies). This is such a common problem that Evidence for ESSA excludes programs that used a lot of professional development in their research but now provide only an online manual, for example. Evidence for ESSA tries to describe dissemination requirements in terms of what was done in the research, not what is currently offered.

Coaching. Coaching means having experts visit teachers’ classes and give them individual or schoolwide feedback on their quality of implementation.

Coaching is essential because it helps teachers know whether they are on track to full implementation, and enables the project to provide individualized, actionable feedback. If you question the need for feedback, consider how you could learn to play tennis or golf, play the French horn, or act in Shakespearean plays, if no one ever saw you do it and gave you useful and targeted feedback and suggestions for improvement. Yet teaching is much, much more difficult.

Sure, coaching is expensive. But poor implementation squanders not only the cost of the program, but also teachers’ enthusiasm and belief that things can be better.

Feedback. Coaches, building facilitators, or local experts should have opportunities to give regular feedback to schools using proven programs, on implementation as well as outcomes. This feedback should be focused on solving problems together, not on blaming or shaming, but it is essential in keeping schools on track toward goals. At the end of each quarter or at least annually, school staffs need an opportunity to consider how they are doing with a proven program and how they are going to make it better.

Proven programs plus thoughtful, thorough implementation are the most powerful tool we have to make a major difference in student achievement across whole schools and districts. They build on the strengths of schools and teachers, and create a lasting sense of efficacy. A team of teachers and administrators that has organized itself around a proven program, implemented it with pride and creativity, and seen enhanced outcomes is a force to be reckoned with. A force for good.

Transforming Transformation (and Turning Around Turnaround)

At the very end of the Obama Administration, the Institute of Education Sciences (IES) released the final report of an evaluation of the outcomes of the federal School Improvement Grant program. School Improvement Grants (SIG) are major investments to help schools with the lowest academic achievement in their states to greatly improve their outcomes.

The report, funded by the independent and respected IES and carried out by the equally independent and respected Mathematica Policy Research, found that SIG grants made essentially no difference in the achievement of the students in schools that received them.

Bummer.

In Baltimore, where I live, we believe that if you spend $7 billion on something, as SIG has so far, you ought to have something to show for it. The disappointing findings of the Mathematica evaluation are bad news for all of the usual reasons. Even if there were some benefits, SIG turned out to be a less-than-compelling use of taxpayers’ funds.  The students and schools that received it really needed major improvement, but improved very little. The findings undermine faith in the ability of very low-achieving schools to turn themselves around.

However, the SIG findings are especially frustrating because they could have been predicted, were in fact predicted by many, and were apparent long before this latest report. There is no question that SIG funds could have made a substantial difference. Had they been invested in proven programs and practices, they would have surely improved student outcomes just as they did in the research that established the effectiveness of the proven programs.

But instead of focusing on programs proven to work, SIG forced schools to choose among four models that had never been tried before and were very unlikely to work.

Three of the four models were so draconian that few schools chose them. One involved closing the school, and another, conversion to a charter school. These models were rarely selected unless schools were on the way to doing these things anyway. Somewhat more popular was “turnaround,” which primarily involved replacing the principal and 50% of the staff. The least restrictive model, “transformation,” involved replacing the principal, using achievement growth to evaluate teachers, using data to inform instruction, and lengthening the school day or year.

The problem is that very low achieving schools are usually in low achieving areas, where there are not long lines of talented applicants for jobs as principals or teachers. A lot of school districts just swapped principals between SIG and non-SIG schools. None of the mandated strategies had a strong research base, and they still don’t. Low achieving schools usually have limited capacity to reform themselves under the best of circumstances, and SIG funding required replacing principals, good or bad, thereby introducing instability in already tumultuous places. Further, all four of the SIG models had a punitive tone, implying that the problem was bad principals and teachers. Who wants to work in a school that is being punished?

What else could SIG have done?

SIG could have provided funding to enable low-performing schools and their districts to select among proven programs. This would have maintained an element of choice while ensuring that whatever programs schools chose would have been proven effective, used successfully in other low-achieving schools, and supported by capable intermediaries willing and able to work effectively in struggling schools.

Ironically, SIG did finally introduce such an option, but it was too little, too late. In 2015, SIG added two new models, one of which was an Evidence-Based, Whole-School Reform model that allowed schools to use SIG funds to adopt a proven whole-school approach. The U.S. Department of Education carefully reviewed the evidence and identified four approaches with strong evidence and the capacity to expand that could be used under this model. But hardly any schools chose these approaches, because there was little promotion of the new models, and few school, district, or state leaders to this day even know they exist.

The old SIG program is changing under the Every Student Succeeds Act (ESSA). In order to receive school improvement funding under ESSA, schools will have to select from programs that meet the strong, moderate, or promising evidence requirements defined in ESSA. Evidence for ESSA, the free website we are due to release later this month, will identify more than 90 reading and math programs that meet these requirements.

This is a new opportunity for federal, state, and district officials to promote the use of proven programs and build local capacity to disseminate proven approaches. Instead of being seen as a trip to the woodshed, school improvement funding might be seen as an opportunity for eager teachers and administrators to do cutting edge instruction. Schools using these innovative approaches might become more exciting and fulfilling places to work, attracting and retaining the best teachers and administrators, whose efforts will be reflected in their students’ success.

Perhaps this time around, school improvement will actually improve schools.

Perfect Implementation of Hopeless Methods: The Sinking of the Vasa

If you are ever in Stockholm, you must visit the Vasa Museum. It contains a complete warship launched in 1628 that sank 30 minutes later. Other than the ship itself, the museum contains objects and bones found in the wreck, and carefully analyzed by scientists.

The basic story of the sinking of the Vasa has important analogies to what often happens in education reform.

After the Vasa sank, the king who had commissioned it, Gustav II Adolf, called together a commission to find out whose fault it was and to punish the guilty.

Yet the commission, after many interviews with survivors, found that no one had done anything wrong. Three and a half centuries later, modern researchers came to the same conclusion. Everything was in order. The skeleton of the helmsman was found still gripping the steering pole, trying heroically to turn the ship's bow into the wind to keep it from leaning over.

So what went wrong? The ship could never have sailed. It was built too top-heavy, with too much heavy wood and too many heavy guns on the top decks and too little ballast on the bottom. The Vasa was doomed, no matter what the captain and crew did.

In education reform, there is a constant debate about how much is contributed to effectiveness by a program as opposed to quality of implementation. In implementation science, there are occasionally claims that it does not matter what programs schools adopt, as long as they implement them well. But most researchers, developers, and educators agree that success only results from a combination of good programs and good implementation. Think of the relationship as multiplicative:

P × I = A

(Quality of program times quality of implementation equals achievement gain).

The reason the relationship might be multiplicative is that if either P or I is zero, achievement gain is zero. If both are very positive, then achievement gain is very, very positive.
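To see how the multiplicative model behaves, here is a minimal numeric sketch in Python. The 0-to-1 scale and the specific values are illustrative assumptions, not measured quantities:

    def achievement_gain(program_quality, implementation_quality):
        """Multiplicative model: A = P x I; if either factor is zero, the gain is zero."""
        return program_quality * implementation_quality

    print(achievement_gain(0.0, 1.0))  # hopeless program, flawless implementation -> 0.0
    print(achievement_gain(0.9, 0.0))  # strong program, no real implementation -> 0.0
    print(achievement_gain(0.9, 0.5))  # strong program, half-hearted implementation -> 0.45

An additive model, by contrast, would predict a substantial gain from flawless implementation alone, which is exactly what the Vasa disproves.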

In the case of the Vasa, P=0, so no matter how good implementation was, the Vasa was doomed. In many educational programs, the same is true. For example, programs that are not well worked out, not well integrated into teachers’ schedules and skill sets, or are too difficult to implement, are unlikely to work. One might argue that in order to have positive effects, a program must be very clear about what teachers are expected to do, so that professional development and coaching can be efficiently targeted to helping teachers do those things. Then we have to have evidence that links teachers’ doing certain things to improving student learning. For example, providing teachers with professional development to enhance their content knowledge may not be helpful if teachers are not clear how to put this new knowledge into their daily teaching.

Rigorous research, especially under funding from IES and i3 in the U.S. and from EEF in England, is increasingly identifying proven programs as well as programs that consistently fail to improve student outcomes. The patterns are not perfectly clear, but in general those programs that do make a significant difference are ones that are well-designed, practical, and coherent.

If you think implementation alone will carry the day, keep in mind the skeleton of the heroic helmsman of the Vasa, spending 333 years on the seafloor trying to push the Vasa’s bow into the wind. He did everything right, except for signing on to the wrong ship.

Lessons from Innovators: The National Forum to Accelerate Middle Grades Reform


The process of moving an educational innovation from a good idea to widespread effective implementation is far from straightforward, and no one has a magic formula for doing it. The W. T. Grant and Spencer Foundations, with help from the Forum for Youth Investment, have created a community composed of grantees in the federal Investing in Innovation (i3) program to share ideas and best practices. Our Success for All program participates in this community. In this space, I, in partnership with the Forum for Youth Investment, highlight observations from the experiences of i3 grantees other than our own, in an attempt to share the thinking and experience of colleagues out on the front lines of evidence-based reform.

This blog, based on an interview between the Forum for Youth Investment and Debby Kasak, director of the National Forum to Accelerate Middle Grades Reform, shares how school-to-school mentoring is both bringing about substantial improvements and itself serving as an important sustainability strategy.

Mentoring is Good for Mentors and Mentees
Coaches and mentors, whether at the individual or school level, can improve their own practice by helping others. That is what has begun to happen among schools involved in the National Forum’s i3 project. The National Forum is an alliance of educators, researchers, national associations, and officers of professional organizations and foundations committed to promoting the academic performance and healthy development of young adolescents. Through its Schools to Watch (STW) program, the National Forum has developed criteria for identifying high-performing middle-grades schools and created tools to help schools use them.

The National Forum’s i3 development grant is focused on improving 18 low-performing schools in three states using the STW framework and criteria. The goal is for those schools to learn from other STW schools that have been performing well. “We are inspiring schools to change their practice through whole school intervention,” says Kasak. “Each i3 school is matched up with demographically similar STW schools so they can see that it is possible to make change, even with a tough student population. It helps bring the theory to life for them. Given all the things that teachers get confronted with, they really respond when they see other teachers who are getting results.” But it isn’t just the low-performing schools and their teachers and administrators who are benefitting. “Successful schools can be powerful change agents in the lives of schools that need help, but interestingly, we’ve found that those mentor schools are improving their practice too,” reports Kasak. By helping others – coaching and sharing tools and strategies – schools and individuals within them are reminded to shore up their own promising practices.

Building Relationships is Key
“It sounds like a cliché, but one thing we have learned that can’t be underscored enough is that relationships matter,” Kasak shares. “The first six to seven months that we were involved in this project it was really important that we had coaches in the buildings who could form good relationships with teachers and principals. We needed to take the time to nurture those relationships. And as we did that, we saw the culture and climate of those schools changing.” Supportive relationships help schools weather the inevitable transitions that occur at the senior administration level. If teachers and coaches have a strong network and are committed to the work, it is less disruptive when a principal or superintendent leaves. A cadre of advocates for the initiative remains to educate new leaders. According to Kasak, that is exactly what happened in Chicago. “In Chicago, schools across the city are divided into networks. Originally, all of our schools were part of one network and we had a really supportive network leader. When the district administration changed, the networks were reorganized and our schools were no longer in the same network. One of our new network leaders wasn’t as supportive. But in one school, a teacher invited the Mayor to come visit the school; lo and behold, he did, and he brought the network leader with him. Seeing the school in action, hearing the teachers talk about their experiences, and building that relationship with the school staff made all the difference. He (the network leader) has been much more supportive since.”

Evaluation Can Be a Powerful Tool
Another way the National Forum has built relationships is through evaluation. Although it may sound counterintuitive, Kasak has found that working with the project evaluation team to look at what they are doing in a developmental way has helped them to share more information with schools than they might have otherwise and to build trust and commitment to the effort. “We are finding that in the second year we have gotten much better participation rates – almost 100% of the faculty in our 18 buildings – than we did the first year,” reports Kasak. In speculating why that might be, the National Forum came up with a couple of explanations. “In part, we know this has to do with being in our second year – teachers understand the process better. But we also credit our evaluation team. They regularly give data back to the schools, which helps them better understand how all this work is impacting their school culture. Our evaluation team has really helped us to ask: Are we doing what we said we would? Is it working? And how can we improve?”

Participating Schools are Part of the Sustainability Pipeline
The National Forum has an innovative approach to scaling its innovation and sustaining the schools where it is already working. Its two networks – low-performing schools supported through the i3 project and higher-performing Schools to Watch schools – create a natural pipeline toward STW status. The goal is for all of the i3 schools eventually to become STW schools that then mentor and support other low-performing schools, which may receive support through additional i3 grants or other sources down the road. Only three years into the i3 initiative, this pipeline is already in action in North Carolina. “We have one rural school in our i3 project that has just been terrific over the past several years. Recently, it applied to be a Schools to Watch school. They were evaluated and received a very high score, so were designated as a STW school. Now they are in a position to mentor other i3 schools in North Carolina. They benefit from the mentoring process itself, and then every three years will have to go through a re-designation process to maintain STW status, ensuring they are always on their game and thinking about how to get better. This school just went from being one site in a project to being part of a sustainable system of reform. We hope to do this with all of our i3 schools.”

Proven Programs Don’t Implement Themselves

One of the criticisms often leveled at evidence-based reform in education is this: Programs may be proven effective in controlled experiments, but on a larger scale, they won’t be implemented with care and therefore won’t work. I have seen many awful implementations of programs that have been successful elsewhere and I agree that this is a problem. Proven programs don’t implement themselves.

How can we ensure widespread, effective, intelligent use of proven programs? After many years of wrestling with this question, I have a set of principles for ensuring high-quality implementations of proven programs, which I will now reveal to one and all.

1. Make sure teachers choose to implement the program. When anyone is forced to do something, they often do a poor job of it. Work with volunteers. If a program works with individual teachers, let them opt in. If it works with schools, let the teachers vote (Success for All* requires 75% in favor). After you demonstrate local success, come back and offer the program again to those who chose not to adopt it, but start with people who are committed and positive about implementation.

2. The school is the unit of change. It’s very hard for isolated teachers to do serious innovation. Schools taking on programs usually get much better results. In secondary schools, departments may take on this function.

3. Make sure the program itself is well specified. Teachers should have a clear idea of what it is, and have manuals, videos, student materials, and other aids to quality implementation.

4. Provide plenty of training and, even more importantly, follow-up support. Real change is hard, and teachers need both top-quality initial training and visits over time from skilled coaches, who give feedback and help teachers stay on track.

5. Assess implementation and student outcomes. Every few months, look at how teachers are implementing the approach and give them friendly feedback. Look at student data to see that kids are benefiting from the program, and share the data with the teachers.

6. Engage implementers with each other. Teachers implementing new programs need opportunities to share ideas, visit each other’s classes, ask each other for help, and take joint responsibility for outstanding outcomes.

7. Plan for the long haul. The change process goes on forever. If you want quality implementation, plan on sticking at it for a long time, to help school staffs continue to grow in sophistication and skill.

A very wise businessman I know lives by the principle that a mediocre plan well implemented always beats a great plan poorly implemented. I don’t know if that applies in education, but I do know that a great plan implemented with care, fidelity, and intelligence is the only thing that makes a difference. Whatever national or local policies we adopt must make sure that proven programs are outstandingly implemented, especially with the kids who need them most.

For the latest on evidence-based education, follow me on twitter: @RobertSlavin

* Robert Slavin is Chairman of the Board of the Success for All Foundation