Implementing Proven Programs

There is an old joke that goes like this. A door-to-door salesman is showing a housewife the latest, fanciest, most technologically advanced vacuum cleaner. “Ma’am,” says the salesman, “this machine will do half your work!”

“Great!” says the housewife. “I’ll take two!”

All too often, when school leaders decide to adopt proven programs, they act like the foolish housewife. The program is going to take care of everything, they think. Or if it doesn’t, it’s the program’s fault, not theirs.

I wish I could tell you that you could just pick a program from our Evidence for ESSA site (launching on February 28! Next week!), wind it up, and let it teach all your kids, sort of the way a Roomba is supposed to clean your carpets. But I can’t.

Clearly, any program, no matter how good the evidence behind it is, has to be implemented with the buy-in and participation of all involved, planning, thoughtfulness, coordination, adequate professional development, interim assessment and data-based adjustments, and final assessment of program outcomes. In reality, implementing proven programs is difficult, but so is implementing ordinary unproven programs. All teachers and administrators go home every day dead tired, no matter what programs they use. The advantage of proven programs is that they hold out promise that this time, teachers’ and administrators’ efforts will pay off. Also, almost all effective programs provide extensive, high-quality professional development, and most teachers and administrators are energized and enthusiastic about engaging professional development. Finally, whole-school innovations, done right, engage the whole staff in common activities, exchanging ideas, strategies, successes, challenges, and insights.

So how can schools implement proven programs with the greatest possible chance of success? Here are a few pointers (from 43 years of experience!).

Get Buy-In. No one likes to be forced to do anything, and no one puts in their best effort or imagination for an activity they did not choose.

When introducing a proven program to a school, have someone from the program provider come and explain it to the staff, and then have staff members vote by secret ballot. Require an 80% majority.

This does several things. First, it ensures that the school staff is on board, willing to give the program their best shot. Second, it effectively silences the small minority in every school that opposes everything. After the first year, additional schools that did not select the program in the first round should be given another opportunity, but by then they will have seen how well the program works in neighboring schools.

Plan, Plan, Plan. Did you ever see the Far Side cartoon in which there is a random pile of horses and cowboys and a sheriff says, “You don’t just throw a posse together, dadgummit!” (or something like that)? School staffs should work with program providers to carefully plan every step of program introduction. The planning should focus on how the program needs to be adapted to the specific requirements of the particular school or district, and on how to make the best use of human, physical, technological, and financial resources.

Professional Development. Perhaps the most common mistake in implementing proven programs is providing too little on-site, up-front training and too little on-site, ongoing coaching. Professional development is expensive, especially if travel is involved, and users of proven programs often try to minimize costs by doing less professional development, by doing all or most of it electronically, or by using “trainer-of-trainers” models (in which someone from the school or district learns the model and then teaches it to colleagues).

Here’s a dark secret. Developers of proven programs almost never use any of these training models in their own research. Quite the contrary: they are likely to have top-quality coaches swarming all over schools, visiting classes and ensuring high-quality implementation any way they can. Yet when it comes time for dissemination, they keep costs down by providing much, much less than what was needed (and what they themselves provided in their studies). This is such a common problem that Evidence for ESSA excludes programs that used a lot of professional development in their research but today provide, for example, only an online manual. Evidence for ESSA tries to describe dissemination requirements in terms of what was done in the research, not what is currently offered.

Coaching. Coaching means having experts visit teachers’ classes and give them individual or schoolwide feedback on their quality of implementation.

Coaching is essential because it helps teachers know whether they are on track to full implementation, and enables the project to provide individualized, actionable feedback. If you question the need for feedback, consider how you could learn to play tennis or golf, play the French horn, or act in Shakespearean plays, if no one ever saw you do it and gave you useful and targeted feedback and suggestions for improvement. Yet teaching is much, much more difficult.

Sure, coaching is expensive. But poor implementation squanders not only the cost of the program, but also teachers’ enthusiasm and belief that things can be better.

Feedback. Coaches, building facilitators, or local experts should have opportunities to give regular feedback to schools using proven programs, on implementation as well as outcomes. This feedback should be focused on solving problems together, not on blaming or shaming, but it is essential in keeping schools on track toward goals. At the end of each quarter or at least annually, school staffs need an opportunity to consider how they are doing with a proven program and how they are going to make it better.

Proven programs plus thoughtful, thorough implementation are the most powerful tool we have to make a major difference in student achievement across whole schools and districts. They build on the strengths of schools and teachers, and create a lasting sense of efficacy. A team of teachers and administrators that has organized itself around a proven program, implemented it with pride and creativity, and seen enhanced outcomes is a force to be reckoned with. A force for good.

Thoughtful Needs Assessments + Proven Programs = Better Outcomes

I’ve been writing lately about our Evidence for ESSA web site, due to be launched at the end of February.  It will make information on programs meeting ESSA evidence standards easy to access and use.  I think it will be a wonderful tool.  But today, I want to talk about what educational leaders need to do to make sure that the evidence they get from Evidence for ESSA will actually make a difference for students.  Knowing what works is essential, but before searching for proven programs, it is important to know the problems you are trying to solve.  The first step in any cycle of instructional improvement is conducting a needs assessment.  You can’t fix a problem you don’t acknowledge and understand.  Most implementation models also ask leaders to do a “root cause” analysis in order to understand what causes the problems that need to be solved.

Needs assessments and root cause analyses are necessary, but they are often, as Monty Python used to say, “a privileged glimpse into the perfectly obvious.”  Any school, district, or state leadership team is likely to sit down with the data and conclude, for example, that on average, low achieving students are from less advantaged homes, or that they have learning disabilities, or that they are limited in English proficiency.  They might dig deeper and find that low achievers or dropouts are concentrated among students with poor attendance, behavior problems, poor social-emotional skills, and low aspirations.

Please raise your hand if any of these are surprising, or if you haven’t been working on them your whole professional life.  Seeing no hands raised, I’ll continue.

The problem with needs assessments and root cause analyses is that they usually do not suggest a pragmatic solution that isn’t already in use or hasn’t already been tried.  And some root causes cannot, as a practical matter, be solved by schools.  For example, if students live in substandard housing, or suffer from lead poisoning or chronic diseases, schools can help reduce the educational impact of these problems but cannot solve them.

Further, needs assessments often lead to solutions that are too narrow, and may therefore not be optimal or cost-effective.  For example, a school improvement team might conclude that one-third of kindergartners are unlikely to reach third grade reading at grade level.  This might lead the school to invest in a one-to-one tutoring program.  Yet few schools can afford one-to-one tutoring for as many as one-third of their children.  A more cost-effective approach might be to invest in professional development in proven core instructional strategies for teachers of grades K-3; then provide proven small-group tutoring for students for whom enhanced classroom reading instruction is not enough; and then provide one-to-one tutoring for the hopefully small number of students still not succeeding despite proven whole-class and small-group instruction.  The school might also check students’ vision and hearing to be sure that problems in these areas are not what is holding back some of the students.
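To make the cost logic concrete, here is a rough back-of-the-envelope sketch. Every dollar figure, percentage, and head count in it is a hypothetical placeholder invented for illustration, not a number from any study or price list; the point is only the shape of the comparison between tutoring every at-risk student one-to-one and using a tiered approach.

```python
# Hypothetical cost comparison: one-to-one tutoring for every at-risk student
# vs. a tiered approach.  Every figure below is an invented placeholder for
# illustration, not data from any study.

STUDENTS_PER_GRADE = 100        # hypothetical cohort size
AT_RISK_SHARE = 1 / 3           # "one-third of kindergartners" from the example

COST_ONE_TO_ONE = 2500          # hypothetical cost per student tutored one-to-one
COST_SMALL_GROUP = 800          # hypothetical cost per student in small-group tutoring
COST_PD_PER_TEACHER = 2000      # hypothetical professional development cost per teacher
TEACHERS_K3 = 12                # hypothetical number of K-3 teachers

at_risk = STUDENTS_PER_GRADE * AT_RISK_SHARE

# Option A: one-to-one tutoring for every at-risk student.
option_a = at_risk * COST_ONE_TO_ONE

# Option B: professional development for all K-3 teachers, small-group tutoring
# for the (assumed) half of at-risk students who still struggle, and one-to-one
# tutoring for the (assumed) 15% who need the most intensive help.
option_b = (TEACHERS_K3 * COST_PD_PER_TEACHER
            + (at_risk * 0.50) * COST_SMALL_GROUP
            + (at_risk * 0.15) * COST_ONE_TO_ONE)

print(f"One-to-one tutoring for all at-risk students: ${option_a:,.0f}")
print(f"Tiered approach:                              ${option_b:,.0f}")
```

Under these invented numbers, the tiered plan costs roughly 60% as much as universal one-to-one tutoring while still giving every struggling reader some level of support; real figures will vary by district, but the structure of the argument is the same.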

In this example, note that the needs assessment might not lead directly to the best solution.  A needs assessment might conclude that there is a big problem with early reading, and it might note the nature of the students likely to fail.  But the needs assessment might not lead to improving classroom instruction or checking vision and hearing, because these may not seem directly targeted to the original problem.  Improving classroom instruction or checking vision and hearing for all would end up benefitting students who never had a problem with reading.  But so what?  Some solutions, such as professional development for teachers, are so inexpensive (compared to, say, tutoring or special education) that it may be better to invest in the broader solution and let the benefits apply to all or most students rather than focus narrowly on the students with the problems.

An excellent example of this perspective relates to English learners.  In many schools and districts, students who enter school with poor English skills are at particular risk, perhaps throughout their school careers.  A needs assessment involving such schools would of course point to language proficiency as a key factor in students’ likelihood of success, on average.  Yet if you look at the evidence on what works with English learners to improve their learning of English, reading, and other subjects, most solutions take place in heterogeneous classrooms and involve a lot of cooperative learning, where English learners have a lot of opportunities every day to use English in school contexts.  A narrow interpretation of a needs assessment might try to focus on interventions for English learners alone, but alone is the last place they should be.

Needs assessments are necessary, but they should be carried out in light of the practical, proven solutions that are available. For example, imagine that a school leadership team carries out a needs assessment that arrays documented needs, and considers proven solutions, perhaps categorized as expensive (e.g., tutoring, summer school, after school), moderate (e.g., certain technology approaches, professional development with live, on-site coaching, vision and hearing services), or inexpensive (e.g., professional development without live coaching).  The idea would be to bring together data on the problems and the solutions, leading to a systemic approach to change rather than either picking programs off a shelf or picking out needs and choosing solutions narrowly focused on those needs alone.

Doing needs assessments without proven solutions as part of the process from the outset would be like making a wish list of features you’d like in your next car without knowing anything about cars actually on the market, and without considering Consumer Reports ratings of their reliability.  The result could be ending up with a car that breaks down a lot or one that costs a million dollars, or both.

Having easy access to reliable information on the effectiveness of proven programs should greatly change, but not dominate, the conversation about school improvement.  It should facilitate intelligent, informed conversations among caring leaders, who need to build workable systems to use innovation to enhance outcomes.  Those systems need to consider needs, proven programs, and requirements for effective implementation together, and create schools built around the needs of students, schools, and communities.

Evidence for ESSA and the What Works Clearinghouse

In just a few weeks, we will launch Evidence for ESSA, a free web site designed to provide education leaders with information on programs that meet the evidence standards included in the Every Student Succeeds Act (ESSA). As most readers of this blog are aware, ESSA defines standards for strong, moderate, and promising levels of evidence, and it promotes the use of programs and practices that meet those standards.

One question I frequently get about Evidence for ESSA is how it is similar to and different from the What Works Clearinghouse (WWC), the federal service that reviews research on education programs and makes its findings available online. In making Evidence for ESSA, my colleagues and I have assumed that the WWC will continue to exist and do what it has always done. We see Evidence for ESSA as a supplement to the WWC, not a competitor. Evidence for ESSA will have a live link to the WWC for users who want more information. But the WWC was not designed to serve the ESSA evidence standards. Ruth Neild, the recently departed Acting Director of the Institute of Education Sciences (IES), which oversees the WWC, announced at a November 2016 meeting that the WWC would not try to align itself with the ESSA evidence standards.

Evidence for ESSA, in contrast, is specifically aligned with the ESSA evidence standards. It follows most WWC procedures and standards, using similar or identical methods for searching the literature for potentially qualifying studies, computing effect sizes, averaging effect sizes across studies, and so on.
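For readers curious about the mechanics, here is a minimal sketch of what computing and averaging effect sizes can look like. It uses the familiar standardized mean difference (treatment mean minus control mean, divided by a pooled standard deviation) and a simple sample-size-weighted average; the exact estimators, adjustments, and weights that Evidence for ESSA and the WWC apply may differ, and all the numbers below are made up, so treat this strictly as an illustration.

```python
from math import sqrt

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (treatment vs. control) using a pooled SD."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def average_effect_size(studies):
    """Sample-size-weighted mean effect size across studies.

    `studies` is a list of (effect_size, total_n) pairs.  The weighting scheme
    is a simplification; review systems may weight or adjust differently.
    """
    total_n = sum(n for _, n in studies)
    return sum(es * n for es, n in studies) / total_n

# Invented numbers, for illustration only.
es1 = effect_size(52.0, 10.0, 150, 49.0, 10.5, 145)   # about +0.29
es2 = effect_size(50.5, 9.5, 300, 49.5, 9.8, 310)     # about +0.10
print(round(average_effect_size([(es1, 295), (es2, 610)]), 2))   # about +0.17
```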

However, the purpose of the ESSA evidence standards is different from that of the WWC, and Evidence for ESSA is correspondingly different. There are four big differences that have to be taken into account. First, ESSA evidence standards are written for superintendents, principals, teachers and parents, not for experts in research design. The WWC has vast information in it, and my colleagues and I depend on it and use it constantly. But not everyone has the time or inclination to navigate the WWC.

Second, ESSA evidence standards require only a single study with a positive outcome for membership in any given category. For example, to get into the “Strong” category, a program can have just one randomized study that found a significant positive effect, even if there were ten more that found zero impact (although U.S. Department of Education guidance does suggest that one significant negative finding can cancel out a positive one).  Personally, I do not like the one-study rule, but that’s the law. The law does specify that studies must be well-designed and well-implemented, and this allows and even compels us, within the law, to make sure that weak or flawed studies are not accepted as the one study qualifying a program for a given category. More on this in a moment.

Third, ESSA defines three levels of evidence: strong, moderate, and promising. Strong and moderate correspond, roughly, to the WWC “meets standards without reservations” (strong) and the “meets . . . with reservations” (moderate) categories, respectively. But WWC does not have anything corresponding to a “promising” category, so users of the WWC seeking all qualifying programs in a given area under ESSA would miss this crucial category. (There is also a fourth category under ESSA which is sometimes referred to as “evidence-building and under evaluation,” but this category is not well-defined enough to allow current research to be assigned to it.)

Finally, there has been an enormous amount of high-quality research appearing in recent years, and educators seeking proven programs want the very latest information. Recent investments by IES and by the Investing in Innovation (i3) program, in particular, are producing a flood of large, randomized evaluations of a broad range of programs for grades Pre-K to 12. More will be appearing in coming years. Decision makers will want and need up-to-date information on programs that exist today.

Evidence for ESSA was designed with the help of a distinguished technical working group to make common-sense adaptations to satisfy the requirements of the law. In so doing, we needed to introduce a number of technical enhancements to WWC structures and procedures.

Ease of Use and Interpretation

Evidence for ESSA will be very, very easy to use. From the home page, two clicks will get you to a display of all programs in a given area (e.g., programs for struggling elementary readers, or whole-class secondary math programs). The programs will be listed and color-coded by the ESSA level of evidence they meet, and within those categories, ranked by a combination of effect size, number and quality of studies, and overall sample size.

A third click will take you to a program page describing and giving additional practical details on a particular program.

Three clicks. Done.
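The site’s actual scoring formula is not spelled out in this post, so the sketch below is purely illustrative: the coefficients and the example programs are all invented. It is only meant to show what “listed and color-coded by ESSA evidence level, then ranked within each level by a combination of effect size, number and quality of studies, and overall sample size” could look like if you had to write it down.

```python
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    essa_level: int          # 1 = Strong, 2 = Moderate, 3 = Promising
    mean_effect_size: float
    qualifying_studies: int
    total_sample_size: int

def composite_score(p: Program) -> float:
    # Invented weights: effect size dominates, with modest credit for more
    # qualifying studies and for larger samples.  Not the site's actual formula.
    return (p.mean_effect_size
            + 0.02 * p.qualifying_studies
            + 0.00001 * p.total_sample_size)

programs = [
    Program("Program A", 1, 0.25, 3, 4200),
    Program("Program B", 1, 0.31, 1, 900),
    Program("Program C", 2, 0.40, 2, 1500),
]

# Group by ESSA evidence level first, then rank within each level by the score.
for p in sorted(programs, key=lambda p: (p.essa_level, -composite_score(p))):
    print(p.essa_level, p.name, round(composite_score(p), 3))
```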

Of course, there will be many additional options. You will be able to filter programs by urban/rural, for example, or according to groups studied, or according to program features. You will also see references to the individual studies that caused a program to qualify for an ESSA category.

You will be able to spend as much time on the site as you like, and there will be lots of information if you want it, including answers to “Frequently Asked Questions” that go into as much depth as you desire, including listing our entire “Standards and Procedures” manual. But most users will get where they need to go in three completely intuitive clicks, and can then root around to their hearts’ content.

Focus on High-Quality, Current Studies

We added a few additional requirements on top of the WWC standards to ensure that studies that qualify programs for ESSA categories are meaningful and important to educators. First, we excluded programs that are no longer in active dissemination. Second, we eliminated measures made by researchers, and those that are measures of minor skills or skills taught at other grade levels (such as phonics tests in secondary school). Third, the WWC has a loophole, counting as “meeting standards without reservations” studies that have major flaws but have effect sizes of at least +0.25. We eliminated such studies, which removed studies with sample sizes as small as 14.

Definition of Promising

The WWC does not have a rating corresponding to ESSA’s “Promising” category. Within the confines of the law, we established parameters to put “Promising” into practice. Our parameters include high-quality correlational studies, as well as studies that meet all other inclusion criteria and have statistically significant outcomes at the student level, but not enough clusters (schools, teachers) to find significant outcomes at the cluster level.
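A quick way to see why “not enough clusters” matters: in a cluster-randomized study, the standard design-effect adjustment shrinks the effective sample size by a factor of 1 + (m - 1) * rho, where m is the average number of students per school or classroom and rho is the intraclass correlation. The sketch below uses invented numbers purely to illustrate how a study with many students but few schools can show significance at the student level yet be underpowered at the cluster level.

```python
def effective_sample_size(n_students, avg_cluster_size, icc):
    """Effective N after the standard design-effect adjustment for clustering."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n_students / design_effect

# Invented example: 1,200 students spread across only 8 schools (150 per school),
# with an intraclass correlation of 0.20.
print(round(effective_sample_size(1200, 150, 0.20)))   # about 39 "effective" students
```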


Rapid Inclusion

Evidence for ESSA will be updated regularly and quickly. Our commitment is to add qualifying studies to the website within two weeks of their being brought to our attention.

Evidence for ESSA and the WWC will exist together, offering educators two complementary approaches to information on effective programs and practices. Over time, we will learn how to maximize the benefits of both facilities and how to coordinate them to make them as useful as possible for all of the audiences we serve. But we have to do this now so that the evidence provisions of ESSA will be meaningful rather than an exercise in minimalist compliance. It may be a long time before our field will have as good an opportunity to put evidence of effectiveness at the core of education policy and practice.

Transforming Transformation (and Turning Around Turnaround)

At the very end of the Obama Administration, the Institute of Education Sciences (IES) released the final report of an evaluation of the outcomes of the federal School Improvement Grant program. School Improvement Grants (SIG) are major investments to help schools with the lowest academic achievement in their states to greatly improve their outcomes.

The report, funded by the independent and respected IES and carried out by the equally independent and respected Mathematica Policy Research, found that SIG grants made essentially no difference in the achievement of the students in schools that received them.

Bummer.

In Baltimore, where I live, we believe that if you spend $7 billion on something, as SIG has so far, you ought to have something to show for it. The disappointing findings of the Mathematica evaluation are bad news for all of the usual reasons. Even if there were some benefits, SIG turned out to be a less-than-compelling use of taxpayers’ funds.  The students and schools that received it really needed major improvement, but improved very little. The findings undermine faith in the ability of very low-achieving schools to turn themselves around.

However, the SIG findings are especially frustrating because they could have been predicted, were in fact predicted by many, and were apparent long before this latest report. There is no question that SIG funds could have made a substantial difference. Had they been invested in proven programs and practices, they would have surely improved student outcomes just as they did in the research that established the effectiveness of the proven programs.

But instead of focusing on programs proven to work, SIG forced schools to choose among four models that had never been tried before and were very unlikely to work.

Three of the four models were so draconian that few schools chose them. One involved closing the school, and another, conversion to a charter school. These models were rarely selected unless schools were on the way to doing these things anyway. Somewhat more popular was “turnaround,” which primarily involved replacing the principal and 50% of the staff. The least restrictive model, “transformation,” involved replacing the principal, using achievement growth to evaluate teachers, using data to inform instruction, and lengthening the school day or year.

The problem is that very low achieving schools are usually in low achieving areas, where there are not long lines of talented applicants for jobs as principals or teachers. A lot of school districts just swapped principals between SIG and non-SIG schools. None of the mandated strategies had a strong research base, and they still don’t. Low achieving schools usually have limited capacity to reform themselves under the best of circumstances, and SIG funding required replacing principals, good or bad, thereby introducing instability in already tumultuous places. Further, all four of the SIG models had a punitive tone, implying that the problem was bad principals and teachers. Who wants to work in a school that is being punished?

What else could SIG have done?

SIG could have provided funding to enable low-performing schools and their districts to select among proven programs. This would have maintained an element of choice while ensuring that whatever programs schools chose would have been proven effective, used successfully in other low-achieving schools, and supported by capable intermediaries willing and able to work effectively in struggling schools.

Ironically, SIG did finally offer such an option, but it was too little, too late. In 2015, SIG added two new models, one of which was an Evidence-Based, Whole-School Reform model that allowed schools to use SIG funds to adopt a proven whole-school approach. The U.S. Department of Education carefully reviewed the evidence and identified four approaches with strong evidence and the capacity to scale up that could be used under this model. But hardly any schools chose these approaches, because there was little promotion of the new models, and few school, district, or state leaders to this day even know they exist.

The old SIG program is changing under the Every Student Succeeds Act (ESSA). In order to receive school improvement funding under ESSA, schools will have to select from programs that meet the strong, moderate, or promising evidence requirements defined in ESSA. Evidence for ESSA, the free web site we are due to release later this month, will identify more than 90 reading and math programs that meet these requirements.

This is a new opportunity for federal, state, and district officials to promote the use of proven programs and build local capacity to disseminate proven approaches. Instead of being seen as a trip to the woodshed, school improvement funding might be seen as an opportunity for eager teachers and administrators to do cutting edge instruction. Schools using these innovative approaches might become more exciting and fulfilling places to work, attracting and retaining the best teachers and administrators, whose efforts will be reflected in their students’ success.

Perhaps this time around, school improvement will actually improve schools.