Succeeding Faster in Education

“If you want to increase your success rate, double your failure rate.” So said Thomas Watson, the founder of IBM. What he meant, of course, is that people and organizations thrive when they try many experiments, even though most experiments fail. Failing twice as often means trying twice as many experiments, leading to twice as many failures—but also, he was saying, many more successes.

[Photo: Thomas Watson]

In education research and innovation circles, many people know this quote and use it to console colleagues whose experiments did not produce significant positive outcomes. A lot of consolation is necessary, because most high-quality experiments in education do not produce significant positive outcomes. In studies funded by the Institute of Education Sciences (IES), Investing in Innovation (i3), and England’s Education Endowment Foundation (EEF), all of which require very high standards of evidence, fewer than 20% of experiments show significant positive outcomes.

The high rate of failure in educational experiments often shocks non-researchers, especially the government agencies, foundations, publishers, and software developers who commission the studies. I was at a conference recently at which a Peruvian researcher presented the devastating results of an experiment in which high-poverty, mostly rural schools in Peru were randomly assigned either to receive computers for all of their students or to continue with usual instruction. The Peruvian Ministry of Education was so confident that the computers would be effective that it had built a huge model of the specific computers used in the experiment and attached it to the Ministry headquarters. When the results showed no positive outcomes (except for the ability to operate computers), the Ministry quietly removed the computer model from the top of its building.

Improving Success Rates

Much as I believe Watson’s admonition (“fail more”), there is another principle he was implying, or so I expect: we have to learn from failure, so we can increase the rate of success. It is not realistic to expect government to keep investing substantial funding in high-quality educational experiments if the success rate remains below 20%. We have to get smarter, so we can succeed more often. Fortunately, qualitative measures, such as observations, interviews, and questionnaires, are becoming required elements of funded research, making it easier to understand what actually happened in a study and why it may have fallen short. Was the experimental program faithfully implemented? Did teachers or students respond to the program in unexpected ways?

In the course of my work reviewing positive and disappointing outcomes of educational innovations, I’ve noticed some patterns that often predict that a given program is likely or unlikely to be effective in a well-designed evaluation. Some of these are as follows.

  1. Small changes lead to small (or zero) impacts. In every subject and grade level, researchers have evaluated new textbooks, in comparison to existing texts. These almost never show positive effects. The reason is that textbooks are just not that different from each other. Approaches that do show positive effects are usually markedly different from ordinary practices or texts.
  2. Successful programs almost always provide a lot of professional development. The programs that have significant positive effects on learning are ones that markedly improve pedagogy. Changing teachers’ daily instructional practices usually requires initial training followed by on-site coaching by well-trained and capable coaches. Lots of PD does not guarantee success, but minimal PD virtually guarantees failure. Sufficient professional development can be expensive, but education itself is expensive, and adding a modest amount to per-pupil cost for professional development and other requirements of effective implementation is often the best way to substantially enhance outcomes.
  3. Effective programs are usually well-specified, with clear procedures and materials. Programs rarely work if teachers are unclear about what they are expected to do and are not helped to do it. In the Peruvian study of one-to-one computers, for example, students were given tablet computers at a per-pupil cost of $438, and teachers were expected to figure out how best to use them. In fact, a qualitative study found that the computers were considered so valuable that many teachers locked them up except for specific times when they were to be used. Teachers lacked specific instructional software, and they lacked the professional development that might have helped them create or make good use of such software. No wonder “it” didn’t work. Other than the physical computers, there was no “it.”
  4. Technology is not magic. Technology can create opportunities for improvement, but there is little understanding of how to use technology to greatest effect. My colleagues and I have done reviews of research on effects of modern technology on learning. We found near-zero effects of a variety of elementary and secondary reading software (Inns et al., 2018; Baye et al., in press), with a mean effect size of +0.05 in elementary reading and +0.00 in secondary. In math, effects were slightly more positive (ES=+0.09), but still quite small, on average (Pellegrini et al., 2018). Some technology approaches had more promise than others, but it is time that we learned from disappointing as well as promising applications. The widespread belief that technology is the future must eventually be right, but at present we have little reason to believe that technology is transformative, and we don’t know which form of technology is most likely to be transformative.
  5. Tutoring is the most solid approach we have. Reviews of elementary reading for struggling readers (Inns et al., 2018), secondary struggling readers (Baye et al., in press), and elementary math (Pellegrini et al., 2018) find outcomes for various forms of tutoring that are far beyond effects seen for any other type of treatment. Everyone knows this, but thinking about tutoring falls into two camps. One, typified by advocates of Reading Recovery, takes the view that tutoring is so effective for struggling first graders that it should be used no matter what the cost. The other, also perhaps thinking about Reading Recovery, rejects this approach because of its cost. Yet recent research on tutoring methods is finding strategies that are cost-effective and feasible. First, studies in both reading (Inns et al., 2018) and math (Pellegrini et al., 2018) find no difference in outcomes between certified teachers and paraprofessionals using structured one-to-one or one-to-small-group tutoring models. Second, although one-to-one tutoring is more effective than one-to-small-group tutoring, one-to-small-group tutoring is far more cost-effective, as one trained tutor can work with 4 to 6 students at a time (a rough cost comparison appears in the sketch following this list). Also, recent studies have found that tutoring can be just as effective in the upper elementary and middle grades as in first grade, so this strategy may have broader applicability than it has had in the past. The real challenge for research on tutoring is to develop and evaluate models that increase the cost-effectiveness of this clearly effective family of approaches.
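
To make the cost-effectiveness arithmetic concrete, here is a minimal sketch in Python. All of the salary and caseload figures are hypothetical placeholders of my own, not numbers from the studies cited above; the sketch only illustrates the structure of the comparison between one-to-one tutoring by a certified teacher and one-to-four tutoring by a paraprofessional.

```python
# Illustrative comparison of per-student tutoring costs.
# Every figure below is a hypothetical placeholder, not data from the cited reviews.

def annual_cost_per_student(tutor_salary: float, group_size: int, daily_sessions: int) -> float:
    """Annual tutoring cost per student served, under deliberately simple assumptions."""
    students_served = group_size * daily_sessions
    return tutor_salary / students_served

# One-to-one tutoring by a certified teacher (assumed $60,000 salary, 6 tutoring slots per day).
teacher_one_to_one = annual_cost_per_student(60_000, group_size=1, daily_sessions=6)

# One-to-four tutoring by a paraprofessional (assumed $30,000 salary, same 6 slots per day).
para_one_to_four = annual_cost_per_student(30_000, group_size=4, daily_sessions=6)

print(f"Certified teacher, 1:1 -> ${teacher_one_to_one:,.0f} per student per year")
print(f"Paraprofessional, 1:4  -> ${para_one_to_four:,.0f} per student per year")
```

With these made-up numbers, the one-to-four paraprofessional model costs roughly one-eighth as much per student served as one-to-one teacher tutoring. The exact figures matter far less than the structure of the comparison, which is the point the reviews above are making.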

The extraordinary advances in the quality and quantity of research in education, led by investments from IES, i3, and the EEF, have raised expectations for research-based reform. However, the modest percentage of recent studies meeting current rigorous standards of evidence has caused disappointment in some quarters. Instead, all findings, whether immediately successful or not, should be seen as crucial information. Some studies identify programs ready for prime time right now, but the whole body of work can and must inform us about areas worthy of expanded investment, as well as areas in need of serious rethinking and redevelopment. The evidence movement, in the form it exists today, is completing its first decade. It’s still early days. There is much more we can learn and do to develop, evaluate, and disseminate effective strategies, especially for students in great need of proven approaches.

References

Baye, A., Lake, C., Inns, A., & Slavin, R. (in press). Effective reading programs for secondary students. Reading Research Quarterly.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2018). Effective programs for struggling readers: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Pellegrini, M., Inns, A., & Slavin, R. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

 Photo credit: IBM [CC BY-SA 3.0  (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 


Beyond the Spaghetti Bridge: Why Response to Intervention is Not Enough

I know an engineer at Johns Hopkins University who invented the Spaghetti Bridge Challenge. Teams of students are given dry, uncooked spaghetti and glue, and are challenged to build a bridge over a 500-millimeter gap. The bridge that can support the most weight wins.

[Photo: a spaghetti bridge]

Spaghetti Bridge tournaments are now held all over the world, and they are wonderful for building interest in engineering. But I don’t think any engineer would actually build a real bridge based on a winning spaghetti bridge prototype. Much as spaghetti bridges do resemble the designs of real bridges, there are many more factors a real engineer has to take into account: Weight of materials, tensile strength, flexibility (in case of high winds or earthquakes), durability, and so on.

In educational innovation and reform, we have lots of great ideas that resemble spaghetti bridges. That’s because they would probably work great if only their components were ideal. One example is Response to Intervention (RTI), or its latest version, Multi-Tiered Systems of Supports (MTSS). Both RTI and MTSS start with a terrific idea: Instead of just testing struggling students to decide whether or not to assign them to special education, provide them with high-quality instruction (Tier 1), supplemented by modest assistance if that is not sufficient (Tier 2), supplemented by intensive instruction if Tier 2 is not sufficient (Tier 3). In law, or at least in theory, struggling readers must have had a chance to succeed in high-quality Tier 1, Tier 2, and Tier 3 instruction before they can be assigned to special education.

The problem is that there is no way to ensure that struggling students truly received high-quality instruction at each tier. Teachers do their best, but it is difficult to invent effective approaches from scratch. MTSS and RTI are great ideas, but their success depends on the effectiveness of whatever struggling students actually receive as Tier 1, 2, and 3 instruction.

This is where spaghetti bridges come in. Many bridge designs can work in theory (or in spaghetti), but whether a bridge really works in the real world depends on how it is made and with what materials, in light of the demands that will be placed on it.

The best way to ensure that all components of an RTI or MTSS policy are likely to be effective is to select approaches for each tier that have themselves been proven to work. Fortunately, there is now a great deal of research establishing the effectiveness of programs for struggling students that use whole-school or whole-class methods (Tier 1), one-to-small-group tutoring (Tier 2), or one-to-one tutoring (Tier 3). Many of these tutoring models are particularly cost-effective because they successfully provide struggling readers with tutoring from well-qualified paraprofessionals, usually ones with bachelor’s degrees but not teaching certificates. Research on both reading and math tutoring has clearly established that the students of such paraprofessional tutors, using structured models, gain at least as much as the students of tutors who are certified teachers. This is important not only because paraprofessionals cost about half as much as teachers, but also because there are chronic teacher shortages in high-poverty areas, such as inner-city and rural locations, so certified teacher tutors may not be available at any cost.

If schools choose proven components for their MTSS/RTI models, and implement them with thought and care, they are sure to see enhanced outcomes for their struggling students. The concept of MTSS/RTI is sound, and the components are proven. How could the outcomes be less than stellar? And in addition to improved achievement for vulnerable learners, hiring many paraprofessionals to serve as tutors in disadvantaged schools could enable schools to attract and identify capable, caring young people with bachelor’s degrees, to whom they could offer accelerated certification, enriching the local teaching force.

With a spaghetti bridge, a good design is necessary but not sufficient. The components of that design, its ingredients, and its implementation determine whether the bridge stands or falls in practice. So it is with MTSS and RTI. An approach based on strong evidence of effectiveness is essential to enable these good designs to achieve their goals.

Photo credit: CSUF Photos (CC BY-NC-SA 2.0), via flickr

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

First There Must be Love. Then There Must be Technique.

I recently went to Barcelona. This was my third time in this wonderful city, and for the third time I visited La Sagrada Familia, Antoni Gaudi’s breathtaking church. It was begun in the 1880s, and Gaudi worked on it from the time he was 31 until he died in 1926 at 73. It is due to be completed in 2026.

Every time I go, La Sagrada Familia has grown even more astonishing. In the nave, massive columns branching into tree shapes hold up the spectacular roof. The architecture is extremely creative, and wonders lie around every corner.

[Photo: La Sagrada Familia, Barcelona]

I visited a new museum under the church. At the entrance, it had a Gaudi quote:

First there must be love.

Then there must be technique.

This quote sums up La Sagrada Familia. Gaudi used complex mathematics to plan his constructions. He was a master of technique. But he knew that it all meant nothing without love.

In writing about educational research, I try to remind my readers of this from time to time. There is much technique to master in creating educational programs, evaluating them, and fairly summarizing their effects. There is even more technique in implementing proven programs in schools and classrooms, and in creating policies to support use of proven programs. But what Gaudi reminds us of is just as essential in our field as it was in his. We must care about technique because we care about children. Caring about technique just for its own sake is of little value. Too many children in our schools are failing to learn adequately. We cannot say, “That’s not my problem, I’m a statistician,” or “That’s not my problem, I’m a policymaker,” or “That’s not my problem, I’m an economist.” If we love children and we know that our research can help them, then it’s everyone’s problem. All of us go into education to solve real problems in real classrooms. That’s the structure we are all building together over many years. Building this structure takes technique, and the skilled efforts of many researchers, developers, statisticians, superintendents, principals, and teachers.

Each of us brings his or her own skills and efforts to this task. None of us will live to see our structure completed, because education keeps growing in techniques and capability. But as Gaudi reminds us, it’s useful to stop from time to time and remember why we do what we do, and for whom.

Photo credit: By Txllxt TxllxT [CC BY-SA 4.0  (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

The Mill and The School

On a recent trip to Scotland, I visited some very interesting oat mills. I always love to visit medieval mills, because I find it endlessly fascinating how people long ago used natural forces and materials – wind, water, and fire; stone, wood, and metal – to create advanced mechanisms that had a profound impact on society.

In Scotland, it’s all about oat mills (almost everywhere else, it’s wheat). These grain mills date back to the 10th century. In their time, they were a giant leap in technology. A mill is very complicated, but at its heart are two big innovations. In the center of the mill, a heavy millstone turns on top of another. The grain is poured through a hole in the top stone for grinding. The miller’s most difficult task is to maintain an exact distance between the stones. A few millimeters too far apart and no milling happens. A few millimeters too close and the heat of friction can ruin the machinery, possibly causing a fire.

The other key technology is the water wheel (except in windmills, of course). The wheel is part of a system built around a carefully controlled flow of water from a millpond; the miller releases exactly the right amount of water to turn the giant wooden wheel, which powers the top millstone.

[Image: The Maid of the Mill]

The medieval grain mill is not a single innovation, but a closely integrated system of innovations. Millers learned to manage this complex technology in a system of apprenticeship over many years.

Mills enabled medieval millers to obtain far more nutrition from an acre of grain than was possible before. This made it possible for land to support many more people, and the population surged. The whole feudal system was built around the economics of mills, and mills thrived through the 19th century.

What does the mill have to do with the school? Mills only grind well-behaved grain into well-behaved flour, while schools work with far more complex children, families, and all the systems that surround them. The products of schools must include joy and discovery, knowledge and skills.

Yet as different as they are, mills have something to teach us. They show the importance of integrating diverse systems that can then efficiently deliver desired outcomes. Neither a mill nor an effective school comes into existence because someone in power tells it to. Instead, complex systems, mills or schools, must be created, tested, adapted to local needs, and constantly improved. Once we know how to create, manage, and disseminate effective mills or schools, policies can be readily devised to support their expansion and improvement.

Important progress in societies and economies almost always comes about from development of complex, multi-component innovations that, once developed, can be disseminated and continuously improved. The same is true of schools. Changes in governance or large-scale policies can enhance (or inhibit) the possibility of change, but the reality of reform depends on creation of complex, integrated systems, from mills to ships to combines to hospitals to schools.

For education, what this means is that system transformation will come only when we have whole-school improvement approaches that are known to greatly increase student outcomes. Whole-school change is necessary because many individual improvements are needed to make big changes, and these must be carefully aligned with each other. Just as the huge water wheel and the tiny millstone adjustment mechanism and other components must work together in the mill, the key parts of a school must work together in synchrony to produce maximum impact, or the whole system fails to work as well as it should.

For example, if you look at research on proven programs, you’ll find effective strategies for school management, for teaching, and for tutoring struggling readers. These are all well and good, but they work so much better if they are linked to each other.

To understand this, first consider tutoring. Especially in the elementary grades, there is no more effective strategy. Our recent review of research on programs for struggling readers finds that well-qualified teaching assistants can be as effective as teachers in tutoring struggling readers, and that while one-to-four tutoring is less effective than one-to-one, it is still a lot more effective than no tutoring. So an evidence-oriented educator might logically choose to implement proven one-to-one and/or one-to-small group tutoring programs to improve school outcomes.

However, tutoring only helps the students who receive it, and it is expensive. A wise school administrator might reason that tutoring alone is not sufficient, but improving the quality of classroom instruction is also essential, both to improve outcomes for students who do not need tutoring and to reduce the number of students who do need tutoring. There is an array of proven classroom methods the principal or district might choose to improve student outcomes in all subjects and grade levels (see www.evidenceforessa.org).

But now consider students who are at risk because they are not attending regularly, or have behavior problems, or need eyeglasses but do not have them. Flexible school-level systems are necessary to ensure that students are in school, eager to learn, well-behaved, and physically prepared to succeed.

In addition, there is a need to have school principals and other leaders learn strategies for making effective use of proven programs. These would include managing professional development, coaching, monitoring implementation and outcomes of proven programs, distributed leadership, and much more. Leadership also requires jointly setting school goals with all school staff and monitoring progress toward these goals.

These are all components of the education “mill” that have to be designed, tested, and (if effective) disseminated to ever-increasing numbers of schools. Like the mill, an effective school design integrates individual parts, makes them work in synchrony, constantly assesses their functioning and output, and adjusts procedures when necessary.

Many educational theorists argue that education will only change when systems change. Ferocious battles rage about charters vs. ordinary public schools, about adopting policies of countries that do well on international tests, and so on. These policies can be important, but they are unlikely to create substantial and lasting improvement unless they lead to development and dissemination of proven whole-school approaches.

Effective school improvement is not likely to come about from let-a-thousand-flowers-bloom local innovation, nor from top-level changes in policy or governance. Sufficient change will not come about by throwing individual small innovations into schools and hoping they will collectively make a difference. Instead, effective improvement will take root when we learn how to reliably create effective programs for schools, implement them in a coordinated and planful way, find them effective, and then disseminate them. Once such schools are widespread, we can build larger policies and systems around their needs.

Coordinated, schoolwide improvement approaches offer schools proven strategies for increasing the achievement and success of their children. There should be many programs of this kind, among which schools and districts can choose. A school is not the same as a mill, but the mill provides at least one image of how creating complex, integrated, replicable systems can change whole societies and economies. We should learn from this and many other examples of how to focus our efforts to improve outcomes for all children.

Photo credit: By Johnson, Helen Kendrik [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

How Classroom-Invented Innovations Can Have Broad Impacts

When I was in high school, I had an after-school job at a small electronics company that made and sold equipment, mostly to the U.S. Navy. My job was to work with another high school student and our foreman to pack and unpack boxes, do inventories, and generally do whatever needed doing.

One of our regular tasks was very time-consuming. We had to test solder extractors to be sure they were working. We’d have to heat up each one for several minutes, touch a bit of solder to it, and wipe off any residue.

One day, my fellow high school student and I came up with an idea. We took 20 solder extractors and lined them up on a work table with 20 electrical outlets. We then plugged them in. By the time we’d plugged in #20, #1 was hot, so we could go back and test it, then #2, and so on. An hour-long job was reduced to 10 minutes. We were being paid the princely sum of $1.40 an hour, so we were saving the company big bucks. Our foreman immediately saw the advantages, and he told the main office about our idea.

Up in the main office, far from the warehouse, was a mean, mean man. He wore a permanent scowl. He had a car with mean, mean bumper stickers. I’ll call him Mr. Meanie.

Mr. Meanie hated everyone, but he especially hated the goofy, college-bound high school students in the warehouse. So he had to come see what we were doing, probably to prove that it was a dumb idea.

Mr. Meanie came and asked me to show him the solder extractors. I laid them out, same as always, and everything worked, same as always, but due to my anxiety under Mr. Meanie’s scowl, I let one of the cords touch its neighboring solder extractor. It was ruined.

Mr. Meanie looked satisfied (probably thinking, “I knew it was a dumb idea”), and left without a word. But as long as I worked at the company, we never again tested solder extractors one at a time (and never scorched another cord). My guess is that long after we were gone, our method remained in use despite Mr. Meanie. We’d overcome him with evidence that no one could dispute.

In education, we employ some of the smartest and most capable people anywhere as teachers. Teachers innovate, and many of their innovations undoubtedly improve their own students’ outcomes. Yet because most teachers work alone, their innovations rarely spread or stick even within their own schools. When I was a special education teacher long ago, I made up and tested out many innovations for my very diverse, very disabled students. Before heading off for graduate school, I wrote them out in detail for whoever was going to receive my students the following year. Perhaps their next teachers received and paid attention to my notes, but probably not, and they could not have had much impact for very long. More broadly, there is just no mechanism for identifying and testing out teachers’ innovations and then disseminating them to others, so they have little impact beyond the teacher and perhaps his or her colleagues and student teachers, at best.

One place in the education firmament where teacher-level innovation is encouraged, noted, and routinely disseminated is in comprehensive schoolwide approaches, such as our own Success for All (SFA). Because SFA has its own definite structure and materials, promising innovations in any school or classroom may immediately apply to the roughly 1000 schools we work with across the U.S. Because SFA schools have facilitators within each school and coaches from the Success for All Foundation who regularly visit in teachers’ classes, there are many opportunities for teachers to propose innovations and show them off. Those that seem most promising may be incorporated in the national SFA program, or at least mentioned as alternatives in ongoing coaching.

As one small example, SFA constantly has students take turns reading to each other. There used to be arguments and confusion about who goes first. A teacher in Washington, DC noticed this and invented a solution. She appointed one student in each dyad to be a “peanut butter” and the other to be a “jelly.” Then she’d say, “Today, let’s start with the jellies,” and the students started right away without confusion or argument. Now, 1000 schools use this method.

A University of Michigan professor, Don Peurach, studied this very aspect of Success for All and wrote a book about it, called Seeing Complexity in Public Education (Oxford University Press, 2011). He visited dozens of SFA schools, SFA conferences, and professional development sessions, and interviewed hundreds of participants. What he described is an enterprise that shares evidence-proven practices with schools while, at the same time, learning from the innovations and problem solutions devised in schools and communicating best practices back out to the whole network.

I’m sure that other school improvement networks do the same, because it just makes sense. If you have a school network with common values, goals, approaches, and techniques, how does it keep getting better over time if it does not learn from those who are on the front lines? I’d expect that such very diverse networks as Montessori and Waldorf schools, KIPP and Success Academy, and School Development Program and Expeditionary Learning schools, must do the same. Each of the improvements and innovations contributed by teachers or principals may not be big enough to move the needle on achievement outcomes by themselves, but collectively they keep programs moving forward as learning organizations, solving problems and improving outcomes.

In education, we have to overcome our share of Mr. Meanies trying to keep us from innovating or evaluating promising approaches. Yet we can overcome blockers and doubters if we work together to progressively improve proven programs. We can overwhelm the Mr. Meanies with evidence that no one can dispute.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Education Innovation and Research: Innovating Our Way to the Top

How did America get to be the wealthiest and most powerful country on Earth?

To explain, let me tell you about visiting a remote mountain village in Slovakia. I arrived in the evening, as the ancient central square filled up with people. Every man, woman, and child had a cell phone. Invented in America.

In the local hospital, I’m sure that most medicines were invented in America, which does more medical research than all other nations combined. Local farmers probably planted seeds and used methods developed in the U.S. Everywhere in the world, everyone watches American movies, listens to American music, and on and on.

America’s brand, the source of our wealth, is innovation.

America has long led the world in creating wealth by creating new ideas and putting them in practice. Technology? Medicine? Agriculture? America dominates the world in each of these fields, and many more. The reason is that America innovates, constantly finding new ways to solve problems, cure diseases, grow better crops, and generally do things less expensively. I am often at Johns Hopkins Hospital, where the halls are full of patients from every part of the globe. They come to Johns Hopkins because of its reputation for innovation.

In education, we face daunting problems, especially in educating disadvantaged students. So to solve these problems, you’d naturally expect that we’d turn to the principle that has led to our success in so many fields – innovation.

The Every Student Succeeds Act (ESSA), passed by Congress and signed into law in December 2015, has taken just this view. In it, for the first time ever, is a definition of the evidence required for a program or practice to be considered “strong,” “moderate,” or “promising.” These definitions encourage educators to adopt proven programs, but for this to work, we have to have a steady stream of proven innovations appearing each year. This function is fulfilled by another part of ESSA, the Education Innovation and Research (EIR) grant program. The EIR provision, which was included in ESSA with bipartisan support, provides a tiered evidence approach to research that will constantly add to the body of programs that meet the ESSA evidence requirements. Proposals are invited for “early-phase,” “mid-phase,” and “expansion” grants to support the development, validation, and scale-up of successful innovations that originate at the state and local levels. Based on the U.S. Department of Education’s recent EIR grant application process, it appears (as is expected from a tiered evidence design) that many early-phase grants of up to $3 million will be made, fewer mid-phase grants of up to $8 million, and very few expansion grants of up to $15 million, all over five years. Anyone can apply for an early-phase grant, but applicants must already have some evidence to support their program to get a mid-phase grant, and a lot of very rigorous evidence to apply for an expansion grant. All three types of grants require third-party evaluations – which will serve to improve programs all along the spectrum of effectiveness – but mid-phase and expansion grants require large, randomized evaluations, and expansion grants additionally require national dissemination.

The structure of EIR grants is intended to make the innovation process wide open to educators at all levels of state and local governments, non-profits, businesses, and universities. It is also designed to give applicants the freedom to suggest the nature of the program they want to create, thus allowing for a broad range of field-driven ideas that arise to meet recognized needs. EIR does encourage innovation in rural schools, which must receive at least 25% of the funding, but otherwise there is considerable freedom, drawing diverse innovators to the process.

EIR is an excellent investment. If only a few of the programs it supports end up showing positive outcomes and scaling up to serve many students across the U.S., then EIR funding will make a crucial difference to the educational success of hundreds of thousands or millions of students, improving outcomes on a scale that matters at modest cost.

EIR provides an opportunity for America to solve its education problems just as it has solved problems in many other fields: through innovation. That is what America does when it needs rapid and widespread success, as it so clearly does in education. In every subject and grade level, we can innovate our way to the top. EIR is providing the resources and structure to do it.

This blog is sponsored by the Laura and John Arnold Foundation

Evidence and Freedom

One of the strangest arguments I hear against evidence-based reform in education is that encouraging or incentivizing schools to use programs or practices proven to work in rigorous experiments will reduce the freedom of schools to do what they think is best for their students.

Freedom? Really?

To start with, consider how much freedom schools have now. Many districts and state departments of education have elaborate 100% evidence-free processes of restricting the freedom of schools. They establish lists of approved providers of textbooks, software, and professional development, based perhaps on state curriculum standards but also on current trends, fads, political factors, and preferences of panels of educators and other citizens. Many states have textbook adoption standards that consider paper weight, attractiveness, politically correct language, and other surface factors, but never evidence of effectiveness. Federal policies specify how teachers should be evaluated, how federal dollars should be utilized, and how students should be assessed. I could go on for more pages than anyone wants to read with examples of how teachers’ and principals’ choices are constrained by district, state, and federal policies, very few of which have ever been tested in comparison to control groups. Why do schools use this textbook or that software or the other technology? Because their district or state bought it for them, trained them in its use (perhaps), and gave them no alternative.

The evidence revolution offers the possibility of freedom, if the evidence now becoming widely available is used properly. The minimum principle of evidence-based reform should be this: “If it is proven to work, you are allowed to use it.”

At bare minimum, evidence of effectiveness should work as a “get out of jail free” card to counter whatever rules, restrictions, or lists of approved materials schools have been required to follow.

But permission is not enough, because mandated, evidence-free materials, software, and professional development may eat up the resources needed to implement proven programs. So here is a slightly more radical proposition: “Whenever possible, school staffs should have the right, by majority vote of the staff, to adopt proven programs to replace current programs mandated by the district or state.”

For example, when a district or state requires use of anything, it could make the equivalent in money available to schools to use to select and implement programs proven to be effective in producing the desired outcome. If the district adopts a new algebra text or elementary science curriculum, for instance, it could allow schools to select an alternative with good evidence of effectiveness for algebra or elementary science, as long as the school agrees to implement the program with fidelity and care, achieving levels of implementation like those in the research that validated the program.

The next level of freedom to choose what works would be to provide incentives and support for schools that select proven programs and promise to implement them with fidelity.

“Schools should be able to apply for federal, state, or local funds to implement proven programs of their choice. Alternatively, they may receive competitive preference points on grants if they promise to adopt and effectively implement proven programs.”

This principle exists today in the Every Student Succeeds Act (ESSA), where schools applying for school improvement funding must select programs that meet one of three levels of evidence: strong (at least one randomized experiment with positive outcomes), moderate (at least one quasi-experimental [matched] study with positive outcomes), or promising (at least one correlational study with positive outcomes). In seven other programs in ESSA, schools applying for federal funds receive extra competitive preference points on their applications if they commit to using programs that meet one of those three levels of evidence. The principle in ESSA – that use of proven programs should be encouraged – should be expanded to all parts of government where proven programs exist.

One problem with these principles is that they depend on having many proven programs in each area from which schools can choose. At least in reading and math, grades K-12, this has been accomplished; our Evidence for ESSA website describes approximately 100 programs that meet the top three ESSA evidence standards. More than half of these meet the “strong” standard.

However, we must have a constant flow of new approaches in all subjects and grade levels. Evidence-based policy requires continuing investments in development, evaluation, and dissemination of proven programs. The Institute of Education Sciences (IES), the Investing in Innovation (i3) program, and now the Education Innovation and Research (EIR) grant program, help fulfill this function, and they need to continue to be supported in their crucial work.

So is this what freedom looks like in educational innovation? I would argue that it is. Note that what I did not say is that programs lacking evidence should be forbidden. Mandating use of programs, no matter how well evaluated, is a path to poor implementation and political opposition. Instead, schools should have the opportunity and the funding to adopt proven programs. If they prefer not to do so, that is their choice. But my hope and expectation is that in a political system that encourages and supports use of proven programs, educators will turn out in droves to use better programs, and the schools that might have been reluctant at first will see and emulate the success their neighbors are having.

Freedom to use proven programs should help districts, states, and the federal government have confidence that they can at long last stop trying to micromanage schools. If policymakers know that schools are making good choices and getting good results, why should they want to get in their way?

Freedom to use whatever is proven to enhance student learning. Doesn’t that have a nice ring to it? Like the Liberty Bell?

This blog is sponsored by the Laura and John Arnold Foundation