Improvement by Design


I just read a very interesting book, Improvement by Design: The Promise of Better Schools, by David Cohen, Donald Peurach, Joshua Glazer, Karen Gates, and Simona Goldin. From 1996 to 2008, researchers originally at the University of Michigan studied three of the largest comprehensive school reform models of the time: America’s Choice (AC), Accelerated Schools Plus (ASP), and our own Success for All (SFA). A portion of the study, led by Brian Rowan, compared 115 elementary schools using one of these models to a matched control group and to each other. The quantitative study found that Success for All had strong impacts on reading achievement by third grade, America’s Choice had strong impacts on writing, and there were few impacts of Accelerated Schools Plus.

Improvement by Design tells a different story, based on qualitative studies of the three organizations over a very long time period. Despite sharp differences between the models, all of the organizations had to face a common set of challenges: creating viable models and organizations to support them, dealing with rapid scale-up through the 1990s (especially during the time period from 1997 to 2002 when Obey-Porter Comprehensive School Reform funding was made available to schools), and then managing catastrophe when the George W. Bush Administration ended comprehensive school reform.

The book is straightforward history, comparing and contrasting these substantial reform efforts, and does not directly draw policy conclusions. However, there is much in it that does have direct policy consequences. These are my conclusions, not the authors’, but I think they are consistent with the history.

1. Large-scale change that dramatically changes daily teaching is difficult but not impossible in high-poverty schools. All three models have worked in hundreds of schools, as have several other whole-school reform models.

2. Providing general principles and then leaving schools to create the details for themselves is not a successful strategy. This is what Accelerated Schools Plus tried to do, and the Michigan study not only found that ASP failed to change student outcomes, but also that it failed to have much observable impact on teaching, in contrast to AC and SFA.

3. What (2) implies is that if whole-school “improvement by design” is to succeed in the thousands of Title I schools that need it, large, well-managed, and well-capitalized organizations are necessary to provide high-quality and very specific training, coaching, and materials to implement proven models.

4. Federal policies (at least) need to be consistently hospitable to an environment in which schools and districts are choosing among many proven whole-school models. For example, federal requests for proposals might have a few competitive preference points for schools proposing to use whole-school reform models with strong evidence of effectiveness. This would signal an invitation to adopt such models without forcing schools to do so and risking extensive pushback. Further, federal policies promoting use of proven whole-school models should remain in effect for an extended period. Turmoil introduced by changing federal support for whole-school reform was very damaging to earlier efforts.

Improvement by Design provides a tantalizing glimpse of what could be possible in a system that offers a diversity of proven, whole-school options for high-poverty schools. This approach to reform has many obstacles to overcome, of course. But of what approach radical enough and scalable enough to potentially reform American education would this not be true?


Use Proven Programs or Manage Using Data? Two Approaches to Evidence-Based Reform


In last week’s blog, I welcomed the good work Results for America (RfA) is doing to promote policy support for evidence-based reform. In reality, RfA is promoting two quite different approaches to evidence-based reform, both equally valid but worth discussing separately.

The first evidence-based strategy is “use proven programs (UPP),” which is what most academics and reformers think of when they think of evidence-based reform. This strategy depends on creation, evaluation, and widespread dissemination of proven programs capable of, for example, teaching beginning reading, algebra, or biology better than current methods do. If you see “programs” as being like the individual baseball players that Oakland A’s general manager Billy Beane chose based on statistics, then RfA’s “Moneyball” campaign could be seen as consistent in part with the “use proven programs” approach.

The second strategy might be called “manage using data (MUD).” The idea is for leaders of complex organizations, such as mayors or superintendents, to use data systematically to identify and understand problems and then to test out the solutions, expanding solutions that work and scaling back or abandoning others. This is the approach used in the “Geek Cities” celebrated by RfA.

“Use proven programs” and “manage using data” have many similarities, of course. Both emphasize hard-headed, sophisticated use of data. Advocates of both approaches would be comfortable with the adage, “In God we trust. All others bring data.”

However, there are key differences between the UPP and MUD approaches that have important consequences for policy and practice. UPP emphasizes the creation of relatively universal solutions to widespread problems. In this, it draws from a deep well of experience in medicine, agriculture, technology, and other fields. When an innovator develops a new heart valve, a cow that produces more milk, or a new cell phone, and proves that it produces better outcomes than current solutions, that solution can be used with confidence in a broad range of circumstances, and may have immediate and profound impacts on practice.

In contrast, when a given school district succeeds with the MUD approach (for example, analyzing where school violence is concentrated, placing additional security guards in those areas, and then noting the changes in violence), this success is likely to be valued and acted upon by district leaders, because the data come from a context they understand and are collected by people they employ and trust. However, the success is unlikely to spread to or be easily replicated by other school districts. The MUD district may tout its success, but district leaders are not particularly motivated to tell outsiders about their successes, and usually lack sufficient staff to even write them up. Further, since MUD approaches are not designed for replication, they may or may not work in other places with different contexts. The difficulty of replicating success in a different context also applies to UPP strategies, but after several program evaluations in different contexts, program developers are likely to be able to say where their approach is most and least likely to work.

From a policy perspective, MUD and UPP approaches can and should work together. A district, city, or state that proactively uses data to analyze all aspects of its own functioning and to test out its own innovations or variations in services should also be eager to adopt programs proven to be effective elsewhere, perhaps conducting its own evaluations and/or adaptations to local circumstances. If the bottom line is what’s best for children, then a mix of solutions “made and proven here” and those “made and proven elsewhere and replicated or tested here” seems optimal.

For federal policy, however, the two approaches lead in somewhat different directions. The federal government cannot do a great deal to encourage local governments to use their own data wisely. In areas such as education, federal and state governments use accountability schemes of various kinds as a means of motivating districts to use data-driven management, but NAEP score trends since accountability took off in the early 1980s suggest that this strategy is not going very well. The federal government could identify well-defined, proven, and replicable “manage using data” methods, but if it did, those MUD models would just become a special case of “use proven programs” (and in fact, the great majority of education programs proven to work use data-driven management in some form as part of their approach).

In contrast, the federal government can do a great deal to promote “use proven programs.” In education, of course, it is doing so with Investing in Innovation (i3), the What Works Clearinghouse (WWC), and most of the research at the Institute of Education Sciences (IES). All of these are building up the number of proven programs ready to be used broadly, and some are helping programs to start or accelerate scale-up. The proven programs emerging from these and other sources create enormous potential, but they are not yet having much impact on federal policies relating, for example, to Title I or School Improvement Grants. This could be coming, however.

Ultimately, “use proven programs” and “manage using data” should become a seamless whole, using every tool of policy and practice to see that children are succeeding in school, whatever that takes. But the federal government is wisely taking the lead in building up capacity for replicable “use proven programs” strategies to provide new visions and practical guides to help schools improve themselves.

Results for America: A Welcome Addition to Evidence-Based Reform


Advocating for evidence-based reform in education is often a lonely activity, but over the past couple of years, it’s gotten a lot less lonely due to the arrival of Results for America (RfA), an initiative dedicated to encouraging government at all levels to make more and better use of evidence to make important decisions. RfA is not limited to education, but covers all areas of human services for children and families. RfA is led by Michele Jolin and is part of a broader organization called America Achieves, led by Jon Schnur. Both Jon and Michele have had extensive experience working in government.

One constant meme adopted by RfA is “Moneyball for Government,” based on the movie “Moneyball” about legendary Oakland A’s general manager Billy Beane. Constrained by limited resources, unlike the deep-pocketed Yankees, who could offer high-salaried contracts to the best players, Beane engaged a statistician to help him find less sought-after or lower-cost players who did not fit all his scouts’ requirements but somehow managed to get on base a lot.

As applied to government, Moneyball means using data to identify programs and practices that have been proven to produce valued outcomes. In education, for example, a leader who played Moneyball would actively seek reading, math, science, and other programs that had been rigorously evaluated and found to be effective. A Moneyball approach would also include using data to identify programs or approaches already in place within the district and expand the ones that are working. RfA has even identified “Geek Cities,” where the local government does a particularly good job of using data to make effective and efficient use of resources to serve children and families.

RfA is strictly and wisely nonpartisan, arguing that everyone, liberal or conservative, has an interest in seeing that resources are used sensibly and well. As time goes on, Results for America is making significant inroads in the policy world, helping political leaders of all stripes get comfortable with the idea of heightening the role of evidence in policymaking.