Smart Philanthropy

Americans are very generous. We give more than $300 billion annually to every kind of charity, hoping to do good in the world. All charities have mission statements, stated right on the dozens of calendars they send us, and most claim to be making a difference in some valued outcome. Yet tough-minded, tender-hearted donors want to know more. Is their donation producing a concrete outcome?

Dean Karlan, a Yale economist, has just launched a new organization, called ImpactMatters, to do “impact audits” on nonprofits. These audits focus primarily on assessing the impact of charities’ services on the outcomes they claim to affect. Karlan’s team examines the evidence, particularly randomized experiments, as well as financial and management issues, and picks out charities that are transparent, well-managed, and making a well-documented difference.

In a December 4th Wall Street Journal article, Karlan introduced his organization and its purpose, and the first four programs identified by ImpactMatters as meeting its standards were named on December 11th on the ImpactMatters website. One was our Success for All reading program, and the others were international charities focused on the ultra-poor in developing countries and healthcare in Nepal.

The appearance of ImpactMatters could make a difference in philanthropy, and that would be terrific in itself. However, its significance goes far beyond philanthropy.

ImpactMatters is one more indication that good intentions are no longer sufficient. Government, philanthropists, and leaders of all kinds are increasingly demanding rigorous evidence of impact. We all know where the road paved with good intentions goes. The road paved with good evidence, acted upon with integrity, purpose, and caring, goes straight to heaven. Karlan’s stated purpose is to help people give with their minds, not just their hearts. I hope this will make the difference it is intended to make. Personally, I’d rather get a lot fewer calendars and a lot more impact for my donations. Doesn’t everyone feel the same way?


Show Me the Evidence


This year, evidence-based reform in education got off to a great start with an article by Ron Haskins in the New York Times on December 31 explaining why evidence of effectiveness must become an expected part of the process by which policy ideas are adopted (or not). More recently, I received a book Haskins wrote with colleague Greg Margolis on this topic, Show Me the Evidence. Both the article and the book mentioned our Success for All program as an example of what “proven” looks like in education, but they are a lot more important than that.

Haskins makes a powerful argument for putting all social programs to the test. Those that work should be expanded. Those that don’t should be replaced by other approaches that work better.

The need for evidence should be obvious, but very few federal programs have evidence of effectiveness. Few even have a process for finding out what works and encouraging grantees to use proven approaches, instead of approaches with the same desired outcomes that do not work or whose effects are unknown. Haskins estimates that 75 percent of programs and practices intended to help people do better at school or work have little or no impact. Such programs are well-meaning, but they need to be improved or replaced with equally well-meaning approaches known under well-defined circumstances to have positive impacts.

The importance of Haskins’ article and book lies especially in the importance of Haskins himself. He knows whereof he speaks. Advisor to House Republicans in the 1990s and then an advisor to President George W. Bush on social policy, he was a key architect of the 1996 welfare overhaul. Welfare programs that worked improved people’s lives and saved federal and state governments billions of dollars. Those that didn’t were replaced. To this day, those innovations represent the best example of evidence-driven policy.

Haskins is a proud Republican. He wants every dollar of federal expenditure to do what it is intended to do. Is there anyone, of any political persuasion, who does not want the same? This is not a question of ideology. It’s a question of sound governance.

When Ron Haskins and others were starting out, evidence was a pretty risky idea. Today, evidence is showing up throughout government — still not nearly as much as it should, but far more than it did. Sooner or later, government will become more competent and cost-effective at achieving goals we all share. Ron Haskins was there when it mattered. He still is, and it still does.


Teachers as Professionals in Evidence-Based Reform


In a February 2012 op-ed in Education Week, Don Peurach wrote about a 14-year investigation he carried out as part of a large University of Michigan study of comprehensive school reform. In the overall study, our Success for All program and the America’s Choice program did very well in terms of both implementation and outcomes, while an approach in which teachers largely made up their own instructional approaches did not bring about much change in teachers’ behaviors or student learning. Because both Success for All and America’s Choice have well-specified training, teachers’ manuals, and student materials, the findings support the idea that it is important for school-wide reform models to have a well-structured approach.

Peurach’s focus was on Success for All as an organization. He wanted to know how our network of hundreds of schools in 40 states contributes to the development of the approach and to each other’s success. His key finding was that Success for All is not a top-down approach, but is constantly learning from its teachers and principals and then spreading good practices throughout the network.

In our way of thinking, this is the very essence of professionalism. A teacher who does wonderful, innovative things in one class is perhaps benefiting 25 children each year, but one whose ideas scale up to inform the practices of hundreds or thousands of schools is making a real difference. Yet in order for teachers’ ideas to have that broad impact, it helps a great deal for the teachers to be part of a national or regional network that speaks a common language and has common standards of practice.

Teachers need not be researchers to contribute to their profession. By participating in networks of like-minded educators – implementing, continuously improving, and communicating about practical, proven approaches intended to improve student outcomes – they play an essential role in the improvement of their profession.

Improvement by Design


I just read a very interesting book, Improvement by Design: The Promise of Better Schools, by David Cohen, Donald Peurach, Joshua Glazer, Karen Gates, and Simona Goldin. From 1996 to 2008, researchers originally at the University of Michigan studied three of the largest comprehensive school reform models of the time: America’s Choice (AC), Accelerated Schools Plus (ASP), and our own Success for All (SFA). A portion of the study, led by Brian Rowan, compared 115 elementary schools using one of these models to a matched control group and to each other. The quantitative study found that Success for All had strong impacts on reading achievement by third grade, America’s Choice had strong impacts on writing, and there were few impacts of Accelerated Schools Plus.
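Findings like “strong impacts on reading achievement” are typically reported as standardized effect sizes, the difference between treatment and control means divided by a pooled standard deviation. The sketch below shows one common version of that calculation (Cohen’s d); the scores are invented for illustration and are not data from the Michigan study.

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (dividing by n - 1)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Illustrative reading scores (hypothetical, not real study data)
sfa_scores = [52, 55, 61, 58, 64, 57]
control_scores = [48, 51, 53, 50, 55, 49]
d = cohens_d(sfa_scores, control_scores)
```

A positive d means the treatment group outscored the control group; in education research, effects of roughly 0.20 or more on broad achievement measures are often considered educationally meaningful, though conventions vary.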

Improvement by Design tells a different story, based on qualitative studies of the three organizations over a very long time period. Despite sharp differences between the models, all of the organizations had to face a common set of challenges: creating viable models and organizations to support them, dealing with rapid scale-up through the 1990s (especially during the time period from 1997 to 2002 when Obey-Porter Comprehensive School Reform funding was made available to schools), and then managing catastrophe when the George W. Bush Administration ended comprehensive school reform.

The book is straightforward history, comparing and contrasting these substantial reform efforts, and does not directly draw policy conclusions. However, there is much in it that does have direct policy consequences. These are my conclusions, not the authors’, but I think they are consistent with the history.

1. Large-scale change that dramatically changes daily teaching is difficult but not impossible in high-poverty schools. All three models have worked in hundreds of schools, as have several other whole-school reform models.

2. Providing general principles and then leaving schools to create the details for themselves is not a successful strategy. This is what Accelerated Schools Plus tried to do, and the Michigan study not only found that ASP failed to change student outcomes, but also that it failed to have much observable impact on teaching, in contrast to AC and SFA.

3. What (2) implies is that if whole-school “improvement by design” is to succeed in the thousands of Title I schools that need it, large, well-managed, and well-capitalized organizations are necessary to provide high-quality and very specific training, coaching, and materials to implement proven models.

4. Federal policies (at least) need to be consistently hospitable to an environment in which schools and districts are choosing among many proven whole-school models. For example, federal requests for proposals might have a few competitive preference points for schools proposing to use whole-school reform models with strong evidence of effectiveness. This would signal an invitation to adopt such models without forcing schools to do so and risking extensive pushback. Further, federal policies promoting use of proven whole-school models should remain in effect for an extended period. Turmoil introduced by changing federal support for whole-school reform was very damaging to earlier efforts.

Improvement by Design provides a tantalizing glimpse of what could be possible in a system that offers high-poverty schools a diversity of proven, whole-school options. This approach to reform has many obstacles to overcome, of course. But the same could be said of any approach radical enough and scalable enough to reform American education.

Lessons From Innovators: Calibrating Expectations for i3 Evaluation Results


The process of moving an educational innovation from a good idea to widespread effective implementation is far from straightforward, and no one has a magic formula for doing it. The William T. Grant and Spencer Foundations, with help from the Forum for Youth Investment, have created a community composed of grantees in the federal Investing in Innovation (i3) program to share ideas and best practices. Our Success for All program participates in this community. In this space, I, in partnership with the Forum for Youth Investment, highlight observations from the experiences of i3 grantees other than our own, in an attempt to share the thinking and experience of colleagues out on the front lines of evidence-based reform. This blog post is from Dr. Donald J. Peurach, Assistant Professor of Educational Studies in the University of Michigan’s School of Education. Since 2012, Dr. Peurach has served as an advisor and contributor to the i3 Learning Community. As a researcher who focuses on large-scale educational reform, Dr. Peurach provides his perspective from the front lines.

As a participant-observer in the i3 Learning Community, I have had a front row seat on ambitious efforts by the U.S. Department of Education’s Office of Innovation and Improvement (OII) to revolutionize educational innovation and reform. Others will soon have a glimpse, too, and the fate of the revolution may well rest on how they interpret what they see.

With its Investing in Innovation (i3) program, OII is investing nearly a billion dollars in the development, validation, and scale up of over one hundred diverse reform initiatives, all subject to rigorous, independent evaluations. In coordination with the U.S. Department of Education’s Institute of Education Sciences (IES), results will be reported in the What Works Clearinghouse so that decision makers have high-quality information on which to base school improvement efforts.

For most people, their best glimpse of the i3-funded initiatives will come via these evaluation results. Preliminary reports from two scale-up recipients are largely positive: Reading Recovery and Success for All. This is not surprising. Both are well-established enterprises that have been refined through more than two decades of use in thousands of schools.

Additional evaluation results are soon to follow, from a broad array of initiatives not nearly as well established. History predicts that many of these results will be characterized by variability in implementation and outcomes that cloud efforts to determine what works (and what doesn’t). But this, too, would not be surprising. Both researchers and reformers (including contributors to this blog) have long reported that efforts to establish and evaluate ambitious improvement initiatives have been challenged by interactions among the complex problems to be solved in schools, the uncertain research base on which to draw, and the turbulent environments of U.S. public education.

If historical precedents hold, the effect could be to leave OII’s efforts politically vulnerable, as promises of revolution and equivocal results are not a good mix. For example, barely five years after finding support in federal policy, the comprehensive school reform movement met a quick and quiet death, as lofty promises of “break-the-mold” school improvement collided with equivocal evaluation results to contribute to a rapid erosion of political support. This was the case despite a small number of positive outliers having met high standards for evidence of effectiveness (including Success for All).

Yet new developments provide reasons for hope. Within the i3 Learning Community, reformers are collaborating to develop and manage their enterprises as learning systems that improve and persist in the face of complexity, uncertainty, and turbulence. Doing so includes critically analyzing implementation and outcomes in order to understand, explain, and respond to both successes and struggles. Similar work is underway in the Hewlett Foundation’s “Deeper Learning” initiative.

Moreover, rather than passing summary judgment based on quick glimpses, researchers and policymakers are increasingly recognizing the struggles of reformers as legitimate, and they are interpreting equivocality in evaluation results as a reason to push still-deeper into the challenging work of educational innovation and reform. For example, some researchers are working hard to systematically study variation in program effects to determine what works, where, for whom, and why. With new support from IES, other researchers working inside and outside of the academy are advancing improvement-focused evaluation strategies that have great potential to reduce that variation.

Such efforts mark a great advance beyond a narrow focus on determining what works (and what doesn’t). To be clear: Making that determination is, at some point, absolutely essential. After all, the life chances of many, many students hang in the balance. The advance lies in acknowledging that the road to positive results is far rockier than most realize, and that paving it smooth requires supporting reformers in learning to manage the complexity, uncertainty, and turbulence that have long been their undoing.

Indeed, from my front row seat, the revolution in educational innovation and reform looks to be just beginning, with increasing potential to coordinate new, improvement-focused evaluation strategies with more sophisticated impact evaluation strategies in both supporting and assessing educational innovation. Whether that is, in fact, the case will depend in no small part on what others make of the glimpses provided by forthcoming i3 evaluation results: what they make of outlying successes and failures, certainly; but, more importantly, what they make of (and decide to do about) the great, grey space in the middle.

Lessons from Innovators: STEM Learning Opportunities Providing Equity (SLOPE)


This blog post is based on a conversation between the Forum for Youth Investment and Sharon Twitty, Project Director for the STEM Learning Opportunities Providing Equity (SLOPE) i3 project based at the Alliance for Regional Collaboration to Heighten Educational Success (ARCHES). SLOPE is a development project designed to help students succeed in Algebra in the 8th grade and to prepare for careers in science, technology, engineering, and math (STEM). Throughout the conversation, Twitty reflected on how relationship building and her background in communications have helped her successfully navigate a complex and geographically dispersed effort. Her reflections and advice to others working to implement and evaluate interventions are summarized here.

Set realistic goals
The SLOPE project is currently active in 17 districts around the state of California and is serving close to 3,500 students. Although SLOPE has met all of the participation targets identified in their i3 proposal, and although Twitty feels implementation has been rigorous, she notes that their three-tiered model is quite complex and that it is too early to determine whether the intervention is worthy of further expansion. Her team has learned a lot during the first year of implementation, in particular about what to expect from schools and teachers. “We know from change theory that it takes people 3-5 years to get comfortable with an innovation of this nature. The more traditional your values and the more ‘stand and deliver’ your method, the harder it is to acclimate to an intervention like this. Change is hard and people change slowly.” She suggests that building in a planning and development year for any complex change effort is important because it can help keep expectations realistic and give teachers time to adjust and prepare for new practices and tools. Twitty notes that — especially for complex projects — piloting in the field for refinement prior to implementation in the “study”-type environment is essential.

Relationships are the work
“Don’t let anyone tell you differently. Relationships are the work. You move at the speed of trust. Without relationships, any intervention, no matter how strong, is going to fail,” says Twitty. That is why she did everything she could to nurture and build personal relationships with every school and teacher involved, from big schools in a city to the single rural teacher implementing the intervention by herself. Twitty did a combination of small and big things to foster communication and engagement. She personally does a site visit at every school at least twice a year. Whenever possible, she highlights the good work of schools in newsletters, in local newspapers, and with policymakers. Sometimes she brings congressional delegates with her on site visits to highlight the project and show the schools she values their work. “I am not constantly in their face, but rather I focus on being responsive, respectful, and trying to make it as easy as possible for participants. I let them know that I need them and I thank them regularly,” says Twitty. Twitty has also learned that the more time principals spend in classrooms with SLOPE, the more they learn about the project. One strategy Twitty has used to encourage time in SLOPE classrooms is to send administrators a list of questions they can only answer by visiting classrooms and observing teachers in action. This makes participating teachers feel valued and generates useful anecdotes for communications with other sites, funders, and policymakers.

Be efficient
One concern with a relationship-intensive approach is that it may not be scalable. Twitty isn’t worried. In her opinion this just requires working smarter, not harder. “It is not that hard to build relationships. People just want to feel taken care of. Be deliberate and strategic, and watch for opportunities to do little things.” In fact, she has found that in some cases, just giving out her cell phone number and being responsive on email (which she tries to do within 24 hours), have been enough to garner support. In addition, she is intentional about making sure every school and every member of the team feels they have an important role to play. That’s why every school engaged in the project, whether in a small town or a large district where multiple schools participate, gets a site visit from someone in a leadership role. As the project grows, regional hubs can be established for personal contact and the project director can make their presence felt from a distance — through webinars, email, and Skype or other video-chat services.

Don’t forget the control group
One thing Twitty has learned directing this project is that teachers and schools are not used to participating in projects that include a randomized controlled trial. She has found that it is important to nurture relationships with teachers who were part of the control group as well as the treatment group. She has found this helps to maintain their willingness to participate in the study and provides valuable information about what happens in the absence of the intervention. “You have to keep them engaged,” Twitty notes. “Just getting a stipend is not enough. I’ve worked hard to build community among the comparison teachers by empowering them to feel good about the project. I send them information about the intervention and how it is part of a national initiative, and explain why it is important to keep their classes ‘uncontaminated.’ I explained the What Works Clearinghouse and why it is a big deal for a little development project like ours to meet their evaluation standards. It is amazing how far a little personal attention and explanation can go.”

The bottom line in this day and age is time. People value their time and want to participate in something that is relevant and significant. Having respect for that concept and building relationships goes a long way.

Lessons from Innovators: Reading Recovery



This blog is based on an interview between the Forum for Youth Investment and Jerry D’Agostino, Professor of Education at the Ohio State University and Director of Reading Recovery’s i3 project. A persistent challenge for programs that have scaled up is how to sustain for the long term. In this interview, D’Agostino shares how this long-standing literacy intervention has dealt with the challenge and how it has reinvented itself over the years in order to stay current.

Stay Fresh
Reading Recovery is a research-based, short-term intervention that involves one-to-one teaching for the lowest-achieving first graders. It began in New Zealand in the 1970s but has been in operation in the United States for 30 years and has spread across the country. Over the years, Reading Recovery has expanded and contracted depending on funding, interest from school districts, and its capacity. Today there are training centers at 19 universities that equip teachers to deliver the intervention and the program has a presence in some 8,000 schools across 49 states. With that kind of scale and longevity, it can be easy to become complacent and assume the intervention speaks for itself. D’Agostino says just the opposite is true. “We know that being the old brand that has been around for a long time can be hard,” he notes. “You have to think about how to keep the brand fresh. Superintendents want the newest hot thing. Teachers have to know it will work with their kids in their classrooms. We have spent time focused on how to adjust the model to offer new features and respond to current education trends such as the Common Core. You always have to show teachers and administrators how the intervention addresses the issue of the day. For example, it isn’t enough that the intervention produces strong effect sizes. For teachers, that is a meaningless number. They want to know that the program will help their third graders achieve the literacy level now required in nearly 40 states to be promoted to 4th grade.”

Be Flexible but Maintain Your Core
Reading Recovery has taken seriously the idea of identifying the intervention’s core elements and also responding to the educational system’s current needs. They know that one-to-one instruction and 30-minute daily lessons are non-negotiable, but they also recognize that adaptations are needed. For example, innovations in the lesson framework have resulted in a design for classroom instruction (Literacy Collaborative), small groups (Comprehensive Intervention Model), and training for special education and ESL teachers (Literacy Lessons). “Our innovations have come as direct requests from schools,” says D’Agostino. “For example, a school says they need something for English Language Learners and we develop something new for that one school that then becomes a part of our overall product line. It allows growth for Reading Recovery and flexibility for schools.” Another non-negotiable is keeping training centralized. Although teacher leaders can receive training at one of the 19 partner universities, there are only a few places where trainers of teacher leaders can get certified. That allows Reading Recovery to maintain some quality control and fidelity over teacher leader training. “I’ve always been impressed with the fidelity of Reading Recovery instruction,” said D’Agostino. “I’ve seen Reading Recovery lessons in Zanesville, Ohio and Dublin, Ireland. The framework is the same, but each lesson is different in terms of how the teacher interacts with the student to scaffold literacy learning.”

Combine Historical Expertise with Fresh Perspective
D’Agostino is quick to note that one of Reading Recovery’s strengths and challenges is the longevity of its founders and senior leadership. Many of the original developers of the intervention are still in leadership positions. This allows for a historical perspective and continuity of purpose that are rare in education these days. It can also hinder innovation. That is why the organization also tries to find leadership positions for newer faculty and teachers with recent teaching and administrative experience who can bring fresh ideas and a willingness to push for some of the new adjustments to the model that schools are requesting.

Adapt, Adjust, and Meet Schools Where They Are
D’Agostino emphasizes that Reading Recovery’s current success and long history is no reason to sit back and relax. “We have survived a lot of changes over the years. We’ve grown, we’ve shrunk, we’ve survived major threats to our program from other national initiatives. Right now with our i3 grant, we are in a great position. We are going to reach our goal of training 3,700 teachers and producing good effects. But I don’t know that that will position us well for the future. In fact, I won’t be happy if we just reach our goals.” Sustaining an effective intervention and bringing it to more schools and students around the country means innovating, moving, pushing to the next level…and spreading the word. “Schools don’t necessarily hear about government funded initiatives that achieve high evidence standards according to the What Works Clearinghouse,” muses D’Agostino. “They hear from hundreds of vendors each year citing their effectiveness, so how do we distinguish ourselves? We can’t just assume success in our i3 grant will lead to sustainability. Sustainability is all about results. For example, we know that the outcomes are remarkable – most of the lowest-achieving first graders accelerate with Reading Recovery and reach the average of their cohort – but we also know from our annual evaluation that there’s a great deal of variation across schools and teachers. So right now we want to know, what do effective Reading Recovery teachers do and how is that different from less effective Reading Recovery teachers? Knowing more about that black box of teaching will help the intervention overall. And understanding how to foster local ownership will give the intervention its real staying power.”