Smart Philanthropy

Americans are very generous. We give more than $300 billion annually to every kind of charity, hoping to do good in the world. All charities have mission statements, stated right on the dozens of calendars they send us, and most claim to be making a difference in some valued outcome. Yet tough-minded, tender-hearted donors want to know more. Are their donations producing concrete outcomes?

Dean Karlan, a Yale economist, has just launched a new organization, called ImpactMatters, to do “impact audits” on nonprofits. These audits focus primarily on assessing the impact of a charity’s services on the outcomes it claims. Karlan’s team examines the evidence, particularly randomized experiments, as well as financial and management issues, and picks out charities that are transparent, well-managed, and making a well-documented difference.

In a December 4th Wall Street Journal article, Karlan introduced his organization and its purpose. On December 11th, the first four programs identified by ImpactMatters as meeting its standards were named on the ImpactMatters website. One was our Success for All reading program; the others were international charities focused on the ultra-poor in developing countries and on healthcare in Nepal.

The appearance of ImpactMatters could make a difference in philanthropy, and that would be terrific in itself. However, its significance goes far beyond philanthropy.

ImpactMatters is one more indication that good intentions are no longer sufficient. Government, philanthropists, and leaders of all kinds are increasingly demanding rigorous evidence of impact. We all know where the road paved with good intentions goes. The road paved with good evidence, acted upon with integrity, purpose, and caring, goes straight to heaven. Karlan’s stated purpose is to help people give with their minds, not just their hearts. I hope ImpactMatters will make the difference it is intended to make. Personally, I’d rather get a lot fewer calendars and a lot more impact for my donations. Doesn’t everyone feel the same way?

Evidence and the ESSA

The U.S. House of Representatives last week passed the new and vastly improved version of what is now being called the Every Student Succeeds Act (ESSA), the successor to No Child Left Behind (NCLB) and the latest reauthorization of the Elementary and Secondary Education Act (ESEA). For people (such as me) who believe that evidence will provide salvation for education in our country, the House and Senate ESSA conference bill has a lot to like, especially in comparison to earlier drafts.

ESSA defines four categories of evidence based on their strength:

  1. “strong evidence” meaning supported by at least one randomized study;
  2. “moderate evidence” meaning supported by at least one quasi-experimental study;
  3. “promising evidence” meaning at least one correlational study with pretests as covariates; and
  4. programs that have a rationale based on high-quality research or a positive evaluation, that are likely to improve student or other relevant outcomes, and that are undergoing evaluation, a category often referred to as “strong theory” (though the bill does not use that term).

As I read the law, programs meeting any of the top three categories effectively count as proven. For example, seven competitive funding programs would give preference points to applications with evidence meeting one of those categories, and a replacement for School Improvement Grants requires local educational agencies to include “evidence-based interventions” in their comprehensive support and improvement plans.

One good thing about this definition is that for the first time, it unequivocally conveys government recognition that not all forms of evaluation are created equal. Another is that it plants the idea that educators should be looking for proven programs, as defined by rigorous, sharp-edged standards. This is not new to readers of this blog, but is very new to most educators and policy makers.

Another positive feature of ESSA where evidence is concerned is that it includes a new tiered-evidence provision, Education Innovation and Research (EIR), that would effectively replace the Investing in Innovation (i3) program. Like i3, it is a tiered grant program that will support the development, evaluation, and scale-up of local, innovative education programs based on the level of evidence supporting those programs, but without the limitation of program priorities established by the U.S. Department of Education. It is a real relief to see Congress value continued development and evaluation of innovations in education.

Of course, there are also some potential problems, depending on how ESSA is administered. First, the definition of “evidence-based” includes correlational studies, which are of lower quality than experiments. Worse, if “strong theory” is widely used, then the whole evidence effort may turn out to make no difference, as any program on Earth can be said to have “strong theory.”

A strong theme throughout ESSA is moving away from federal control of education toward state and local control. Philosophically, I have no problem with this, but it could cause trouble in the evidence movement, which has been largely focused on policy in Washington. These developments create a strong rationale for the evidence movement to expand its focus to state and local leaders, not just federal, and that would be a positive development in itself.

In education policy, it’s easy for well-meaning language to be watered down or disregarded in practice. Early in the life of NCLB, for example, evidence fans were excited by the 110 mentions of “scientifically-based research,” but “scientifically-based” was so loosely defined that it ended up changing very little in school practice (though it did lead to the creation of the Institute of Education Sciences, which mattered a great deal).

So while recognizing that things could still go terribly wrong, I think it is important to celebrate the potentially monumental achievement represented by ESSA. The evidence provisions of the Act were certainly aided by the tireless efforts of numerous organizations that worked collectively to create scrupulously bipartisan coalitions in the House and Senate to support evidence in government. Just seeing both sides of the aisle and both sides of the Capitol collaborate in this crucial effort gives me hope that even in our polarized times, bipartisanship and bicameralism are still possible when children are involved. Congratulations to all who were responsible for this achievement.

Permanent Innovations

My wife Nancy and I were recently in Barcelona, a beautiful and fascinating city famous for its innovations in architecture, art, and design. We were speaking to various groups about evidence-based reform in education and about cooperative learning.

The people we spoke to about cooperative learning in Barcelona were mostly innovators and risk takers, moving away from traditional teaching to give students more autonomy, engagement, and opportunities to collaborate. It was exciting to hear their ideas and their questions.

The irony of this experience, however, was that here we were again, talking about cooperative learning as an innovation, at about the same point on the cutting edge as it was in the 1980s. Nancy and I reflected afterwards, and not for the first time, that cooperative learning has truly become a permanent innovation.

A permanent innovation is my term for a popular, widely supported practice in education that never really prevails but never disappears. Almost everyone in the field supports it, and some actually implement it, but in reality the practice is honored more in the breach than in the observance.

I think permanent innovations are rare in the hard sciences, where sooner or later, an innovation either works and becomes commonplace, or it doesn’t and it dies.

In the case of cooperative learning, surveys over the years have routinely found that extraordinary proportions of teachers claim to use cooperative learning frequently. Yet observational studies find it to be a lot less common in practice, and many of the teachers who do use it merely allow students to work or discuss content in groups without any particular structure, a practice that has not been found to have positive impacts on learning. The proven forms of cooperative learning, which include group goals and individual accountability, have been known and popular for decades. They neither become standard practice nor disappear, but remain forever as permanent innovations.

Permanent innovations exist in many areas of the curriculum. In science, it is inquiry teaching. In math, it is problem-solving-based instruction. In writing, it is writing process models. All are extremely popular, and educators at conferences will rarely admit that they do not use them (or, if they do admit it, they have good stories about external blockers, such as accountability schemes, resource limitations, and conservative school boards).

In principle, I favor these permanent innovations, but I’d feel a lot better if there were many programs that provided specific guidance on how to make effective use of them.

Each of the permanent innovations in education takes a positive view of children and aims to make classrooms joyful, engaged, and creative environments for students. There is no reason that specific programs consistent with these goals cannot be developed, evaluated, found to be effective, and disseminated broadly, and this has in fact happened with several forms of cooperative learning. Proven programs can and do embrace Dewey and Vygotsky. However, it always matters exactly how Deweyan and Vygotskyan principles are put into practice. Some day, I hope there will be many approaches that are both proven to be effective in rigorous experiments and consistent with constructivist, engaging, and prosocial principles. Then perhaps permanent innovations will no longer be called innovations. They’ll be called state-of-the-art.