How Many Education Innovators Does It Take to Change a Light Bulb?

How many innovative companies does it take to change a light bulb? Only one, if there’s a prize involved.
Recently, the U.S. Department of Energy announced the winner of the $10 million L-Prize, a competition to create a light bulb that produces as much light as a 60-watt incandescent bulb while using less than 10 watts of energy, an 83 percent reduction in energy use. The Department of Energy estimates that if Americans replaced all of their current 60-watt bulbs with the winning entry from Philips, we'd save $3.9 billion in energy costs per year. The Department has partnered with more than 30 regionally based utility companies and organizations to help make this a reality.
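The figures above are easy to check. Here is a back-of-the-envelope sketch; the usage hours and electricity price are invented, illustrative inputs, not Department of Energy assumptions:

```python
# Check the percentage reduction quoted above: 60 W down to the
# L-Prize ceiling of 10 W.
incandescent_watts = 60
led_watts = 10  # the competition's maximum allowed draw

reduction = (incandescent_watts - led_watts) / incandescent_watts
print(f"Energy reduction: {reduction:.0%}")  # → 83%

# Hypothetical per-bulb savings, assuming 3 hours of use per day
# and $0.12 per kWh (both numbers are illustrative only).
hours_per_year = 3 * 365
kwh_saved = (incandescent_watts - led_watts) * hours_per_year / 1000
print(f"kWh saved per bulb per year: {kwh_saved:.2f}")
print(f"Dollar savings per bulb per year: ${kwh_saved * 0.12:.2f}")
```

Multiplied across the hundreds of millions of 60-watt sockets in American homes, small per-bulb savings of this kind plausibly add up to the billions the Department cites.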

It is worth reading about the competition requirements, as described by the Department of Energy. For example,

“The L Prize competition includes technical specifications to ensure compliance with the general requirements outlined in the legislation, with additional details specified for quality, performance, and mass manufacturing. The competition also includes a rigorous evaluation process for proposed products, designed to detect and address product weaknesses before market introduction, to avoid problems with long-term market acceptance.”

This sounds strikingly similar to Jim Shelton's post on Wednesday on the meaning of true innovations: developments that can both produce dramatic impact AND be brought to scale. Education innovation may not be as easy as light bulb innovation, because high-quality implementation is always needed, and education innovators, especially non-profits, do not have the capital to compete for a large prize in hopes of recouping their investment through prizes or sales. Yet the basic idea could easily transfer to education. Imagine if the federal government set out standards of "performance (learning), quality, cost, and availability" for beginning reading, or middle school math, or high school biology. It might solicit alternative proposals to solve these problems, funding the most innovative, experienced, and capable developers to create and pilot solutions, ultimately to be independently evaluated on widely accepted measures. Through partnerships with education non-profits, state education agencies, and foundations, solutions that work could then be disseminated to schools throughout the U.S.
Investing in Innovation (i3), the Obama Administration's investment in the whole innovation pipeline from development to scale-up, is a bit like the L-Prize: programs that obtain positive achievement outcomes in rigorous, third-party evaluations may qualify for large grants to help them disseminate their programs. Making i3 a permanent part of ESEA, as is being discussed, or instituting other competitions that inspire inventive solutions, such as the ARPA-Ed proposed by the Department of Education, might create the same dynamic of bringing forward innovators ready and willing to solve the core, longstanding problems of education.
When government and private funders can specify exactly what they want, and reward developers who accomplish accepted goals, innovators rise to the occasion. Prizes may or may not be the right model for innovation in education, but the basic concept would certainly expand and fortify the current pipeline of evidence-based, scalable innovations. Not every problem in education can be solved this way, but for those that can…why not an E-Prize?

Education Innovation: What It Is and Why We Need More of It

NOTE: This is a guest post from Jim Shelton, Assistant Deputy Secretary of the Office of Innovation and Improvement at the U.S. Department of Education.

Whether for reasons of economic growth, competitiveness, social justice, or return on taxpayer investment, there is little rational argument over the need for significant improvement in U.S. educational outcomes. Further, it is irrefutable that the country has made limited improvement on most educational outcomes over the last several decades, especially when considered in the context of the increased investment over the same period. In fact, the total cost of producing each successful high school and college graduate has increased substantially over time instead of decreasing, creating what some argue is an inverted learning curve.

This analysis stands in stark contrast to the many anecdotes of teachers, schools, and occasionally whole systems "beating the odds" by producing educational outcomes well beyond "reasonable" expectations. And therein lies the challenge, and the rationale for a very specific definition of educational innovation.

Education not only needs new ideas and inventions that shatter the performance expectations of today's status quo; to make a meaningful impact, these new solutions must also "scale," that is, grow large enough to serve millions of students and teachers or large portions of specific under-served populations. True educational innovations are those products, processes, strategies, and approaches that improve significantly upon the status quo and reach scale.


Systems and programs at the local, state, and national levels, in their quest to improve, should be in the business of identifying and scaling what works. Yet we have traditionally lacked the discipline, infrastructure, and incentives to systematically identify breakthroughs, vet them, and support their broad adoption, a process referred to as a field scan. Programs like the Department of Education's Investing in Innovation Fund (i3) are designed as field scans, but i3 is tiny in comparison to both the need and the opportunity. To achieve our objectives, larger funding streams will need to drive the identification, evaluation, and adoption of effective educational innovations.

Field scans are only one of three connected pathways to education innovation, and they build on the most recognized pathway: basic and applied research. The time to produce usable tools and resources from this pathway can be long (in medicine, development and approval of new drugs and devices can take 12-15 years), but with more and better-leveraged resources, more focus, and more discipline, this pathway can accelerate our understanding of teaching and learning and the production of performance-enhancing practices and tools.

The third pathway focuses specifically on accelerating transformational breakthroughs, which require a different approach: directed development. Directed development processes identify cutting-edge research and technology (technology generically, not specifically software or hardware) and use a uniquely focused approach to accelerate the pace at which specific game-changing innovations reach learners and teachers. Directed development within the federal government is most associated with DARPA (the Defense Advanced Research Projects Agency), which used this unique and aggressive model of R&D to produce technologies that underlie the Internet, GPS, and unmanned aircraft (drones). Education presents numerous opportunities for such work. For example: (1) providing teachers with tools that identify each student's needs and interests and match them to the optimal instructional resources, or (2) cost-effectively achieving the 2 standard deviations of improvement that one-to-one human tutors generate. In 2010, the President's Council of Advisors on Science and Technology recommended the creation of an ARPA for Education to pursue directed development in these and other areas of critical need and opportunity.
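To make the "2 standard deviations" benchmark concrete: under a normal model of test scores, an average student who gains 2 SDs would outperform roughly 98 percent of comparable untutored students. A quick sketch of that arithmetic:

```python
from statistics import NormalDist

# Under a normal distribution, a student starting at the 50th percentile
# who gains 2 standard deviations lands at the percentile given by the
# cumulative distribution function evaluated at 2.0.
effect_size_sd = 2.0
percentile = NormalDist().cdf(effect_size_sd)
print(f"Resulting percentile: {percentile:.1%}")  # → 97.7%
```

This is why the tutoring benchmark is such an ambitious target for directed development: it asks technology to move a typical student into the top few percent of current performance.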

Each of these pathways (the field scan, basic and applied research, and directed development) will be essential to improving and ultimately transforming learning from cradle through career. If we do this well, we will redefine "the possible" and reclaim American educational leadership while addressing inequity at home and abroad. At that point, we may be able to rely on a simpler definition of innovation:

“An innovation is one of those things that society looks at and says, if we make this part of the way we live and work, it will change the way we live and work.”

-Dean Kamen

-Jim Shelton

Note: The Office of Innovation and Improvement at the U.S. Department of Education administers more than 25 discretionary grant programs, including the Investing in Innovation Program, Charter Schools Program, and Technology in Education.


How Education “Miracles” Mislead

If you read media reports about education, many of the stories you see make extraordinary claims about remarkable, heart-warming turnarounds in student achievement, which are often debunked some time later. This cycle of enthusiasm, debunking, and disappointment gets us nowhere in improving outcomes for kids. Genuine miracles, dramatic turnarounds in formerly low-achieving schools, are just as likely in education as they are in any other field. That is, not very likely at all. In fact, most "miracles" in education turn out on inspection to be due to a change in the students served (as when a new charter or magnet school attracts higher-performing students) or changes in demographics (as when school catchment areas are gentrifying). Apparent miracles may be due to changes in tests (as when an entire state gains in one year after a change to an easier test) or to other redefinitions of outcomes (as when districts reduce their standards for high school graduation and graduation rates increase). All too often, "miracles" never happened at all, as when "turned around" schools deliver poor scores or graduation rates, when large changes occur for one year but reverse in the following year, or when schools improve on one measure while all other indicators are poor.
When data on individual schools do in fact show dramatic improvement and cannot be explained by demographic or test changes, there remains a question about whether the so-called “miracle” is replicable anywhere else. Were the gains due to an extraordinary level of funding? An extraordinary principal? Other unusual, never-to-be-repeated conditions?
When false “miracles” are reported and believed, they condition the public and policymakers to expect dramatic outcomes that cannot possibly be produced at scale. They invite debunking, which distracts from the real conversation and undermines faith in the entire reform process.
The antidote to false miracles is good research. In studies that compare many schools using a particular program over the course of a year or more to similar schools that continue business as usual, especially when schools are assigned at random to program or control groups, the outcomes are more likely to be believable, and chances are the program can be disseminated successfully to other schools. The outcomes from high-quality studies are usually much more modest than those reported for high-profile “miracles,” but they represent a more realistic idea of what could be achieved more broadly.
Newspapers hate to report on actual research in education because they consider it complicated and boring. Yet government is increasingly pointing to high-quality research, and the public may be receptive to hearing about exciting new developments and appealing examples of schools using proven programs.
If you want miracles, go to Lourdes, but if you want better schools for America’s children, ask for the evidence. There really is solid evidence for a wide variety of proven, replicable programs, but this evidence is routinely ignored while attention is focused on the making and debunking of implausible claims. This focus on “miracles” adds heat but not light to educational debates.

What Do We Mean by “Proven” Programs in Education?

One of the greatest impediments to policies promoting the use of proven programs is a lack of agreement about the criteria for “proven.” Among policy makers, the likelihood that they will have to preside over endless battles among academics on this question makes them want to forget about the whole thing.

Fortunately, a consensus definition of "proven" is beginning to solidify. It is best stated in the standards for scale-up grants under the U.S. Department of Education's Investing in Innovation (i3) initiative, which demand at least one large, randomized evaluation showing positive effects of a program on outcomes of importance, or at least two large matched studies. That is, students in schools using a given program and in very similar schools using ordinary methods are pre-tested and post-tested to see whether students in schools using the program make greater gains. In randomized studies, schools or teachers are assigned at random to use the program or not (the "gold standard" of experimental design); in matched studies, schools or teachers who chose to use the program are compared to similar ones who did not. Standards being developed by the Annie E. Casey Foundation add to these common-sense standards requirements about acceptable study durations, measures, and other features, and similar standards are used by our Best Evidence Encyclopedia.
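The pre-test/post-test gain comparison described above can be sketched in a few lines. All the school scores below are invented purely for illustration; a real evaluation would also test whether the difference in gains is statistically significant:

```python
import statistics

# Hypothetical (pretest, posttest) school means for schools using a
# program versus matched comparison schools. Numbers are invented.
program_schools = [(48, 61), (52, 63), (45, 59), (50, 64)]
control_schools = [(49, 55), (51, 58), (46, 52), (50, 57)]

def mean_gain(schools):
    """Average pre-to-post gain across a group of schools."""
    return statistics.mean(post - pre for pre, post in schools)

program_gain = mean_gain(program_schools)
control_gain = mean_gain(control_schools)

# A positive difference in gains for the program group is the basic
# pattern of evidence the standards described above look for.
print(f"Program gain: {program_gain:.1f}")       # → 13.0
print(f"Control gain: {control_gain:.1f}")       # → 6.5
print(f"Gain difference: {program_gain - control_gain:.1f}")  # → 6.5
```

The design matters as much as the arithmetic: random assignment of schools to the two groups rules out the possibility that the program schools gained more simply because they were the kind of schools that chose the program.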

What is important about these standards is that they are relatively straightforward to apply, and similar standards are being used in all areas of children's services (e.g., delinquency prevention, social-emotional learning, and parent training). Not every academic agrees with these standards; many reject out of hand the very idea that quantitative education measures measure anything. Yet there is enough consensus among academics concerned with policy to make the standards useful.

Although evidence standards are sure to be debated and to evolve over time, there is enough agreement today to make it possible for government to encourage schools to use programs that meet these standards and to focus on helping program developers meet them. Improvements in educational programs and student learning can’t wait while we argue about exactly what standards to use, and now they don’t have to.

NOTE: Robert Slavin is co-founder of the Success for All Foundation, which received a $49 million i3 scale up grant from the Department of Education in 2010.

A Commitment to Research Yields Improvements in Charter Network

Note: This is a guest post by Richard Barth, CEO and President of the KIPP Foundation.

In his inaugural post for this blog, Robert Slavin wrote, “We did not manage our way to the moon, we invented our way to the moon.” I hear echoes of this statement throughout my work. Like other national charter school leaders, I am committed to making sure innovation can blossom and spread, throughout our own network and public schools nationwide.

But along with innovation we must insist on research and results. Across the 31 KIPP regions nationally, for example, we give schools autonomy to innovate as they see fit, as long as they can demonstrate that they are producing results for our students.

So how does a charter network like ours make sure schools are producing results? Not only do we assess our own schools on a regular basis, with publications like our yearly Report Card, but we also make a practice of inviting independent researchers to evaluate our results.

By building a solid body of evidence for what works, including independent reports about student achievement in our schools, we are able to set and maintain a high bar for achievement. The evidence then helps us build on what is working and make adjustments where the research has identified areas where we need to improve. For example, a study by Mathematica found that KIPP middle school students make statistically significant gains in math and reading, even though students enter KIPP with lower average test scores than their neighboring peers in district schools. The same Mathematica report also found that KIPP schools are serving fewer special-education and Limited English Proficient (LEP) students than the average for neighboring district schools. This is a challenge for many charter schools and something we are making a priority throughout our network. So where we find we are doing well in both the number of students served and their results (like the KIPP Academy Lynn near Boston, Mass., which is highlighted in a 2010 working paper from the National Bureau of Economic Research), we have an opportunity to zero in on what's working and spread this news to our network and to charter schools nationwide.

As more of our students move on to college, research can also help us keep tabs on how they are faring. We are just starting to examine the college completion rates of our students. In April we released our first-ever College Completion Report, which looked at the college graduation rates of KIPP's earliest graduates from the mid-1990s. Thirty-three percent of these KIPP students had finished college by their mid-twenties, which is above the national average and four times the rate of their peers from low-income communities. Still, this is far short of our goal of 75 percent, the average college completion rate for kids from affluent families.

By sharing these results we hope to encourage a national dialogue about how to improve college completion rates in America, especially among low-income students. But we need school districts and charter schools to start publicly reporting college completion rates fully, including those of eighth-grade graduates, not just high school graduates or college freshmen; counting only the latter fails to give us a true picture.

This process of improvement is hard work; there’s no question. But by committing to research and accountability, we can set off a more vigorous and transparent conversation among public educators across the country about what we need to do to ensure success for all of our schools and students.


-Richard Barth

KIPP, the Knowledge Is Power Program, is a national network of free, open-enrollment, college-preparatory public charter schools. There are currently 109 KIPP schools in 20 states and the District of Columbia serving more than 32,000 students.

Eyeglasses: Peering Into Educational Dysfunction

If you wear reading glasses, please take them off for a moment and continue reading this blog.

You can’t? You won’t? Well, now put yourself in the position of a child in a high-poverty school who needs eyeglasses but does not have them.

In the richest country in the world, it is shocking but true that a very large number of disadvantaged children who need glasses don't have them. A New York City study of middle school children found that 28 percent of them needed glasses, and less than 3 percent had them. Studies in Baltimore (including the Baltimore Vision Screening Project in the 1990s) and many other places find the same.

The eyeglasses story varies from place to place, but here’s how it works. In most schools, the health department screens for gross vision problems. If children are found to have problems, parents are asked to take the child to an eye care professional for more testing, a prescription, and then glasses. In middle class families this usually works, but in disadvantaged families, plenty goes wrong. Overburdened health departments may not actually do screening, or may do it at rare intervals. Parents may not be able to afford eyeglasses; Medicaid provides funding for them, but this takes a lot of paperwork. Parents may not follow up, finding it difficult to take off from work. Even if they do get glasses, kids being kids lose or break them, and the whole process begins again, or doesn’t. Schools with particularly relentless staff focused on this issue can get much better percentages of kids with glasses, but it’s a struggle.

Because kids' vision is more flexible than that of adults, most kids can see an eye chart. However, for many, focusing on text takes more effort and concentration than it does for kids who have glasses or don't need them, especially as text gets smaller past the primary years. The result is that these kids lose motivation, or come to think they are stupid. As an adult who wears reading glasses, I can read most type without my glasses, but I don't want to do it very long. However, I know I can read, and I know how to fix my problem. A kid in Chicago or Biloxi may not know that others can see, or that he or she could be learning to read.

The failure to get eyeglasses on disadvantaged kids illustrates broader, uncomfortable truths about education policy. This is a really, really simple problem. A pair of glasses can cost $20. Yet schools do not see eyeglasses as their problem. They see it as the Health Department’s problem, or the parent’s problem, or both. Yet schools are ultimately responsible for kids’ reading, and even in the narrowest economic analysis schools are spending vast resources on tutoring, remediation, and special education for kids who can’t read, some proportion of whom merely need $20 glasses.

There are simple and cost-effective solutions to these problems. Making schools (rather than health departments) responsible for vision and funding them for this purpose, or at least letting them use Title I funds for eyeglasses, could help a great deal. Schools could keep sets of glasses for use by children who need them. Proactive screening and relentless follow-up could make the current system work better. But what’s lacking is a sense of outrage and real accountability. Here are children failing for an entirely preventable reason. If you took off your reading glasses at the beginning of this blog, put them back on and read it again. Doesn’t this make you see red?

America’s Strength: An Innovation Economy

In a September 11 article in The New York Times called “China’s Rise Isn’t Our Demise,” Vice President Joe Biden wrote a cogent summary of America’s advantage in the world economy that has enormous implications for innovation in education.

“The United States is hard-wired for innovation. Competition is in the very fabric of our society. It has enabled each generation of Americans to give life to world-changing ideas—from the cotton gin to the airplane, the microchip, the Internet. We owe our strength to our political and economic system and to the way we educate our children—not merely to accept established orthodoxy but to challenge and improve it… Our universities remain the ultimate destination for the world’s students and scholars.”

Nothing in Biden’s op-ed is new or surprising. Every American understands that our success in the world economy depends on education and innovation.

So why do we devote so little attention to innovation in education? The very orientations and investments Vice President Biden cites as the basis of our success in other fields are rarely applied to improving education itself. Instead of inventing our way to success, as we do in so many other fields, we keep trying to improve education through changes in governance, regulations, and rules, which never produce change in core classroom practices and outcomes. Every state's textbook adoption requirements specify paper weight, but never mention the weight of evidence behind the use of the book. Special education regulations specify that children be placed in the "least restrictive environment" but never the "most effective environment." Title I has reams of regulations about how funds can or can't be spent, but not a word suggesting that they be spent on programs proven to work.

The Obama administration has invested more than any other in history in education innovation, especially through its Investing in Innovation (i3) program. Yet evidence and innovation continue to play an extremely small role in Title I, ESEA, special education, and other federal programs, much less in state and local programs. Vice President Biden’s article is a ringing endorsement of innovation, evidence, and education. Can we now apply it to education itself?

What Would Evidence-Based Policy Look Like in Education?

Note: Steve Fleischman, deputy executive officer at Education Northwest, writes this guest post.

Is evidence-based policy an oxymoron? Is it possible to have evidence serve as a guide rather than merely as a justification for policy? I think there are two ways in which evidence can play a key role in school improvement.

The first is that evidence can help us identify high-leverage problems that create policy priorities. These are problems that, if solved, would reduce a large percentage of the variance between good and bad outcomes for kids. In 2004, for example, Paul Barton suggested a list of 14 factors correlated to high student achievement, including out-of-school factors such as hunger and nutrition, reading to children, and student mobility. He also listed in-school factors such as teacher quality, rigor of curriculum, and school safety. Barton noted that low-income and minority children are at a disadvantage in most of these areas.

Evidence-based policy would dictate that we concentrate on these, or other similar high-leverage conditions, to improve education–particularly for our most disadvantaged students. Imagine if instead of spending billions of dollars over the next 10 years on a thousand different efforts we concentrated policy on making sure that students master reading early, have successful transitions from middle to high school, and stay in school at least through high school graduation. After all, evidence from a 2010 study by the Annie E. Casey Foundation points to the “make or break” nature of mastering reading by grade 3 for children’s future educational development; ACT provides evidence that the level of academic achievement attained by eighth-grade students “has a larger impact on their college and career readiness by the time they graduate than anything that happens academically in high school;” and the Everyone Graduates Center describes the devastating consequences that dropping out of high school has on both individuals and society.

Second, evidence-based policymakers can also insist that we judge proposed solutions to high-leverage problems against proof that they can get the job done. Let’s say we are talking about the goal of having all students reading by grade 3. Policymakers should not care whether it is charters, vouchers, homeschooling, non-union or unionized schools, reading programs, or professional development that achieves the desired goal. They should only care about demonstrated results. I agree with Rick Hess when he argues that, “The proper measure of whether proposals are consistent with public schooling ought not be whether power, politics, or finances shift, but whether we are doing a better job of educating all children so they master essential knowledge and skills, develop their gifts, and are prepared for the duties of citizenship.”

I was struck several years ago while reading “Polio: An American Story” how, led by science and the commitments of policy leaders, our entire nation was mobilized in a multi-decade effort to eradicate the dreaded disease. Even Lucy and Desi and other celebrities of the 1950s were engaged in the cause. Today, by combining science and policy, the Bill & Melinda Gates Foundation has extended the fight against polio around the world. Imagine if evidence-based policy could similarly mobilize our entire nation to accomplish a few critical educational outcomes: all children reading by grade 3, successful transitions to high school, and significant reductions in dropout rates. What would our education system look like then?

-Steve Fleischman

Education Northwest, a nonprofit headquartered in Portland, Ore., conducts research, evaluation, technical assistance, training, and strategic communications activities to promote evidence-informed education policy and practice.

The Unmet Promise of Education Technology

In the mid-2000s, the U.S. Department of Education commissioned a large, randomized evaluation of the most widely used computer-assisted instruction (CAI) programs in elementary reading and middle and high school math. Schools were randomly assigned to use one of several CAI programs. The results (published here and here) were dismal. In both subjects and at all grade levels, achievement was virtually identical for the students who experienced CAI and those who did not. This finding was consistent with the conclusions of recent reviews of research on CAI in reading and math, which find that the higher the quality of the research (e.g., random assignment of large samples), the lower the estimate of CAI effects. In a nation that worships technology and spends billions on technology in schools, the study should have been a wake-up call, but it was hardly covered by the general press (and received little more attention from the education press). Earlier this month, Matt Richtel of The New York Times did a good job of describing the pressure and appeal for school leaders to adopt technologies that promise improved instruction, efficiency, and results but don't necessarily have the data to justify the cost.

How could modern CAI programs fail to make much difference in student learning? One clue in the federal study is in the fact that children did not spend many hours on the computers over the course of the year. Perhaps a bigger dose would have a larger effect, although studies of this possibility do not generally find a dosage effect.

Another explanation for both the limited hours of use of CAI (also found in many studies) and the limited impact may be that traditional CAI is just inconsistent with traditional teaching, and is therefore not valued by teachers or integrated very well into daily teaching.

Whatever the explanation, the modest impact of modern CAI programs creates a paradox. On one hand, it is clear that breakthroughs in educational practice (and therefore policy) will involve technology. Excellent professional development can help teachers get better results, but I believe that outstanding, sustainable improvements in daily teaching are going to depend on the extraordinary capabilities of technology. Yet I’d be the first to admit that the track record for technology as it has been used in schools so far is not so great.

I think the greatest promise for innovation in teaching using technology is in applications that fully integrate the two. Currently, some of the best evidence for modern technologies is for programs that cycle children through integrated activities, both with and without technology. Evidence also supports the use of embedded multimedia, where teachers use brief bits of video integrated into their lessons to build motivation and understanding with powerful visuals. Computer-assisted tutoring and small-group tutorials similarly combine the strengths of teachers and technology, and have shown very positive outcomes. A recent study in England showed that use of self-paced electronic response devices to provide immediate feedback to students and teachers increased math learning.

It’s time to rethink the role of technology in instruction, to ask not how technology can mimic (and replace) what good teachers do, but how it can support good teaching. Kids, especially elementary kids, learn in large part because they want to please valued adults, and they learn in collaboration with each other. No computer can replace a teacher’s empathy, enthusiasm, or ability to understand and respond to students’ interests and needs. Yet research in reading and math suggests that technology has enormous potential to add interest, visual images, organization, and assessment to teachers’ lessons and to cooperative interactions among students. The task is to figure out how teachers, peers, and technology can all work together to create effective classrooms.

Research and Innovation: The Way Forward in Education

Fifty-four years ago, America was galvanized when the Soviet Union put a satellite into space. We responded as we always do when we have a national consensus on an important goal: We innovated. We invested heavily not only in rockets, but also in education, to prepare our entire nation to be second to none. I’m old enough to remember how exciting it was to feel a part of the national response to Sputnik. We knew that America would regain its leadership, and it was all up to us kids!

In education today, we wait in vain for the “Sputnik moment,” the time when our leaders decide that falling behind our international peers in academic achievement is no longer acceptable. Instead of investing in research and innovation, as we did in the wake of Sputnik, our leaders today try to solve our educational problems by fiddling with management solutions, governance solutions, and assessment solutions that do not fundamentally change what happens between teachers and students. These policies may be beneficial, but they don’t scare the Finns or the Chinese or even the Canadians who outperform our students. The reason it was Neil Armstrong and not Nikolai Armstronganoff who landed on the moon was that we invested in targeted, relentless research and development. We did not manage our way to the moon, we invented our way to the moon. Dramatic improvements in medicine, agriculture, and technology happened the same way. And so it must be in education.

Sputnik: Advancing Education through Innovation and Evidence is a new blog dedicated to disseminating news and information on research and development in education that could transform teaching and improve student outcomes on a scale that matters. In addition to reporting on research itself, it will focus on policy developments relevant to research and innovation in education. Guest bloggers will present their perspectives on how research and innovation can play a greater role in policy and practice.

This is an exciting time for those who share a belief in research and innovation as the way forward in education. I hope you’ll join me in exploring the outer limits of education reform.