Back in 1993, Carl Kaestle memorably wrote about the “awful reputation of educational research.” At the time, he was right. But that was 23 years ago. In the interim, educational research has made extraordinary advances. It is now admired by researchers in many other fields and by policy makers in many areas of government. As indicated by the importance of evidence in the Every Student Succeeds Act (ESSA), evidence is starting to make more of a difference in policy and practice. There is still a long, long way to go, but the trend is hugely positive.
In a recent article for the Brookings Institution, Ruth Curran Neild, acting director of the Institute of Education Sciences (IES), argued that educational research is on the right track. The one thing it lacks, she says, is adequate funding. I totally agree. Of course there are improvements that could be made to education policies and practices, but the part of the field that is working to use science to improve outcomes for children is headed very much in the right direction. Many are frustrated that it is not getting there fast enough, but what we need is more wind in our sails, not a change of course.
I was listening recently to an NPR broadcast about a new center for research on immunological treatments for cancer. The interviewer asked how their center could possibly make much difference with a grant of only $250 million. The director sheepishly agreed this was a problem, but hoped they could nevertheless make a contribution. If only we in education had conversations like this – ever!
What has radically changed over the past 15 years is that there is now far more support than there once was for randomized evaluations of replicable programs and practices. As a result, we are collectively building a strong set of studies that use the kinds of designs common in medicine and agriculture but not, until recently, in education. My colleagues and I constantly update reviews of research on educational interventions in the main areas of practice at the Best Evidence Encyclopedia website. Where once randomized studies were rare, they are becoming the norm.

We recently published a review of research on early childhood programs in which we located 32 studies of 22 different programs. Twenty-nine of the studies used randomized designs, thanks primarily to funding and leadership from a federal investment called Preschool Curriculum Evaluation Research (PCER). We are working on a review of research on secondary reading programs; because the federal Striving Readers program invested in evaluations of a wide variety of school interventions, that review too is now dominated by randomized studies. Studies of programs for struggling elementary readers are now overwhelmingly randomized. The Investing in Innovation (i3) program requires randomized evaluations in its validation and scale-up grants and encourages them in its development grants, increasing the prevalence of randomized designs in evaluations of programs for students from pre-K to grade 12. The National Science Foundation has begun to fund scale-up projects that require random assignment, as have a few private foundations.
Random assignment is the hallmark of rigorous science. From a methodological standpoint, random assignment is crucial because only when students, teachers, or schools are randomly assigned to treatment or control conditions can readers be sure that any differences observed at posttest are truly the result of the treatments, and not of self-selection or other bias. But more than this, use of random assignment establishes a field as serious about its science. Studies that use random assignment are called “gold standard,” because there is no better design in existence. Yes, there are better and worse randomized studies, better and worse measures, and so on. Mixed methods studies can usefully add insight to the numbers. Replication is very important in establishing effectiveness. And there are certainly circumstances in which randomization is impossible or impractical, and a well-done quasi-experiment will do. But all this being said, the use of randomization moves the science of education forward and gives educational leaders reliable information on which to make decisions.
The most telling criticism of randomized experiments is that they are expensive. Yes, they can be. Encouragement and funding from IES and the Laura and John Arnold Foundation are increasing the use of inexpensive experiments in situations in which treatments and (usually) measures are already being paid for by government or other sources, so only funding for the evaluation is needed. But these experiments are only possible in special circumstances. In other cases, someone has to come up with serious funding to support randomized designs.
This brings us back to Ruth Neild’s main point. We know what needs to be done in educational research. We need to develop a wide variety of promising innovations, subject them to rigorous, ultimately randomized experiments, and then disseminate those programs found to be effective. We have systems in place to do all of these things. We just need a lot more funding to do them faster and better.
I don’t know if the increases in the quality of research in education are understood by policy makers, or how much this quality matters for funding. But education now has a case to make that it deserves much greater funding. Educational research is no longer just of interest to the academics who do it. It is producing answers that matter for children, and that should justify funding in line with our field’s new, wonderful reputation.