Science of Reading: Can We Get Beyond Our 30-Year Pillar Fight?

How is it possible that the “reading wars” are back on? The reading wars primarily revolve around what are often called the five pillars of early reading: phonemic awareness, phonics, comprehension, vocabulary, and fluency. Actually, there is little debate about the importance of comprehension, vocabulary, or fluency, so the reading wars are mainly about phonemic awareness and phonics. Diehard anti-phonics advocates exist, but in all of educational research, there are few issues that have been more convincingly settled by high-quality evidence. The National Reading Panel (2000), the source of the five pillars, has been widely cited as conclusive evidence that success in the early stages of reading depends on ensuring that students are all successful in phonemic awareness, phonics, and the other pillars. I was invited to serve on that panel, but declined, because I thought it was redundant. Just a short time earlier, the National Research Council’s Committee on the Prevention of Reading Difficulties (Snow, Burns, & Griffin, 1998) had covered essentially the same ground and come to essentially the same conclusion, as had Marilyn Adams’ (1990) Beginning to Read, and many individual studies. To my knowledge, there is little credible evidence to the contrary. Certainly, then as now, many students learn to read successfully with or without a focus on phonemic awareness and phonics. However, I do not think there are many students who could succeed with non-phonetic approaches but could not learn to read with phonics-emphasis methods. In other words, there is little if any evidence that phonemic awareness or phonics instruction causes harm, but a great deal of evidence that for perhaps more than half of students, effective instruction emphasizing phonemic awareness and phonics is essential. Since it is impossible to know in advance which students will need phonics and which will not, it just makes sense to teach using methods likely to maximize the chances that all children (those who need phonics and those who would succeed with or without them) will succeed in reading.

However…

The importance of the five pillars of the National Reading Panel (NRP) catechism is not in doubt among people who believe in rigorous evidence, as far as I know. The reading wars ended in the 2000s and the five pillars won. However, this does not mean that knowing all about these pillars and the evidence behind them is sufficient to solve America’s reading problems. The NRP pillars describe essential elements of curriculum, but not of instruction.

Improving reading outcomes for all children requires the five pillars, but they are not enough. The five pillars could be extensively and accurately taught in every school of education, and this would surely help, but it would not solve the problem. State and district standards could emphasize the five pillars and this would help, but would not solve the problem. Reading textbooks, software, and professional development could emphasize the five pillars and this would help, but it would not solve the problem.

The reason that such necessary policies would still not be sufficient is that teaching effectiveness does not just depend on getting curriculum right. It also depends on the nature of instruction, classroom management, grouping, and other factors. Teaching reading without teaching phonics is surely harmful to large numbers of students, but teaching phonics does not guarantee success.

As one example, consider grouping. For a very long time, most reading teachers have used homogeneous reading groups. For example, the “Stars” might contain the highest-performing readers, the “Rockets” the middle readers, and the “Planets” the lowest readers. The teacher calls up groups one at a time. No problem there, but what are the students doing back at their desks? Mostly worksheets, on paper or computers. The problem is that if there are three groups, each student spends two thirds of reading class time doing, well, not much of value. Worse, the students are sitting for long periods of time, with not much to do, and the teacher is fully occupied elsewhere. Does anyone see the potential for idle hands to become the devil’s playground? The kids do.

There are alternatives to reading groups, such as the Joplin Plan (cross-grade grouping by reading level), forms of whole-class instruction, or forms of cooperative learning. These provide active teaching to all students all period. There is good evidence for these alternatives (Slavin, 1994, 2017). My main point is that a reading strategy that follows NRP guidelines 100% may still succeed or fail based on its grouping strategy. The same could be true of the use of proven classroom management strategies or motivational strategies during reading periods.

To make the point most strongly, imagine that a district’s teachers have all thoroughly mastered all five pillars of the science of reading, which (we’ll assume) are strongly supported by their district and state. In an experiment, 40 schools serving grades 1 to 3 are selected, and 20 of these are chosen at random to receive sufficient tutors to work with their lowest-achieving 33% of students in groups of four, using a proven model based on science of reading principles. The other 20 schools just use their usual materials and methods, also emphasizing science of reading curricula and methods.

The evidence from many studies of tutoring (Inns et al., 2020), as well as common sense, tells us what would happen. The teachers supported by tutors would produce far greater achievement among their lowest readers than would the other equally science-of-reading-oriented teachers in the control group.
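To make the design of this thought experiment concrete, here is a minimal sketch in Python of the random assignment step. The school names and the seed are invented for illustration; this is not code from any actual study.

```python
import random

# Illustration only: the hypothetical experiment described above, with 40
# schools and 20 chosen at random to receive small-group tutoring for their
# lowest-achieving 33% of readers. School names are invented.
random.seed(42)  # fixed seed so the illustration is reproducible

schools = [f"School_{i:02d}" for i in range(1, 41)]
tutoring_schools = sorted(random.sample(schools, 20))                 # treatment group
control_schools = [s for s in schools if s not in tutoring_schools]   # business as usual

print(f"Tutoring schools ({len(tutoring_schools)}):", tutoring_schools)
print(f"Control schools ({len(control_schools)}):", control_schools)
```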

None of these examples diminish the importance of science of reading. But they illustrate that knowing science of reading is not enough.

At www.evidenceforessa.org, you can find 65 elementary reading programs of all kinds that meet high standards of effectiveness. Almost all of these use approaches that emphasize the five pillars. Yet Evidence for ESSA also lists many programs that equally emphasize the five pillars and yet have not found positive impacts. Rather than re-starting our thirty-year-old pillar fight, don’t you think we might move on to advocating programs that not only use the right curricula, but are also proven to get excellent results for kids?

References

Adams, M.J. (1990).  Beginning to read:  Thinking and learning about print.  Cambridge, MA:  MIT Press.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2020). A synthesis of quantitative research on programs for struggling readers in elementary schools. Available at www.bestevidence.org. Manuscript submitted for publication.

National Reading Panel (2000).  Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction.  Rockville, MD: National Institute of Child Health and Human Development.

Slavin, R. E. (1994). School and classroom organization in beginning reading:  Class size, aides, and instructional grouping. In R. E. Slavin, N. L. Karweit, and B. A. Wasik (Eds.), Preventing early school failure. Boston:  Allyn and Bacon.

Slavin, R. E. (2017). Instruction based on cooperative learning. In R. Mayer & P. Alexander (Eds.), Handbook of research on learning and instruction. New York: Routledge.

Snow, C.E., Burns, S.M., & Griffin, P. (Eds.) (1998).  Preventing reading difficulties in young children.  Washington, DC: National Academy Press.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Getting Schools Excited About Participating in Research

If America’s school leaders are ever going to get excited about evidence, they need to participate in it. It’s not enough to just make school leaders aware of programs and practices. Instead, they need to serve as sites for experiments evaluating programs that they are eager to implement, or at least have friends or peers nearby who are doing so.

The U.S. Department of Education has funded quite a lot of research on attractive programs. A lot of the studies it has funded have not shown positive impacts, but many programs have been found to be effective. Those effective programs could provide a means of engaging many schools in rigorous research, while at the same time serving as examples of how evidence can help schools improve their results.

Here is my proposal. It quite often happens that some part of the U.S. Department of Education wants to expand the use of proven programs on a given topic. For example, imagine that they wanted to expand use of proven reading programs for struggling readers in elementary schools, or proven mathematics programs in Title I middle schools.

Rather than putting out the usual request for proposals, the Department might announce that schools could qualify for funding to implement a proven program, but that in order to receive the funding they had to agree to take part in an evaluation of the program. They would have to identify two similar schools from a district, or from neighboring districts, that would agree to participate if their proposal were successful. One school in each pair would be assigned at random to use a given program in the first year or two, and the second school could start after the one- or two-year evaluation period was over. Schools would select from a list of proven programs and choose one that seems appropriate to their needs.

Many pairs of schools would be funded to use each proven program, so across all schools involved, this would create many large, randomized experiments. Independent evaluation groups would carry out the experiments. Students in participating schools would be pretested at the beginning of the evaluation period (one or two years), and posttested at the end, using tests independent of the developers or researchers.
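As a rough sketch of the assignment procedure proposed above (the school names and pairs are hypothetical, and in practice the independent evaluators would run the assignment), the within-pair coin flip and delayed start might look like this:

```python
import random

random.seed(1)  # reproducible illustration only

# Hypothetical pairs: each applying school names a similar school from its
# own or a neighboring district, as described in the proposal above.
pairs = [
    ("Lincoln Elementary", "Washington Elementary"),
    ("Cedar Grove Middle", "Maple Hill Middle"),
    ("Riverside Elementary", "Hillcrest Elementary"),
]

for school_a, school_b in pairs:
    starts_now, starts_later = random.sample((school_a, school_b), 2)
    print(f"{starts_now}: implements the chosen proven program during the 1-2 year evaluation")
    print(f"{starts_later}: serves as the comparison school, then starts the program afterward")
```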

There are many attractions to this plan. First, large randomized evaluations on promising programs could be carried out nationwide in real schools under normal conditions. Second, since the Department was going to fund expansion of promising programs anyway, the additional cost might be minimal, just the evaluation cost. Third, the experiment would provide a side-by-side comparison of many programs focusing on high-priority topics in very diverse locations. Fourth, the school leaders would have the opportunity to select the program they want, and would be motivated, presumably, to put energy into high-quality implementation. At the end of such a study, we would know a great deal about which programs really work in ordinary circumstances with many types of students and schools. But just as importantly, the many schools that participated would have had a positive experience, implementing a program they believe in and finding out in their own schools what outcomes the program can bring them. Their friends and peers would be envious and eager to get into the next study.

A few sets of studies of this kind could build a constituency of educators that might support the very idea of evidence. And this could transform the evidence movement, providing it with a national, enthusiastic audience for research.

Wouldn’t that be great?

 This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

New Sections on Social Emotional Learning and Attendance in Evidence for ESSA!

We are proud to announce the launch of two new sections of our Evidence for ESSA website (www.evidenceforessa.org): K-12 social-emotional learning and attendance. Funded by a grant from the Bill and Melinda Gates Foundation, the new sections represent our first foray beyond academic achievement.

The social-emotional learning section represents the greatest departure from our prior work. This is due to the nature of SEL, which combines many quite diverse measures. We identified 17 distinct measures, which we grouped into four overarching categories, as follows:

Academic Competence

  • Academic performance
  • Academic engagement

Problem Behaviors

  • Aggression/misconduct
  • Bullying
  • Disruptive behavior
  • Drug/alcohol abuse
  • Sexual/racial harassment or aggression
  • Early/risky sexual behavior

Social Relationships

  • Empathy
  • Interpersonal relationships
  • Pro-social behavior
  • Social skills
  • School climate

Emotional Well-Being

  • Reduction of anxiety/depression
  • Coping skills/stress management
  • Emotional regulation
  • Self-esteem/self-efficacy

Evidence for ESSA reports overall effect sizes and ratings for each of the four categories, as well as for the 17 individual measures (which are themselves composed of many measures used in various qualifying studies). So in contrast to reading and math, where programs are rated based on the average of all qualifying reading or math measures, an SEL program could be rated “strong” in one category, “promising” in another, and “no qualifying evidence” or “qualifying studies found no significant positive effects” in others.
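To illustrate how the category-level ratings relate to the individual measures, here is a minimal sketch using invented effect sizes for a hypothetical program; actual program ratings are on the Evidence for ESSA site, and only a subset of the 17 measures is shown for brevity.

```python
# Illustration only: rolling measure-level effect sizes up into the four
# SEL categories described above. Effect sizes are invented, not taken
# from the review; see www.evidenceforessa.org for actual ratings.
categories = {
    "Academic Competence": ["academic performance", "academic engagement"],
    "Problem Behaviors": ["aggression/misconduct", "bullying", "disruptive behavior"],
    "Social Relationships": ["social skills", "school climate", "empathy"],
    "Emotional Well-Being": ["anxiety/depression", "emotional regulation"],
}

# A hypothetical program with qualifying evidence on only some measures.
program_effects = {"social skills": 0.30, "school climate": 0.22, "academic performance": 0.15}

for category, measures in categories.items():
    effects = [program_effects[m] for m in measures if m in program_effects]
    if effects:
        mean_es = sum(effects) / len(effects)
        print(f"{category}: mean effect size = +{mean_es:.2f} over {len(effects)} measure(s)")
    else:
        print(f"{category}: no qualifying evidence")
```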

Social-Emotional Learning

The SEL review, led by Sooyeon Byun, Amanda Inns, Cynthia Lake, and Liz Kim at Johns Hopkins University, located 24 SEL programs that both met our inclusion standards and had at least one study that met strong, moderate, or promising standards on at least one of the four categories of outcomes.

There is much more evidence at the elementary and middle school levels than at the high school level. Some programs had qualifying outcomes at multiple grade levels; counting these at each level, there were 7 programs with positive evidence for pre-K/K, 10 for grades 1-2, 13 for grades 3-6, and 9 for middle school. In contrast, there were only 4 programs with positive effects in senior high schools. Fourteen studies took place in urban locations, 5 in suburbs, and 5 in rural districts.

The outcome variables most often showing positive impacts include social skills (12), school climate (10), academic performance (10), pro-social behavior (8), aggression/misconduct (7), disruptive behavior (7), academic engagement (7), interpersonal relationships (7), anxiety/depression (6), bullying (6), and empathy (5). Fifteen of the programs targeted whole classes or schools, and 9 targeted individual students.

Several programs stood out in terms of the size of the impacts. Take the Lead found effect sizes of +0.88 for social relationships and +0.51 for problem behaviors. Check, Connect, and Expect found effect sizes of +0.51 for emotional well-being, +0.29 for problem behaviors, and +0.28 for academic competence. I Can Problem Solve found an effect size of +0.57 on school climate. The Incredible Years Classroom and Parent Training Approach reported effect sizes of +0.57 for emotional regulation, +0.35 for pro-social behavior, and +0.21 for aggression/misconduct. The related Dinosaur School classroom management model reported an effect size of +0.31 for aggression/misconduct. Class-Wide Function-Related Intervention Teams (CW-FIT), an intervention for elementary students with emotional and behavioral disorders, had effect sizes of +0.47 and +0.30 across two studies for academic engagement and +0.38 and +0.21 for disruptive behavior. It also reported effect sizes of +0.37 for interpersonal relationships, +0.28 for social skills, and +0.26 for empathy. Student Success Skills reported effect sizes of +0.30 for problem behaviors, +0.23 for academic competence, and +0.16 for social relationships.

In addition to the 24 highlighted programs, Evidence for ESSA lists 145 programs that are no longer available, have no qualifying studies (e.g., no control group), or have one or more qualifying studies but none that met the ESSA Strong, Moderate, or Promising criteria. These programs can be found by clicking on the “search” bar.

There are many problems inherent to interpreting research on social-emotional skills. One is that some programs may appear more effective than others because they use measures such as self-report, or behavior ratings by the teachers who taught the program. In contrast, studies that used more objective measures, such as independent observations or routinely collected data, may obtain smaller impacts. Also, SEL studies typically measure many outcomes and only a few may have positive impacts.

In the coming months, we will be doing analyses and looking for patterns in the data, and will have more to say about overall generalizations. For now, the new SEL section provides a guide to what we know now about individual programs, but there is much more to learn about this important topic.

Attendance

Our attendance review was led by Chenchen Shi, Cynthia Lake, and Amanda Inns. It located ten attendance programs that met our standards. Only three of these reported on chronic absenteeism, which refers to students missing more than 10% of days. Most focused on average daily attendance (ADA). Among programs focused on average daily attendance, a Milwaukee elementary school program called SPARK had the largest impact (ES=+0.25). This is not an attendance program per se, but it uses AmeriCorps members to provide tutoring services across the school, as well as involving families. SPARK has been shown to have strong effects on reading, in addition to its impressive effects on attendance. Positive Action is another schoolwide approach, in this case focused on SEL. It has been found in two major studies in grades K-8 to improve student reading and math achievement, as well as overall attendance, with a mean effect size of +0.20.

The one program to report data on both ADA and chronic absenteeism is called Attendance and Truancy Intervention and Universal Procedures, or ATI-UP. It reported effect sizes in grades K-6 of +0.19 for ADA and +0.08 for chronic absenteeism. Talent Development High School (TDHS) is a ninth grade intervention program that provides interdisciplinary learning communities and “double dose” English and math classes for students who need them. TDHS reported an effect size of +0.17.

An interesting approach with a modest effect size but a very low cost is now called EveryDay Labs (formerly InClass Today). This program helps schools organize and implement a system of postcards sent to parents reminding them of the importance of student attendance. If students start missing school, the postcards include this information as well. The effect size across two studies was a respectable +0.16.

As with SEL, we will be doing further work to draw broader lessons from research on attendance in the coming months. One pattern that seems clear already is that effective attendance improvement models work on building close relationships between at-risk students and concerned adults. None of the effective programs primarily uses punishment to improve attendance, but instead they focus on providing information to parents and students and on making it clear to students that they are welcome in school and missed when they are gone.

Both SEL and attendance are topics of much discussion right now, and we hope these new sections will be useful and timely in helping schools make informed choices about how to improve social-emotional and attendance outcomes for all students.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

A Powerful Hunger for Evidence-Proven Technology

I recently saw a 1954 video of B. F. Skinner showing off a classroom full of eager students using teaching machines. In it, Skinner gave all the usual reasons that teaching machines were soon going to be far superior to ordinary teaching: They were scientifically made to enable students to experience constant success in small steps. They were adapted to students’ needs, so fast students did not need to wait for their slower classmates, and the slower classmates could have the time to solidify their understanding, rather than being whisked from one half-learned topic to the next, never getting a chance to master anything and therefore sinking into greater and greater failure.

Here it is 65 years later and “teaching machines,” now called computer-assisted instruction, are ubiquitous. But are they effective? Computers are certainly effective at teaching students to use technology, but can they teach the core curriculum of elementary or secondary schools? In a series of reviews in the Best Evidence Encyclopedia (BEE; www.bestevidence.org), my colleagues and I have reviewed research on the impacts of technology-infused methods on reading, mathematics, and science, in elementary and secondary schools. Here is a quick summary of our findings:

Mean Effect Sizes for Technology-Based Programs in Recent Reviews
Review                          Topic                     No. of Studies   Mean Effect Size
Inns et al. (in preparation)    Elementary Reading              23              +0.09
Inns et al. (2019)              Struggling Readers               6              +0.06
Baye et al. (2019)              Secondary Reading               23              -0.01
Pellegrini et al. (2019)        Elementary Mathematics          14              +0.06

If you prefer “months of learning,” these are all about one month, except for secondary reading, which is zero. A study-weighted average across these reviews is an effect size of +0.05. That’s not nothing, but it’s not much. Nothing at all like what Skinner and countless other theorists and advocates have been promising for the past 65 years. I think that even the most enthusiastic fans of technology use in education are beginning to recognize that while technology may be useful in improving achievement on traditional learning outcomes, it has not yet had a revolutionary impact on learning of reading or mathematics.
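For transparency, here is the simple arithmetic behind that study-weighted average, using the numbers of studies in the table above as the weights.

```python
# Study-weighted mean effect size across the four technology reviews listed above.
reviews = [
    ("Elementary Reading",      23,  0.09),
    ("Struggling Readers",       6,  0.06),
    ("Secondary Reading",       23, -0.01),
    ("Elementary Mathematics",  14,  0.06),
]

total_studies = sum(n for _, n, _ in reviews)
weighted_mean = sum(n * es for _, n, es in reviews) / total_studies
print(f"{total_studies} studies, study-weighted mean effect size = {weighted_mean:+.2f}")  # +0.05
```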

How can we boost the impact of technology in education?

Whatever you think the effects of technology-based education might be for typical school outcomes, no one could deny that it would be a good thing if that impact were larger than it is today. How could government, the educational technology industry, researchers in and out of ed tech, and practicing educators work together to make technology applications more effective than they are now?

In order to understand how to proceed, it is important to acknowledge a serious problem in the world of ed tech today. Educational technology is usually developed by commercial companies. Like all commercial companies, they must serve their market. Unfortunately, the market for ed tech products is not terribly interested in the evidence supporting technology-based programs. Instead, buyers tend to pay attention to sales reps or marketing, or they seek opinions from their friends and colleagues, rather than looking at evidence. Technology decision makers often value attractiveness, ease of use, low cost, and current trends or fads over evidence (see Morrison, Ross & Cheung, 2019, for documentation of these choice strategies).

Technology providers are not uncaring people, and they want their products to truly improve outcomes for children. However, they know that if they put a lot of money into developing and researching an innovative approach to education that happens to use technology, and their method requires a lot of professional development to produce substantially positive effects, their programs might be considered too expensive, and less expensive products that ask less of teachers and other educators would dominate the sector. These problems resemble those faced by textbook publishers, who similarly may have great ideas to increase the effectiveness of their textbooks or to add components that require professional development. Textbook designers are prisoners of their markets just as technology developers are.

The solution, I would propose, requires interventions by government designed to nudge education markets toward use of evidence. Government (federal, state, and local) has a real interest in improving outcomes of education. So how could government facilitate the use of technology-based approaches that are known to enhance student achievement more than those that exist today?

How government could promote use of proven technology approaches

Government could lead the revolution in educational technology that market-driven technology developers cannot accomplish on their own. It could do this by emphasizing two main strategies: funding the development, evaluation, and dissemination of proven technology-based programs by developers of all kinds (e.g., for-profit, non-profit, or universities), and providing encouragement and incentives to motivate schools, districts, and states to use programs proven effective in rigorous research.

Encouraging and incentivizing use of proven technology-based programs

The most important thing government must do to expand the use of proven technology-based approaches (as well as non-technology approaches) is to build a powerful hunger for them among educators, parents, and the public at large. Yes, I realize that this sounds backward; shouldn’t government sponsor development, research, and dissemination of proven programs first? Yes it should, and I’ll address this topic in a moment. Of course we need proven programs. No one will clamor for an empty box. But today, many proven programs already exist, and the bigger problem is getting them (and many others to come) enthusiastically adopted by schools. In fact, we must eventually get to the point where educational leaders value not only individual programs supported by research, but value research itself. That is, when they start looking for technology-based programs, their first step would be to find out what programs are proven to work, rather than selecting programs in the usual way and only then trying to find evidence to support the choice they have already made.

Government at any level could support such a process, but the most likely leader would be the federal government. It could provide incentives to schools that select and implement proven programs, and build on this with multifaceted outreach efforts to generate enthusiasm for proven approaches, and for the very idea that approaches should be proven.

A good example of what I have in mind was the Comprehensive School Reform (CSR) grants of the late 1990s. Schools that adopted whole-school reform models that met certain requirements could receive grants of up to $50,000 per year for three years. By the end of CSR, about 1000 schools had received grants in a competitive process, but CSR programs were used in an estimated 6000 schools nationwide. In other words, the hype generated by the CSR grants process led many schools that never got a grant to find other resources to adopt these whole-school programs. I should note that only a few of the adopted programs had good evidence of effectiveness; in CSR, the core idea was whole-school reform, not evidence. But CSR, with its highly visible grants and active support from government, built a powerful hunger for whole-school reform, and I think the same process could work just as well in building a powerful hunger for proven technology-based programs and other proven approaches.

“Wait a minute,” I can hear you saying. “Didn’t the ESSA evidence standards already do this?”

This was indeed the intention of ESSA, which established “strong,” “moderate,” and “promising” levels of evidence (as well as lower categories). ESSA has been a great first step in building interest in evidence. However, the only schools that could obtain additional funding for selecting proven programs were among the lowest-achieving schools in the country, so ordinary Title I schools, not to mention non-Title I schools, were not much affected. CSR gave extra points to high-poverty schools, but a much wider variety of schools could get into that game. There is a big difference between creating interest in evidence, which ESSA has definitely done, and creating a powerful hunger for proven programs. ESSA was passed four years ago, and it is only now beginning to build knowledge and enthusiasm among schools.

Building many more proven technology-based programs

Clearly, we need many more proven technology-based programs. In our Evidence for ESSA website (www.evidenceforessa.org), we list 113 reading and mathematics programs that meet any of the three top ESSA standards. Only 28 of these (18 reading, 10 math) have a major technology component. This is a good start, but we need a lot more proven technology-based programs. To get them, government needs to continue its productive Institute of Education Sciences (IES) and Education Innovation and Research (EIR) initiatives. For for-profit companies, Small Business Innovation Research (SBIR) plays an important role in early development of technology solutions. However, the pace of development and research focused on practical programs for schools needs to accelerate, and these initiatives need to learn from their own successes and failures to increase the success rate of their investments.

Communicating “what works”

There remains an important need to provide school leaders with easy-to-interpret information on the evidence base for all existing programs schools might select. The What Works Clearinghouse and our Evidence for ESSA website do this most comprehensively, but these and other resources need help to keep up with the rapid expansion of evidence that has appeared in the past 10 years.

Technology-based education can still produce the outcomes Skinner promised in his 1954 video, the ones we have all been eagerly awaiting ever since. However, technology developers and researchers need more help from government to build an eager market not just for technology, but for proven achievement outcomes produced by technology.

References

Baye, A., Lake, C., Inns, A., & Slavin, R. (2019). Effective reading programs for secondary students. Reading Research Quarterly, 54 (2), 133-166.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2019). A synthesis of quantitative research on programs for struggling readers in elementary schools. Available at www.bestevidence.org. Manuscript submitted for publication.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (in preparation). A synthesis of quantitative research on elementary reading. Baltimore, MD: Center for Research and Reform in Education, Johns Hopkins University.

Morrison, J. R., Ross, S.M., & Cheung, A.C.K. (2019). From the market to the classroom: How ed-tech products are procured by school districts interacting with vendors. Educational Technology Research and Development, 67 (2), 389-421.

Pellegrini, M., Inns, A., Lake, C., & Slavin, R. (2019). Effective programs in elementary mathematics: A best-evidence synthesis. Available at www.bestevidence.org. Manuscript submitted for publication.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Nobel Experiments

The world of evidence-based policy just got some terrific news. Abhijit Banerjee and Esther Duflo, of MIT, and Michael Kremer of Harvard, were recently awarded the Nobel Prize in economics.

This award honors extraordinary people doing extraordinary work to alleviate poverty in developing countries. I heard Esther Duflo speak at the Society for Research on Educational Effectiveness, and saw her amazing TED Talk on the research that won the Nobel (delivered well before she knew the prize was coming). I strongly suggest you view her speech, at https://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty?language=en

But the importance of this award goes far beyond its recognition of the scholars who received it. It celebrates the same movement toward evidence-based policy represented by the Institute of Education Sciences, Education Innovation and Research, the Arnold Foundation, and others in the U.S., the Education Endowment Foundation in the U.K., and this blog. It also celebrates the work of researchers in education, psychology, and sociology, as well as economics, who are committed to using rigorous research to advance human progress. The Nobel awardees represent the international development wing of this movement, largely funded by the World Bank, the Inter-American Development Bank, and other international aid organizations.

In her TED Talk, Esther Duflo explains the grand strategy she and her colleagues pursue. They take major societal problems in developing countries, break them down into solvable parts, and then use randomized experiments to test solutions to those parts. Along with Dr. Banerjee (her husband) and Michael Kremer, she first did a study that found that ensuring that students in India had textbooks made no difference in learning. They then successfully tested a plan to provide inexpensive tutors and, later, computers, to help struggling readers in India (Banerjee, Cole, Duflo, & Linden, 2007). One fascinating series of studies tested the cost-effectiveness of various educational treatments in developing countries. The winner? Curing children of intestinal worms. Based on this and other research, the Carter Center embarked on a campaign that has virtually eradicated Guinea worm worldwide.

Dr. Duflo and her colleagues later tested variations in programs to provide malaria-inhibiting bed nets in developing countries in which malaria is the number one killer of children, especially those less than five years old. Were outcomes best if bed nets (retail cost = $3) were free, or only discounted to varying degrees? Many economists and policy makers worried that people who paid nothing for bed nets would not value them, or might use them for other purposes. But the randomized study found that, without question, free bed nets were more often obtained and used than were discounted ones, potentially saving thousands of children’s lives.

For those of us who work in evidence-based education, the types of experiments being done by the Nobel laureates are entirely familiar, even though they have practical aspects quite different from the ones we encounter when we work in the U.S. or the U.K., for example. However, we are far from a majority among researchers in our own countries, and we face major struggles to continue to insist on randomized experiments as the criterion of effectiveness. I’m sure people working in international development face equal challenges. This is why this Nobel Prize in economics means a lot to all of us. People pay a lot of attention to Nobel Prizes, and there isn’t one in educational research, so having a Nobel shared by economists whose main contribution is in the use of randomized experiments to solve questions of great practical and policy importance, including studies in education itself, may be the closest we’ll ever get to Nobel recognition for the principle espoused by many in applied research in psychology, sociology, and education, as it is by many economists.

Nobel Prizes are often used to send a message, to support important new developments in research as well as to recognize deserving researchers who are leaders in this area. This was clearly the case with this award. The Nobel announcement makes it clear how the work of the laureates has transformed their field, to the point that “their experimental research methodologies entirely dominate developmental economics.” I hope this event will add further credibility and awareness to the idea that rigorous evidence is a key lever for change that matters in the lives of people.

Reference

Banerjee, A., Cole, S., Duflo, E., & Linden, L. (2007). Remedying education: Evidence from two randomized experiments in India. The Quarterly Journal of Economics, 122 (3), 1235-1264.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

On Reviews of Research in Education

Not so long ago, every middle class home had at least one encyclopedia. Encyclopedias were prominently displayed, a statement to all that this was a house that valued learning. People consulted the encyclopedia to find out about things of interest to them. Those who did not own encyclopedias found them in the local library, where they were heavily used. As a kid, I loved everything about encyclopedias. I loved to read them, but I also loved their musty smell, their weight, and their beautiful maps and photos.

There were two important advantages of an encyclopedia. First, it was encyclopedic, so users could be reasonably certain that whatever information they wanted was in there somewhere. Second, they were authoritative. Whatever it said in the encyclopedia was likely to be true, or at least carefully vetted by experts.

blog_10-17-19_encyclopediakid_500x331

In educational research, as in all scientific fields, we have our own kinds of encyclopedias. One consists of articles in journals that publish reviews of research. In our field, the Review of Educational Research plays a pre-eminent role in this, but there are many others. Reviews are hugely popular. Invariably, review journals have a much higher citation count than even the most esteemed journals focusing on empirical research. In addition to journals, reviews appear in edited volumes, in online compendia, in technical reports, and in other sources. At Johns Hopkins, we produce a bi-weekly newsletter, Best Evidence in Brief (BEiB; https://beibindex.wordpress.com/), that summarizes recent research in education. Two years ago we looked at analytics to find out the favorite articles from BEiB. Although BEiB mostly summarizes individual studies, almost all of its favorite articles were summaries of the findings of recent reviews.

Over time, RER and other review journals become “encyclopedias” of a sort.  However, they are not encyclopedic. No journal tries to ensure that key topics will all be covered over time. Instead, journal reviewers and editors evaluate each review sent to them on its own merits. I’m not criticizing this, but it is the way the system works.

Are reviews in journals authoritative? They are in one sense, because reviews accepted for publication have been carefully evaluated by distinguished experts on the topic at hand. However, review methods vary widely and reviews are written for many purposes. Some are written primarily for theory development, and some are really just essays with citations. In contrast, one category of reviews, meta-analyses, goes to great lengths to locate and systematically include all relevant citations. These are not pure types, and most meta-analyses have at least some focus on theory building and discussion of current policy or research issues, even if their main purpose is to systematically review a well-defined set of studies.

Given the state of the art of research reviews in education, how could we create an “encyclopedia” of evidence from all sources on the effectiveness of programs and practices designed to improve student outcomes? The goal of such an activity would be to provide readers with something both encyclopedic and authoritative.

My colleagues and I created two websites that are intended to serve as a sort of encyclopedia of PK-12 instructional programs. The Best Evidence Encyclopedia (BEE; www.bestevidence.org) consists of meta-analyses written by our staff and students, all of which use similar inclusion criteria and review methods. These are used by a wide variety of readers, especially but not only researchers. The BEE has meta-analyses on elementary and secondary reading, reading for struggling readers, writing programs, programs for English learners, elementary and secondary mathematics, elementary and secondary science, early childhood programs, and other topics, so at least as far as achievement outcomes are concerned, it is reasonably encyclopedic. Our second website is Evidence for ESSA, designed more for educators. It seeks to include every program currently in existence, and therefore is truly encyclopedic in reading and mathematics. Sections on social emotional learning, attendance, and science are in progress.

Are the BEE and Evidence for ESSA authoritative as well as encyclopedic? You’ll have to judge for yourself. One important indicator of authoritativeness for the BEE is that all of the meta-analyses are eventually published, so the reviewers for those journals could be considered to be lending authority.

The What Works Clearinghouse (https://ies.ed.gov/ncee/wwc/) could be considered authoritative, as it is a carefully monitored online publication of the U.S. Department of Education. But is it encyclopedic? Probably not, for two reasons. One is that the WWC has difficulty keeping up with new research. The other is that the WWC does not list programs that do not have any studies that meet its standards. As a result of both of these, a reader who types in the name of a current program may find nothing at all on it. Is this because the program did not meet WWC standards, or because the WWC has not yet reviewed it? There is no way to tell. Still, the WWC makes important contributions in the areas it has reviewed.

Beyond the websites focused on achievement, the most encyclopedic and authoritative source is Blueprints (www.blueprintsprograms.org). Blueprints focuses on drug and alcohol abuse, violence, bullying, social emotional learning, and other topics not extensively covered in other review sources.

In order to provide readers with easy access to all of the reviews meeting a specified level of quality on a given topic, it would be useful to have a source that briefly describes various reviews, regardless of where they appear. For example, a reader might want to know about all of the meta-analyses that focus on elementary mathematics, or dropout prevention, or attendance. These would include review articles published in scientific journals, technical reports, websites, edited volumes, and so on. To be cited in detail, the reviews would have to meet agreed-upon criteria, including a restriction to experimental-control comparisons, a broad and well-documented search for eligible studies, and documented efforts to include all studies (published or unpublished) that fall within well-specified parameters (e.g., subjects, grade levels, and start and end dates of studies included). Reviews that meet these standards might be highlighted, though others, including less systematic reviews, should be listed as well, as supplementary resources.

Creating such a virtual encyclopedia would be a difficult but straightforward task. At the end, the collection of rigorous reviews would offer readers encyclopedic, authoritative information on the topics of their interest, as well as providing something more important that no paper encyclopedias ever included: contrasting viewpoints from well-informed experts on each topic.

My imagined encyclopedia wouldn’t have the hypnotic musty smell, the impressive heft, or the beautiful maps and photos of the old paper encyclopedias. However, it would give readers access to up-to-date, curated, authoritative, quantitative reviews of key topics in education, with readable and appealing summaries of what was concluded in qualifying reviews.

Also, did I mention that unlike the encyclopedias of old, it would have to be free?

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Do School Districts Really Have Difficulty Meeting ESSA Evidence Standards?

The Center on Education Policy (CEP) recently released a report on how school districts are responding to the Every Student Succeeds Act (ESSA) requirement that schools seeking school improvement grants select programs that meet ESSA’s strong, moderate, or promising standards of evidence. Education Week ran a story on the CEP report.

The report noted that many states, districts, and schools are taking the evidence requirements seriously, and are looking at websites and consulting with researchers to help them identify programs that meet the standards. This is all to the good.

However, the report also notes continuing problems districts and schools are having finding out “what works.” Two particular problems were cited. One was that districts and schools were not equipped to review research to find out what works. The other was that rural districts and schools found few programs proven effective in rural schools.

I find these concerns astounding. The same concerns were expressed when ESSA was first passed, in 2015. But that was almost four years ago. Since 2015, the What Works Clearinghouse has added information to help schools identify programs that meet the top two ESSA evidence categories, strong and moderate. Our own Evidence for ESSA, launched in February, 2017, has up-to-date information on virtually all PK-12 reading and math programs currently in dissemination. Among hundreds of programs examined, 113 meet ESSA standards for strong, moderate, or promising evidence of effectiveness. WWC, Evidence for ESSA, and other sources are available online at no cost. The contents of the entire Evidence for ESSA website were imported into Ohio’s own website on this topic, and dozens of states, perhaps all of them, have informed their districts and schools about these sources.

The idea that districts and schools could not find information on proven programs if they wanted to do so is difficult to believe, especially among schools eligible for school improvement grants. Such schools, and the districts in which they are located, write a lot of grant proposals for federal and state funding. The application forms for school improvement grants always explain the evidence requirements, because that is the law. Someone in every state involved with federal funding knows about the WWC and Evidence for ESSA websites. More than 90,000 unique users have used Evidence for ESSA, and more than 800 more sign on each week.

As to rural schools, it is true that many studies of educational programs have taken place in urban areas. However, 47 of the 113 programs qualified by Evidence for ESSA were validated in at least one rural study, or a study including a large enough rural sample to enable researchers to separately report program impacts for rural students. Also, almost all widely disseminated programs have been used in many rural schools. So rural districts and schools that care about evidence can find programs that have been evaluated in rural locations, or at least that were evaluated in urban or suburban schools but widely disseminated in rural schools.

Also, it is important to note that if a program was successfully evaluated only in urban or suburban schools, the program still meets the ESSA evidence standards. If no studies of a given outcome were done in rural locations, a rural school in need of better outcomes is, in effect, being asked to choose between a program proven to work somewhere (and probably disseminated in rural schools) and a program not proven to work anywhere. Every school and district has to make the best choices for their kids, but if I were a rural superintendent or principal, I’d read up on proven programs, and then go visit some nearby rural schools using those programs. Wouldn’t you?

I have no reason to suspect that the CEP survey is incorrect. There are many indications that district and school leaders often do feel that the ESSA evidence rules are too difficult to meet. So what is really going on?

My guess is that there are many district and school leaders who do not want to know about evidence on proven programs. For example, they may have longstanding, positive relationships with representatives of publishers or software developers, or they may be comfortable and happy with the materials and services they are already using, evidence-proven or not. If publishers and software developers do not have evidence of effectiveness that would pass muster with the WWC or Evidence for ESSA, they may push hard on state and district officials, put forward dubious claims of evidence (such as studies with no control groups), and do their best to get by in a system that increasingly demands evidence that they lack. In my experience, district and state officials often complain about having inadequate staff to review evidence of effectiveness, but their concern may be less about finding out what works than about defending themselves from publishers, software developers, or current district or school users of programs who maintain that they have been unfairly rated by the WWC, Evidence for ESSA, or other reviews. State and district leaders who stand up to this pressure may have to spend a lot of time reviewing evidence or hearing arguments.

On the plus side, at the same time that publishers and software producers may be seeking recognition for their current products, many are also sponsoring evaluations of the products they feel are most likely to perform well in rigorous evaluations. Some may be creating new programs that resemble programs that have met evidence standards. If the federal ESSA law continues to demand evidence for certain federal funding purposes, or even expands this requirement to additional parts of federal grant-making, then over time the ESSA law will have its desired effect, rewarding the creation and evaluation of programs that do meet standards by making it easier to disseminate such programs. The difficulties the evidence movement is experiencing are likely to diminish over time as more proven programs appear, and as federal, state, district, and school leaders get comfortable with evidence.

Evidence-based reform was always going to be difficult, because of the amount of change it entails and the stakes involved. But sooner or later, it is the right thing to do, and leaders who insist on evidence will see increasing levels of learning among their students, at minimal cost beyond what they already spend on untested or ineffective approaches. Medicine went through a similar transition in 1962, when the U.S. Congress first required that medicines be rigorously evaluated for effectiveness and safety. At first, many leaders in the medical profession resisted the changes, but after a while, they came to insist on them. The key is political leadership willing to support the evidence requirement strongly and permanently, so that educators and vendors alike will see that the best way forward is to embrace evidence and make it work for kids.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.