John Hattie is Wrong

John Hattie is a professor at the University of Melbourne, Australia. He is famous for a book, Visible Learning, which claims to review every area of research that relates to teaching and learning. He uses a method called “meta-meta-analysis,” averaging effect sizes from many meta-analyses. The book ranks factors from one to 138 in terms of their effect sizes on achievement measures. Hattie is a great speaker, and many educators love the clarity and simplicity of his approach. How wonderful to have every known variable reviewed and ranked!

However, operating on the principle that anything that looks to be too good to be true probably is, I looked into Visible Learning to try to understand why it reports such large effect sizes. My colleague, Marta Pellegrini from the University of Florence (Italy), helped me track down the evidence behind Hattie’s claims. And sure enough, Hattie is profoundly wrong. He is merely shoveling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases.


Part of Hattie’s appeal to educators is that his conclusions are so easy to understand. He even uses a system of dials with color-coded “zones,” where effect sizes of 0.00 to +0.15 are designated “developmental effects,” +0.15 to +0.40 “teacher effects” (i.e., what teachers can do without any special practices or programs), and +0.40 to +1.20 the “zone of desired effects.” Hattie makes a big deal of the magical effect size +0.40, the “hinge point,” recommending that educators essentially ignore factors or programs below that point, because they are no better than what teachers produce each year, from fall to spring, on their own. In Hattie’s view, an effect size of +0.15 to +0.40 is just the effect that “any teacher” could produce, in comparison to students not being in school at all. He says, “When teachers claim that they are having a positive effect on achievement or when a policy improves achievement, this is almost always a trivial claim: Virtually everything works. One only needs a pulse and we can improve achievement.” (Hattie, 2009, p. 16). An effect size of 0.00 to +0.15 is, he estimates, “what students could probably achieve if there were no schooling” (Hattie, 2009, p. 20).

Yet this characterization of dials and zones misses the essential meaning of effect sizes, which are rarely used to measure the amount teachers’ students gain from fall to spring. Rather, they measure the amount students receiving a given treatment gained in comparison to gains made by similar students in a control group over the same period. So an effect size of, say, +0.15 or +0.25 could be very important.
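To make that contrast concrete, here is a minimal sketch of how an effect size is typically computed: the difference between treatment and control group means, divided by the pooled standard deviation (Cohen’s d). The scores below are hypothetical, and a real evaluation would also adjust for pretests, which this sketch omits.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d): the difference between
    treatment and control means, divided by the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    var_t, var_c = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Hypothetical posttest scores for two matched groups:
treatment = [52, 55, 58, 61, 63, 66]
control = [50, 53, 55, 58, 60, 62]
print(round(cohens_d(treatment, control), 2))  # → 0.58
```

The point is that the control group is already in school, learning all year; the effect size measures only the *additional* gain produced by the treatment, which is why even +0.15 can matter.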

Hattie’s core claims are these:

  • Almost everything works
  • Any effect size less than +0.40 is ignorable
  • It is possible to meaningfully rank educational factors in comparison to each other by averaging the findings of meta-analyses.

These claims appear appealing, simple, and understandable. But they are also wrong.

The essential problem with Hattie’s meta-meta-analyses is that they accept the results of the underlying meta-analyses without question. Yet many, perhaps most meta-analyses accept all sorts of individual studies of widely varying standards of quality. In Visible Learning, Hattie considers and then discards the possibility that there is anything wrong with individual meta-analyses, specifically rejecting the idea that the methods used in individual studies can greatly bias the findings.

To be fair, a great deal has been learned about the degree to which particular study characteristics bias study findings, always in a positive (i.e., inflated) direction. For example, there is now overwhelming evidence that effect sizes are significantly inflated in studies with small sample sizes, brief durations, use measures made by researchers or developers, are published (vs. unpublished), or use quasi-experiments (vs. randomized experiments) (Cheung & Slavin, 2016). Many meta-analyses even include pre-post studies, or studies that do not have pretests, or have pretest differences but fail to control for them. For example, I once criticized a meta-analysis of gifted education in which some studies compared students accepted into gifted programs to students rejected for those programs, controlling for nothing!

A huge problem with meta-meta-analysis is that until recently, meta-analysts rarely screened individual studies to remove those with fatal methodological flaws. Hattie himself rejects this procedure: “There is…no reason to throw out studies automatically because of lower quality” (Hattie, 2009, p. 11).

In order to understand what is going on in the underlying meta-analyses in a meta-meta-analysis, it is crucial to look all the way down to the individual studies. As a point of illustration, I examined Hattie’s own meta-meta-analysis of feedback, his third-ranked factor, with a mean effect size of +0.79. Hattie & Timperley (2007) located 12 meta-analyses. I tracked down several of those with the highest mean effect sizes.

At a mean of +1.24, the meta-analysis with the largest effect size in the Hattie & Timperley (2007) review was a review of research on various reinforcement treatments for students in special education by Skiba, Casey, & Center (1985-86). The reviewers required use of single-subject designs, so the review consisted of a total of 35 students treated one at a time, across 25 studies. Yet it is known that single-subject designs produce much larger effect sizes than ordinary group designs (see What Works Clearinghouse, 2017).

The second-highest effect size, +1.13, was from a meta-analysis by Lysakowski & Walberg (1982), on instructional cues, participation, and corrective feedback. Not enough information is provided to understand the individual studies, but there is one interesting note. A study using a single-subject design, involving two students, had an effect size of 11.81. That is the equivalent of raising a child’s IQ from 100 to 277! It was “winsorized” to the next-highest value of 4.99 (which is like adding 75 IQ points). Many of the studies were correlational, with no controls for inputs, or had no control group, or were pre-post designs.
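To see just how implausible these numbers are, the conversion from effect size to IQ points is simple arithmetic: the IQ scale has a standard deviation of 15, so an effect size d corresponds to d × 15 IQ points. A quick sketch using the values from the study above:

```python
# An effect size (standardized mean difference) of d corresponds to
# d * 15 points on the IQ scale, which has a standard deviation of 15.
def d_to_iq_points(d, sd=15):
    return d * sd

print(round(d_to_iq_points(11.81)))  # → 177: "raising IQ from 100 to 277"
print(round(d_to_iq_points(4.99)))   # → 75: even the winsorized value adds ~75 points
```

No real classroom treatment produces effects of this magnitude; values like these are a signal of broken study designs, not powerful interventions.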

A meta-analysis by Rummel and Feinberg (1988), with a reported effect size of +0.60, is perhaps the most humorous inclusion in the Hattie & Timperley (2007) meta-meta-analysis. It consists entirely of brief lab studies of the degree to which being paid or otherwise reinforced for engaging in an activity that was already intrinsically motivating would reduce subjects’ later participation in that activity. Rummel & Feinberg (1988) reported a positive effect size if subjects later did less of the activity they were paid to do. The reviewers decided to code studies positively if their findings corresponded to the theory (i.e., that feedback and reinforcement reduce later participation in previously favored activities), but in fact their “positive” effect size of +0.60 indicates a negative effect of feedback on performance.

I could go on (and on), but I think you get the point. Hattie’s meta-meta-analyses grab big numbers from meta-analyses of all kinds with little regard to the meaning or quality of the original studies, or of the meta-analyses.

If you are familiar with the What Works Clearinghouse (2017), or our own Best-Evidence Syntheses (www.bestevidence.org) or Evidence for ESSA (www.evidenceforessa.org), you will know that individual studies, except for studies of one-to-one tutoring, almost never have effect sizes as large as +0.40, Hattie’s “hinge point.” This is because the WWC, BEE, and Evidence for ESSA all very carefully screen individual studies. We require control groups, controls for pretests, minimum sample sizes and durations, and measures independent of the treatments. Hattie applies no such standards, and in fact proclaims that they are not necessary.
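As an illustration of what such screening looks like in practice, here is a hypothetical sketch. The field names and thresholds are mine, chosen for illustration; they are not the actual WWC, BEE, or Evidence for ESSA inclusion rules.

```python
# Hypothetical inclusion screen for studies entering a review.
# Thresholds are illustrative, not the actual WWC/BEE/Evidence for ESSA rules.
def passes_screen(study):
    return (study["has_control_group"]
            and study["controls_for_pretest"]
            and study["sample_size"] >= 60
            and study["duration_weeks"] >= 12
            and study["measure_independent_of_treatment"])

studies = [
    {"has_control_group": True, "controls_for_pretest": True,
     "sample_size": 250, "duration_weeks": 30,
     "measure_independent_of_treatment": True},
    {"has_control_group": True, "controls_for_pretest": False,  # pretest differences ignored
     "sample_size": 40, "duration_weeks": 4,
     "measure_independent_of_treatment": False},               # researcher-made measure
]
print(sum(passes_screen(s) for s in studies))  # → 1: only the first study survives
```

A meta-analysis that applies a screen like this will report smaller, but far more trustworthy, mean effect sizes than one that accepts everything.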

It is possible, in fact essential, to make genuine progress using high-quality rigorous research to inform educational decisions. But first we must agree on what standards to apply.  Modest effect sizes from studies of practical treatments in real classrooms over meaningful periods of time on measures independent of the treatments tell us how much a replicable treatment will actually improve student achievement, in comparison to what would have been achieved otherwise. I would much rather use a program with an effect size of +0.15 from such studies than to use programs or practices found in studies with major flaws to have effect sizes of +0.79. If they understand the situation, I’m sure all educators would agree with me.

To create information that is fair and meaningful, meta-analysts cannot include studies of unknown and mostly low quality. Instead, they need to apply consistent standards of quality for each study, to look carefully at each one and judge its freedom from bias and major methodological flaws, as well as its relevance to practice. A meta-analysis cannot be any better than the studies that go into it. Hattie’s claims are deeply misleading because they are based on meta-analyses that themselves accepted studies of all levels of quality.

Evidence matters in education, now more than ever. Yet Hattie and others who uncritically accept all studies, good and bad, are undermining the value of evidence. This needs to stop if we are to make solid progress in educational practice and policy.

References

Cheung, A., & Slavin, R. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45 (5), 283-292.

Hattie, J. (2009). Visible learning. New York, NY: Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81-112.

Lysakowski, R., & Walberg, H. (1982). Instructional effects of cues, participation, and corrective feedback: A quantitative synthesis. American Educational Research Journal, 19 (4), 559-578.

Rummel, A., & Feinberg, R. (1988). Cognitive evaluation theory: A review of the literature. Social Behavior and Personality, 16 (2), 147-164.

Skiba, R., Casey, A., & Center, B. (1985-86). Nonaversive procedures in the treatment of classroom behavior problems. The Journal of Special Education, 19 (4), 459-481.

What Works Clearinghouse (2017). Procedures handbook 4.0. Washington, DC: Author.

Photo credit: U.S. Farm Security Administration [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 


More Chinese Dragons: How the WWC Could Accelerate Its Pace


A few months ago, I wrote a blog entitled “The Mystery of the Chinese Dragon: Why Isn’t the WWC Up to Date?” It really had nothing to do with dragons, but compared the timeliness of the What Works Clearinghouse review of research on secondary reading programs and a Baye et al. (2017) review on the same topic. The graph depicting the difference looked a bit like a Chinese dragon with a long tail near the ground and huge jaws. The horizontal axis was the dates accepted studies had appeared, and the vertical axis was the number of studies. Here is the secondary reading graph.

[Graph: numbers of accepted secondary reading studies by year, WWC vs. Baye et al. (2017)]

What the graph showed is that the WWC and the U.S. studies from the Baye et al. (2017) review were similar in coverage of studies appearing from 1987 to 2009, but after that diverged sharply, because the WWC is very slow to add new studies, in comparison to reviews using similar methods.

In the time since the Chinese Dragon for secondary reading studies appeared on my blog, my colleagues and I have completed two more reviews, one on programs for struggling readers by Inns et al. (2018) and one on programs for elementary math by Pellegrini et al. (2018). We made new Chinese Dragon graphs for each, which appear below.*

[Graphs: numbers of accepted studies by year for the struggling readers and elementary math reviews]

*Note: In the reading graph, the line for “Inns et al.” added numbers of studies from the Inns et al. (2018) review of programs for struggling readers to additional studies of programs for all elementary students in an unfinished report.

The new dragons look remarkably like the first. Again, what matters is the similar pattern of accepted studies before 2009 (the “tail”), and the sharply diverging rates in more recent years (the “jaws”).

There are two phenomena that cause the dragons’ “jaws” to be so wide open. The upper jaw, especially in secondary reading and elementary math, indicates that many high-quality rigorous evaluations are appearing in recent years. Both the WWC inclusion standards and those of the Best Evidence Encyclopedia (BEE; www.bestevidence.org) require control groups, clustered analysis for clustered designs, samples that are well-matched at pretest and have similar attrition by posttest, and other features indicating methodological rigor, of the kind expected by the ESSA evidence standards, for example.

The upper jaw of each dragon is increasing so rapidly because rigorous research is increasing rapidly in the U.S. (it is also increasing rapidly in the U.K., but the WWC does not include non-U.S. studies, and non-U.S. studies are removed from the graph for comparability). This increase is due to U.S. Department of Education funding of many rigorous studies in each topic area, through its Institute of Education Sciences (IES) and Investing in Innovation (i3) programs, and special-purpose funding such as Striving Readers and Preschool Curriculum Evaluation Research. These recent studies are not only uniformly rigorous, they are also of great importance to educators, as they evaluate current programs being actively disseminated today. Many of the older programs whose evaluations appear on the dragons’ tails no longer exist, as a practical matter. If educators wanted to adopt them, the programs would have to be revised or reinvented. For example, Daisy Quest, still in the WWC, was evaluated on TRS-80 computers not manufactured since the 1980s. Yet exciting new programs with rigorous evaluations, highlighted in the BEE reviews, do not appear at all in the WWC.

I do not understand why the WWC is so slow to add new evaluations, but I suspect that the answer lies in the painstaking procedures any government has to follow to do, well, anything. Perhaps there are very good reasons for this stately pace of progress. However, the result is clear. The graph below shows the publication dates of every study in every subject and grade level accepted by the WWC and entered on its database. This “half-dragon” graph shows that only 26 studies published or made available after 2013 appear on the entire WWC database. Of these, only two have appeared after 2015.

[Graph: publication dates of all studies accepted across the entire WWC database]

The slow pace of the WWC is of particular concern in light of the appearance of the ESSA evidence standards. More educators than ever before must be consulting the WWC, and many must be wondering why programs they know to exist are not listed there, or why recent studies do not appear.

Assuming that there are good reasons for the slow pace of the WWC, or that for whatever reason the pace cannot be greatly accelerated, what can be done to bring the WWC up to date? I have a suggestion.

Imagine that the WWC commissioned someone to do rapid updating of all topics reviewed on the WWC website. The reviews would follow WWC guidelines, but would appear very soon after studies were published or issued. It’s clear that this is possible, because we do it for Evidence for ESSA (www.evidenceforessa.org). Also, the WWC has a number of “quick reviews,” “single study reports,” and so on, scattered around on its site, but not integrated with its main “Find What Works” reviews of various programs. These could be readily integrated with “Find What Works.”

The recent studies identified in this accelerated process might be identified as “provisionally reviewed,” much as the U.S. Patent Office has “patent pending” before inventions are fully patented. Users would have an option to look only at program reports containing fully reviewed studies, or could decide to look at reviews containing both fully and provisionally reviewed studies. If a more time-consuming full review of a study found results different from those of the provisional review, the study report and the program report in which it was contained would be revised, of course.

A process of this kind could bring the WWC up to date and keep it up to date, providing useful, actionable evidence in a timely fashion, while maintaining the current slower process, if there is a rationale for it.

The Chinese dragons we are finding in every subject we have examined indicate the rapid growth and improving quality of evidence on programs for schools and students. The U. S. Department of Education and our whole field should be proud of this, and should make it a beacon on a hill, not hide our light under a bushel. The WWC has the capacity and the responsibility to highlight current, high-quality studies as soon as they appear. When this happens, the Chinese dragons will retire to their caves, and all of us, government, researchers, educators, and students, will benefit.

References

Baye, A., Lake, C., Inns, A., & Slavin, R. (2017). Effective reading programs for secondary students. Manuscript submitted for publication. Also see Baye, A., Lake, C., Inns, A. & Slavin, R. E. (2017, August). Effective reading programs for secondary students. Baltimore, MD: Johns Hopkins University, Center for Research and Reform in Education.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2018). Effective programs for struggling readers: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Pellegrini, M., Inns, A., & Slavin, R. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Photo credit: J Bar [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Evidence for ESSA Celebrates its First Anniversary

On February 28, 2017 we launched Evidence for ESSA (www.evidenceforessa.org), our website providing the evidence to support educational programs according to the standards laid out in the Every Student Succeeds Act in December 2015.

Evidence for ESSA began earlier, of course. It really began one day in September, 2016, when I heard leaders of the Institute of Education Sciences (IES) and the What Works Clearinghouse (WWC) announce that the WWC would not be changed to align with the ESSA evidence standards. I realized that no one else was going to create scientifically valid, rapid, and easy-to-use websites providing educators with actionable information on programs meeting ESSA standards. We could do it because our group at Johns Hopkins University, and partners all over the world, had been working for many years creating and updating another website, the Best Evidence Encyclopedia (BEE; www.bestevidence.org). BEE reviews were not primarily designed for practitioners and they did not align with ESSA standards, but at least we were not starting from scratch.

We assembled a group of large membership organizations to advise us and to help us reach thoughtful superintendents, principals, Title I directors, and others who would be users of the final product. They gave us invaluable advice along the way. We also assembled a technical working group (TWG) of distinguished researchers to advise us on key decisions in establishing our website.

It is interesting to note that we have not been able to obtain adequate funding to support Evidence for ESSA. Instead, it is mostly being written by volunteers and graduate students, all of whom are motivated only by a passion for evidence to improve the education of students.

A year after launch, Evidence for ESSA has been used by more than 36,000 unique users, and I hear that it is very useful in helping states and districts meet the ESSA evidence standards.

We get a lot of positive feedback, as well as complaints and concerns, to which we try to respond rapidly. Feedback has been important in changing some of our policies and correcting some errors and we are glad to get it.

At this moment we are thoroughly up-to-date on reading and math programs for grades pre-kindergarten to 12, and we are working on science, writing, social-emotional outcomes, and summer school. We are also continuing to update our more academic BEE reviews, which draw from our work on Evidence for ESSA.

In my view, the evidence revolution in education is truly a revolution. If the ESSA evidence standards ultimately prevail, education will at long last join fields such as medicine and agriculture in a dynamic of practice to development to evaluation to dissemination to better practice, in an ascending spiral that leads to constantly improving practices and outcomes.

In a previous revolution, Thomas Jefferson said, “If I had to choose between government without newspapers and newspapers without government, I’d take the newspapers.” In our evidence revolution in education, Evidence for ESSA, the WWC, and other evidence sources are our “newspapers,” providing the information that people of good will can use to make wise and informed decisions.

Evidence for ESSA is the work of many dedicated and joyful hands trying to provide our profession with the information it needs to improve student outcomes. The joy in it is the joy in seeing teachers, principals, and superintendents see new, attainable ways to serve their children.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Swallowing Camels


The New Testament contains a wonderful quotation that I use often, because it unfortunately applies to so much of educational research:

Ye blind guides, which strain at a gnat, and swallow a camel (Matthew 23:24).

The point of the quotation, of course, is that we are often fastidious about minor (research) sins while readily accepting major ones.

In educational research, “swallowing camels” applies to studies accepted in top journals or by the What Works Clearinghouse (WWC) despite substantial flaws that lead to major bias, such as use of measures slanted toward the experimental group, or measures administered and scored by the teachers who implemented the treatment. “Straining at gnats” applies to concerns that, while worth attending to, have little potential for bias, yet are often reasons for rejection by journals or downgrading by the WWC. For example, our profession considers p<.05 to indicate statistical significance, while p<.051 should never be mentioned in polite company.

As my faithful readers know, I have written a series of blogs on problems with policies of the What Works Clearinghouse, such as acceptance of researcher/developer-made measures, failure to weight by sample size, use of “substantively important but not statistically significant” as a qualifying criterion, and several others. However, in this blog, I wanted to share with you some of the very worst, most egregious examples of studies that should never have seen the light of day, yet were accepted by the WWC and remain in it to this day. Accepting the WWC as gospel means swallowing these enormous and ugly camels, and I wanted to make sure that those who use the WWC at least think before they gulp.

Camel #1: DaisyQuest. DaisyQuest is a computer-based program for teaching phonological awareness to children in pre-K to Grade 1. The WWC gives DaisyQuest its highest rating, “positive,” for alphabetics, and ranks it eighth among literacy programs for grades pre-K to 1.

There were four studies of DaisyQuest accepted by the WWC. In each, half of the students received DaisyQuest in groups of 3-4, working with an experimenter. In two of the studies, control students never had their hands on a computer before they took the final tests on a computer. In the other two, control students used math software, so they at least got some experience with computers. The outcome tests were all made by the experimenters and all were closely aligned with the content of the software, with the exception of two Woodcock scales used in one of the studies. All studies used a measure called “Undersea Challenge” that closely resembled the DaisyQuest game format and was taken on the computer. All four studies also used the other researcher-made measures. None of the Woodcock measures showed statistically significant differences, but the researcher-made measures, especially Undersea Challenge and other specific tests of phonemic awareness, segmenting, and blending, did show substantial significant differences.

Recall that in the mid- to late-1990s, when the studies were done, students in preschool and kindergarten were unlikely to be getting any systematic teaching of phonemic awareness. So there is no reason to expect the control students to have learned anything that was tested on the posttests, and it is not surprising that effect sizes averaged +0.62. In the two studies in which control students had never touched a computer, effect sizes were +0.90 and +0.89, respectively.

Camel #2: Brady (1990) study of Reciprocal Teaching

Reciprocal Teaching is a program that teaches students comprehension skills, mostly using science and social studies texts. A 1990 dissertation by P.L. Brady evaluated Reciprocal Teaching in one school, in grades 5-8. The study involved only 12 students, randomly assigned to Reciprocal Teaching or control conditions. The one experimental class was taught by…wait for it…P.L. Brady. The measures included science, social studies, and daily comprehension tests related to the content taught in Reciprocal Teaching but not the control group. They were created and scored by…(you guessed it) P.L. Brady. There were also two Gates-MacGinitie scales, but they had effect sizes much smaller than the researcher-made (and researcher-scored) tests. The Brady study met WWC standards for “potentially positive” because it had a mean effect size of more than +0.25, even though it was not statistically significant.

Camel #3: Schwartz (2005) study of Reading Recovery

Reading Recovery is a one-to-one tutoring program for first graders that has a strong tradition of rigorous research, including a recent large-scale study by May et al. (2016). However, one of the earlier studies of Reading Recovery, by Schwartz (2005), is hard to swallow, so to speak.

In this study, 47 Reading Recovery (RR) teachers across 14 states were asked by e-mail to choose two very similar, low-achieving first graders at the beginning of the year. One student was randomly assigned to receive RR, and one was assigned to the control group, to receive RR in the second semester.

Both students were pre- and posttested on the Observation Survey, a set of measures made by Marie Clay, the developer of RR. In addition, students were tested on Degrees of Reading Power, a standardized test.

The problems with this study mostly have to do with the fact that the teachers who administered pre- and posttests were the very same teachers who provided the tutoring. No researcher or independent tester ever visited the schools. Teachers obviously knew the child they personally tutored. I’m sure the teachers were honest and wanted to be accurate. However, they would have had a strong motivation to see that the outcomes looked good, because they could be seen as evaluations of their own tutoring, and could have had consequences for continuation of the program in their schools.

Most Observation Survey scales involve difficult judgments, so it’s easy to see how teachers’ ratings could be affected by their belief in Reading Recovery.

Further, ten of the 47 teachers never submitted any data. This is a very high rate of attrition within a single school year (21%). Could some teachers, fully aware of their students’ less-than-expected scores, have made some excuse and withheld their data? We’ll never know.

Also recall that most of the tests used in this study were from the Observation Survey made by Clay, which had effect sizes ranging up to +2.49 (!!!). However, on the independent Degrees of Reading Power, the non-significant effect size was only +0.14.

It is important to note that across these “camel” studies, all except Brady (1990) were published. So it was not only the WWC that was taken in.

These “camel” studies are far from unique, and they may not even be the very worst to be swallowed whole by the WWC. But they do give readers an idea of the depth of the problem. No researcher I know of would knowingly accept an experiment in which the control group had never used the equipment on which they were to be posttested, or one with 12 students in which the 6 experimentals were taught by the experimenter, or in which the teachers who tutored students also individually administered the posttests to experimentals and controls. Yet somehow, WWC standards and procedures led the WWC to accept these studies. Swallowing these camels should have caused the WWC a stomach ache of biblical proportions.

 

References

Brady, P. L. (1990). Improving the reading comprehension of middle school students through reciprocal teaching and semantic mapping and strategies. Dissertation Abstracts International, 52 (03A), 230-860.

May, H., Sirinides, P., Gray, A., & Goldsworthy, H. (2016). Reading Recovery: An evaluation of the four-year i3 scale-up. Newark, DE: University of Delaware, Center for Research in Education and Social Policy.

Schwartz, R. M. (2005). Literacy learning of at-risk first grade students in the Reading Recovery early intervention. Journal of Educational Psychology, 97 (2), 257-267.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

“Substantively Important” Isn’t Substantive. It Also Isn’t Important

Since it began in 2002, the What Works Clearinghouse has played an important role in finding, rating, and publicizing findings of evaluations of educational programs. It performs a crucial function for evidence-based reform. For this very reason, it needs to be right. But in several important ways, it uses procedures that are indefensible and have a big impact on its conclusions.

One of these relates to a study rating called “substantively important-positive.” This refers to study outcomes with an effect size of at least +0.25, but that are not statistically significant. I’ve written about this before, but the WWC has recently released a database of information on its studies that makes it easy to analyze WWC data on a large scale, and we have learned a lot more about this topic.

Study outcomes rated as “substantively important – positive” can qualify a study as “potentially positive,” the second-highest WWC rating. “Substantively important-negative” findings (non-significant effect sizes less than -0.25) can cause a study to be rated as potentially negative, which can keep a study from getting a positive rating forever, as a single “potentially negative” rating, under current rules, ensures that a program can never receive a rating better than “mixed,” even if other studies found hundreds of significant positive effects.
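The rating rule described above can be sketched as a simple decision procedure. This is a deliberate simplification using only the labels and the ±0.25 threshold named in the text, not the full logic of the WWC handbook:

```python
# Simplified sketch of the WWC outcome-rating rule described in the text.
# Only the +/-0.25 "substantively important" threshold is modeled here.
def wwc_outcome_rating(effect_size, significant):
    if significant and effect_size > 0:
        return "statistically significant positive"
    if significant and effect_size < 0:
        return "statistically significant negative"
    if effect_size >= 0.25:
        return "substantively important-positive"
    if effect_size <= -0.25:
        return "substantively important-negative"
    return "indeterminate"

# A non-significant +0.30 still counts toward a "potentially positive" program rating:
print(wwc_outcome_rating(0.30, significant=False))  # → substantively important-positive
```

The asymmetry this creates is striking: an outcome that would not pass a significance test in any journal can still drive a program to the second-highest WWC rating.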

People who follow the WWC and know about “substantively important” may assume that it is a strange rule, but one that is relatively rare in practice. That is not true.

My graduate student, Amanda Inns, has just done an analysis of data from the WWC’s own database, and if you are a big fan of the WWC, this is going to be a shock. Amanda looked at all WWC-accepted reading and math studies. Among these, she found a total of 339 individual outcomes rated “positive” or “potentially positive.” Of these, 155 (46%) reached the “potentially positive” level only because they had effect sizes over +0.25; they were not statistically significant.

Another 36 outcomes were rated “negative” or “potentially negative.” Of these, 26 (72%) were categorized as “potentially negative” only because they had non-significant effect sizes less than -0.25. I’m sure patterns would be similar for subjects other than reading and math.
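The percentages above follow directly from the raw counts. As a quick arithmetic check, using only the counts reported in this post:

```python
# Counts from Amanda Inns' analysis of the WWC database, as reported above
positive_total = 339    # outcomes rated positive/potentially positive
positive_nonsig = 155   # of those, non-significant but with ES > +0.25
negative_total = 36     # outcomes rated negative/potentially negative
negative_nonsig = 26    # of those, non-significant but with ES < -0.25

print(f"{positive_nonsig / positive_total:.0%}")  # → 46%
print(f"{negative_nonsig / negative_total:.0%}")  # → 72%

combined = (positive_nonsig + negative_nonsig) / (positive_total + negative_total)
print(f"{combined:.0%}")  # → 48%, the "almost half" figure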

Put another way, almost half (48%) of outcomes rated positive/potentially positive or negative/potentially negative by the WWC were not statistically significant. As one example of what I’m talking about, consider a program called The Expert Mathematician. It had just one study with only 70 students in 4 classrooms (2 experimental and 2 control). The WWC re-analyzed the data to account for clustering, and the outcomes were nowhere near statistically significant, though they were greater than +0.25. This tiny study, and this study alone, caused The Expert Mathematician to receive the WWC “potentially positive” rating and to be ranked seventh among all middle school math programs. Similarly, Waterford Early Learning received a “potentially positive” rating based on a single tiny study with only 70 kindergarteners in 6 schools. The outcomes ranged from -0.71 to +1.11, and though the mean was more than +0.25, the outcome was far from significant. Yet this study alone put Waterford on the WWC list of proven kindergarten programs.
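To see why a 70-student, 4-classroom study is nearly hopeless statistically, here is a rough sketch of the clustering adjustment. The intraclass correlation (ICC) and effect size used here are my own illustrative assumptions, not figures from the WWC report:

```python
import math

n_students = 70
n_clusters = 4            # 2 experimental, 2 control classrooms
icc = 0.20                # assumed intraclass correlation for classrooms
m = n_students / n_clusters          # average cluster size

deff = 1 + (m - 1) * icc             # design effect from clustering
n_effective = n_students / deff      # effective sample size

d = 0.30                             # an assumed effect above the +0.25 cut
se_d = 2 / math.sqrt(n_effective)    # rough SE of d for two equal groups
t = d / se_d

print(round(n_effective, 1))   # roughly 16 "effective" students, not 70
print(round(t, 2))             # far below the ~2.0 needed for p < .05
```

With only two classrooms per condition, even a quite different ICC or effect size would not change the verdict; such a study simply cannot distinguish program effects from classroom-to-classroom differences.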

I’m not taking any position on whether these particular programs are in fact effective. All I am saying is that these very small studies with non-significant outcomes say absolutely nothing of value about that question.

I’m sure that some of you nerdier readers who have followed me this far are saying to yourselves, “Well, sure, these substantively important studies may not be statistically significant, but they are probably unbiased estimates of the true effect.”

More bad news. They are not. Not even close.

The problem, also revealed in Amanda Inns’ data, is that studies with large effect sizes but no statistical significance tend to have very small sample sizes (otherwise, they would have been significant). Across WWC reading and math studies that used individual-level assignment, median sample sizes were 48, 74, and 86 students for substantively important, significant, and indeterminate (non-significant with ES < +0.25) outcomes, respectively. For cluster studies, the medians were 10, 17, and 33 clusters, respectively. In other words, “substantively important” outcomes came from far smaller studies than other outcomes, often less than half the size.
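The link between sample size and significance is easy to see from the standard error of an effect size. For two independent groups of sizes n1 and n2, a standard approximation is SE(d) ≈ √(1/n1 + 1/n2 + d²/(2(n1+n2))). A minimal sketch applying it to the median sample sizes above, assuming an even split between conditions:

```python
import math

def se_d(n1, n2, d):
    """Approximate standard error of a standardized mean difference (Cohen's d)."""
    return math.sqrt(1 / n1 + 1 / n2 + d ** 2 / (2 * (n1 + n2)))

d = 0.25  # the "substantively important" threshold
for total_n in (48, 74, 86):      # median sample sizes from the WWC data
    n = total_n // 2              # assume equal experimental/control groups
    z = d / se_d(n, n, d)
    print(total_n, round(z, 2))   # every z is well below the ~1.96 cutoff
```

At the median “substantively important” sample size of 48, an effect would need to approach +0.6 before it could reach significance.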

And small-sample studies greatly overstate effect sizes. Among all factors that bias effect sizes, small sample size is the most important (only use of researcher/developer-made measures comes close). So a non-significant positive finding in a small study is not an unbiased point estimate that just needs a larger sample to show its significance. It is probably biased, in a consistent, positive direction. Studies with sample sizes less than 100 have about three times the mean effect sizes of studies with sample sizes over 1000, for example.
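The inflation is not hypothetical; it falls straight out of the selection rule. A small simulation illustrates it. The true effect of +0.10 and the group size of 24 are my assumptions, chosen to mimic a typical small study:

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.10    # assumed true standardized effect
N_PER_GROUP = 24      # a small study: about 48 students in total
N_STUDIES = 10_000

def observed_d():
    """Simulate one study and return its observed effect size."""
    treat = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    ctrl = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    mean_t = sum(treat) / N_PER_GROUP
    mean_c = sum(ctrl) / N_PER_GROUP
    var_t = sum((x - mean_t) ** 2 for x in treat) / (N_PER_GROUP - 1)
    var_c = sum((x - mean_c) ** 2 for x in ctrl) / (N_PER_GROUP - 1)
    return (mean_t - mean_c) / math.sqrt((var_t + var_c) / 2)

results = [observed_d() for _ in range(N_STUDIES)]
selected = [d for d in results if d >= 0.25]   # "substantively important" cut

print(round(sum(results) / len(results), 2))    # near the true +0.10
print(round(sum(selected) / len(selected), 2))  # several times larger
```

Averaging only the outcomes that clear the +0.25 bar builds in a systematic overstatement, which is exactly the bias described above.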

But “substantively important” ratings can throw a monkey wrench into current policy. The ESSA evidence standards require statistically significant effects for all of its top three levels (strong, moderate, and promising). Yet many educational leaders are using the What Works Clearinghouse as a guide to which programs will meet ESSA evidence standards. They may logically assume that if the WWC says a program is effective, then the federal government stands behind it, regardless of what the ESSA evidence standards actually say. Yet in fact, based on the data analyzed by Amanda Inns for reading and math, 46% of the outcomes rated as positive/potentially positive by WWC (taken to correspond to “strong” or “moderate,” respectively, under ESSA evidence standards) are non-significant, and therefore do not qualify under ESSA.

The WWC needs to remove “substantively important” from its ratings as soon as possible, to avoid a collision with ESSA evidence standards, and to avoid misleading educators any further. Doing so would help make the WWC’s impact on ESSA substantive. And important.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Why the What Works Clearinghouse Matters

In 1962, the most important breakthrough in modern medicine took place. It was not a drug, not a device, not a procedure. It did not immediately save a single life, or cure a single person of disease. But it profoundly changed medicine worldwide, and led to the rapid progress in all of medicine that we have come to take for granted.

This medical miracle was a law, passed in the U.S. Congress, called the Kefauver-Harris Drug Act. It required that drugs sold in the U.S. be proven safe and effective, in high-quality randomized experiments. This law was introduced by Senator Estes Kefauver of Tennessee, largely in response to the thalidomide disaster, when a widely used drug was found to produce disastrous birth defects.

From the moment the Act was passed, medical research changed utterly. The number of randomized experiments shot up. There are still errors and debates and misplaced enthusiasm, but the progress that has been made in every area of medicine is undeniable. Today, it is unthinkable in medicine that any drug would be widely sold if it has not been proven to work. Even though Kefauver-Harris itself applies only to the U.S., all advanced countries now have similar laws requiring rigorous evidence of the safety and effectiveness of medicines.

One of the ways the Kefauver-Harris Act made its impact was through reviews and publications of research on the evidence supporting the safety and efficacy of medicines. It’s no good having a law requiring strong evidence if only experts know what the evidence is. Many federal programs have sprung up over the years to review the evidence of what works and communicate it to front-line practitioners.

In education, we are belatedly going through our own evidence revolution. Since 2002, the function of communicating the findings of rigorous research in education has mostly been fulfilled by the What Works Clearinghouse (WWC), a website maintained by the U.S. Department of Education’s Institute of Education Sciences (IES). The existence of the WWC has been enormously beneficial. In addition to reviewing the evidence base for educational programs, the WWC’s standards set norms for research. No funder and no researcher wants to invest resources in a study they know the WWC will not accept.

In 2015, education finally had what may be its own Kefauver-Harris moment. This was the passage by the U.S. Congress of the Every Student Succeeds Act (ESSA), which contains specific definitions of strong, moderate, and promising levels of evidence. For certain purposes, especially for school improvement funding for very low-achieving schools, schools must use programs that meet ESSA evidence standards. For others, schools or districts can receive bonus points on grant applications if they use proven programs.

ESSA raises the stakes for evidence in education, and therefore should have raised the stakes for the WWC. If the government itself now requires or incentivizes the use of proven programs, then shouldn’t the government provide information on what individual programs meet those standards?

Yet several months after ESSA was passed, IES announced that the WWC would not be revised to align itself with ESSA evidence standards. This puts educators, and the government itself, in a bind. What if ESSA and WWC conflict? The ESSA standards are in law, so they must prevail over the WWC. Yet the WWC has a website, and ESSA does not. If WWC standards and ESSA standards were identical, or nearly so, this would not be a problem. But in fact they are very far apart.

Anticipating this situation, my colleagues and I at Johns Hopkins University created a new website, www.evidenceforessa.org. It launched in February 2017, covering elementary and secondary reading and math. We are now adding other subjects and grade levels.

In creating our website, we draw from the WWC every day, and in particular use a new Individual Study Database (ISD) that contains information on all of the evaluations the WWC has ever accepted.

The ISD is a useful tool for us, but it has also made it relatively easy to ask and answer questions about the WWC itself, and the answers are troubling. We’ve found that almost half of the WWC outcomes rated “positive” or “potentially positive” are not even statistically significant. We have found that measures made by researchers or developers produce effect sizes more than three times as large as those from independent measures, yet they are fully accepted by the WWC.

As reported in a recent blog, we’ve discovered that the WWC is very, very slow to add new studies to its main “Find What Works” site. The WWC science topic is not seeking or accepting new studies (“This area is currently inactive and not conducting reviews”). Character education, dropout prevention, and English Language Learners are also inactive. How does this make any sense?

Over the next couple of months, starting in January, I will be releasing a series of blogs sharing what we have been finding out about the WWC. My hope in this is that we can help create a dialogue that will lead the WWC to reconsider many of its core policies and practices. I’m doing this not to compete or conflict with the WWC, but to improve it. If evidence is to have a major role in education policy, government has to help educators and policy makers make good choices. That is what the WWC should be doing, and I still believe it is possible.

The WWC matters, or should matter, because it expresses government’s commitment to evidence, and evidence-based reform. But it can only be a force for good if it is right, timely, accessible, comprehensible, and aligned with other government initiatives. I hope my upcoming blogs will be read in the spirit in which they were written, with hopes of helping the WWC do a better job of communicating evidence to educators eager to help young people succeed in our schools.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

The Mystery of the Chinese Dragon: Why Isn’t the WWC Up to Date?

As evidence becomes more important in educational practice and policy, it is increasingly critical that it be up-to-date. This sounds obvious. Of course we’d prefer evidence from recent studies, which are more likely to have been done under social and political conditions like those that exist today, using standards like those prevailing today.

However, there are reasons that up-to-date evidence is especially important in today’s policy environment. Up-to-date evidence is critical because it is far more likely than earlier research to meet very high methodological standards. Because of substantial investments by the U.S. Department of Education and others, there has been an outpouring of top-quality, randomized, usually third-party evaluations of programs for all subjects and grade levels, published from 2012 to the present.

The reason this matters in practice is that, to satisfy ESSA evidence standards, many educators are using the What Works Clearinghouse (WWC) to identify proven programs. But the WWC is very slow in reviewing studies, and therefore does not contain many of the latest, and generally highest-quality, studies.

The graph below illustrates the problem. It compares all secondary literacy (grades 6-12) studies reported by the WWC as of fall, 2017, on the orange line. The blue line represents a review of research on the same topic by Baye et al. (2017; see www.bestevidence.org). I think the graph resembles a Chinese dragon, with its jaws wide open and a long tail. The sort of dragon you see in Chinese New Year’s parades.

[Graph: number of accepted studies of secondary literacy programs (grades 6-12), by publication year, Baye et al. (2017) vs. the WWC]

What the graph shows is that while the numbers of studies published up to 2009 were about equal for Baye et al. and the WWC, they diverged sharply starting in 2010 (thus the huge open jaws). Baye et al. reported on 58 studies published from 2010 to 2017; the WWC reported on only 6, and none at all from 2016 or 2017.

The same patterns are apparent throughout the WWC. Across every topic and grade level, the WWC has only 7 accepted studies from 2014, 7 from 2015, zero from 2016, and zero from 2017.

It is likely that every one of the Baye et al. studies would meet WWC standards. Yet the WWC has just not gotten to them.

It’s important to note that the What Works Clearinghouse is plenty busy. Recent studies are often included in Quick Reviews, Single Study Reviews, Grant Competition Reports, and Practice Guides. However, an educator going to the WWC for guidance on what works will go to Find What Works and click on one of the 12 topic areas, which lists programs. They may then filter their search and go to individual intervention reports.

These intervention reports are not integrated with Quick Reviews, Single Study Reviews, Grant Competition Reports, or Practice Guides, so the user has no easy way to find out about more recent evaluations, if they in fact appear anywhere in any of these reports. Even if users did somehow find additional information on a program in one of these supplemental reports, the information may be incomplete. In many cases, the supplemental report only notes whether a study meets WWC standards, but does not provide any information about what the outcome was.

The slow pace of the WWC reviews is problematic for many reasons. In addition to missing out on the strongest and most recent studies, the WWC does not register changes in the evidence base for programs already in its database. New programs may not appear at all, leaving readers to wonder why.

Any website developer knows that if users go to a website and are unable to find what they expect to find, they are unlikely to come back. The WWC is a website, and it cannot expect many users to check back every few months to see if programs that interest them, which they know to exist, have been added lately.

In the context of the ESSA evidence standards, the slow pace of the WWC is particularly disturbing. Although the WWC has chosen not to align itself with ESSA standards, many educators use the WWC as a guide to which programs are likely to meet ESSA standards. Failing to keep the WWC up to date may convince many users seeking ESSA information that there are few programs meeting either WWC or ESSA standards.

Educators need accurate, up-to-date information to make informed choices for their students. I hope the WWC will move quickly to provide its readers with essential, useful data on today’s evidence supporting today’s programs. It’s going to have to catch up with the Chinese dragon, or be left to watch the parade going by.