What Kinds of Studies Are Likely to Replicate?

In the hard sciences, there is a publication called the Journal of Irreproducible Results.  It really has nothing to do with replication of experiments, but is a humor journal by and for scientists.  The reason I bring it up is that to chemists and biologists and astronomers and physicists, for example, an inability to replicate an experiment is a sure indication that the original experiment was wrong.  To the scientific mind, a Journal of Irreproducible Results is inherently funny, because it is a journal of nonsense.

Replication, the ability to repeat an experiment and get a similar result, is the hallmark of a mature science. Sad to say, replication is rare in educational research, which says a lot about our immaturity as a science. For example, in the What Works Clearinghouse, about half of programs across all topics are represented by a single evaluation. When there are two or more, the results are often very different. Relatively recent funding initiatives, especially studies supported by Investing in Innovation (i3) and the Institute of Education Sciences (IES), and targeted initiatives such as Striving Readers (secondary reading) and the Preschool Curriculum Evaluation Research (PCER) program, have added a great deal in this regard. These initiatives have funded many large-scale, randomized, very high-quality studies of all sorts of programs, and many of those studies are replications themselves or provide a good basis for later replications. As my colleagues and I have done many reviews of research in every area of education, pre-kindergarten to grade 12 (see www.bestevidence.org), we have gained a good intuition about what kinds of studies are likely to replicate and what kinds are less likely.

First, let me define in more detail what I mean by “replication.” There is no value in replicating biased studies, which may well consistently find the same biased results, as when both the original studies and the replication studies used the same researcher- or developer-made outcome measures, slanted toward content that the experimental group experienced but the control group did not (see http://www.tandfonline.com/doi/abs/10.1080/19345747.2011.558986).

Instead, I’d consider a successful replication one that shows positive outcomes both in the original studies and in at least one large-scale, rigorous replication. One obvious way to increase the chances that a program producing a positive outcome in one or more initial studies will succeed in such a rigorous replication evaluation is to use a similar, equally rigorous evaluation design in the first place. I think a lot of treatments that fail to replicate are ones that used weak methods in the original studies. In particular, small studies tend to produce greatly inflated effect sizes (see http://www.bestevidence.org/methods/methods.html), which are unlikely to replicate in larger evaluations.
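To see why small studies produce inflated effect sizes, here is a minimal simulation sketch. It is my own illustration, not an analysis from any of the reviews mentioned above, and the true effect of 0.15 standard deviations and the sample sizes are assumed values. The logic: among many small studies of a program with a modest true effect, only the ones with unusually large (and therefore statistically significant) estimates tend to get noticed, so the effect sizes that survive are far larger than the truth, and a large replication then comes in much lower.

```python
# Illustrative simulation (my own sketch, not data from any of the reviews
# cited above): why small studies tend to report inflated effect sizes.
# The "true" effect of 0.15 SD and the sample sizes are assumed values.
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.15   # assumed true effect size (in standard deviation units)

def one_study(n_per_group):
    """Simulate one two-group study; return its effect size and significance."""
    treat = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treat.mean() - control.mean()) / pooled_sd
    se = np.sqrt(2.0 / n_per_group)        # approximate standard error of d
    return d, abs(d / se) > 1.96           # two-tailed test at p < .05

for n in (20, 60, 500):                    # small pilots vs. a large replication
    studies = [one_study(n) for _ in range(5000)]
    significant = [d for d, sig in studies if sig]
    print(f"n per group = {n:3d}: "
          f"mean d, all studies = {np.mean([d for d, _ in studies]):+.2f}; "
          f"mean d, significant studies only = {np.mean(significant):+.2f}")
```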

Another factor likely to contribute to replicability is use in the earlier studies of methods or conditions that can be repeated in later studies, or in schools in general. For example, providing teachers with specific manuals, videos demonstrating the methods, and specific student materials all add to the chances that a successful program can be successfully replicated. Avoiding unusual pilot sites (such as schools known to have outstanding principals or staff) may contribute to replication, as these conditions are unlikely to be found in larger-scale studies. Having experimenters or their colleagues or graduate students extensively involved in the early studies diminishes replicability, of course, because those conditions will not exist in replications.

Replications are entirely possible. I wish there were a lot more of them in our field. Showing that a program is effective in two rigorous evaluations is far more convincing than showing it in just one. As evidence becomes more and more important, I hope and expect that replications, perhaps carried out by states or districts, will become more common.

The Journal of Irreproducible Results is fun, but it isn’t science. I’d love to see a Journal of Replications in Education to tell us what really works for kids.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Higher Ponytails (And Researcher-Made Measures)

Some time ago, I coached my daughter’s fifth grade basketball team. I knew next to nothing about basketball (my sport was…well, chess), but fortunately my research assistant, Holly Roback, eagerly volunteered. She’d played basketball in college, so our girls got outstanding coaching. However, they got whammed. My assistant coach explained it after another disastrous game: “The other team’s ponytails were just higher than ours.” Basically, our girls were terrific at ball handling and free throws, but they came up short in the height department.

Now imagine that in addition to being our team’s coach I was also the league’s commissioner. Imagine that I changed the rules. From now on, lay-ups and jump shots were abolished, and the ball had to be passed three times from player to player before a team could score.

My new rules could be fairly and consistently enforced, but their entire effect would be to diminish the importance of height and enhance the importance of ball handling and set shots.

Of course, I could never get away with this. Every fifth grader, not to mention their parents and coaches, would immediately understand that my rule changes unfairly favored my own team, and disadvantaged theirs (at least the ones with the higher ponytails).

This blog is not about basketball, of course. It is about researcher-made measures or developer-made measures. (I’m using “researcher-made” to refer to both). I’ve been writing a lot about such measures in various blogs on the What Works Clearinghouse (https://wordpress.com/post/robertslavinsblog.wordpress.com/795 and https://wordpress.com/post/robertslavinsblog.wordpress.com/792).

The reason I’m writing again about this topic is that I’ve gotten some criticism for my criticism of researcher-made measures, and I wanted to respond to these concerns.

First, here is my case, simply put. Measures made by researchers or developers are likely to favor whatever content was taught in the experimental group. I’m not in any way suggesting that researchers or developers are deliberately making measures to favor the experimental group. However, it usually works out that way. If the program teaches unusual content, no matter how laudable that content may be, and the control group never saw that content, then the potential for bias is obvious. If the experimental group was taught on computers and the control group was not, and the test was given on a computer, the bias is obvious. If the experimental treatment emphasized certain vocabulary, and the control group did not, then a test of those particular words has obvious bias. If a math program spends a lot of time teaching students to do mental rotations of shapes, and the control treatment never did such exercises, a test that includes mental rotations is obviously biased. In our BEE full-scale reviews of pre-K to 12 reading, math, and science programs, available at www.bestevidence.org, we have long excluded such measures, calling them “treatment-inherent.” The WWC calls such measures “over-aligned,” and says it excludes them.

However, the problem turns out to be much deeper. In a 2016 article in Educational Researcher, Alan Cheung and I tested outcomes from all 645 studies in the BEE achievement reviews, and found that even after excluding treatment-inherent measures, measures made by researchers or developers had effect sizes that were far higher than those for independent measures, by a ratio of two to one (effect sizes = +0.40 for researcher-made measures, +0.20 for independent measures). Graduate student Marta Pellegrini more recently analyzed data from all WWC reading and math studies. The ratio among WWC studies was 2.7 to 1 (effect sizes = +0.52 for researcher-made measures, +0.19 for independent ones). Again, the WWC was supposed to have already removed overaligned measures, all of which (I’d assume) were also researcher-made.

Some of my critics argue that because the WWC already excludes overaligned measures, it has already taken care of the problem. But if that were true, there would not be a ratio of 2.7 to 1 in effect sizes between researcher-made and independent measures, after removing measures considered by the WWC to be overaligned.

Other critics express concern that my analyses (of bias due to researcher-made measures) have only involved reading, math, and science measures, and the situation might be different for measures of social-emotional outcomes, for example, where appropriate measures may not exist.

I will admit that in areas other than achievement the issues are different, and I’ve written about them. So I’ll be happy to limit the simple version of “no researcher-made measures” to achievement measures. The problems of measuring social-emotional outcomes fairly are far more complex, and a topic for another day.

Other critics express concern that even on achievement measures, there are situations in which appropriate measures don’t exist. That may be so, but in policy-oriented reviews such as the WWC or Evidence for ESSA, it’s hard to imagine that there would be no existing measures of reading, writing, math, science, or other achievement outcomes. An achievement objective so rarified that it has never been measured is probably not particularly relevant for policy or practice.

The WWC is not an academic journal, and it is not primarily intended for academics. If a researcher needs to develop a new measure to test a question of theoretical interest, they should do so by all means. But the findings from that measure should not be accepted or reported by the WWC, even if a journal might accept them.

Another version of this criticism is that researchers often have a strong argument that the program they are evaluating emphasizes standards that should be taught to all students, but are not. Therefore, enhanced performance on a (researcher-made) measure of the better standard is prima facie evidence of a positive program impact. This argument confuses the purpose of experimental evaluations with the purpose of standards. Standards exist to express what we want students to know and be able to do. Arguing for a given standard involves considerations of the needs of the economy, standards of other states or countries, norms of the profession, technological or social developments, and so on—but not comparisons of experimental groups scoring well on tests of a new proposed standard to control groups never exposed to content relating to that standard. It’s just not fair.

To get back to basketball, I could have argued that the rules should be changed to emphasize ball handling and reduce the importance of height. Perhaps this would be a good idea, for all I know. But what I could not do was change the rules to benefit my team. In the same way, researchers cannot make their own measures and then celebrate higher scores on them as indicating higher or better standards. As any fifth grader could tell you, advocating for better rules is fine, but changing the rules in the middle of the season is wrong.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Swallowing Camels

The New Testament contains a wonderful quotation that I use often, because it unfortunately applies to so much of educational research:

Ye blind guides, which strain at a gnat, and swallow a camel (Matthew 23:24).

The point of the quotation, of course, is that we are often fastidious about minor (research) sins while readily accepting major ones.

In educational research, “swallowing camels” applies to studies accepted in top journals or by the What Works Clearinghouse (WWC) despite substantial flaws that lead to major bias, such as use of measures slanted toward the experimental group, or measures administered and scored by the teachers who implemented the treatment. “Straining at gnats” applies to concerns that, while worth attending to, have little potential for bias, yet are often reasons for rejection by journals or downgrading by the WWC. For example, our profession considers p < .05 to indicate statistical significance, while p = .051 should never be mentioned in polite company.

As my faithful readers know, I have written a series of blogs on problems with policies of the What Works Clearinghouse, such as acceptance of researcher/developer-made measures, failure to weight by sample size, use of “substantively important but not statistically significant” as a qualifying criterion, and several others. However, in this blog, I wanted to share with you some of the very worst, most egregious examples of studies that should never have seen the light of day, yet were accepted by the WWC and remain in it to this day. Accepting the WWC as gospel means swallowing these enormous and ugly camels, and I wanted to make sure that those who use the WWC at least think before they gulp.

Camel #1: DaisyQuest. DaisyQuest is a computer-based program for teaching phonological awareness to children in pre-K to Grade 1. The WWC gives DaisyQuest its highest rating, “positive,” for alphabetics, and ranks it eighth among literacy programs for grades pre-K to 1.

There were four studies of DaisyQuest accepted by the WWC. In each, half of the students received DaisyQuest in groups of 3-4, working with an experimenter. In two of the studies, control students never had their hands on a computer before they took the final tests on a computer. In the other two, control students used math software, so they at least got some experience with computers. The outcome tests were all made by the experimenters and all were closely aligned with the content of the software, with the exception of two Woodcock scales used in one of the studies. All studies used a measure called “Undersea Challenge” that closely resembled the DaisyQuest game format and was taken on the computer. All four studies also used the other researcher-made measures. None of the Woodcock measures showed statistically significant differences, but the researcher-made measures, especially Undersea Challenge and other specific tests of phonemic awareness, segmenting, and blending, did show substantial significant differences.

Recall that in the mid- to late 1990s, when the studies were done, students in preschool and kindergarten were unlikely to be getting any systematic teaching of phonemic awareness. So there is no reason to expect the control students to be learning anything that was tested on the posttests, and it is not surprising that effect sizes averaged +0.62. In the two studies in which control students had never touched a computer, effect sizes were +0.90 and +0.89, respectively.

Camel #2: Brady (1990) study of Reciprocal Teaching

Reciprocal Teaching is a program that teaches students comprehension skills, mostly using science and social studies texts. A 1990 dissertation by P.L. Brady evaluated Reciprocal Teaching in one school, in grades 5-8. The study involved only 12 students, randomly assigned to Reciprocal Teaching or control conditions. The one experimental class was taught by…wait for it…P.L. Brady. The measures included science, social studies, and daily comprehension tests related to the content taught in Reciprocal Teaching but not in the control group. They were created and scored by…(you guessed it) P.L. Brady. There were also two Gates-MacGinitie scales, but they had effect sizes much smaller than the researcher-made (and -scored) tests. The Brady study met WWC standards for a “potentially positive” rating because its mean effect size exceeded +0.25, which the WWC counts as “substantively important” even though the differences were not statistically significant.

Camel #3: Schwartz (2005) study of Reading Recovery

Reading Recovery is a one-to-one tutoring program for first graders that has a strong tradition of rigorous research, including a recent large-scale study by May et al. (2016). However, one of the earlier studies of Reading Recovery, by Schwartz (2005), is hard to swallow, so to speak.

In this study, 47 Reading Recovery (RR) teachers across 14 states were asked by e-mail to choose two very similar, low-achieving first graders at the beginning of the year. One student was randomly assigned to receive RR, and one was assigned to the control group, to receive RR in the second semester.

Both students were pre- and posttested on the Observation Survey, a set of measures made by Marie Clay, the developer of RR. In addition, students were tested on Degrees of Reading Power, a standardized test.

The problems with this study mostly have to do with the fact that the teachers who administered pre- and posttests were the very same teachers who provided the tutoring. No researcher or independent tester ever visited the schools. Teachers obviously knew the child they personally tutored. I’m sure the teachers were honest and wanted to be accurate. However, they would have had a strong motivation to see that the outcomes looked good, because they could be seen as evaluations of their own tutoring, and could have had consequences for continuation of the program in their schools.

Most Observation Survey scales involve difficult judgments, so it’s easy to see how teachers’ ratings could be affected by their belief in Reading Recovery.

Further, ten of the 47 teachers never submitted any data. This is a very high rate of attrition within a single school year (21%). Could some teachers, fully aware of their students’ less-than-expected scores, have made some excuse and withheld their data? We’ll never know.

Also recall that most of the tests used in this study were from the Observation Survey made by Clay, which had effect sizes ranging up to +2.49 (!!!). However, on the independent Degrees of Reading Power, the non-significant effect size was only +0.14.

It is important to note that across these “camel” studies, all except Brady (1990) were published. So it was not only the WWC that was taken in.

These “camel” studies are far from unique, and they may not even be the very worst to be swallowed whole by the WWC. But they do give readers an idea of the depth of the problem. No researcher I know of would knowingly accept an experiment in which the control group had never used the equipment on which they were to be posttested, or one with 12 students in which the 6 experimentals were taught by the experimenter, or in which the teachers who tutored students also individually administered the posttests to experimentals and controls. Yet somehow, WWC standards and procedures led the WWC to accept these studies. Swallowing these camels should have caused the WWC a stomach ache of biblical proportions.

 

References

Brady, P. L. (1990). Improving the reading comprehension of middle school students through reciprocal teaching and semantic mapping and strategies. Dissertation Abstracts International, 52 (03A), 230-860.

May, H., Sirinides, P., Gray, A., & Goldsworthy, H. (2016). Reading Recovery: An evaluation of the four-year i3 scale-up. Newark, DE: University of Delaware, Center for Research in Education and Social Policy.

Schwartz, R. M. (2005). Literacy learning of at-risk first grade students in the Reading Recovery early intervention. Journal of Educational Psychology, 97 (2), 257-267.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

…But It Was The Very Best Butter! How Tests Can Be Reliable, Valid, and Worthless

I was recently having a conversation with a very well-informed, statistically savvy, and experienced researcher, who was upset that we do not accept researcher- or developer-made measures for our Evidence for ESSA website (www.evidenceforessa.org). “But what if a test is reliable and valid,” she said, “Why shouldn’t it qualify?”

I inwardly sighed. I get this question a lot. So I thought I’d write a blog on the topic, so at least the people who read it, and perhaps their friends and colleagues, will know the answer.

Before I get into the psychometric stuff, I should say in plain English what is going on here, and why it matters. Evidence for ESSA excludes researcher- and developer-made measures because they enormously overstate effect sizes. Marta Pellegrini, at the University of Florence in Italy, recently analyzed data from every reading and math study accepted for review by the What Works Clearinghouse (WWC). She compared outcomes on tests made up by researchers or developers to those that were independent. The average effect sizes across hundreds of studies were +0.52 for researcher/developer-made measures, and +0.19 for independent measures. Almost three to one. We have also made similar comparisons within the very same studies, and the differences in effect sizes averaged 0.48 in reading and 0.45 in math.

Wow.

How could there be such a huge difference? The answer is that researchers’ and developers’ tests often focus on what they knew would be taught in the experimental group but not the control group. A vocabulary experiment might use a test that contains the specific words emphasized in the program. A science experiment might use a test that emphasizes the specific concepts taught in the experimental units but not in the control group. A program using technology might test students on a computer, which the control group did not experience. Researchers and developers may give tests that use response formats like those used in the experimental materials, but not those used in control classes.

Very often, researchers or developers have a strong opinion about what students should be learning in their subject, and they make a test that represents to them what all students should know, in an ideal world. However, if only the experimental group experienced content aligned with that curricular philosophy, then they have a huge unfair advantage over the control group.

So how can it be that using even the most reliable and valid tests doesn’t solve this problem?

In Alice in Wonderland, the March Hare tries to fix the Mad Hatter’s watch by putting butter in the works. This does not help at all, and the March Hare protests, “But it was the very best butter!”

The point of the “very best butter” conversation in Alice in Wonderland is that something can be excellent for one purpose (e.g., spreading on bread), but worse than useless for another (e.g., fixing watches).

Returning to assessment, a test made by a researcher or developer might be ideal for determining whether students are making progress in the intended curriculum, but worthless for comparing experimental to control students.

Reliability (the ability of a test to give the same answer each time it is given) has nothing at all to do with the situation. Validity comes into play where the rubber hits the road (or the butter hits the watch).

Validity can mean many things. As reported in test manuals, it usually just means that a test’s scores correlate with scores on other tests intended to measure the same thing (convergent validity), or possibly that it correlates more strongly with things it should correlate with than with things it shouldn’t, as when a reading test correlates better with other reading tests than with math tests (discriminant validity). However, no test manual ever addresses validity for use as an outcome measure in an experiment. For a test to be valid for that use, it must measure content being pursued equally in experimental and control classes, not content biased toward the experimental curriculum.

Any test that reports very high reliability and validity in its test manual or research report may be admirable for many purposes, but like “the very best butter” for fixing watches, a researcher- or developer-made measure is worse than worthless for evaluating experimental programs, no matter how high it is in reliability and validity.
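To make this concrete, here is a small simulation sketch of the “reliable, valid, and worthless” situation. All of the numbers in it are assumptions chosen for illustration (a true program effect of 0.15 SD, an “alignment bonus” of 0.6 SD for students who were taught the tested content, and modest measurement error); it is not an analysis of any real test or program. The researcher-made test comes out highly reliable and strongly correlated with an independent test, yet it still greatly overstates the program’s effect, because only the experimental group was taught the content it samples.

```python
# Illustrative sketch (assumed numbers, not real data): a researcher-made test
# can be highly reliable and correlate strongly with an independent test, yet
# still exaggerate the experimental-control difference because it samples
# content that only the experimental group was taught.
import numpy as np

rng = np.random.default_rng(1)
N = 2000                                   # students per group (hypothetical)

ability_treat = rng.normal(0.15, 1.0, N)   # assumed true program effect: 0.15 SD
ability_ctrl = rng.normal(0.00, 1.0, N)

def researcher_test(ability, taught_the_content):
    """Ability + a bonus for having seen the tested content + measurement error."""
    bonus = 0.6 if taught_the_content else 0.0   # assumed over-alignment bonus
    return ability + bonus + rng.normal(0, 0.3, len(ability))

def independent_test(ability):
    """Standardized test: ability + measurement error, no content bonus."""
    return ability + rng.normal(0, 0.3, len(ability))

def cohens_d(a, b):
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Reliability (two parallel forms) and convergent validity, in the control group
form1 = researcher_test(ability_ctrl, False)
form2 = researcher_test(ability_ctrl, False)
print("parallel-forms reliability:", round(np.corrcoef(form1, form2)[0, 1], 2))
print("correlation with independent test:",
      round(np.corrcoef(form1, independent_test(ability_ctrl))[0, 1], 2))

# Apparent program effects on each kind of measure
print("effect size on researcher-made test:",
      round(cohens_d(researcher_test(ability_treat, True),
                     researcher_test(ability_ctrl, False)), 2))
print("effect size on independent test:",
      round(cohens_d(independent_test(ability_treat),
                     independent_test(ability_ctrl)), 2))
```

Both tests look like “good tests” by the standards of a test manual; only one of them answers the question an evaluation is supposed to answer.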

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Headstands and Evidence: On Over-Teachable Measures

Working on Evidence for ESSA has been a wonderful educational experience for those of us who are doing it. One problem we have had to confront is that the Every Student Succeeds Act (ESSA) evidence standards allow evidence supporting a given program to be considered “strong,” “moderate,” or “promising” based on positive effects in a single well-conducted study. While these standards are a major step forward in general, this definition allows the awful possibility that programs could be validated based on flawed studies, and specifically based on outcomes on measures that are not valid for this purpose. For this reason, we’ve excluded measures that are too closely aligned with the experimental program, such as measures made by developers or researchers.

In the course of doing the research reviews behind Evidence for ESSA, we’ve realized that there is a special category of measures that is also problematic. These are measures that are not made by researchers or developers, but are easy to teach to. A good example from the old days is verbal analogies on the Scholastic Aptitude Test (e.g., Big : Small :: Wet : _?_). (Answer: dry)

These are no longer used on the SAT, presumably because the format is unusual and can be taught, giving an advantage to students who are coached on the SAT rather than to ones who actually know content useful in post-secondary education.

One key example of over-teachable measures is the one-minute test, such as DIBELS. These tests are popular because they are inexpensive and brief, and they may be useful as benchmark tests. But as measures for highly consequential purposes, such as anything relating to student or teacher accountability, or program evaluation for the ESSA evidence standards, one-minute tests are not appropriate, even if they correlate well with longer, well-validated measures. The problem is that such measures are overly teachable. In the case of DIBELS, for example, students can be drilled to read as fast as they can, ignoring punctuation and meaning, so that they correctly pronounce as many words as possible in a minute.

Another good example is elision tests in early reading (how would you say “bird” without the /b/?). Elision tests focus on an unusual situation that children are unlikely to have seen unless their teacher is specifically preparing them for an elision test.

Using over-teachable measures is a bit like holding teachers or schools or programs accountable for their children’s ability to do headstands. For PE, measures of time taken to run a 100-yard dash, ability to lift weights, or percent of successful free throws in basketball would be legitimate, because these are normal parts of a PE program, and they assess general strength, speed, and muscular control. But headstands are relatively unusual as a focus in PE, and are easily taught and practiced. A PE program should not be able to meet ESSA evidence standards based on students’ ability to do headstands, because this is not a crucial skill and because the control group would not be likely to spend much time on it.

Over-teachable measures have an odd but interesting aspect. When they are first introduced, there is nothing wrong with them. They may be reliable, valid, correlated with other measures, and show other solid psychometric properties. But the day the very same measure is used to evaluate students, teachers, schools, or programs, the measure’s over-teachability may come into play, and its validity for this particular purpose may no longer be acceptable.

There’s nothing wrong with headstands per se. They have long been a part of gymnastics. But when the ability to do headstands becomes a major component of overall PE evaluation, it turns the whole concept… well, it turns the whole concept of evaluation on its head.  The same can be said about other over-teachable measures.

This blog is sponsored by the Laura and John Arnold Foundation

Reviewing Social and Emotional Learning for ESSA: MOOSES, not Parrots

This blog was co-authored by Elizabeth Kim

I’m delighted to see all the interest lately in social and emotional skills. These range widely: kindness and empathy, the ability to delay gratification, grit, the belief that effort is more important than intelligence, and the avoidance of bullying, violence, and absenteeism. Social and emotional learning (SEL) has taken on even more importance as the Every Student Succeeds Act (ESSA) allows states to add to their usual reading and math accountability measures, and some are adding measures of SEL. This makes it particularly important to have rigorous research on this topic.

I’ve long been interested in social-emotional development, but I have just started working with a student, Liz Kim, on a systematic review of SEL research. Actually, Liz is doing all the work. Part of the purpose of the SEL review is to add a section on this topic to Evidence for ESSA. In conceptualizing our review, we immediately ran into a problem. While researchers studying achievement mostly use tests, essays, products, and other fairly objective indicators, those studying social-emotional skills and behaviors use a wide variety of measures, many of which are far less objective. For example, studies of social-emotional skills make much use of student self-report, or ratings of students’ behaviors by the teachers who administered the treatment. Researchers in this field are well aware of the importance of objectivity, but they report more and less objective measures within the same studies depending on their research purposes. For academic purposes this is perfectly fine. SEL researchers and the readers of their reports are of course free to emphasize whichever measures they find most meaningful.

The problem arises when SEL measures are used in reviews of research to determine which programs and practices meet the ESSA standards for strong, moderate, or promising levels of evidence. Under ESSA, selecting programs meeting strong, moderate, or promising criteria can have consequences for schools in terms of grant funding, so it could be argued that more objective measures should be required.

In our reviews of K-12 reading and math programs for Evidence for ESSA, we took a hard line on objectivity. For example, we do not accept outcome measures made by the researchers or developers, or those that assess skills taught in the experimental group but not the control group. The reason for this is that effect sizes for such studies are substantially inflated in comparison to independent measures. We also do not accept achievement measures administered individually to students by the students’ own teachers, who implemented the experimental treatment, for the same reason. In the case of achievement studies that use independent measures, at least as one of several measures, we can usually exclude non-independent measures without excluding whole studies.

Now consider measures in studies of social-emotional skills. They are often dependent on behavior ratings by teachers or self-reports by students. For example, in some studies students are taught to recognize emotions in drawings or photos of people. Recognizing emotions accurately may correlate with valuable social-emotional skills, but an experiment whose only outcome is the ability to recognize emotions could just be teaching students to parrot back answers on a task of unknown practical value in life. Many SEL measures used in studies with children are behavior ratings by the very teachers who delivered the treatment. Teacher ratings are sure to be biased (on average) by the normal human desire to look good (called social desirability bias). This is particularly problematic when teachers are trained to use a strategy to improve a particular outcome. For example, some programs are designed to improve students’ empathy. That’s a worthy goal, but empathy is hard to identify in practice. So teachers taught to identify behaviors thought to represent empathy are sure to see those behaviors in their children a lot more than teachers in the control group do, not necessarily because those children are in fact more empathetic, but because teachers and the children themselves may have learned a new vocabulary to recognize, describe, and exhibit empathy. This could be seen as another example of “parroting,” which means that subjects or involved raters (such as teachers or parents) have learned what to say or how to act under observation at the time of rating, instead of truly changing behaviors or attitudes.

For consequential purposes, such as reviews for ESSA evidence standards, it makes sense to ask for independently verified indicators demonstrating that students in an experimental group can and do engage in behaviors that are likely to help them in life. Having independent observers blind to treatments observe students in class or carry out structured tasks indicating empathetic or prosocial or cooperative behavior, for example, is very different from asking them on a questionnaire whether they engage in those behaviors or have beliefs in line with those skills. The problem is not only that attitudes and behaviors are not the same thing, but worse, that participants in the experimental group are likely to respond on a questionnaire in a way influenced by what they have just been taught. Students taught that bullying is bad will probably respond as the experimenters hope on a questionnaire. But will they actually behave differently with regard to bullying? Perhaps, but it is also quite possible that they are only parroting what they were just taught.

To determine ESSA ratings, we’d emphasize indicators we call MOOSES: Measurable, Observable, Objective Social-Emotional Skills. MOOSES are quantifiable measures that can be observed in the wild (i.e., the school) objectively, ideally based on routinely collected data unlikely to change just because staff or students know there is an experiment going on. For example, reports of disciplinary referrals, suspensions, and expulsions would be indicators of one type of social-emotional learning. Reports of fighting or bullying incidents could be MOOSES indicators.

Another category of MOOSES indicators would include behavioral observations by observers who are blind to experimental/control conditions, or observations of students in structured situations. Intergroup relations could be measured by watching who students play with during recess, for example. Or, if a SEL program focuses on building cooperative behavior, students could be placed in a cooperative activity and observed as they interact and solve problems together.

Self-report measures might serve as MOOSES indicators if they ask about behaviors or attitudes independent of the treatment students received. For instance, if students received a mindfulness intervention in which they were taught to focus on and regulate their own thoughts and feelings, then measures of self-reported or peer-reported prosocial behaviors or attitudes may not be an instance of parroting, because prosocial behavior was not the content of the intervention.

Social-emotional learning is clearly taking on an increasingly important role in school practice, and it is becoming more important in evidence-based reform as well. But reviewers will have to use conservative and rigorous approaches to evaluating SEL outcomes, as we do in evaluating achievement outcomes, if we want to ensure that SEL can be meaningfully incorporated in the ESSA evidence framework. We admit that this will be difficult and that we don’t have all the answers, but we also maintain that there should be some effort to focus on objective measures in reviewing SEL outcomes for ESSA.

This blog is sponsored by the Laura and John Arnold Foundation

The Rapid Advance of Rigorous Research

My colleagues and I have been reviewing a lot of research lately, as you may have noticed in recent blogs on our reviews of research on secondary reading and our work on our web site, Evidence for ESSA, which summarizes research on all of elementary and secondary reading and math according to ESSA evidence standards.  In the course of this work, I’ve noticed some interesting trends, with truly revolutionary implications.

The first is that reports of rigorous research are appearing very, very fast.  In our secondary reading review, there were 64 studies that met our very stringent standards.  Fifty-five of these used random assignment, and even the nine quasi-experiments all specified assignment to experimental or control conditions in advance.  We eliminated all researcher-made measures.  But the most interesting fact is that of the 64 studies, 19 had publication or report dates of 2015 or 2016.  Fifty-one have appeared since 2011.  This surge of recent publications on rigorous studies was greatly helped by the publication of many studies funded by the federal Striving Readers program, but Striving Readers was not the only factor.  Seven of the studies were from England, funded by the Education Endowment Foundation (EEF).  Others were funded by the Institute of Education Sciences at the U.S. Department of Education (IES), the federal Investing in Innovation (i3) program, and many publishers, who are increasingly realizing that the future of education belongs to those with evidence of effectiveness.  With respect to i3 and EEF, we are only at the front edge of seeing the fruits of these substantial investments, as there are many more studies in the pipeline right now, adding to the continuing build-up in the number and quality of studies started by IES and other funders.  Looking more broadly at all subjects and grade levels, there is an unmistakable conclusion: high-quality research on practical programs in elementary and secondary education is arriving in amounts we never could have imagined just a few years ago.

Another unavoidable conclusion from the flood of rigorous research is that in large-scale randomized experiments, effect sizes are modest.  In a recent review I did with my colleague Alan Cheung, we found that the mean effect size for large, randomized experiments across all of elementary and secondary reading, math, and science is only +0.13, much smaller than effect sizes from smaller studies and from quasi-experiments.  However, unlike small and quasi-experimental studies, rigorous experiments using standardized outcome measures replicate.  These effect sizes may not be enormous, but you can take them to the bank.

In our secondary reading review, we found an extraordinary example of this. The University of Kansas has an array of programs for struggling readers in middle and high schools, collectively called the Strategic Instruction Model, or SIM.  In the Striving Readers grants, several states and districts used methods based on SIM.  In all, we found six large, randomized experiments, and one large quasi-experiment (which matched experimental and control groups).  The effect sizes across the seven varied from a low of 0.00 to +0.15, but most clustered closely around the weighted mean of +0.09.  This consistency was remarkable given that the contexts varied considerably.  Some studies were in middle schools, some in high schools, some in both.  Some studies gave students an extra period of reading each day, some did not.  Some studies went for multiple years, some did not.  Settings included inner-city and rural locations, and all parts of the U.S.
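For readers who want to see the arithmetic behind a figure like the +0.09 weighted mean, here is a minimal sketch of a sample-size-weighted mean effect size. The effect sizes and sample sizes below are placeholders, not the actual SIM study results; inverse-variance weighting is a common alternative to weighting by sample size.

```python
# Minimal sketch of a sample-size-weighted mean effect size.
# The numbers below are placeholders, NOT the actual SIM study results.
effect_sizes = [0.00, 0.06, 0.08, 0.09, 0.10, 0.12, 0.15]   # hypothetical d's
sample_sizes = [1200, 900, 2500, 3100, 800, 1500, 600]      # hypothetical N's

weighted_mean = (sum(d * n for d, n in zip(effect_sizes, sample_sizes))
                 / sum(sample_sizes))
print(f"weighted mean effect size = {weighted_mean:+.2f}")   # -> +0.08 here
```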

One might well argue that the SIM findings are depressing, because the effect sizes were quite modest (though usually statistically significant).  This may be true, but once we can replicate meaningful impacts, we can also start to make solid improvements.  Replication is the hallmark of a mature science, and we are getting there.  If we know how to replicate our findings, then the developers of SIM and many other programs can create better and better programs over time with confidence that once designed and thoughtfully implemented, better programs will reliably produce better outcomes, as measured in large, randomized experiments.  This means a lot.

Of course, large, randomized studies may also be reliable in telling us what does not work, or does not work yet.  When researchers get zero impacts and then seek funding to do the same treatment again, hoping for better luck, they and their funders are sure to be disappointed.  Researchers who find zero impacts may learn a lot, which may help them create something new that will, in fact, move the needle.  But they have to then use those learnings to do something meaningfully different if they expect to see meaningfully different outcomes.

Our reviews are finding that in every subject and grade level, there are programs right now that meet high standards of evidence and produce reliable impacts on student achievement.  Increasing numbers of these proven programs have been replicated with important positive outcomes in multiple high-quality studies.  If all 52,000 Title I schools adopted and implemented the best of these programs, those that reliably produce impacts of more than +0.20, the U.S. would soon rise in international rankings, achievement gaps would be cut in half, and we would have a basis for further gains as research and development build on what works to create approaches that work better.  And better.  And then better still.

There is bipartisan, totally non-political support for the idea that America’s schools should be using evidence to enhance outcomes.  However a school came into being, whoever governs it, whoever attends it, wherever it is located, at the end of the day the school exists to make a difference in the lives of children.  In every school there are teachers, principals, and parents who want and need to ensure that every child succeeds.  Research and development does not solve all problems, but it helps leverage the efforts of all educators and parents so that they can have a maximum positive impact on their children’s learning.  We have to continue to invest in that research and development, especially as we get smarter about what works and what does not, and as we get smarter about research designs that can produce reliable, replicable outcomes.  Ones you can take to the bank.