# Another Way to Understand Effect Sizes

Whenever I talk to educators and mention effect sizes, someone inevitably complains. “We don’t understand effect sizes,” they say. I always explain that you don’t have to understand exactly what effect sizes are. If you know that larger effect sizes are better and smaller ones are worse, assuming the research from which they came is of equal quality, why do you need to know precisely what they are? Sometimes I mention the car reliability rating system Consumer Reports uses, with full red circles at the top and full black circles at the bottom. Does anyone understand how they arrived at those ratings? I don’t, but I don’t care, because like everyone else, what I do know is that I don’t want a car with a reliability rating in the black.

People always tell me that they would like it better if I’d use “additional months of gain.” I do this when I have to, but I really do not like it, because these “months of gain” do not mean very much, and they work very differently in the early elementary grades than they do in high school.

So here is an idea that some people might find useful. The National Assessment of Educational Progress (NAEP) uses reading and math scales with a theoretical standard deviation of 50. So an effect size of, say, +0.20 can be expressed as a gain of +10 NAEP points (0.20 x 50 = 10). That’s not really interesting yet, because most people also don’t know what NAEP scores mean.
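For readers who like to see the arithmetic spelled out, the conversion above can be sketched in a few lines. This is just an illustration of the multiplication described in the text; the function name is mine, and the standard deviation of 50 is the theoretical NAEP value cited above.

```python
# Minimal sketch: an effect size expressed in standard deviation units
# becomes NAEP scale points when multiplied by the NAEP scale's
# theoretical standard deviation of 50.

NAEP_SD = 50  # theoretical standard deviation of the NAEP reading/math scales

def effect_size_to_naep_points(effect_size: float) -> float:
    """Convert an effect size (in SD units) to equivalent NAEP scale points."""
    return effect_size * NAEP_SD

print(effect_size_to_naep_points(0.20))  # prints 10.0
```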

But here’s another way to use such data that might be more fun and easier to understand. I think people could understand and care about their state’s rank on NAEP scores. For example, the highest-scoring state on 4th grade reading is Massachusetts, with a NAEP reading score of 231 in 2019. What if the 13th-ranked state, Nebraska (222), adopted a great reading program statewide and gained an average effect size of +0.20? That’s equivalent to 10 NAEP points. Such a gain would put Nebraska one point ahead of Massachusetts (if Massachusetts didn’t change). Number 1!

If we learned to speak in terms of how many ranks states would gain from a given effect size, I wonder if that would give educators more understanding of and respect for the findings of experiments. Even fairly small effect sizes, if replicated across a whole state, could propel a state past its traditional rivals. For example, 26th-ranked Wisconsin (220) could equal neighboring 12th-ranked Minnesota (222) with a statewide reading effect size gain of only +0.04. As a practical matter, Wisconsin could increase its fourth grade test scores by an effect size of +0.04, perhaps by using a program with an effect size of +0.20 with (say) the lowest-achieving fifth of its fourth graders.
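The Wisconsin/Minnesota back-of-the-envelope calculation can be checked the same way. The scores (220 and 222) are the 2019 figures quoted above; everything else is just the arithmetic from the paragraph, written out as a hypothetical sketch.

```python
NAEP_SD = 50  # theoretical standard deviation of the NAEP scale

wisconsin, minnesota = 220, 222  # 2019 NAEP 4th grade reading averages

# Statewide effect size Wisconsin would need to close the 2-point gap:
needed_es = (minnesota - wisconsin) / NAEP_SD
print(needed_es)  # prints 0.04

# The same statewide gain from a stronger program reaching fewer students:
# a +0.20 program used with the lowest-achieving fifth of fourth graders.
program_es = 0.20
fraction_served = 1 / 5
statewide_es = program_es * fraction_served
print(round(statewide_es, 2))  # prints 0.04
```

The point of the second calculation is that a program’s effect size gets diluted in proportion to the share of students it reaches, which is why a +0.20 program serving one fifth of students yields roughly +0.04 statewide.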

If only one could get states thinking this way, the meaning and importance of effect sizes would soon become clear. And as a side benefit, perhaps if Wisconsin invested its enthusiasm and money in a “Beat Minnesota” reading campaign, as it does to try to beat the University of Minnesota’s football team, Wisconsin’s students might actually benefit. I can hear it now:

On Wisconsin, On Wisconsin,

Raise effect size high!

We are not such lazy loafers

We can beat the Golden Gophers

Point-oh-four or point-oh-eight

We’ll surpass them, just you wait!

Well, a nerd can dream, can’t he?

_______

Note:  No states were harmed in the writing of this blog.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

# When Scientific Literacy is a Matter of Life and Death

The Covid-19 crisis has put a spotlight on the importance of science.  More than at any time I can recall (with the possible exception of panic over the Soviet launch of Sputnik), scientists are in the news.  We count on them to find a cure for people with the Covid-19 virus and a vaccine to prevent new cases.  We count on them to predict the progression of the pandemic, and to discover public health strategies to minimize its spread.  We are justifiably proud of the brilliance, dedication, and hard work scientists are exhibiting every day.

Yet the Covid-19 pandemic is also throwing a harsh light on the scientific understanding of the whole population.  Today, scientific literacy can be a matter of life or death.  Although political leaders, advised by science experts, may recommend what we should do to minimize risks to ourselves and our families, people have to make their own judgments about what is safe and what is not.  The graphs in the newspaper showing how new infections and deaths are trending have real meaning.  They should inform what choices people make.  We are bombarded with advice on the Internet, from friends and neighbors, from television, in the news.  Yet these sources are likely to conflict.  Which should we believe?  Is it safe to go for a walk?  To the grocery store?  To church?  To a party?  Is Grandpa safer at home or in assisted living?

Scientific literacy is something we all should have learned in school. I would define scientific literacy as an understanding of the scientific method, a basic understanding of how things work in nature and in technology, and an understanding of how scientists generate new knowledge and subject possible treatments, such as medicines, to rigorous tests. All of these understandings, and many more, are ordinarily useful for making sense of the news, for example, but for most people they do not have major personal consequences. But now they do, and it is terrifying to hear the misconceptions and misinformation people have. In the current situation, a misconception or a piece of misinformation can kill you, or cause you to make decisions that lead to the death of a family member.

The importance of scientific literacy in the whole population is now apparent in everyday life.  Yet scientific literacy has not been emphasized in our schools.  Especially in elementary schools, science has taken a back seat, because reading and mathematics are tested every year on state tests, beginning in third grade, but science is not tested in most years.  Many elementary teachers will admit that their own preparation in science was insufficient.  In secondary schools, science classes seem to have been developed to produce scientists, which is of course necessary, but not to produce a population that values and understands scientific information.  And now we are paying the price for this limited focus.

One indicator of our limited focus on science education is the substantial imbalance between the amount of rigorous research in science and the amount in mathematics and reading. I have written reviews of research in each of these areas (see www.bestevidence.org), and it is striking how many fewer experimental studies there are in elementary and secondary science. Take a look at the What Works Clearinghouse, for another example. There are many programs in the WWC that focus on reading and mathematics, but science? Not so many. Given the obvious importance of science and technology to our economy, you would imagine that investment in science education research would be a top priority, but judging from the number of studies of science programs for elementary and secondary schools, that is certainly not taking place.

The Covid-19 pandemic is giving us a hard lesson in the importance of science for all Americans, not just those preparing to become scientists.  I hope we are learning this lesson, and when the crisis is over, I hope our government and private foundations will greatly increase their investments in research, development, evaluation, and dissemination of proven science approaches for all students.


# Would Your School or District Like to Participate in Research?

As research becomes more influential in educational practice, it becomes important that studies take place in all kinds of schools. However, this does not happen. In particular, the large-scale quantitative research evaluating practical solutions for schools tends to take place in large, urban districts near major research universities. Sometimes they take place in large, suburban districts near major research universities. This is not terribly surprising, because in order to meet the highest standards of the What Works Clearinghouse or Evidence for ESSA, a study of a school-level program will need 40 to 50 schools willing to be assigned at random to either use a new program or to serve as a control group.

Naturally, researchers want to deal with a small number of districts (to avoid navigating many different sets of district-level rules and leaders), so they try to sign up districts in which they might find 40 or 50 schools willing to participate, or perhaps split the sample between two or three districts at most. But there are not that many districts with that number of schools. Further, researchers do not want to spend their time or money flying around to visit schools, so they usually try to find schools close to home.

As a result of these dynamics, of course, it is easy to predict where high-quality quantitative research on innovative programs is not going to take place very often. Small districts (even urban ones) can be hard to serve, but the main category of schools left out of big studies is schools in rural districts. This is not only unfair; it deprives rural schools of a robust evidence base for practice. Also, participating in research can be a good thing for schools and districts anywhere. Typically, schools are paired and assigned at random to treatment or control groups. Treatment schools get the treatment, and control schools usually get some incentive, such as money or an opportunity to use the innovative treatment a year after the experiment is over. So why should some places get all this attention and opportunity, while others complain that they never get to participate and that few programs are evaluated in districts like theirs?

I have a solution to propose for this problem: A “Registry of Districts and Schools Seeking Research Opportunities.” The idea is that district leaders or principals could list information about themselves and the kinds of research they might be willing to host in their schools or districts. Researchers seeking district or school partners for proposals or funded projects could post invitations for participation. In this way, researchers could find out about districts they might never have otherwise considered, and district and school leaders could find out about research opportunities. Sort of like a dating site, but adapted to the interests of researchers and potential research partners (i.e., no photos would be required).

If this idea interests you, or if you would like to participate, please write to Susan Davis at sdavi168@jh.edu. If you wish, you can share any opinions and ideas about how such a registry might best accomplish its goals. If you represent a district or school and are interested in participating in research, tell us, and I’ll see what I can do.

If I get lots of encouragement, we might create such a directory and operate it on behalf of all districts, schools, and researchers, to benefit students. I’ll look forward to hearing from you!
