Bad Science II: Brief, Small, and Artificial Studies


“We learned from correlational research that students who speak Latin do better in school. So this year we’re teaching everything in Latin.”

The oldest joke in academia goes like this. A professor is shown the results of an impressive experiment. “That may work in practice,” she says, “but how will it work in the laboratory?”

For practitioners trying to make sense of the findings of educational research, this is no laughing matter. They are often left to figure out whether or not there is meaningful evidence supporting a given practice or policy. Yet all too often academics report findings from experiments that are too brief, too small, and/or too artificial to be reliable for making educational decisions.

Looking at the original articles, this problem is easy to see. Would you use or recommend a classroom management approach that has been successfully evaluated in a one-hour experiment? Or one evaluated with only 20 students? Or evaluated in a situation in which teachers in the experimental group had graduate students helping them in class every day?

The problem comes when busy educators or researchers rely on reviews of research. The reviews may make sweeping statements about the effects of various practices based on very brief, small, or artificial experiments, yet a lot of detective work may be necessary to find this out. Years ago, I was re-analyzing a review of research on class size and found one study with a far larger effect than all the others. After much sleuthing I found out why: It was a study of tennis instruction, where students in larger tennis groups get a lot less court time.

So what should a reader do? Some reviews, including Social Programs that Work, Blueprints for Violence Prevention, and our own Best Evidence Encyclopedia, take sample size, duration, and artificiality into account. Otherwise, if you want to know for sure, you’ll have to put on your own deerstalker and do your own detective work, finding the essential experiments that took place in real schools over real periods of time, under realistic conditions. Evidence-based reform in education won’t really take hold until readers can consistently find reliable, easily interpretable, and unbiased information on practical programs and practices.

In case you missed last week’s first part in the series, check it out here: Bad Science I: Bad Measures

Illustration: Slavin, R.E. (2007). Educational research in the age of accountability. Boston: Allyn & Bacon. Reprinted with permission of the author.

Find Bob Slavin on Facebook!

