A constant refrain in articles about education and the economy highlights the need for more of a focus on STEM: Science, Technology, Engineering, and Mathematics. In fact, the National Science Foundation and many other public and private entities spend billions each year to advance STEM education. STEM is indeed critical for American economic competitiveness and progress. So naturally you’d expect that STEM subjects would be among the best researched of all, right?
Wrong. My colleagues and I just published a review of research on elementary science programs in the most prestigious science education journal, the Journal of Research in Science Teaching (JRST). I’ll get to the substantive conclusions in a moment. What I want to focus on first is the most important finding of the review: that we found only 23 studies that met our inclusion standards. Our standards were not that tough. We required that studies compare experimental to well-matched or randomly assigned control groups on measures that fairly assessed what was taught in both groups. Studies had to last at least four weeks (less than the 12 weeks we’ve required in every other subject). Our 23 studies were the product of all qualifying research published in English throughout the world over a period of more than 30 years. That’s fewer than one study per year. Had we required random assignment and analysis at the level of random assignment, only seven studies would have qualified.
Of course, there are thousands of studies of elementary science teaching. Why did so few meet our standards? A lot of them had no control group, or no measure of science learning. Many were very brief lab studies lasting from an hour to a few days.
Among the few studies that did compare experimental and control classes over at least four weeks, most had obvious problems that made it impossible to include them. Many used measures designed to register the gains of the experimental group but unrelated to what was taught in the control group. For example, many studies taught a unit on, say, electricity, to the experimental group and compared their gains on an electricity test to those of a control group that was not taught electricity at all during the experiment.
Among the studies we could include, the outcomes favored inquiry approaches that emphasized professional development for teachers, using methods such as cooperative learning and reading-science integration. Inquiry methods using science kits did no better than control groups, and disturbingly, these were the highest-quality studies. There were positive effects for approaches emphasizing technology, but very few studies fell into this category.
The larger question posed by our review, however, is why there were so few qualifying studies. How could the entire field of science education produce fewer than one methodologically adequate experimental study of practical elementary science approaches per year?
At first, my colleagues and I assumed that the problem simply reflected science educators’ greater focus on secondary schools than on elementary schools. However, we are now working on a review of secondary science programs, under a grant from the Spencer Foundation, and we are not finding markedly more qualifying studies at that level, either.
The number of studies that meet similar inclusion standards in elementary and secondary reading and math is much higher than in science. What is it about science education that makes such research rare? Of course, there is a poignant irony in the observation that among all major branches of educational research, science education is least likely to use rigorous scientific evidence to evaluate its own programs. Science educators should be, and could still become, leaders in evidence-based reform, but this will require a serious change in direction in the field.