How Universal Access to Technology Could Advance Evidence-Based Reform. Or Not.

Since the early 1960s (at least), breakthroughs in education driven by advances in technology have been confidently predicted. First it was teaching machines, then mainframes, then laptops, then video disks, then interactive whiteboards, and now blended and flipped learning. Sadly, however, each innovation of the past has ended up making little if any difference in student achievement. I remain hopeful that this time technology could produce breakthroughs, if its new capabilities are used to create systematically enhanced environments for learning, if new approaches to teaching built on those capabilities are rigorously evaluated, and if the approaches found to be successful are broadly disseminated.

My reason for hope, this time around, lies in the fact that schools are rapidly moving toward providing universal access to tablets or other relatively low-cost digital devices. This is a potential game-changer, as universal access makes it possible for teachers to give digital assignments to all students. It also makes it possible for developers to create replicable strategies that make optimal use of personalized instruction, simulations, visual media, games, sophisticated real-time assessments, links to other students within and beyond the classroom, links to prescreened and curated information, and so on. If students also have compatible technology at home, this adds the possibility of integrating homework and classwork, which is essential for blended and flipped learning, for example, but also helpful for simpler ways of making homework engaging, game-like, and useful for learning.

All of these possibilities are only potentials, not actualities, and given the long, sad history of technology in schools, they may well not be realized. Many of today’s applications of universal access to digital devices are merely reinventions of computer-assisted instruction (CAI), which has a particularly poor research record. Other uses are for poorly designed project-based learning or for applications that do little more than make traditional teaching a little easier for teachers. There are hundreds of applications available for every possible classroom use, but their quality varies widely, and they do not readily integrate with each other, with other instruction, or with standards. Hardworking, tech-savvy teachers can in principle assemble fabulous lessons, but this is difficult to do on a large scale.

Before universal access to technology can transform education, a great deal of creative work needs to be done to make and evaluate courses or major portions of courses using the new technology opportunities. Imagine developers, researchers, nonprofits, and for-profits partnering with experienced teachers to create astonishing, integrated, and complete approaches to, say, beginning reading, elementary math and science, secondary algebra, or high school physics. In each case, programs would be rigorously evaluated, and then disseminated if found to be effective.

There are many applications of universal access technologies that make me optimistic. For example, current or on-the-horizon technologies could enhance teachers’ ability to teach traditional lessons. Prepared lessons might incorporate visual media, games, or simulations in initial teaching. They might use computer-facilitated cooperative learning with embedded assessments and feedback to replace worksheets. They might embed assessments in games, simulations, and cooperative activities to replace formative and summative assessments. Simulations of lab experiments could make inquiry-oriented instruction in science and math far more common. Access to curated, age-appropriate libraries of information could transform social studies and science. Computerized assessment of writing, including creative writing as well as grammar, punctuation, and spelling, could help students working with peers become more effective writers.

In each of these cases, extensive development, piloting, and evaluation will be necessary, but once created and found to be effective, digitally enhanced models will be extremely popular, and their costs will decline with scale.

Even as technology’s past should make us wary of unsupported claims and premature enthusiasm, the future can be different. In all areas of technology other than education, someone creates a new product, finds it to be effective, and then makes it available for widespread adoption. A time of tinkering yields to a time of solid accomplishment. This can happen in education, too. With adequate support for R&D, breakthroughs are likely, and when they happen in any area, they increase the possibilities of breakthroughs in other areas.

Sooner or later, technology will help students learn far more than they do today. The technology models ready to go today do not yet have the evidence base to justify a lot of optimism, but in the age of universal access, we’ve only just begun.

To the New Congress

With the results of the mid-term elections just behind us, it is time to think about the opportunities and challenges for education policy in the next two years and beyond. At the top level, gridlock is sure to continue, but much progress remains possible if the administration and congressional leaders can cooperate in areas where they fundamentally agree. These include a shared belief that American education cannot be complacent, but must continue to advance by helping teachers, districts, and school leaders raise standards, improve teaching and learning, and make wise choices for children based on the best available evidence. Here are some specific actions I’d suggest to accomplish these goals.

1. Maintain the Investing in Innovation (i3) Program and other sources of evidence
Investing in Innovation (i3) is a U.S. Department of Education program that funds the development, evaluation, and dissemination of proven programs across grades pre-K to 12. In a policy environment emphasizing local schools’ right to choose their path to success, i3 offers information on “what works” that is essential in a system moving from government mandates to local control. Just as Department of Agriculture-funded research has long provided information, but not direction, to farmers, i3, the Institute of Education Sciences (IES), and other agencies are providing information for informed decision-making in education.

2. Encourage use of proven programs
Where the federal government continues to fund programs of assistance to schools, such as Title I, educators should be encouraged to use programs and practices with strong evidence of effectiveness. This encouragement could include modest incentives in competitive grants (such as a few preference points) or modest additional funds in formula grants, such as Title I, for grantees that agree to spend a portion of those funds on proven programs.

3. Maintain and upgrade the What Works Clearinghouse
The What Works Clearinghouse provides information on the strength of evidence supporting various educational programs. It is not as user-friendly as it might be, and it needs to be revamped to focus on pragmatic programs with impacts on measures that matter. However, for evidence to matter in a system of informed local choice, something much like the WWC is needed.

While there are many issues on which the Congress and the administration will disagree, I hope and expect that they will agree that every federal dollar spent on education should make the largest possible difference for children. Investments designed to create, evaluate, and disseminate effective approaches have to be central to any strategy intended to help educators improve outcomes among all schools. Our children can’t wait for better educational programs and practices. They need them now. This is something that all people of good will can agree on.

* Illustration by James Bravo

How Biased Measures Lead to False Conclusions

One hopeful development in evidence-based reform in education is the improvement in the quality of evaluations of educational programs. Because of policies and funding provided by the Institute of Education Sciences (IES) and Investing in Innovation (i3) in the U.S. and by the Education Endowment Foundation (EEF) in the U.K., most evaluations of educational programs today use far better procedures than was true as recently as five years ago. Experiments are likely to be large, to use random assignment or careful matching, and to be carried out by third-party evaluators, all of which give (or should give) educators and policy makers greater confidence that evaluations are unbiased and that their findings are meaningful.

Despite these positive developments, there remain serious problems in some evaluations. One of these relates to measures that give the experimental group an unfair advantage.

There are several ways in which measures can unfairly favor the experimental group. The most common arises when measures are created by the developer of the program and are precisely aligned with the curriculum taught to the experimental group but not to the control group. For example, a developer might reason that a new curriculum represents what students should be taught in, say, science or math, so it’s all right to use a measure aligned with the experimental program. However, such measures give a huge advantage to the experimental group. In an article published in the Journal of Research on Educational Effectiveness, Nancy Madden and I looked at effect sizes for such over-aligned measures among studies accepted by the What Works Clearinghouse (WWC). In reading, we found an average effect size of +0.51 for over-aligned measures, compared to an average of +0.06 for measures that were fair to the content taught in both the experimental and control groups. In math, the difference was +0.45 for over-aligned measures versus -0.03 for fair ones. These are huge differences.
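For readers unfamiliar with effect sizes, here is a minimal sketch of the underlying arithmetic, written in Python with invented numbers chosen only for illustration. It computes a standardized mean difference (Cohen’s d): the experimental-control difference in means divided by the pooled standard deviation. The WWC’s actual procedure differs in detail, but the idea is the same.

    # A minimal sketch of a standardized mean difference (Cohen's d).
    # All numbers below are hypothetical, chosen only for illustration.

    def effect_size(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """(Experimental mean - control mean) divided by the pooled standard deviation."""
        pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
        return (mean_t - mean_c) / pooled_var ** 0.5

    # Hypothetical results on a fair test covering content taught to both groups:
    print(effect_size(mean_t=50.6, mean_c=50.0, sd_t=10, sd_c=10, n_t=200, n_c=200))  # +0.06

    # Hypothetical results on an over-aligned test covering content taught only
    # to the experimental group:
    print(effect_size(mean_t=55.0, mean_c=50.0, sd_t=10, sd_c=10, n_t=200, n_c=200))  # +0.50

The arithmetic is simple; the point is that the size of the effect depends as much on what the test covers as on how much students actually learned.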

A special case of over-alignment arises when content is introduced to the experimental group earlier than usual in students’ progression through school, but not to the control group. For example, if students are taught first-grade math skills in kindergarten, they will of course do better on a first-grade test (given in kindergarten) than will students who were not taught those skills in kindergarten. But will they still be better off by the end of first grade, when all students have been taught first-grade skills? It’s unlikely.

One more special case of over-alignment takes place in relatively brief studies in which students are pre-tested, taught a given topic, and then post-tested, say, eight weeks later. The control group, however, might have been taught that topic earlier or later than that eight-week window, or might have spent much less than eight weeks on it. In a recent review of elementary science programs, we found many examples of this, including situations in which experimental groups were taught a topic such as electricity during the experiment while the control group was not taught about electricity at all during that period. Not surprisingly, such studies produce very large but meaningless effect sizes.

As evidence becomes more important in educational policy and practice, we researchers need to get our own house in order. Insisting on the use of measures that are not biased in favor of experimental groups is a major necessity in building a body of evidence that educators can rely on.