Lessons from Innovators: Collaborative Strategic Reading


The process of moving an educational innovation from a good idea to widespread effective implementation is far from straightforward, and no one has a magic formula for doing it. The William T. Grant and Spencer Foundations, with help from the Forum for Youth Investment, have created a community composed of grantees in the federal Investing in Innovation (i3) program to share ideas and best practices. Our Success for All program participates in this community. In this space, I, in partnership with the Forum for Youth Investment, highlight observations from the experiences of i3 grantees other than our own, in an attempt to share the thinking and experience of colleagues out on the front lines of evidence-based reform.

This blog is based on an interview that the Forum for Youth Investment conducted with Janette Klingner and Alison Boardman from the School of Education at the University of Colorado Boulder (CU). They are working closely with Denver Public Schools on an i3 validation project focused on scaling the Collaborative Strategic Reading (CSR) instructional model across middle schools. As lead partners in a district-led initiative, the two reflect on the dynamics of collaborating across research and practice, as well as the critical importance of embedding new practices within existing district infrastructure. Some of their lessons learned are summarized below.

Project “home” matters
Unlike many i3 projects, the CSR grant was submitted by and awarded to Denver Public Schools, with the university as a subcontractor. The project “home” has influenced dynamics in important ways and at multiple levels, beginning with a basic level of buy-in and ownership that is not always present in school improvement projects and studies in which the CU team has been involved. In university-led projects, as Klingner pointed out, a district can simply decide to back out for any number of reasons. Though there are obvious downsides to “outsiders” coming in with an intervention for people to adopt, being district-led as opposed to university-led hasn’t necessarily meant smooth sailing either. Some teachers, Klingner noted, expressed resistance because they were being told by the district to implement yet another program. Despite occasional resistance, CSR is making good progress on the ambitious expansion goals laid out by the district; in fact, the project is ahead of schedule in terms of middle school expansion. “We are moving faster than we envisioned. We have teachers and schools really wanting to learn CSR, and we are adding them sooner than planned,” said Klingner.

Rethink traditional research-practice relationships
The CU team brings a “design-based implementation research” perspective to this work, which is based on the idea of collaborative, iterative design and implementation focused on a district-identified problem of practice. “We know that handing schools an innovation in a box and seeing how it works is not effective,” said Klingner. “We are trying to be intentional about scaffolding from the validation stage, where there is more support available for an intervention, to scale-up, where the new practices become integrated and can be scaled and sustained. Working closely with the district seems like the only way to do that successfully.” While there are clear advantages to this approach, there are instances where, despite the close partnership, the partners’ priorities conflict. For example, in an effort to implement consistently and in a coordinated fashion across a large group of schools, the district sometimes imposes strict guidelines, such as requiring all science teachers to implement CSR on a given day. While this helps with knowing where and when to deploy coaches, it doesn’t necessarily make sense if your goal is to better understand and support teachers’ authentic integration of a new instructional model into their classroom practice in the context of their curriculum. Despite occasional bumps in the road, the project is built upon a strong partnership, and that partnership is critical to how the team thinks about scale and sustainability.

Embed within existing structures
The CSR team has been intentional from the beginning about embedding the intervention within existing district and school infrastructure. “We are very aware that this needs to become part of the daily practice of what the district does,” said Klingner. From working to maximize teacher-leader roles, to housing a principal liaison within the district as opposed to at the University, the team is constantly re-evaluating to what extent practices are being embedded. “Sometimes it feels like this is becoming part of the ongoing infrastructure, and then there will be some change and we’re not so sure. There’s a tipping point and even though we have a lot of great buy-in, I’d say we’re not there yet.” Boardman noted that making sure that everyone working in support roles with teachers is trained in CSR would be ideal. “Ideally all of the different coaches in the district would be able to coach for CSR. So the literacy coaches that are in schools, the teacher effectiveness coaches that visit schools, those supporting classroom management or use of the curriculum – all these different existing mechanisms would be able to support CSR. We are trying to do this and have done a lot of training for people in different roles, but we are not there yet and the plan for how to get there is still evolving.”

Align with existing initiatives, tools, and processes
In addition to extensive training, linking and aligning CSR with other district initiatives has also been a priority. For example, it was clear early on that for teachers to engage in a meaningful way, any new instructional model needed to align with LEAP, Denver’s teacher effectiveness framework. This makes sense and has been a priority, though LEAP itself, and the ways it is used, are still evolving. As Boardman put it, “Maintaining a consistent research design when everything around you is changing is a challenge. That said, we are working hard to understand how our model aligns with LEAP and working with teachers to help them understand the connections and to ensure they feel what they’re doing is supported and will pay off for them.” Implementation of the Common Core standards has been another new effort with which the project has had to align. The team’s commitment to link CSR to existing or emerging work is consistent and laudable, though they are aware of potential trade-offs. “We are rolling with the new things as they come in,” said Boardman, “but there are pros and cons. Sometimes we become overly consumed by trying to connect with district initiatives. We have to be careful about where things fit and where they simply might not.”

Find the right communications channels and messengers
Just as important as trying to figure out where CSR fits is making sure it doesn’t become “just another add-on.” One thing the project team feels is important for sustainability is figuring out at what point information needs to be communicated and by whom. As Boardman said, “Things have to be communicated by the right players. We and our district colleagues are constantly trying to figure out where and by whom key information should be communicated in order for teachers and others to feel this is the real deal. Is it the district’s monthly principal meetings? Is the key that we need area superintendents to say this is a priority?” The team is thinking about communications and messaging at both the district and the school levels. “At the school level, there is also a great deal of integration that needs to happen, and CSR people can’t be at every meeting. So which meetings are critical to attend? Which planning sessions should we prioritize?” Keenly aware that change happens in the context of relationships, the CSR team is being as intentional about communications and messaging as they are about things like tools and trainings.

How To Do Lots of High-Quality Educational Evaluations for Peanuts


One of the greatest impediments to evidence-based reform in education is the difficulty and expense of doing large-scale randomized experiments. These are essential for several reasons. Large-scale experiments are important because when treatments are at the level of classrooms and schools, you need a lot of classrooms and schools to avoid having just a few unusual sites influence the results too much. Also, research finds that small-scale studies produce inflated effects, particularly because researchers can create special conditions on a small scale that they could not sustain on a large scale. Large experiments simulate the situations likely to exist when programs are used in the real world, not under optimal, hothouse conditions.

Randomized experiments are important because when schools or classrooms are assigned at random to use a program or continue to serve as a control group (doing what they were doing before), we can be confident that there are no special factors that favor the experimental group other than the program itself. Non-randomized, matched studies that are well designed can also be valid, but they have more potential for bias.

Most quantitative researchers would agree that large-scale randomized studies are preferable, but in the real world such studies done well can cost a lot – more than $10 million per study in some cases. That may be chump change in medicine, but in education, we can’t afford many such studies.

How could we do high-quality studies far less expensively? The answer is to attach studies to funding being offered by the U. S. Department of Education. That is, when the Department is about to hand out a lot of money, it should commission large-scale randomized studies to evaluate specific ways of spending these resources.

To understand what I’m proposing, consider what the Department might have done when No Child Left Behind (NCLB) required that low-performing schools offer after-school tutoring to low-achieving students, in its Supplemental Educational Services (SES) initiative. The Department might have invited proposals from established providers of tutoring services, which would have had to participate in research as a condition of special funding. It might have then chosen a set of after-school tutoring providers (I’m making these up):

Program A provides structured one-to-three tutoring.
Program B rotates children through computer, small-group, and individualized activities.
Program C provides computer-assisted instruction.
Program D offers small-group tutoring in which children who make progress get treats or free time for sports.

Now imagine that for each program, 60 qualifying schools were recruited for the studies. For the Program A study, half would get Program A and half would get the same funding to do whatever they wanted (other than adopt Programs A to D), consistent with the national rules. The assignment to Program A or its control group would be at random. Programs B, C, and D would be evaluated in the same way.
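
To make the design concrete, here is a minimal sketch in Python of the school-level (cluster) random assignment and the simple effect estimate it supports. Everything in it is invented for illustration: the school names, the +0.10 “true” effect, and the outcome numbers standing in for state test gains.

```python
import random
import statistics

def assign_schools(school_ids, seed=0):
    """Randomly split a recruited pool of schools into treatment and control.

    Assignment is at the school (cluster) level: whole schools, not
    individual students, get the program or serve as controls.
    """
    rng = random.Random(seed)
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# 60 qualifying schools recruited for the hypothetical Program A study.
schools = [f"school_{i:02d}" for i in range(60)]
treatment, control = assign_schools(schools, seed=2013)

# Invented school-level gains on the state test, with a hypothetical
# +0.10 SD true effect added for treatment schools. In the real design,
# these would come from routinely collected grade 3-8 state tests.
gains = {s: random.Random(s).gauss(0.0, 0.15) for s in schools}
for s in treatment:
    gains[s] += 0.10

effect = (statistics.mean(gains[s] for s in treatment)
          - statistics.mean(gains[s] for s in control))
print(f"{len(treatment)} treatment vs. {len(control)} control schools")
print(f"Estimated Program A effect: {effect:+.3f} SD (true effect: +0.100)")
```

Because whole schools are assigned at random, any difference between the two groups beyond chance can be attributed to the program itself, which is the point of the design.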

Here’s why such a study would have cost peanuts. The costs of offering the program to the schools that got Programs A, B, C, or D would have been covered by Title I, as was true of all NCLB after-school tutoring programs. Further, state achievement tests, routinely collected in every state in grades 3-8, could have been obtained at pre- and posttest at little cost for data collection. The only costs would be for data management, analysis, and reporting, plus some questionnaires and/or observations to see what was actually happening in the participating classes. Peanuts.

Any time money is going out from the Department, such designs might be used. For example, in recent years a lot of money has been spent on School Improvement Grants (SIG), now called School Turnaround Grants. Imagine that various whole-school reform models were invited to work with many of the very low-achieving schools that received SIG grants. Schools would have been assigned at random to use Programs A, B, C, or D, or to control groups able to use the same amount of money however they wished. Again, various models could be evaluated. The costs of implementing the programs would have been provided by SIG (which was going to spend this money anyway), and the cost of data collection would have been minimal because test scores and graduation rates already being collected could have been used. Again, the costs of this evaluation would have just involved data management, analysis, and reporting. More peanuts.

Note that in such evaluations, no school gets nothing. All of them get the money. Only schools that want to sign up for the studies would be randomly assigned. Modest incentives might be necessary to get schools to participate in the research, such as a few competitive preference points in grant competitions (such as SIG) or somewhat higher funding levels in formula grants (such as after-school tutoring). Schools that do not want to participate in the research could do what they would have done if the study had never existed.

Against the minimal cost, however, weigh the potential gain. Each U. S. Department of Education program that lends itself to this type of evaluation would produce information about how the funds could best be used. Over time, not only would we learn about specific effective programs, we’d also learn about types of programs most likely to work. Also, additional promising programs could enter into the evaluation over time, ultimately expanding the range of options for schools. Funding from the Institute of Education Sciences (IES) or Investing in Innovation (i3) might be used specifically to build up the set of promising programs for use in such federal programs and evaluations.

Ideally, the Department might continuously commission evaluations of this kind alongside any funding it provides for schools to adopt programs capable of being evaluated on existing measures. Perhaps the Department might designate an evaluation expert to sit in on early meetings to identify such opportunities, or perhaps it might fund an external “Center for Cost-Effective Evaluations in Education.”

There are many circumstances in which expensive evaluations of promising programs still need to be done, but constantly doing inexpensive studies where they are feasible might free up resources to do necessarily expensive research and development. It might also accelerate the trend toward evidence-based reform by adding a lot of evidence quickly to support (or not) programs of immediate importance to educators, to government, and to taxpayers.

Because of the central role government plays in education, and because government routinely collects a lot of data on student achievement, we could be learning a lot more from government initiatives and innovative programs. For just a little more investment, we could learn a lot about how to make the billions we spend on providing educational services a lot more effective. Very important peanuts, if you ask me.

Guesses and Hype Give Way to Data in Study of Education: New York Times


If you live long enough, eventually you’ll see everything, even a positive article about evidence-based reform in a major newspaper. In the September 2 New York Times, science reporter Gina Kolata writes about how educational research is starting to use scientific methods to evaluate the effectiveness of educational interventions. The article is great, and much appreciated. But I found my own conversation with the reporter even more interesting.

As is my wont, I was waxing poetic about how Investing in Innovation (i3) was the most important thing ever to happen in building an evidence base for replicable programs in education. “How much did they invest?” she asked. “$650 million in the first year, and $150 million a year after that,” I enthused. But then there was silence on the other end of the line. Finally, Ms. Kolata said, “I mostly work on medical issues. $650 million is, well…”

“Coffee money?” I suggested.

“Something like that,” she said.

Just for perspective, NIH spends $31 billion a year on medical research, and this is just a portion of government, foundation, and private funding of medical R&D. In its best year, i3 was only 2% of the NIH medical research budget, and at $150 million a year it’s now one half of one percent of the NIH budget. The Institute of Education Sciences (IES), referred to in the article as “a little-known office in the Education Department,” has an annual budget of about $220 million for research and development, roughly two and a half days of spending at NIH.
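
For readers who like to check the arithmetic, here is a quick back-of-the-envelope computation using the figures as cited above (actual budgets vary year to year):

```python
# Budget comparisons from the figures cited above, in millions of dollars.
nih = 31_000   # NIH medical research budget, ~$31 billion/year
i3_best = 650  # i3 in its best year
i3_now = 150   # i3 per year after that
ies = 220      # IES annual R&D budget

print(f"i3 best year: {i3_best / nih:.1%} of NIH")                    # ~2.1%
print(f"i3 now:       {i3_now / nih:.1%} of NIH")                     # ~0.5%
print(f"IES R&D:      {ies / (nih / 365):.1f} days of NIH spending")  # ~2.6
```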

The (relatively) tiny investment in education research, development, and dissemination is old news, of course. It is a classic no-win conundrum. Due to minimal funding, educational research makes slow progress, which diminishes the enthusiasm for it among politicians, who then continue to allocate minimal funding. Educational researchers and government have embraced higher standards for program evaluation research, but such research is expensive, so not much top-quality research can be funded.

A visible, undeniable breakthrough, where one or more research-proven programs become used on a large scale and then show positive impacts at scale, could build a stronger case for investments in the whole pipeline of research to evaluation to scale-up. That is one purpose of i3, or at least could be a positive outcome. Much as the moon landing spurred aerospace R&D for decades and the Human Genome Project motivates much biology R&D, a big breakthrough in education R&D could send ripples of enthusiasm far beyond itself.

Because education is dominated by government, such a breakthrough can only happen if government wants it to. But perhaps experience with i3 will cause government to promote the use of proven programs in Title I and other government funding streams, demonstrating that research really can affect widespread practice and improve student outcomes. If that happened, hopefully respect for research in education would grow and funding would grow in proportion.

Well, at least a person can dream.

Innovation in Education and Medicine


Advocates of evidence-based reform in education can’t help citing evidence-driven progress in medicine as an analogue to justify evidence-driven progress in their own field. Yet opponents bring up numerous differences between the two disciplines in order to make the analogy seem misleading. In today’s blog, I’ll address some similarities and differences in the way evidence affects, or could affect, education and medicine, leading up to a provocative conclusion:

Education research is as likely as medical research to lead to profound breakthroughs in practice and outcomes in the coming years.

First, let me dispose of a few common misconceptions about evidence in medicine, often raised to contest evidence-based reform in education. The first myth is that unlike teachers, physicians are trained as scientists and participate in research themselves. In reality, while it is true that medical (and pre-med) training focuses more on science, most physicians are not practicing scientists, and they need not be scientifically sophisticated in order to benefit from scientific research. Another field deeply affected by research, agriculture, does not depend on scientifically trained farmers, for example. There is a great deal that physicians and farmers (and teachers) must know and be able to do, but they need not all be researchers.

Another common comeback is “teaching is more complex than giving a pill.” So it may be, but many medical advances are not in the form of pills. Instead, they require substantial professional development, practice with feedback, and systems of support. Think of almost any surgical procedure to get an idea of how complex and how personalized to a patient’s needs a research-proven medical procedure must be. As in education, some practitioners may be more or less skilled at doing a proven operation, but if the research supports doing this operation, physicians need to work to learn it, just as teachers and schools may learn proven programs.

A recent commentary in Education Week claims that education cannot learn from medicine because doctors treat patients one at a time, while teachers work with classes. The author prefers public health as an analogue, and notes the difficulties with, for example, improving diet and exercise. Yet public health is, if anything, a model case of evidence-based reform that education would do well to emulate. In recent decades, evidence-based strategies have been used to massively reduce fatalities and injuries by promoting use of seat belts and bicycle helmets, to greatly reduce smoking and consequently reduce lung cancer, and to reduce drug abuse, crime, teen pregnancy, and many other scourges. This all took place over a thirty-year period in which American education, which has little respect for evidence, produced hardly any gain in reading or math performance. If only education could use public health as its model!

One complaint I often hear is the urban legend that it takes an average of 17 years for a given drug to go through all the steps leading to FDA approval and widespread use. Most change in medicine happens much faster than this. In any case, much of what slows down medical advances is concern about rare but fatal or debilitating side effects. Research in medicine cannot just show that a treatment is effective; it also has to show that it is universally safe, and this may take a long time. In education, serious negative side effects of a new teaching method that is beneficial for most students are unlikely.

Now for my provocative conclusion: with greater support, education research could have at least as profound an impact on educational outcomes as medical and public health research has on health outcomes. There are several reasons this could be true.

First, medicine must prevent or cure thousands of individual diseases, and curing one may not suggest cures for others. In education, however, a solution to any major problem is likely to apply to many others, or at least to suggest solutions to them. Already, solutions such as cooperative learning, tutoring, and teaching of metacognitive skills are known to improve learning across many subjects and age groups. Proven classroom management methods tend to work in all subjects and many grade levels. New technology solutions may similarly bring about changes in many areas. When education research does advance, it can advance on a broad front.

Second, education research is easier to do than medical research. The unlikelihood of serious negative side effects is one reason. Another is that pretests in education are so highly correlated with posttests that we can accurately predict what students would have achieved without treatment, which makes studies easier to do well. A third is that children are in school every day, so gaining access to them is much less difficult than it is for medical or public health researchers. Routinely collected data are available for students as well, making some research easier still.
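
Here is a minimal simulation, with invented numbers, of the pretest point: when posttests track pretests closely, adjusting each student’s posttest for his or her pretest strips out most of the variation that has nothing to do with the program, so the program’s effect stands out more clearly in a smaller sample. (In a real analysis the adjustment slope would be estimated, e.g., via ANCOVA; here it is known because we built the data.)

```python
import random
import statistics

rng = random.Random(42)

def simulate_group(n, effect):
    """Invented (pretest, posttest) pairs; posttest tracks pretest closely."""
    data = []
    for _ in range(n):
        pre = rng.gauss(0.0, 1.0)
        post = 0.9 * pre + rng.gauss(0.0, 0.3) + effect  # strong pre-post link
        data.append((pre, post))
    return data

treated = simulate_group(200, effect=0.2)  # hypothetical +0.2 SD effect
control = simulate_group(200, effect=0.0)

# Unadjusted comparison of posttest means:
raw = (statistics.mean(post for _, post in treated)
       - statistics.mean(post for _, post in control))

# Pretest-adjusted comparison; 0.9 is the slope we built into the data
# (in a real study it would be estimated, not assumed):
adj = (statistics.mean(post - 0.9 * pre for pre, post in treated)
       - statistics.mean(post - 0.9 * pre for pre, post in control))

print(f"Unadjusted estimate:       {raw:+.3f}")
print(f"Pretest-adjusted estimate: {adj:+.3f}  (true effect: +0.200)")
```

In this simulation, removing the pretest cuts the residual noise from roughly 1.0 SD to 0.3 SD, which is why the adjusted comparison can reach the same precision with far fewer students.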

If new medical treatments are found to be effective, companies and hospitals, and sometimes government, ensure widespread adoption of the new methods. This could happen in education, too. Almost all educators are government employees, and if government decided to promote the use of proven programs, it could readily do so. As I’ve written elsewhere, imagine what could happen if the U. S. Department of Education encouraged the use of proven programs in Title I schools, or gave competitive preference points on grant proposals that promise to implement proven models.

It is certainly true today that evidence has had far more impact on the practice of medicine and public health than on the practice of education. But it’s early days for serious research in education, and even earlier days for evidence-based policy. When we build up a stock of proven programs and have the support of government for using them, watch out. Education could show medicine a thing or two about how to improve outcomes on a national scale using rigorous research and innovation.