R&D That Makes a Difference

Over the course of my career, I’ve written a lot of proposals. I’ve also reviewed a lot of them, and mostly I’ve seen funded projects crash and burn, or produce a scholarly article or two that are never heard of again.

As evidence becomes more important in educational policy and practice, I think it’s time to rethink the whole process of funding for development, evaluation, and dissemination.

Here’s how the process works now at the federal level. The feds put out a Request for Proposals (RFP) in the Federal Register. It specifies the purpose of the grant, who is eligible, funding available, deadlines, and most importantly, the criteria on which the proposals will be judged. Proposal writers know that they must follow those criteria very carefully to make it easy for readers to know that each criterion has been satisfied.

The problem with the whole proposal system lies in the perception that each proposal starts with a perfect score (usually 100), and is then marked down for any deficiencies. To oversimplify, reviewers nitpick, and if there is much left after the nits have been picked, the proposal wins.

What this system rewards is enormous care and OCD-level attention to detail. It does not reward creativity, risk, insight, or actual utility for schools. Yet funding grants that do not move practice forward at any significant scale does little good in an applied field like education (in related fields such as psychology, purely basic research might justify such an approach, but in education this is a hard argument to make). Maybe our collective inability to do research that affects practice on a broad scale explains some of the lack of enthusiasm our political leadership has for research.

So what would I propose as an alternative? I’m so glad you asked. I’d propose that RFPs be explicitly structured to ask not, “Why shouldn’t we fund this proposal?” but, “Why should we?” That is, proposal writers should be asked to make a case for the potential importance of their work. Here’s a model set of evaluation standards to illustrate what I mean.

A. Significance
1. What are you planning to create?
2. What national problem does your proposed program potentially solve?
3. What outcomes do you expect to achieve, and why are these important?
4. Based on prior research by yourself and others, what is the likelihood that your program will produce the outcomes you expect?
5. What is the likelihood that, if your program is successful, it will work on a significant scale? What is your experience with working at scale or scaling up proven programs in educational settings?
6. In what way is your program creative or distinctive? How might it spark new thinking or development to solve longstanding problems in education?

B. Capabilities
1. Describe the organizational capabilities of the partners to this proposal, as well as the capabilities of the project leadership. Consider capabilities in the following areas:
a. development
b. roll-out, piloting
c. evaluation
d. reporting
e. scale-up
f. communications, marketing
2. Timelines, milestones

C. Evaluation
1. Research questions
2. Design, analysis

D. Impact
Given all you’ve written so far, summarize in one page why this project will make a substantial difference in educational practice and policy.

If we want research and development to produce useful solutions to educational problems, we have to ask the field for just that, and reward those able to produce, evaluate, and disseminate such solutions. Ironically, the federal funding stream closest to the ideal I’ve described is the Investing in Innovation (i3) program, which Congress may be about to shut down. i3 is at least focused on pragmatic solutions rather than theory-building, and it has high standards of evidence. But if i3 survives, or if it is replaced by another initiative to support innovation, development, evaluation, and scale-up of proven programs, I’d argue that it needs to focus even more on pragmatic issues of effectiveness and scale. Reviewers should be exclaiming, “I get it!” rather than “I gotcha!”

What Counts as Research?

Evidence-based reform in education is popular among educational researchers who like quantitative, randomized research, but that’s a small slice of the profession. A much larger portion of the educational research community ranges from skeptical to downright hostile. They don’t see a place for themselves in the brave new world of evidence-informed policy.

Evidence-based reform does in fact rely primarily on experiments that are always quantitative and usually randomized. Critics point out that this is just one approach to research and that there are many other equally valid approaches. Why should one be valued (and funded) far above the others?

I think the question really comes down to appropriate research methods for particular research questions. For some questions, there is no valid alternative to a quantitative experiment. For example, imagine that you want to know whether a new math curriculum enhances achievement more than current practice does. The question itself demands a quantitative approach, because it requires measuring math achievement. It demands an experiment, because the question is a comparison between one approach and another. The outcome measures might include authentic assessments as well as tests, and the experiment might be randomized or matched, but ultimately a comparative question demands a comparative design.

However, there are many other questions that do not lend themselves to quantitative or experimental measures. How does the new math approach change teachers’ and students’ roles and relationships? How does it change the culture of the school? How does the new program flow from and reciprocally influence local or state policies? How does students’ success with the new program correlate with their prior success, demographic categories, or social class? Each of these questions, and many others, demands a different research design: ethnographic, descriptive, correlational, policy-analytic, and so on.

Even in randomized experiments, there is usually a qualitative element, included to provide information on what is really happening in the various treatments. Further, if evidence-based reform takes hold in a big way, the demand for all sorts of evidence on a broad array of questions is sure to expand, as policy makers come to understand and value the entire research enterprise.

It will take all of us working together to bring knowledge to bear on critical questions of educational policy and practice. We can respectfully disagree about strategies and methodologies, of course, but a broader interest in the findings of educational research within the policy community seems sure to be beneficial to the research community. Besides, our focus needs to be on what is best for children, not what is best for our favorite methodology.

Who Opposes Evidence-Based Reform?

The slow and uncertain pace of progress in evidence-based reform in education seems surprising at one level. How could anyone be against anything so obviously beneficial to children? It must indeed be embarrassing to come out openly against evidence. Who argues for ignorance? Yet while few would stand up and condemn it, I would guess that many educators and researchers would be (secretly) happy if the movement just shriveled up and died.

To illustrate part of the problem, let me tell you about a couple of conversations I had at a dinner for new department heads at the University of York, in England. At the dinner, I chatted with the person on my right, who was the chair in biology, as I recall. I told him I was in York to promote evidence-based reform in education. “I’m against that,” he said. “My daughter is a very gifted high school student. If someone found programs that worked on average, her school might use them. But the system is already serving my daughter very well.”

I turned to my left and chatted with the chair of the physics department. His response was almost identical to that of the biology chair. He also had a brilliant daughter, and the system was working very well for her, thank you very much.

So from this and many other experiences, I have learned that one reason for lack of enthusiasm for evidence is that the system we have is built by and for the people who benefit from it. (A privileged glimpse into the perfectly obvious.) High-quality, widely disseminated evidence cannot be controlled, so it might actually cause change, thereby disrupting the system for those for whom it fortunately works. I once heard a respected state superintendent, speaking entirely without irony to an audience of researchers, say the following:

“If research confirms what I believe, it is good research. If it does not, it is bad research.”

The problem with rigorous research is that it can and often does contradict what its funders and advocates originally hoped for, and this makes it dangerous. Ignoring or twisting research makes life so much easier for stakeholders of all kinds.

Another group of evidence skeptics consists of fellow researchers concerned that the kind of quantitative, experimental research emphasized in evidence-based reform is not what they do; if it prevails, their funding or esteem might be diminished.

Many teachers are uneasy about evidence, because they see it as one more way they may be oppressed by standardized tests, or fear that they may be forced to implement proven programs. I’m sympathetic to teachers’ concerns in these arenas, but policies to allay them are possible, for example by allowing teachers to vote on adopting proven programs (as we do in our Success for All whole-school approach). Also, most teachers, in my experience, are delighted to have proven tools that make them more effective in their jobs.

If support for evidence-based reform comes only from those who benefit from it personally or institutionally, we are doomed. The movement will prevail only if the issue is posed this way:

“How can we use evidence to make sure that students get the best possible outcomes from their education?”

As long as we think only about what is best for kids, evidence-based reform will succeed. There are many legitimate debates to be had about methods and mechanisms, but if we could all agree that students would be better off receiving programs that have been rigorously tested and found to be effective, we’d be 90% of the way to our goal. Anyone who is in education because they want to see kids succeed, which is nearly everyone, should be able to agree. Start with the kids and everything else falls into place. Isn’t that always the case?

Happy Birthday, IDEA

It’s hard for me to believe, but this year marks the 40th anniversary of Pub. L. 94-142, the forerunner of today’s Individuals with Disabilities Education Act (IDEA). I was a special education teacher before IDEA, and I saw many of the positive changes that took place because of the law. I taught developmentally delayed children in Oregon, which was ahead of the curve in many ways and was starting to implement mainstreaming before 1975. I worked in a self-contained school, and the classes were slowly being moved out to other schools one by one. My kids were the lowest-performing in the self-contained school (a privilege for the new teacher), so we were going to be the last class to move. My principal took me to see the elementary school to which we were to be moved. It was totally inappropriate. The only source of water or toilet facilities was the boys’ bathroom down the hall, which had one of those old-fashioned fountains that you activated by stepping on a bar and getting a fine spray of water. I had diapers to change. I refused to go.

My principal was understanding. She and the whole rest of the staff left my building. They put the school phone in my classroom. It was actually kind of fun. I was 22 years old and was in charge of my own school. The irony is that then and now I’ve been a big advocate of mainstreaming, or integration, but I started my career fighting it.

In that long-ago school, I had many extraordinary experiences, but one in particular sticks with me. It makes me so angry and frustrated that to this day I cannot speak about it without choking up.

Because I was the only male teacher, I was assigned large, obstreperous kids. One of them was a 15-year-old I’ll call Sam. Sam had spent his entire school career in my school. He was extremely difficult. If you asked him to do anything at all, he would fly into a rage, scratch, kick, and bite. Under the best of circumstances he would spit in all directions.

Because I was young and idealistic, I decided to go visit all of my kids at home, something no other teacher had ever done, to my knowledge. When I told my colleagues I was going to visit Sam and his mom, they took me aside and whispered to me, “Watch out. His mom is crazy. She thinks Sam can talk.”

I went to Sam’s house, which was on a small farm. His mother was very nice, and she did not seem crazy at all. We chatted for a while, and then I casually mentioned that the staff at my school said that she thought Sam could talk. Could he?

Sam’s mom sighed. “Yes,” she said, “but I’ve long ago given up getting anyone at the school to listen.”

I asked what kinds of things he could say, and then asked what he liked. She told me that he loved music and asked for records all the time.

The very next day, I was ready. I had my aide watch the other kids, and I put on a record for Sam. Then I picked up the arm on the turntable. “Say ‘record,’” I said to Sam, “and I’ll put the music back on.”

Sam went completely wild. He tore his clothing. He tried to scratch and bite me. I got him in a gentle but secure hold on the floor where he could not hurt himself or me, and just held him struggling and making inarticulate groans and shrieks. We remained in that position for perhaps a half hour. Finally, Sam calmed down.

“Record,” he said.

Sam could talk.

From then on, I worked with Sam every day. I got audio tapes with current popular songs on them and used a few seconds of music to reinforce good behaviors. He learned, or really relearned, language skills at a great rate. Later, I realized that he needed to learn occupational skills so he could stay at home and work in a sheltered workshop that had moved into my school after all the kids left. I taught him to do what the sheltered workshop did: folding and stuffing letters and similar tasks. Sam became calm, well behaved, even loving. He stopped spitting.

So what is infuriating? Sam could always talk. I don’t want to blame my fellow teachers, who were some of the finest people I’ve ever worked with. But the fact is that Sam did not talk because no one was willing to do what was necessary to reach out to him, to ask him for his best.

The story of Sam haunts me as I advocate for evidence in education. Well-meaning people who love children, who have devoted their lives to children, all too often fail to ask for the best from children because they choose to ignore the evidence. It is a full-blown crisis when a child does not learn to read. There are numerous programs with strong evidence of effectiveness in preventing or remediating reading failure. When educational leaders choose not to seek out these programs or practices, they are failing to ask the best from children, or to give children the best chance to succeed. Why is this OK? When our leaders fail to fund research, innovation, and diffusion of proven programs, they are failing hundreds or thousands of children who could have succeeded with a program not yet invented because no one insisted that it be supported, evaluated, and disseminated if it worked. Why is this OK?

On this 40th anniversary of Pub. L. 94-142, I hope we can take a moment to reflect on what that extraordinary act was supposed to achieve. It was not just intended to guarantee services to children with disabilities, not just to serve them in the least restrictive environment, not just to see that they had IEPs. The idea was to help children achieve the maximum degree of success they could achieve. The idea was to ask for the best from every child, and to give whatever it takes to see that they do succeed. With the advent of evidence-based reform, and a rising number of proven solutions available for use with all sorts of children, it should become increasingly possible to demand more of ourselves as educators. If we know how to do better for children at risk, there is no excuse for not doing it.