Farewell to the Walking Encyclopedia

Like just about everyone these days, I carry a digital device in my pocket at all times. At the office, I have a powerful desktop, and in the evening, I curl up with my iPad. Each of these contains the knowledge and wisdom of the ages. Kids and parents have as much access as I do.

The ubiquity of knowledge due to digital devices has led many educational theorists and practitioners to wonder whether teachers are even necessary anymore. Can’t everyone just look things up, do calculations, and generally provide themselves with just-in-time wisdom-on-the-spot?

Unfortunately, the truth is that digital devices are not yet transforming education. But what they are doing is putting the last nail in the coffin of the teacher as walking encyclopedia.

In the old days, a teacher could contribute a lot just by knowing more than the students. Teaching was composed of content knowledge (what the teacher knows and can transmit) and pedagogy (how the teacher manages classrooms, motivates students, makes complex ideas clear, and teaches learning-to-learn skills). Content knowledge is still crucial, but a “walking encyclopedia” is of declining value when everyone can find out everything all the time.

Does the decline of the walking encyclopedia diminish the role of the teacher? Just the opposite. When kids are immersed in too much information, what they need is a guide to help them learn how to comprehend complex texts and understand and organize information. They need to know how to write, how to solve complex problems, how to set up and carry out experiments, how to work well with others, how to contextualize their own thoughts to reason productively, how to manage their own behavior, how to maintain positive motivation, and how to be productive even in the face of difficulties. Each of these objectives, and many more, is at the heart of effective pedagogy. All are aided by content knowledge, of course, but a teacher who knows a lot about his or her discipline but not much about managing and motivating students is not going to succeed in today’s world.

It is my experience that the teaching innovations most likely to enhance student learning are hardly ever those that provide new, improved textbooks or digital content. Instead, they almost invariably provide extensive professional development to teachers, followed up by in-school coaching. In each case, the professional development and coaching focus on pedagogy, not content. We’ve found the same pattern in all subjects and grade levels.

The task ahead of us in evidence-based education, I believe, is to use evidence of what works in pedagogy to help teachers grow as motivating, engaging, self-aware learning guides, capable of using general and subject-specific pedagogies effectively to help students become eager and capable learners. My encyclopedia walks with me in my pocket wherever I go. That’s true of students, too. They don’t need another at the front of their class. What they do need is someone who can make them care about, comprehend, organize, synthesize, and communicate the megabytes of information they carry.

Proven Programs vs. Local Evidence

All evidence from rigorous studies is good evidence, as long as it addresses actionable policies or practices that could make a difference for student outcomes. However, I think it is useful to make a distinction between two kinds of good evidence.

One kind of good evidence relates to proven programs. These are approaches to teaching various subjects, increasing graduation rates, improving social-emotional or behavioral outcomes, remediating or preventing learning deficits, and so on. Examples might include programs to improve outcomes in preschool, early reading programs, science programs, math programs, bilingual programs, or school-to-work programs. A hallmark of proven programs is that they are designed for replication. That is, if the findings of local, regional, or national evaluations are positive, the program could, in principle, be used elsewhere, perhaps with modest adjustments to local circumstances and needs.

The other type of good evidence, local evidence, is derived internally to a given school, district, city, or state. Such evidence helps policymakers and educators understand their own situation, opportunities, and problems, and to evaluate policies or practices already underway or being considered. Such data may be particularly valued by the local leadership, because it addresses problems they care about, but it is not intended to produce answers to universal problems, except perhaps as a byproduct. For example, local evidence might address the impact of a local change in graduation requirements or policies on access to bilingual programs or teacher certification procedures, without any particular concern for the degree to which these findings might inform other districts or states. Other districts and states may learn from the example of the district or state with the local evidence, but they may never hear about it or may not think it is relevant to their own systems.

Of course, proven programs and local evidence can overlap, as when a given district or state implements and evaluates a replicable program that responds to its own needs, or when a local district collaborates in a national evaluation of a program clearly intended for national application. Yet assessments of proven programs and local evaluations usually differ in several ways. First, research on proven programs is usually funded by federal agencies or by the companies that developed the program, so national impact is intended from the outset. Second, findings of evaluations of proven programs are usually published, or made nationally available in some form, while local evaluations may or may not be made available beyond an internal report. Every year, Division H of the American Educational Research Association (AERA) makes available award-winning local evaluations of all sorts of programs and policies, and for decades I have reviewed these to find high-quality evaluations of approaches with national significance, which I then include in the Best Evidence Encyclopedia (BEE) if they meet BEE standards. I always find some real gems. These terrific evaluations are rarely published in journals, however, since district and state research directors have little incentive, time, or resources to publish them.

Research on proven programs is taking on greater importance because federal initiatives such as Investing in Innovation (i3) are producing and evaluating such programs, and practical programs such as School Improvement Grants (SIG) and Title II SEED grants now encourage use of programs with strong evidence of effectiveness. As federal programs increasingly encourage use of proven programs when they are appropriate, research evaluating proven programs will become more central to evidence-based reform.

Proven programs and local evaluations play different, complementary roles in education reform. The difference is something like the difference in medicine between research evaluating new drugs, devices, and procedures, on one hand, and research on the operations and outcomes of a given hospital, group of hospitals, or state health system, on the other. When a new drug, for example, is found to be effective for patients who fit a particular profile in terms of diagnosis, age, gender, and other conditions, then this drug is immediately applicable nationwide. If it is clearly more effective than current drugs and has no more side effects or other downsides, the new drug may become the new standard of care very quickly, throughout the U.S. and perhaps the world.

In contrast, local evaluations of hospitals and medical systems are likely to inform the local leadership but are less likely to inform the whole country. For example, a local evaluation might check levels of bacteria in emergency rooms, note the time it takes for ambulances to get from car accidents to the hospital, or assess patient satisfaction with their treatment, and then measure changes in these factors when the hospital implements new procedures intended to improve them.

Both types of research and development are valuable, but each has its own particular benefits and drawbacks. Studies of proven programs are more likely to be published or at least made available in the Education Resources Information Center (ERIC) at the Institute of Education Sciences (IES) or on web sites, as noted earlier, especially if academics were involved (academics publish or perish, of course). If positive outcomes on learning are found, proven programs are more likely to have capacity to go to scale nationally; local districts have little incentive or capacity to do this beyond their own borders. Creators of proven programs are likely to have a longstanding interest in their program, while interest in local evidence may evaporate when the superintendent who commissioned it leaves office. Proven programs are likely to contribute to national evidence or experience about what works, while local evaluations may be done to solve a local problem, with little interest in how the findings add to broader understandings.

On the other hand, local evaluations are more likely to engage local decision makers and educators from the beginning, and therefore to benefit from their wisdom of practice. Because the local leadership was involved all along, they may have greater commitment to obtaining good data and then acting on it. Local evaluations exist in a particular context, which may make the findings of interest in that context and in other places with similar contexts. For example, educators in El Paso, Texas, are sure to pay a lot more attention to research done in El Paso or Laredo or Brownsville than to research done in Philadelphia or Traverse City, Michigan, or even Phoenix or Los Angeles.

Proven program research and local evaluations are not in conflict with each other, but it is useful to understand how they differ and how they can best work together to improve outcomes for students. As we build up stronger and broader evidence of both kinds, it will be important to learn how each contributes to learning about optimal practice in education.

The Future of Title I – 2040

When you get to a certain age, you find increasing evidence that you’ve been yammering on the same topics for a very long time, usually to no great benefit. I just made one of those discoveries. Going through some old publications, I found a 1991 article I wrote on the future of Title I (back then it was called Chapter I). I was writing just after the 25th anniversary of Title I, painting a picture of how Title I should be on its 50th anniversary. That would be now.

When you’re writing about how things will be in 25 years, you naturally start thinking about flying cars, robots and so on. However, what I was proposing turned out to be even more fantastical and unlikely. I proposed that Title I be used for whole-school, comprehensive approaches with strong evidence of effectiveness. This would mean investing in development and evaluation of whole-school programs, and then encouraging schools to adopt proven approaches. What a wacky thought! With students at risk, use what has been proven to work! Of course, this did not come to pass. In fact, even in School Improvement Grants (SIG), special funding for the lowest-performing schools in the country, the concept that proven approaches would be a good idea, at least in these schools, has been slow to catch on until recently.

Actually, during the 1990s, there was a movement to do the very thing I was proposing. It was called comprehensive school reform (CSR). CSR programs provided innovative, integrated approaches to curriculum, instruction, provisions for struggling students, professional development, leadership, parent involvement and more. Many were supported by New American Schools, a private foundation with funding mostly from large corporations. In the late 1990s, the federal government provided up to $50,000 per year for three years for schools to use CSR models. More than 2000 schools did so, more often than not using their own Title I funds, not the special CSR funds. Research evaluating these programs found that some of them were effective in accelerating achievement in Title I schools. Examples were James Comer’s School Development Program, America’s Choice, Modern Red Schoolhouse and our Success for All approach. However, a new administration in 2002 had different priorities, and the CSR movement largely disappeared. Success for All is the only survivor at any scale.

Fast-forwarding to the present, very few developers are creating or evaluating new whole-school models, but research funded by the Investing in Innovation (i3) program, the Institute of Education Sciences (IES), and other funders is increasingly identifying effective approaches to particular subject areas and objectives, which could be assembled into new whole-school models. Developments in the capabilities and availability of technology add possibilities for making whole-school innovations more effective, less expensive and easier to train and monitor.

So here is my hopeful prediction for Title I on its 75th birthday, in 2040 (!!!).

In 2040, Title I will serve primarily as special funding for high-poverty schools to help them adopt proven, whole-school approaches. There will be many such models, each of which has been proven in rigorous experiments to improve student learning in reading and math. School staffs will have opportunities to review existing models and select those they believe to be most appropriate to their needs, with confidence that whichever models they choose will be effective. Models will include elements personalized for the needs of struggling students and others who need unique accommodations, greatly reducing the need for traditional special education.
Title I funds will be used for materials, software, professional development, coaching, additional personnel and other requirements for implementation of a particular proven approach.

A substantial enterprise of development and evaluation will continuously produce new whole-school models and improvements in every aspect of existing models. Assuming that curriculum materials, assessments, lesson materials and other components will make extensive use of technology, it should be relatively easy for program developers to routinely collect trace data on student progress, which will enable teachers and program developers to constantly improve implementation in each participating school and to test innovations large and small.

Of course, all of this could be done in five years, not 25, if we decided to make the success of disadvantaged students a priority. However, given the pace of evidence-based reform in Title I so far, maybe 25 years is more realistic. I am enthusiastic and hopeful about changes in this direction over the past few years, but there is still a lot to be done to get Title I to be a fund for evidence-proven approaches.

Then again, maybe flying cars and robots are a better bet.

New Year’s Resolutions for Evidence-Based Reform

I want to wish all of my readers a very happy New Year. The events of 2015, particularly the passage of the Every Student Succeeds Act (ESSA), give us reason to hope that evidence may finally become a basis for educational decisions. The appointment of John King, clearly the most evidence-oriented and evidence-informed Secretary of Education ever to serve in that office, is another harbinger of positive change. Yet these and other extraordinary events only create possibilities for major change, not certainties. What happens next depends on our political leaders, on those of you reading this blog, and on others who believe that children deserve proven programs. In recognition of this, I would suggest a set of New Year’s resolutions for us all to collectively achieve in the coming year.

1. Get beyond D.C. The evidence movement in education has some sway within a half-mile radius of K Street and in parts of academia, but very little in the rest of the country. ESSA puts a lot of emphasis on moving power from Washington to the states, and even if this were not true, it is now time to advocate in state capitols for use of proven programs and evidence-informed decisions. In the states and even in Washington, evidence-based reform needs a lot more allies.

2. Make SIG work. School Improvement Grants (SIG) written this spring for implementation in fall 2016 continue to serve the lowest-performing 5% of schools in each state. Schools can choose among six models: the four original ones (i.e., school closure, charter-ization, transformation, and turnaround) plus two new ones, proven whole-school reforms and state-developed models, which may include proven programs. SIG is an obvious focus for evidence, since these are schools in need of sure-fire solutions, and the outcomes of SIG with the original four models have been less than impressive. Also, since SIG is already well underway, it could provide an early model of how proven programs could transform struggling schools. But this will only happen if states and schools are encouraged to choose the proven program option. Perceived success in SIG would go a long way toward building support for use of proven programs more broadly. (School Improvement will undergo significant changes the following year pursuant to ESSA, and this merits its own blog post, but it’s important to note here that states will be required to include evidence-based interventions as part of their plan, so moving toward evidence now may help ease their transition later.)

3. Celebrate successes of Investing in Innovation (i3). The 2010 cohort, the first and largest cohort of i3 grantees, is beginning to report achievement outcomes from third-party evaluations. As in any set of rigorous evaluations, studies that did not find significant differences are sure to outnumber those that do. We need to learn everything we can from these evaluations, whatever their findings, but there is a particular need to celebrate the findings of those studies that did find positive impacts. These provide support for the entire concept of evidence-based reform, and give practicing educators programs with convincing evidence that are ready to go.

4. In as many federal discretionary funding programs as possible, provide preference points for applications proposing to implement proven programs. ESSA names several (mostly small) funding programs that will provide preference points to proposals that commit to using proven programs. This list should be expanded to include any funding program in which proven programs exist. Is there any reason not to encourage use of proven programs? It costs nothing, does not require use of any particular program, and makes positive outcomes for children a lot more likely.

5. Encourage use of proven programs in formula funding, such as Title I. Formula funding is where the big money goes, and activities funded by these resources need to have as strong an evidence base as possible. Incentives to match formula funding, as in President Obama’s Leveraging What Works proposal, would help, of course, but are politically unlikely at the moment. However, plain old encouragement from Washington and state departments of education could be just as effective. Who can argue against using Title I funds, for example, to implement proven approaches? Will anyone stand up to advocate for ineffective or unproven approaches for disadvantaged children, once the issue is out in the open?

These resolutions are timely, because, at least in my experience, both government and the field adjust to new legislation in the first year, and then whatever sticks stays the same for many years. Whatever does not stick is hard to add in later. The evidence elements of ESSA will matter to the extent our leaders make them matter, right now, in 2016. Let’s do whatever we can to help them make the right choices for our children.