Getting the Best Mileage from Proven Programs

Wouldn't you love to have a car that gets 200 miles to the gallon? Or one that can go hundreds of miles on a battery charge? Or one that can accelerate from zero to sixty twice as fast as any on the road?

Such cars exist, but you can’t have them. They are experimental vehicles or race cars that can only be used on a track or in a lab. They may be made of exotic materials, or may not carry passengers or groceries, or may be dangerous on real roads.

In working on our Evidence for ESSA website (www.evidenceforessa.org), we see a lot of studies that are like these experimental cars. For example, there are studies of programs in which the researcher or her graduate students actually did the teaching, or in which students used innovative technology with one adult helper for every machine or every few machines. Such studies are fine for theory building or as pilots, but we do not accept them for Evidence for ESSA, because they could never be replicated in real schools.

However, there is a much more common situation to which we pay very close attention. These are studies in which, for example, teachers receive a great deal of training and coaching, but an amount that seems replicable, in principle. For example, we would reject a study in which the experimenters taught the program, but not one in which they taught ordinary teachers how to use the program.

In such studies, the problem comes in dissemination. If the studies validating a program provided a lot of professional development, we accept the program only if the disseminator offers a similar level of professional development, and only if its estimates of cost and personnel take that training into account. We state clearly on our website that these services must be provided at a level similar to what was provided in the research if the positive outcomes seen in the research are to be obtained.

The problem is that disseminators often offer schools a form of the program that was never evaluated, to keep costs low. They know that schools don’t like to spend a lot on professional development, and they are concerned that if they require the needed levels of PD or other services or materials, schools won’t buy their program. At the extreme end of this, there are programs that were successfully evaluated using extensive professional development, and then put their teacher’s manual on the web for schools to use for free.

A recent study of a program called Mathalicious illustrated the situation. Mathalicious is an on-line math course for middle school. An evaluation found that teachers randomly assigned to just get a license, with minimal training, did not obtain significant positive impacts, compared to a control group. Those who received extensive on-line training, however, did see a significant improvement in math scores, compared to controls.

When we write our program descriptions, we compare the implementation details reported in the research to what is said or required on the program's website. If these do not match, within reason, we try to make clear which elements were necessary for success.

Going back to the car analogy, our procedures eliminate those amazing cars that can only operate on special tracks, but we accept cars that can run on streets, carry children and groceries, and generally do what cars are expected to do. But if outstanding cars require frequent recharging, or premium gasoline, or have other important requirements, we’ll say so, in consultation with the disseminator.

In our view, evidence in education is not for academics, it’s for kids. If there is no evidence that a program as disseminated benefits kids, we don’t want to mislead educators who are trying to use evidence to benefit their children.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Evidence-Based Does Not Equal Evidence-Proven


As I speak to educational leaders about using evidence to help them improve outcomes for students, there are two words I hear all the time that give me the fantods (as Mark Twain would say):

Evidence-based

I like the first word, "evidence," just fine, but the second word, "based," sort of negates the first one. The ESSA evidence standards require programs that are evidence-proven, not just evidence-based, for various purposes.

"Evidence-proven" means that a given program, practice, or policy has been put to the test. Ideally, students, teachers, or schools have been assigned at random to use the experimental program or to remain in a control group. The program is provided to the experimental group for a significant period of time, at least a semester, and then final performance is compared on tests that are fair to both groups, using appropriate statistics.
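As a rough illustration of the comparison such a study boils down to, here is a minimal sketch in Python. All of the school names and score values are invented for illustration only; a real evaluation would use actual test data and more careful statistics.

```python
# Minimal sketch of an evidence-proven design: schools randomly assigned to the
# program or a control condition, then posttest scores compared.
# All numbers below are invented for illustration only.
import random
import statistics

random.seed(1)
schools = [f"school_{i}" for i in range(40)]
random.shuffle(schools)                      # random assignment
treatment, control = schools[:20], schools[20:]

# Hypothetical school-mean posttest scores after a semester of the program.
post = {s: random.gauss(752 if s in treatment else 745, 10) for s in schools}

t_mean = statistics.mean(post[s] for s in treatment)
c_mean = statistics.mean(post[s] for s in control)
sd = statistics.stdev(post.values())
print(f"difference = {t_mean - c_mean:+.1f} points, "
      f"effect size = {(t_mean - c_mean) / sd:+.2f}")
```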

If your doctor gives you medicine, it is evidence-proven. It isn't just the same color or flavor as something proven, and it isn't just generally in line with what research suggests might be a good idea. Instead, it has been found to be effective, compared to current standards of care, in rigorous studies.

“Evidence-based,” on the other hand, is one of those wiggle words that educators love to use to indicate that they are up-to-date and know what’s expected, but don’t actually intend to do anything different from what they are doing now.

Evidence-based is today’s equivalent of “based on scientifically-based research” in No Child Left Behind. It sure sounded good, but what educational program or practice can’t be said to be “based on” some scientific principle?

In a recent Brookings article, Mark Dynarski wrote about state ESSA plans and the conversations he has heard among educators. He says the plans are loaded with the words "evidence-based," but give little indication of which specific proven programs states plan to implement, or how they plan to identify, disseminate, implement, and evaluate them.

I hope the ESSA evidence standards give leaders in even a few states the knowledge and the courage to insist on evidence-proven programs, especially in very low-achieving “school improvement” schools that desperately need the very best approaches. I remain optimistic that ESSA can be used to expand evidence-proven practices. But will it in fact have this impact? That remains to be proven.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Half a Worm: Why Education Policy Needs High Evidence Standards

There is a very old joke that goes like this:

What’s the second-worst thing to find in your apple?  A worm.

What’s the worst?  Half a worm.

The ESSA evidence standards provide clearer definitions of “strong,” “moderate,” and “promising” levels of evidence than have ever existed in law or regulation. Yet they still leave room for interpretation.  The problem is that if you define evidence-based too narrowly, too few programs will qualify.  But if you define evidence-based too broadly, it loses its meaning.

We’ve already experienced what happens with a too-permissive definition of evidence.  In No Child Left Behind, “scientifically-based research” was famously mentioned 110 times.  The impact of this, however, was minimal, as everyone soon realized that the term “scientifically-based” could be applied to just about anything.

Today, we are in a much better position than we were in 2002 to insist on relatively strict evidence of effectiveness, both because we have better agreement about what constitutes evidence of effectiveness and because we have a far greater number of programs that would meet a high standard.  The ESSA definitions are a good consensus example.  Essentially, they define programs with “strong evidence of effectiveness” as those with at least one randomized study showing positive impacts using rigorous methods, and “moderate evidence of effectiveness” as those with at least one quasi-experimental study.  “Promising” is less well-defined, but requires at least one correlational study with a positive outcome.

Where the half-a-worm concept comes in, however, is when "evidence-based" is defined more broadly than these categories. For example, ESSA also recognizes programs supported only by a "strong theory." To me, that goes too far, and begins to water down the concept. What program in all of education cannot justify a "strong theory of action"?

Further, even in the top categories, there are important questions about what qualifies. In school-level studies, should we insist on school-level analyses (i.e., hierarchical linear modeling, or HLM, which accounts for the clustering of students within schools)? Every methodologist would say yes, as I do, but this is not specified. Should we accept researcher-made measures? I say no, based on a great deal of evidence indicating that such measures inflate effects.
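For readers less familiar with what a school-level analysis looks like in practice, here is a minimal sketch using Python's statsmodels library. The data are entirely synthetic, invented only to show the structure: a random intercept for each school means that the standard error of the treatment effect reflects the number of schools, not just the number of students.

```python
# Sketch of a multilevel (school-level) analysis; data and effects are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(30):
    treated = school < 15                  # half the schools receive the program
    school_bump = rng.normal(0, 5)         # school-level variation
    for _ in range(25):                    # 25 students per school
        score = 740 + (5 if treated else 0) + school_bump + rng.normal(0, 45)
        rows.append({"school": school, "treated": int(treated), "score": score})
data = pd.DataFrame(rows)

# Random intercept for school: students are not treated as independent observations.
result = smf.mixedlm("score ~ treated", data, groups=data["school"]).fit()
print(result.summary())
```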

Fortunately, due to investments made by IES, i3, and other funders, the number of programs that meet strict standards has grown rapidly. Our Evidence for ESSA website (www.evidenceforessa.org) has so far identified 101 PK-12 reading and math programs, using strict standards consistent with ESSA definitions. Among these, more than 60% meet the "strong" standard. There are enough proven programs in every subject and grade level to give educators real choices. And we add more each week.

This large number of programs meeting strict evidence standards means that insisting on rigorous evaluations, within reason, does not mean that we end up with too few programs to choose among. We can have our apple pie and eat it, too.

I'd love to see federal programs of all kinds encouraging use of programs with rigorous evidence of effectiveness.  But I'd rather see a few programs that meet a strict definition of "proven" than a lot of programs that only meet a loose definition.  Twenty good apples are much better than applesauce of dubious origins!

This blog is sponsored by the Laura and John Arnold Foundation

Proven Tutoring Approaches: The Path to Universal Proficiency

There are lots of problems in education that are fundamentally difficult. Ensuring success in early reading, however, is an exception. We know what skills children need in order to succeed in reading. No area of teaching has a better basis in high-quality research. Yet the reading performance of America's children is not improving at an adequate pace. Reading scores have hardly changed in the past decade, and gaps between white, African-American, and Hispanic students have been resistant to change.

In light of the rapid growth in the evidence base, and of the policy focus on early reading at the federal and state levels, this is shameful. We already know a great deal about how to improve early reading, and we know how to learn more. Yet our knowledge is not translating into improved practice and improved outcomes on a large enough scale.

There are lots of complex problems in education, and complex solutions. But here's a really simple one: proven tutoring, one-to-one or in small groups, for every struggling reader who needs it.

Over the past 30 years researchers have experimented with all sorts of approaches to improve students’ reading achievement. There are many proven and promising classroom approaches, and such programs should be used with all students in initial teaching as broadly as possible. Effective classroom instruction, universal access to eyeglasses, and other proven approaches could surely reduce the number of students who need tutors. But at the end of the day, every child must read well. And the only tool we have that can reliably make a substantial difference at scale with struggling readers is tutors, using proven one-to-one or small-group methods.

I realized again why tutors are so important in a proposal I'm making to the State of Maryland, which wants to bring all or nearly all students to "proficient" on its state test, the PARCC. "Proficient" on the PARCC is a score of 750, with a standard deviation of about 50. The state mean is currently around 740. I made a colorful chart (below) showing "bands" of scores below 750, to illustrate how far students at each level have to go to reach proficiency.

[Chart: bands of PARCC scores below the 750 proficiency cut-off, each band spanning an effect size of 0.20]

Each band covers an effect size of 0.20. There are several classroom reading programs with effect sizes this large, so if schools adopted them, they could move children scoring at 740 to 750. These programs can be found at www.evidenceforessa.org. But implementing these programs alone still leaves half of the state’s children not reaching “proficient.”

What about students at 720? They need 30 points, or an effect size of +0.60. The best one-to-one tutoring programs can achieve gains of this size, and they are essentially the only interventions that can.
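The conversion is simple enough to sketch: divide the point gap to 750 by the PARCC standard deviation of roughly 50 to get the effect size a program would need to deliver. Here is that arithmetic in a few lines of Python, using the figures above.

```python
# Points needed to reach proficiency, expressed as effect sizes (gap / SD),
# using the figures from the text: proficient = 750, SD of about 50.
PROFICIENT = 750
SD = 50

for current_score in (740, 730, 720, 710, 700):
    gap = PROFICIENT - current_score
    needed_effect_size = gap / SD
    print(f"score {current_score}: {gap} points to go, "
          f"effect size needed = +{needed_effect_size:.2f}")
```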

Here are mean effect sizes for various reading tutoring programs with strong evidence:

[Chart: mean effect sizes for proven one-to-one and small-group reading tutoring programs]

 

 

As this chart shows, one-to-one tutoring, by well-trained teachers or paraprofessionals using proven programs, can potentially have the impacts needed to bring most students scoring 720 (needing 30 points or an effect size of +0.60) to proficiency (750). Three programs have reported effect sizes of at least +0.60, and several others have approached this level. But what about students scoring below 720?

So far I've been sticking to established facts, drawn from studies of tutoring programs that are, in most cases, already being disseminated. Now I'm entering the region of well-justified supposition. Almost all studies of tutoring span just one year or less. But what if the lowest achievers could receive multiple years of tutoring, if necessary?

One study, over 2½ years, did find an effect size of +0.68 for one-to-one tutoring. Could we do better than that? Most likely. In addition to providing multiple years of tutoring, it should be possible to design programs to achieve one-year effect sizes of +1.00 or more. These may incorporate technology or personalized approaches specific to the needs of individual children. Using the best programs for multiple years, if necessary, could increase outcomes further. Also, as noted earlier, using proven programs other than tutoring for all students may increase outcomes for students who also receive tutoring.

But isn't tutoring expensive? Yes, it is. But it is not as expensive as the costs of reading failure: remediation, special education, disappointment, and delinquency. If we could greatly improve the reading performance of low achievers, this would of course reduce inequities across the board. Reducing inequities in educational outcomes could reduce inequities in our entire society, an outcome of enormous importance.

Even providing a substantial amount of teacher tutoring could, by my calculations, increase total state education expenditures (in Maryland) by only about 12%. These costs could be reduced greatly or even eliminated by cutting expenditures on ineffective programs, reducing special education placements, and other savings. Having some tutoring done by part-time teachers may reduce costs. Using small-group tutoring (fewer than 6 students at a time) for students with milder problems could save a great deal of money. Even at full cost, the necessary funding could be phased in over a period of 6 years, at 2% a year.

The bottom line is that low levels of achievement, and the large gaps associated with economic and racial differences, could be improved a great deal using methods already proven to be effective and already widely available. Educators and policy makers are always promising policies that bring every child to proficiency: "No Child Left Behind" and "Every Student Succeeds" come to mind. Yet if these outcomes are truly possible, why shouldn't we be pursuing them, with every resource at our disposal?

How Networks of Proven Programs Could Help State-Level Reform

America is a great country, but it presents a serious problem for school reformers. The problem is that it is honkin’ humongous, with strong traditions of state and local autonomy. Reforming even a single state is a huge task, because most of our states are the size of entire small nations. (My small state, Maryland, has about the population of Scotland, for example.) And states, districts, schools, and teachers are all kind of prickly about taking orders from anyone further up the hierarchy.

The Every Student Succeeds Act (ESSA) puts a particular emphasis on state and local control, a relief after the emphasis on mandates from Washington central to No Child Left Behind. ESSA also contains a welcome focus on using evidence-based programs.

ESSA is new, and state, district and school leaders are just now grappling with how to use the ESSA opportunities to move forward on a large scale. How can states hope to bring about major change on a large scale, working one school at a time?

The solution to this problem might be for states, large districts, or coalitions of smaller districts to offer a set of proven, whole school reform models to a number of schools in need of assistance, such as Title I schools. School leaders and their staffs would have opportunities to learn about programs, find some appropriate to their needs, ideally visit schools using the programs now, and match the programs with their own needs, derived from a thorough needs assessment. Ultimately, all school staff might vote, and at least 80% would have to vote in favor. The state or district would set aside federal or state funds to enable schools to afford the program they have chosen.

All schools in the state, district, or consortium that selected a given program could then form a network. The network would have regular meetings among principals, teachers of similar grades, and other job-alike staff members, to provide mutual help, share ideas, and interact cost-effectively with representatives of program providers. Network members would share a common language, and drawing from common experiences could be of genuine help to each other. The network arrangement would also reduce the costs of adopting each program, because it would create local scale to reduce costs of training and coaching.

The benefits of such a plan would be many. First, schools would be implementing programs they selected, and school staffs would be likely to put their hearts and minds into making the program work. Because the programs would all have been proven to be effective in the first place, they would be very likely to be measurably effective in these applications.

There might be schools that would initially opt not to choose anything, and this would be fine. Such schools would have opportunities each year to join colleagues in one of the expanding networks as they see that the programs are working in their own districts or regions.

As the system moved forward, it would become possible to do high-quality evaluations of each of the programs, contributing to knowledge of how each program works in particular districts or areas.

As the number of networked schools increased across a given state, it would begin to see widespread and substantial gains on state assessments. Further, all involved in this process would be learning not only the average effectiveness of each program, but also how to make each one work, and how to use programs to succeed with particular subgroups or solve particular problems. Networks, program leaders, and state, district, and school leaders, would get smarter each year about how to use proven programs to accelerate learning among students.

How could this all work at scale? The answer is that there are nonprofit organizations and companies that are already capable of working with hundreds of schools. At the elementary level, examples include the Children’s Literacy Initiative, Positive Action, and our own Success for All. At the secondary level, examples include BARR, the Talent Development High School, Reading Apprenticeship, and the Institute for Student Achievement. Other programs currently work with specific curricula and could partner with other programs to provide whole-school approaches, or some schools may only want or need to work on narrower problems. The programs are not that expensive at scale (few are more than $100 per student per year), and could be paid for with federal funds such as school improvement, Title I, Title II, and Striving Readers, or with state or local funds.

The proven programs do not ask schools to reinvent the wheel, but rather to put their efforts and resources toward adopting and effectively implementing proven programs and then making necessary adaptations to meet local needs and circumstances. Over time this would build capacity within each state, so that local people could take increasing responsibility for training and coaching, further reducing costs and increasing local “flavor.”

We’ve given mandates 30 years to show their effectiveness. ESSA offers new opportunities to do things differently, allowing states and districts greater freedom to experiment. It also strongly encourages the use of evidence. This would be an ideal time to try a simple idea: use what works.

This blog is sponsored by the Laura and John Arnold Foundation

Where Will the Capacity for School-by-School Reform Come From?

In recent months, I’ve had a number of conversations with state and district leaders about implementing the ESSA evidence standards. To its credit, ESSA diminishes federal micromanaging, and gives more autonomy to states and locals, but now that the states and locals are in charge, how are they going to achieve greater success? One state department leader described his situation in ESSA as being like that of a dog who’s been chasing cars for years, and then finally catches one. Now what?

ESSA encourages states and local districts to help schools adopt and effectively implement proven programs. For school improvement, portions of Title II, and Striving Readers, ESSA requires use of proven programs. Initially, state and district folks were worried about how to identify proven programs, though things are progressing on that front (see, for example, www.evidenceforessa.org). But now I’m hearing a lot more concern about capacity to help all those individual schools do needs assessments, select proven programs aligned with their needs, and implement them with thought, care, and knowledgeable application of implementation science.

I’ve been in several meetings where state and local folks ask federal folks how they are supposed to implement ESSA. “Regional educational labs will help you!” they suggest. With all due respect to my friends in the RELs, this is going to be a heavy lift. There are ten of them, in a country with about 52,000 Title I schoolwide projects. So each REL is responsible for, on average, five states, 1,400 districts, and 5,200 high-poverty schools. For this reason, RELs have long been primarily expected to work with state departments. There are just not enough of them to serve many individual districts, much less schools.

State departments of education and districts can help schools select and implement proven programs. For example, they can disseminate information on proven programs, make sure that recommended programs have adequate capacity, and perhaps hold effective methods "fairs" to introduce people in their state to program providers. But states and districts rarely have the capacity to implement proven programs themselves. It is very hard to build state and local capacity to support specific proven programs. For example, when state or district funding declines, the departments devoted to professional development are often among the first to be cut back or eliminated. For this reason, few state departments or districts have large, experienced professional development staffs. Further, constant changes in state and local superintendents, boards, and funding levels make it difficult to build up professional development capacity over a period of years.

Because of these problems, schools have often been left to make up their own approaches to school reform. This happened on a wide scale in the NCLB School Improvement Grants (SIG) program, where federal mandates specified very specific structural changes but left the essentials (teaching, curriculum, and professional development) up to the locals. The MDRC evaluation of SIG schools found that they made no better gains than similar, non-SIG schools.

Yet there is substantial underutilized capacity available to help schools across the U.S. to adopt proven programs. This capacity resides in the many organizations (both non-profit and for-profit) that originally created the proven programs, provided the professional development that caused them to meet the “proven” standard, and likely built infrastructure to ensure quality, sustainability, and growth potential.

The organizations that created proven programs have obvious advantages (their programs are known to work), but they also have several less obvious advantages. One is that organizations built to support a specific program have a dedicated focus on that program. They build expertise on every aspect of the program. As they grow, they hire capable coaches, usually ones who have already shown their skills in implementing or leading the program at the building level. Unlike states and districts that often live in constant turmoil, reform organizations and for-profit professional development organizations are likely to have stable leadership over time. In fact, for a high-poverty school engaged with a program provider, that provider and its leadership may be the only partner stable enough to help it improve its core teaching over many years.

State and district leaders play major roles in accountability, management, quality assurance, and personnel, among many other issues. With respect to implementation of proven programs, they have to set up conditions in which schools can make informed choices, monitor the performance of provider organizations, evaluate outcomes, and ensure that schools have the resources and supports they need. But truly reforming hundreds of schools in need of proven programs one at a time is not realistic for most states and districts, at least not without help. It makes a lot more sense to seek capacity in organizations designed to provide targeted professional development services on proven programs, and then coordinate with these providers to ensure benefits for students.

This blog is sponsored by the Laura and John Arnold Foundation

Pilot Studies: On the Path to Solid Evidence

This week, the Education Technology Industry Network (ETIN), a division of the Software & Information Industry Association (SIIA), released an updated guide to research methods, authored by a team at Empirical Education Inc. The guide is primarily intended to help software companies understand what is required for studies to meet current standards of evidence.

In government and among methodologists and well-funded researchers, there is general agreement about the kind of evidence needed to establish the effectiveness of an education program intended for broad dissemination. To meet its top rating (“meets standards without reservations”) the What Works Clearinghouse (WWC) requires an experiment in which schools, classes, or students are assigned at random to experimental or control groups, and it has a second category (“meets standards with reservations”) for matched studies.

These WWC categories more or less correspond to the Every Student Succeeds Act (ESSA) evidence standards (“strong” and “moderate” evidence of effectiveness, respectively), and ESSA adds a third category, “promising,” for correlational studies.

Our own Evidence for ESSA website follows the ESSA guidelines, of course. The SIIA guide explains all of this.

Despite the overall consensus about the top levels of evidence, the problem is that doing studies that meet these requirements is expensive and time-consuming. Software developers, especially small ones with limited capital, often do not have the resources or the patience to do such studies. And any organization that has developed something new may not want to invest substantial resources in large-scale evaluations until it has some indication that the program is likely to show well in a larger, longer, and better-designed evaluation. There is, however, a path to high-quality evaluations, and it starts with pilot studies.

The SIIA Guide usefully discusses this problem, but I want to add some further thoughts on what to do when you can’t afford a large randomized study.

1. Design useful pilot studies. Evaluators need to make a clear distinction between full-scale evaluations, intended to meet WWC or ESSA standards, and pilot studies (the SIIA Guidelines call these “formative studies”), which are just meant for internal use, both to assess the strengths or weaknesses of the program and to give an early indicator of whether or not a program is ready for full-scale evaluation. The pilot study should be a miniature version of the large study. But whatever its findings, it should not be used in publicity. Results of pilot studies are important, but by definition a pilot study is not ready for prime time.

An early pilot study may be just a qualitative study, in which developers and others might observe classes, interview teachers, and examine computer-generated data on a limited scale. The problem in pilot studies is at the next level, when developers want an early indication of effects on achievement, but are not ready for a study likely to meet WWC or ESSA standards.

2. Worry about bias, not power. Small, inexpensive studies pose two types of problems. One is the possibility of bias, discussed in the next section. The other is lack of power, mostly meaning having a large enough sample to determine that a potentially meaningful program impact is statistically significant, or unlikely to have happened by chance. To understand this, imagine that your favorite baseball team adopts a new strategy. After the first ten games, the team is doing better than it did last year, in comparison to other teams, but this could have happened by chance. After 100 games? Now the results are getting interesting. If 10 teams all adopt the strategy next year and they all see improvements on average? Now you’re headed toward proof.

During the pilot process, evaluators might compare multiple classes or multiple schools, perhaps assigned at random to experimental and control groups. There may not be enough classes or schools for statistical significance yet, but if the mini-study avoids bias, the results will at least be in the ballpark (so to speak).
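To make the power point concrete, here is a rough simulation sketch in Python (assuming numpy and scipy are available). It asks how often a study with a real but modest effect of +0.30 standard deviations would reach statistical significance at different sample sizes; the specific numbers are illustrative only, not a substitute for a proper power analysis.

```python
# Rough power simulation: with a true effect size of +0.30, how often does a
# simple two-group comparison reach p < .05 as the number of units per group grows?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def share_significant(n_per_group, effect_size=0.30, trials=2000):
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        program = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(program, control)
        if p_value < 0.05:
            hits += 1
    return hits / trials

for n in (10, 30, 100):
    print(f"{n:3d} per group: significant in about "
          f"{share_significant(n):.0%} of simulated studies")
```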

3. Avoid bias. A small experiment can be fine as a pilot study, but every effort should be made to avoid bias. Otherwise, the pilot study will give a result far more positive than the full-scale study will, defeating the purpose of doing a pilot.

Examples of common sources of bias in smaller studies are as follows.

a. Use of measures made by developers or researchers. These measures typically produce greatly inflated impacts.

b. Implementation of gold-plated versions of the program. In small pilot studies, evaluators often implement versions of the program that could never be replicated. Examples include providing additional staff time that could not be repeated at scale.

c. Inclusion of highly motivated teachers or students in the experimental group, but not in the control group. For example, matched studies of technology often exclude teachers who did not implement "enough" of the program. The problem is that the full-scale experiment (and real life) includes all kinds of teachers, so excluding teachers who could not or did not want to engage with technology overstates the likely impact at scale in ordinary schools. Even worse, excluding students who did not use the technology enough may bias the study toward more capable students.

4. Learn from pilots. Evaluators, developers, and disseminators should learn as much as possible from pilots. Observations, interviews, focus groups, and other informal means should be used to understand what is working and what is not, so that when the program is evaluated at scale, it is at its best.

 

***

As evidence becomes more and more important, publishers and software developers will increasingly be called upon to prove that their products are effective. However, no program should have its first evaluation be a 50-school randomized experiment. Such studies are indeed the “gold standard,” but jumping from a two-class pilot to a 50-school experiment is a way to guarantee failure. Software developers and publishers should follow a path that leads to a top-tier evaluation, and learn along the way how to ensure that their programs and evaluations will produce positive outcomes for students at the end of the process.

 

This blog is sponsored by the Laura and John Arnold Foundation