Many Programs Meet New Evidence Standards


One of the most common objections to evidence-based reform is that there are too few programs with strong evidence of effectiveness to start encouraging schools to use proven programs. The concern is that it looks bad if a policy of “use what works” leads educators to look for proven programs, only to find that there are very few such programs in a given area, or that there are none at all.

The lack of proven programs is indeed a problem in some areas, such as science and writing, but it is not a problem in others, such as reading and math. There is no reason to hold back on encouraging evidence where it exists.

The U.S. Department of Education has proposed changes to its EDGAR regulations to define “strong” and “moderate” levels of evidence supporting educational programs. These standards use information from the What Works Clearinghouse (WWC), and are very similar to those used in the federal Investing in Innovation (i3) program to designate programs eligible for “scale-up” or “validation” grants, respectively.

As an exercise, my colleagues and I checked to see how many elementary reading programs currently exist that qualify as “strong” or “moderate” according to the new EDGAR standards. This necessitated excluding WWC-approved programs that are not actively disseminated and those that would not meet current WWC standards (2.1 or 3.0), and adding programs not yet reviewed by WWC but that appear likely to meet its standards.

Here’s a breakdown of what we found.

Beginning Reading (K-1)
- Total programs: 26
- School/classroom programs: 16
- Small-group tutoring: 4
- One-to-one tutoring: 6

Upper Elementary Reading (2-6)
- Total programs: 17
- School/classroom programs: 12
- Small-group tutoring: 4
- One-to-one tutoring: 1

The total number of unique programs is 35 (many of the programs covered both beginning and upper-elementary reading). Of these, only four met the EDGAR “strong” criterion, but the “moderate” category, which requires a single rigorous study with positive impacts, had 31 programs.

We’ll soon be looking at secondary reading and elementary and secondary math, but the pattern is clear. While few programs will meet the highest EDGAR standard, many will meet the “moderate” standard.

Here’s why this matters. The EDGAR definitions can be referenced in any competitive request for proposals to encourage or incentivize the use of proven programs, perhaps by offering two competitive preference points for proposals to implement programs meeting the “moderate” standard and three points for proposals to adopt programs meeting the “strong” standard.

Since there are many programs to choose from, educators will not feel constrained by this process. In fact, many may be glad to learn about the range of offerings available and to obtain objective information on their effectiveness. If none of the programs fit their needs, they can choose something unevaluated and forgo the extra points, but even then they will have considered evidence as a basis for their decisions. And that would be a huge step forward.


EDGAR and the Two-Point Conversion


Once upon a time, there was a football player named EDGAR. His team was in the state championship. It was the fourth quarter, and they were down by seven points. But just as time ran out, EDGAR ran around the opposing line and scored a touchdown.

EDGAR’s coach now had a dilemma. Should he try a safe kick for one point, putting the game into overtime, or go for a much more difficult two-point conversion, one chance to score from the three-yard line?

Evidence-based reform faces a similar dilemma. Several months ago, the U.S. Department of Education proposed some additions to EDGAR, which is not a football player but a stultifyingly boring doorstopper of a book of regulations for grants and contracts. These new regulations, as I noted in an earlier blog, are really exciting, at least to evidence wonks. They define four levels of evidence for educational programs: strong evidence of effectiveness, moderate evidence of effectiveness, evidence of promise, and strong theory. These definitions are similar to those used in Investing in Innovation (i3) to qualify proposals for scale-up (strong), validation (moderate), or development grants (evidence of promise or strong theory).

Here’s where the two-point conversion comes in. Readers of this blog may recall that I have long advocated providing, say, two competitive preference points in discretionary grants for proposals promising to use proven programs when such programs are available. The idea is that two points on a scale of 100 would greatly increase grant writers’ interest in proven programs without heavy-handedly requiring their use. No grant writer ignores two points, but school leaders may have good reasons to prefer programs that have not yet been successfully evaluated. In those cases, the schools would be free to forgo the competitive preference points. Still, the two points would telegraph the government’s support for the use of proven programs without undermining local control. Requests for proposals routinely include competitive preference points for criteria far less important than whether the programs schools propose to use have been proven to work. Why not provide at least this much encouragement for potential innovators, at no additional cost to the government?

Offering two points for proposing to use proven programs would bring about a major conversion in education reform. It would focus attention on the evidence and make school and district leaders aware that effective options are available to them and are accepted, even encouraged, by the powers that be.

As in my fictitious football game, it’s EDGAR who sets up the two-point conversion. The new definitions in EDGAR make it relatively easy for the Department of Education to put competitive preference points or other encouragements for the use of proven programs into requests for proposals. EDGAR’s end run does not win the game, but it creates the conditions in which the game can be won: by a two-point conversion.