Do Struggling Charter Schools Deserve a Second Chance?

Last week, I wrote about “Struggling Schools and the Problem with the ‘Shut It Down’ Mentality.” The post seemed to strike a chord, so I would like to encourage my readers to consider the same framework for struggling charter schools. Most people who follow research on charter schools would agree that there is little evidence that, on average, students in charter schools gain any more than similar children in non-charters. Charter advocates admit this to be true, but point to positive effects documented for outstanding charter networks, such as KIPP, and often vow to “weed out” failing charters from their ranks.

Opponents of closing traditional public schools nevertheless seem to accept this tough-love approach for charters. I would like to suggest that the circumstances, and the solutions, are more similar than some may think.

Unfortunately, “weeding out” (i.e., shutting down) ineffective charters is no easier than shutting down ineffective non-charters. In both cases there may be a reasonable rationale for closing the worst of the worst, but closing any school is financially and politically wrenching, and a recent study of public school closures found that students from the closed schools perform worse than similar children for a year or more and then end up doing no better. School closure (for charters as well as non-charters) should remain an extreme solution for extreme circumstances, kept available to keep the system honest, but if other solutions exist, they should be tried first.

There are other proven solutions for failing schools. The Best Evidence Encyclopedia lists all sorts of reading, math, and whole-school reform programs with excellent evidence of effectiveness in both traditional public and public charter settings. The U.S. Department of Education’s Investing in Innovation (i3) program is funding the development, evaluation, and scaling up of proven programs of all kinds. All of these programs should work in charters as well as they do in any other schools, and some charters (in addition to KIPP itself, which has a large i3 grant to scale up its leadership model) are already using proven programs in partnership with i3 grantees.

The charter movement has become increasingly courageous and open in admitting problems within its own membership, but weeding the charter garden is not the only way forward. Charters getting subpar outcomes need professional development and proven programs, and a renewed commitment to make a difference, just like traditional public schools that are struggling. Even a small part of the substantial private as well as government funding supporting the opening of new charters could be set aside to help all charters improve instruction, curriculum, and outcomes, and the entire charter movement, not to mention hundreds of thousands of kids, could greatly benefit.

Disclosure: Robert Slavin is the Director of the Center for Research and Reform in Education, which hosts the Best Evidence Encyclopedia, and the co-founder of Success for All, which received an i3 grant and operates in approximately 100 charter schools nationwide.


Is Whole School Reform Poised for a Comeback in ESEA?

Whole school (or comprehensive) reform models are making a remarkable comeback in policy and practice. They were popular in the 1990s, with as many as 6,000 schools using whole-school models by 2001, but the Bush administration tried to eliminate the approach in the 2000s, despite strong positive effects in evaluations of several of the most popular models.

Recently, whole-school reform has re-appeared in the Senate’s proposals for reauthorization of ESEA. Here’s the proposed language:

(iv) WHOLE SCHOOL REFORM STRATEGY- A local educational agency implementing a whole school reform strategy for a school shall implement an evidence-based strategy that ensures whole school reform. The strategy shall be undertaken in partnership with a strategy developer offering a school reform program that is based on at least a moderate level of evidence that the program will have a statistically significant effect on student outcomes, including more than 1 well-designed or well-implemented experimental or quasi-experimental study.

This whole-school reform language is much better than the language in the 1997 Obey-Porter bill that greatly accelerated investments in whole-school approaches. Obey-Porter was clear about the nine (later 11) elements that should be included in whole-school plans (instruction, curriculum, professional development, parent involvement, etc.), but it was vague about the evidence requirement. Last week’s Senate bill, however, is clear that to qualify, whole school programs seeking to turn around the nation’s worst-performing schools will have to meet a specific set of evidence standards. That’s a big improvement in itself.

This whole-school reform provision was one of the few aspects of the turnaround portions of the bill to receive broad support. During an often-heated debate, Republicans and Democrats seemed to agree that the evidence was supportive of this approach to turning around low-achieving schools. Senator Burr (R-NC), in seeking to strike the whole turnaround section, acknowledged whole school reform as the model most likely to produce results. Senator Franken (D-MN) cited the addition of whole school reform as an improvement over the four models rolled out by the administration.

Whatever happens with the overall Senate proposal, I very much hope this provision survives. There are far too many persistently low-achieving schools in the U.S. to expect that each of them is going to invent its own successful approach. While the Senate bill does not (and should not) mandate use of whole-school reforms, its mention of the approach and of rigorous evidence standards is sure to encourage many schools to consider it. And that would lead many organizations to create, rigorously evaluate, and disseminate a wide variety of models that would empower struggling schools to turn themselves around.

Struggling Schools and the Problem with the “Shut It Down” Mentality

One of the solutions often proposed for schools in which students perform poorly is closing down the school. It is one of the four options required for schools to receive School Improvement Grants under the current administration and has been an option for consistently low-achieving schools under No Child Left Behind. The Senate HELP Committee’s proposal for reauthorizing ESEA maintains school closure among seven options for persistently low-achieving schools.

“Shut it down” sounds like a logical, if extreme, option when all else has failed, but a study by John Engberg of RAND and his colleagues presented some disturbing data about school closure. They found that students in schools closed for poor performance actually do substantially worse on reading and math tests for at least a year in the new schools to which they are sent, and then recover only to about the level they had reached at their original school. In other words, after all the expense, acrimony, and heartache involved in closing a school, the students involved do not benefit.

This does not mean that schools should never be closed. Schools often have to be closed due to declining populations or economic factors. Sometimes a school has such a dysfunctional environment or bad reputation that it needs to be closed, and every once in a while closing a school might impress other low-performing schools, in the sense in which Voltaire suggested it is sometimes good to execute a (losing) admiral “to encourage the others.”

Yet the Engberg et al. findings should caution those who want to use school closure broadly. A school building does not cause low achievement. Bringing in new leaders, new staff, and new programs with strong evidence of effectiveness seems more likely to benefit struggling schools.

What Else Could We Do With $800 Million?

Tutor students after class?
No! says every lad and lass
Yes! replies the ruling class
But will it help the children pass?

My colleague Steve Ross, writing in yesterday’s guest blog on Sputnik, refers to the noble intentions and disappointing outcomes of Supplemental Educational Services (SES). I wanted to add some additional perspectives on what we can learn from the many SES evaluations and their larger meaning for policy.

Ross notes that, at most, SES would raise participating students from the 25th to the 28th percentile, and a recent review of SES evaluations from Old Dominion University suggests the effect is even smaller. It is important to be clear that even this effect applies only to the students who were actually tutored, roughly 10 to 20 percent of a school’s students in most cases, so the effects of SES on whole schools were smaller still. It is entirely appropriate to focus on the students in greatest need, but SES could never have improved the achievement of entire schools to a substantial degree.

The lesson of SES is not “don’t do after-school tutoring.” I’m sure all of the SES providers had the best of intentions, and many of their models would succeed in rigorous evaluations if given the chance. Instead, the lesson for policy is, “focus on approaches that are proven and scalable.” At an annual cost of $800 million, SES has been using Title I funds that could have been supporting research-proven models in the school itself, rather than adding additional, hard-to-coordinate services after school. Rather than attempting to micromanage tens of thousands of Title I schools, the federal government’s responsibility is to help find out what works and then let struggling schools choose among effective options.

For $800 million, for example, more than 11,000 elementary schools could have chosen and implemented one of several proven, whole-school reform models. Proven cooperative learning models could have been implemented by 40,000 elementary and secondary schools. If they felt tutoring was what they needed, schools could have provided proven one-to-one or small-group phonics-focused tutoring to a far larger number of struggling readers, using teachers or paraprofessionals already on staff during the school day, an approach much more likely to be integrated with the rest of students’ instruction.
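The counts above follow from simple arithmetic. In this back-of-envelope sketch, the per-school annual costs are hypothetical figures back-solved from the counts themselves (roughly $72,000 and $20,000 per school), not published program prices:

```python
SES_ANNUAL_COST = 800_000_000  # annual SES set-aside cited in the post

# Hypothetical per-school annual costs implied by the counts above
per_school_cost = {
    "whole-school reform model": 72_000,   # ~$800M / 11,000 schools
    "cooperative learning model": 20_000,  # ~$800M / 40,000 schools
}

for program, cost in per_school_cost.items():
    schools = SES_ANNUAL_COST // cost
    print(f"{program}: ~{schools:,} schools per year")
```

The point of the exercise is not the exact prices, which vary by program, but the scale: redirecting the SES set-aside would cover whole-school interventions in tens of thousands of schools.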

The ESEA renewal still has many steps to go through, and if there is any further consideration of continuing SES, I hope that the available research and evidence are part of that conversation.

Supplemental Educational Services: Noble Ideas + Unreasonable Expectations = Disappointing Results

NOTE: This is a guest post by Steven Ross, Professor in the Center for Research and Reform in Education at Johns Hopkins University

With the reauthorization of ESEA pending, and the future of Supplemental Educational Services (SES) in question, it is high time to reflect on the research and implementation lessons of this program.

Making tutoring available to increase the academic performance of low-achieving and disadvantaged students is a noble idea. After all, one-on-one and small-group tutoring have been supported by extensive research evidence, while having universal appeal as a teaching strategy. However, expectations that tutoring can be delivered efficiently and effectively when filtered through multiple layers of administrative requirements and processes are unrealistic.

First created under the last ESEA reauthorization in 2001, SES requires school districts to offer free tutoring to disadvantaged students who attend low-performing schools. While noble in intention, SES has turned out to be quite costly. To fund SES, along with transportation for students who opt to transfer to better schools, districts must set aside 20 percent of their Title I allocations, a cost of approximately $800 million each year.

If SES could accomplish what was originally hoped (raising student achievement sufficiently to move schools out of improvement status), the cost would be worth every penny. Unfortunately, results from numerous evaluation studies indicate much more modest effects, and sometimes none at all. When serving as principal investigator of more than 15 state-sponsored evaluations of SES, my impression was that the vast majority of SES providers offered quality tutoring services that were helping students both academically and socio-emotionally.

But the road from the noble idea to the tutoring session is long and bumpy. Guided by lengthy federal compliance regulations, the process filters down first to the states, which are charged with approving, monitoring, and evaluating providers. Next in line are the school districts, which must fund and roll out the program locally. Parents who are low-income (and not expert in evidence-based practices) are then charged with choosing their children’s tutoring providers. The providers, in turn, must hire tutors, find tutoring space, market their services to parents, and grapple with all the federal, state, and local regulations. Ironically, those least involved with SES are the classroom teachers, who deal with the children every day and best know their needs.

Research shows that, on average, SES raises participants’ reading and math scores by only .05 to .08 of a standard deviation compared to matched control groups. For example, a student participating in SES might advance from the 25th to the 27th or 28th percentile, while a comparable non-SES student would remain at the 25th percentile. For an intervention lasting only 30 to 60 hours per student, some might view such effects as a reasonable return. Reasonable or not, small gains by the relatively small subgroup of tutored students can’t do much to remove schools from “improvement status,” leaving them required to follow the same “intervention pathway” for another year.
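Those percentile figures follow directly from the effect sizes if one assumes normally distributed test scores. As a quick illustration (a sketch under that assumption; the function name is mine), a gain of d standard deviations moves a student starting at the 25th percentile as follows:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: converts percentiles to z-scores and back

def shifted_percentile(start_pct, effect_size):
    """Percentile a student reaches after gaining `effect_size` standard deviations."""
    z = nd.inv_cdf(start_pct / 100)        # z-score at the starting percentile
    return nd.cdf(z + effect_size) * 100   # percentile after the shift

for d in (0.05, 0.08):
    print(f"effect size {d}: 25th -> {shifted_percentile(25, d):.0f}th percentile")
# prints roughly the 27th and 28th percentiles, matching the figures above
```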

Although the efficacy of the existing SES program needs to be questioned, after-school tutoring remains a viable intervention for boosting student achievement. Judging from the SES experience, rolling out tutoring in a large-scale, top-down, one-size-fits-all manner is not the most efficient way to expend limited Title I resources. A noble idea with much more reasonable expectations for success is to give schools and districts the freedom, with appropriate vetting, to adopt the evidence-based interventions (including tutoring, improved reading and math programs, practice-based professional development, etc.) that most directly address their site-based improvement needs.

-Steven Ross

NOTE: The Center for Research and Reform in Education at Johns Hopkins University develops the Best Evidence Encyclopedia, which provides unbiased, practical reviews of the strength of evidence supporting a range of education programs. Robert Slavin is the director of CRRE.

Failing to Succeed

The road to wisdom? Well, it’s plain
And simple to express:
Err and err and err again
But less and less and less

- Piet Hein

This past summer, a project funded by the Defense Advanced Research Projects Agency (DARPA) carried out a test of the Falcon Hypersonic Technology Vehicle 2 (HTV-2), the fastest airplane ever built. Traveling at 13,000 miles per hour, the pilotless plane will be able to go anywhere on Earth in an hour or less.

In the test, the plane worked perfectly, soaring into near-space orbit and then gliding as planned for nine minutes. It then “lost contact” with its controllers and plunged harmlessly into the Pacific (or if it harmed anyone, they haven’t said so).

Here’s where the story gets interesting for educators. DARPA and the project leaders confidently hailed this test as a step toward a solution of great importance, and remained sure that the hypersonic plane would soon be functional. That kind of tolerance for failure as a step toward success does not exist in education research. In our field, any setback in a series of experiments is likely to be fatal.

The result of a system that fails to value evidence of effectiveness and gives up too early on setbacks is that we rush from one untested “miracle” to the next, learning nothing. The system discourages real innovation and rigorous evaluation; it’s safer to stick with modest improvements and not subject them to controlled experiments in real schools.

With the path that we are on, the U.S. will have a 13,000 mph airplane before it has a proven, replicable approach to teaching algebra. Yes, resources on the scale of those going into the hypersonic plane would greatly accelerate the pace of innovation in education, but even with far fewer resources than those devoted to the HTV-2, we could make substantial gains. Senator Bennet (D-Colorado) is looking to change this. This week, he will introduce an amendment to the Elementary and Secondary Education Act that would create ARPA-ED, the education community’s own mechanism for dramatic, breakthrough developments in effective educational technology. I have joined forces with New Schools Venture Fund and others in sending a letter of support for the amendment to the Senate HELP Committee’s leadership. If the amendment passes, it may be just the fuel we need to get proven education reforms on a faster track.

Evidence-Based Reform in England

Fans of evidence-based reform in education have likely been spending some time this week combing through Sen. Harkin’s draft proposal for any language that could encourage or bolster greater use of effective strategies and programs. Meanwhile, there are extraordinary developments taking place in England that can teach us some lessons about advancing evidence-based reform here in the States.

The new Conservative-led coalition government has been slashing government expenditures in every area, including education. Yet despite the same budget pressures we face in the U.S., David Cameron’s government is investing in proven programs in education, on the grounds that, in a time of austerity, the government needs to make sure every pound is making a difference, and that investing in effective programs for children saves money in the future.

Three major developments are under way in England. First, Labour Member of Parliament Graham Allen has created a detailed plan for investments in effective programs for children in all areas of development: birth to five, social-emotional learning, and delinquency prevention, in addition to reading and math. Second, the U.K. government has placed the equivalent of $200 million in the care of two foundations, the Sutton Trust and the Impetus Trust, to create the Education Endowment Foundation (EEF). The EEF will be making grants to elementary and secondary schools to help them adopt proven programs, such as those on Graham Allen’s list. The EEF will also invest in capacity-building and evaluation. Third, the Mayor’s Fund for London, another private foundation, has begun a Flying Start program intended to help high-poverty schools in inner London adopt proven programs.

All of these initiatives resemble the U.S. Investing in Innovation (i3) program, which funds the developers of proven programs to build capacity and scale up. However, the English initiatives offer funding directly to high-poverty schools throughout the country to adopt programs that can demonstrate effectiveness. Both capacity-building and direct funding for schools are needed, and the U.S. and U.K. programs can learn a lot from each other as their different approaches are put to the test.