I recently saw an editorial in the May 29 Washington Post, titled “Denying Poor Children a Chance,” a pro-charter opinion piece that makes dire predictions about the damage to poor and minority students if charter expansion were limited. In education, it is common to see evidence-free opinions for and against charter schools, so I was glad to see actual data in the Post editorial. To be clear: if charter schools could routinely and substantially improve student outcomes, especially for disadvantaged students, I’d be a big fan. My response to charter schools is the same as my response to everything else in education: show me the evidence.
The Washington Post editorial cited a widely known 2015 Stanford CREDO study comparing urban charter schools to matched traditional public schools (TPS) in the same districts. Evidence always attracts my attention, so I decided to look into this and other large, multi-district studies. Despite the Post’s enthusiasm for the data, the average effect size was only +0.055 for math and +0.04 for reading. By anyone’s standards, these are very, very small effects. Outcomes for poor, urban, African American students were somewhat higher, at +0.08 for math and +0.06 for reading, but average effect sizes for White students were negative, at -0.05 for math and -0.02 for reading, and outcomes for Native American students were -0.10 for math and zero for reading. With effect sizes this low, these differences are probably just different flavors of zero. A CREDO (2013) study of charter schools in 27 states, including non-urban as well as urban schools, found average effect sizes of +0.01 for math and -0.01 for reading. How much smaller can you get?
In fact, the CREDO studies have been widely criticized for using techniques that inflate test scores in charter schools. They compare students in charter schools to students in traditional public schools, matching on pretests and ethnicity. This ignores the obvious fact that students in charter schools chose to go there, or their parents chose for them to go. There is every reason to believe that students who choose to attend charter schools are, on average, higher-achieving, more highly motivated, and better behaved than students who stay in traditional public schools. Gleason et al. (2010) found that students who applied to charter schools started off 16 percentage points higher in reading and 13 percentage points higher in math than others in the same schools who did not apply. Applicants were more likely to be White, less likely to be African American or Hispanic, and less likely to qualify for free lunch. Self-selection is a particular problem in studies of students who choose or are sent to “no-excuses” charters, such as KIPP or Success Academies, because the students or their parents know that students will be held to very high standards of behavior and accomplishment, and may be encouraged to leave the school if they do not meet those standards. (This is not a criticism of KIPP or Success Academies, but even when such charter systems use lotteries to select students, the students who show up for the lotteries were at least motivated enough to enter a lottery to attend a very demanding school.)
Well-designed studies of charter schools usually focus on schools that use lotteries to select students, and then compare the students who were successful in the lottery to those who were not so lucky. This eliminates the self-selection problem, as students were selected by a random process. The CREDO studies do not do this, and this may be why they report higher (though still very small) effect sizes than syntheses of lottery studies, in which all students applied to charters but were “lotteried in” or “lotteried out” at random. A very rigorous WWC synthesis of such studies by Gleason et al. (2010) found that middle school students who were lotteried into charter schools in 32 states performed nonsignificantly worse than those lotteried out, in both math (ES = -0.06) and reading (ES = -0.08). A 2015 update of the WWC study (Clark et al., 2015) found very similar, slightly negative outcomes in reading and math.
It is important to note that “no-excuses” charter schools, mentioned earlier, have had more positive outcomes than other charters. A recent review of lottery studies by Cheng et al. (2017) found effect sizes of +0.25 for math and +0.17 for reading. However, such “no-excuses” charters are a tiny percentage of all charters nationwide.
Other meta-analyses of charter school achievement outcomes also exist, but none found effect sizes as high as those of the CREDO urban study. The means of +0.055 for math and +0.04 for reading therefore likely represent upper bounds for the effects of urban charter schools.
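For readers less familiar with effect sizes, one standard way to gauge how small +0.055 is: an effect size is the number of standard deviations separating group means, so under a normal distribution it maps directly onto a percentile shift for the average student. A minimal sketch, using only the Python standard library:

```python
from statistics import NormalDist

def percentile_after(effect_size: float) -> float:
    """Percentile reached by a previously average (50th percentile)
    student after a shift of `effect_size` standard deviations,
    assuming normally distributed scores."""
    return NormalDist().cdf(effect_size) * 100

# CREDO (2015) urban charter school averages
print(round(percentile_after(0.055), 1))  # math: 52.2
print(round(percentile_after(0.04), 1))   # reading: 51.6
```

In other words, the average urban charter student moves from the 50th to roughly the 52nd percentile, a difference few parents or teachers would ever notice.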
Charter Schools or Smarter Schools?
So far, every study of charter school achievement effects has compared outcomes in charters to those in traditional public schools. However, this should not be the only question. “Charters” and “non-charters” do not exhaust the range of possibilities.
What if we instead ask this question: Among the range of programs available, which are most likely to be most effective at scale?
To illustrate the importance of this question, consider a study in England, which evaluated a program called Engaging Parents Through Mobile Phones. The program involved texting parents on cell phones to alert them to upcoming tests, inform them about whether students were completing their homework, and tell them what students were being taught in school. A randomized evaluation (Miller et al., 2016) found effect sizes of +0.06 for math and +0.03 for reading, remarkably similar to the urban charter school effects reported by CREDO (2015). The cost of the mobile phone program was £6 per student per year, or $7.80. If you like the outcomes of charter schools, might you prefer to get the same outcomes for $7.80 per child per year, without all the political, legal, and financial stresses of charter schools?
The point here is that rather than arguing about the size of small charter effects, one could consider charters a “treatment” and compare them to other proven approaches. In our Evidence for ESSA website, we list 112 reading and math programs that meet ESSA standards for “Strong,” “Moderate,” or “Promising” evidence of effectiveness. Of these, 107 had effect sizes larger than those CREDO (2015) reports for urban charter schools. In both math and reading, there are many programs with average effect sizes of +0.20, +0.30, up to more than +0.60. If applied as they were in the research, the best of these programs could, for example, entirely overcome Black-White and Hispanic-White achievement gaps in one or two years.
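The gap-closing arithmetic behind that last claim can be made concrete. Black-White achievement gaps on national assessments are often estimated at somewhere around 0.8 standard deviations; the exact figure varies by subject and grade, so treat the numbers below as illustrative assumptions rather than figures from this article:

```python
# Back-of-envelope sketch; the 0.8 SD gap is an assumed,
# commonly cited ballpark, not a figure from this article.
assumed_gap_sd = 0.8         # hypothetical Black-White gap, in SDs
program_es_per_year = 0.60   # among the strongest programs cited above
charter_es_per_year = 0.055  # CREDO (2015) urban charter math effect

years_program = assumed_gap_sd / program_es_per_year
years_charter = assumed_gap_sd / charter_es_per_year

print(f"Proven program: {years_program:.1f} years")  # 1.3 years
print(f"Urban charter:  {years_charter:.1f} years")  # 14.5 years
```

Under these assumptions, a +0.60 program closes the gap in under two years, while the average urban charter effect would take well over a decade.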
A few charter school networks have their own proven educational approaches, but the many charters that do not have proven programs should be looking for them. Most proven programs work just as well in charter schools as they do in traditional public schools, so there is no reason existing charter schools should not proactively seek proven programs to increase their outcomes. For new charters, wouldn’t it make sense for chartering agencies to encourage charter applicants to systematically search for and propose to adopt programs that have strong evidence of effectiveness? Many charter schools already use proven programs. In fact, there are several that specifically became charters to enable them to adopt or maintain our Success for All whole-school reform program.
There is no reason for any conflict between charter schools and smarter schools. The goal of every school, regardless of its governance, should be to help students achieve their full potential, and every leader of a charter or non-charter school would agree with this. Whatever we think about governance, all schools, traditional or charter, should get smarter, using proven programs of all sorts to improve student outcomes.
Cheng, A., Hitt, C., Kisida, B., & Mills, J. N. (2017). “No excuses” charter schools: A meta-analysis of the experimental evidence on student achievement. Journal of School Choice, 11(2), 209-238.
Clark, M. A., Gleason, P. M., Tuttle, C. C., & Silverberg, M. K. (2015). Do charter schools improve student achievement? Educational Evaluation and Policy Analysis, 37(4), 419-436.
CREDO (2013). National charter school study 2013. Stanford, CA: Center for Research on Education Outcomes, Stanford University.
CREDO (2015). Urban charter school study: Report on 41 regions. Stanford, CA: Center for Research on Education Outcomes, Stanford University.
Gleason, P. M., Clark, M. A., Tuttle, C. C., & Dwoyer, E. (2010). The evaluation of charter school impacts. Washington, DC: What Works Clearinghouse.
Miller, S., Davison, J., Yohanis, J., Sloan, S., Gildea, A., & Thurston, A. (2016). Texting parents: Evaluation report and executive summary. London: Education Endowment Foundation.
Denying poor children a chance [Editorial]. (2019, May 29). The Washington Post, p. A16.
This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.