Evidence-Based Reform and the Multi-Academy Trust

Recently, I was in England to visit Success for All (SFA) schools there. I saw two of the best SFA schools I’ve ever seen anywhere: Applegarth Primary School in Croydon, south of London, and Houldsworth Primary School in Sussex, southeast of London. Both are very high-poverty schools with histories of poor achievement, violence, and high staff turnover. Applegarth mostly serves the children of African immigrants, and Houldsworth mostly serves White students from very poor homes. Yet I visited every class in each school, and in every one, children were highly engaged, excited, and learning like crazy. Both schools were once in the lowest one percent of achievement in England, yet both are now performing at or above national norms.

In my travels, I often see outstanding Success for All schools. However, in this case I learned about an important set of policies that goes beyond Success for All, but could have implications for evidence-based reform more broadly.


Both Applegarth and Houldsworth are in multi-academy trusts (MATs), the STEP Trust and the Unity Trust, respectively. Academies are much like charter schools in the U.S., and multi-academy trusts are organizations that run more than one academy. Academies are far more common in the U.K. than the U.S., constituting 22% of primary (i.e., elementary) schools and 68% of secondary schools. There are 1,170 multi-academy trusts, managing more than 5,000 of Britain’s 32,000 schools, or 16%. Multi-academy trusts can operate within a single local authority (school district) (like Success Academies in New York City) or may operate in many local authorities. Quite commonly, poorly-performing schools in a local authority, or stand-alone academies, may be offered to a successful and capable multi-academy trust, and these hand-overs explain much of the growth in multi-academy trusts in recent years.

What I saw in the STEP and Unity Trusts was something extraordinary. In each case, the exceptional schools I saw were serving as lead schools for the dissemination of Success for All. Staff in these schools had an explicit responsibility to train and mentor future principals, facilitators, and teachers, who spend a year at the lead school learning about SFA and their role in it, and then taking on their roles in a new SFA school elsewhere in the multi-academy trust. Over time, there are multiple lead schools, each of which takes responsibility to mentor new SFA schools other than their own. This cascading dissemination strategy, carried out in close partnership with the national SFA-UK non-profit organization, is likely to produce exceptional implementations.

I’m sure there must be problems with multi-academy trusts that I don’t know about, and in the absence of data on MATs throughout Britain, I would not take a position on them in general. But based on my limited experience with the STEP and Unity Trusts, this policy has particular potential as a means of disseminating very effective forms of programs proven effective in rigorous research.

First, multi-academy trusts have the opportunity and motivation to establish themselves as effective. Ordinary U.S. districts want to do well, of course, but they do not grow (or shrink) because of their success (or lack of it). In contrast, a multi-academy trust in the U.K. is more likely to seek out proven programs and implement them with care and competence, both to increase student success and to establish a “brand” based on their effective use of proven programs. Both the STEP and Unity Trusts are building a reputation for succeeding with difficult schools using methods known to be effective. Using cascading professional development and mentoring from established schools to new ones, a multi-academy trust can build both effectiveness and reputation.

Although the schools I saw were using Success for All, any multi-academy trust could use any proven program or programs to create positive outcomes and expand its reach and influence. As other multi-academy trusts see what the pioneers are accomplishing, they may decide to emulate them. One major advantage of multi-academy trusts is that, in contrast to U.S. school districts, especially large urban ones, they are likely to remain under consistent leadership for many years. Leaders of multi-academy trusts, and their staff and supporters, are likely to have the time to transform practices gradually, knowing that they have the stable leadership needed for long-term change.

There is no magic in school governance arrangements, and no guarantee that many multi-academy trusts will use the available opportunities to implement and perfect proven strategies. Yet by their nature, multi-academy trusts, especially those serving disadvantaged students, have the opportunity to make a substantial difference in the education they provide. I look forward to watching plans unfold in the STEP and Unity Trusts, and to learning more about how the academy movement in the U.K. might provide a path toward widespread and thoughtful use of proven programs, benefiting very large numbers of students. And I’d love to see more U.S. charter networks and traditional school districts use cascading replication to scale up proven, whole-school approaches likely to improve outcomes in disadvantaged schools.

Photo credit: Kindermel [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

The Gap

Recently, Maryland released its 2019 state PARCC scores. I read an article about the scores in the Baltimore Sun. The pattern of scores was the same as usual: some up, some down. Baltimore City was in last place, as usual. The Sun helpfully noted that this was probably due to high levels of poverty in Baltimore. Then the article noted that there was a serious statewide gap between African American and White students, followed by the usual shocked but resolute statements from local superintendents about closing the gap.

Some of the superintendents said that in order to combat the gap, they were going to take a careful look at the curriculum.  There is nothing wrong with looking at curriculum.  All students should receive the best curriculum we can provide them.  However, as a means of reducing the gap, changing the curriculum is not likely to make much difference.

First, there is plentiful evidence from rigorous studies showing that changing from one curriculum to another, or one textbook to another, or one set of standards to another, makes little difference in student achievement. Some curricula have more interesting or up-to-date content than others. Some meet currently popular standards better than others. But actual, meaningful increases in achievement compared to a control group using the old curriculum? This hardly ever happens. We once examined all of the textbooks rated “green” (the top ranking on EdReports, which reviews textbooks for alignment with college- and career-ready standards). Out of dozens of reading and math texts with this top rating, only two had small positive impacts on learning compared to control groups. In contrast, more than 100 reading and math programs that are not textbooks or curricula have been found to significantly increase student achievement relative to control groups using current methods (see www.evidenceforessa.org).

But remember that at the moment, I am talking about reducing gaps, not increasing achievement overall. I am unaware of any curriculum, textbook, or set of standards that is proven to reduce gaps. Why should they? By definition, a curriculum or set of standards is for all students. In the rare cases when a curriculum does improve achievement overall, there is little reason to expect it to raise performance more for one specific group than for another.

The way to actually reduce gaps is to provide something extremely effective for struggling students. For example, the Sun article on the PARCC scores highlighted Lakeland Elementary/Middle, a Baltimore City school that gained 20 points on PARCC since 2015. How did they do it? The University of Maryland, Baltimore County (UMBC) sent groups of undergraduate education majors to Lakeland to provide tutoring and mentoring.  The Lakeland kids were very excited, and apparently learned a lot. I can’t provide rigorous evidence for the UMBC program, but there is quite a lot of evidence for similar programs, in which capable and motivated tutors without teaching certificates work with small groups of students in reading or math.

Tutoring programs and other initiatives that focus on the specific kids who are struggling have an obvious link to reducing gaps, because they go straight to where the problem is rather than doing something less targeted and less intensive.


Serious gap-reduction approaches can be used with any curriculum or set of standards. Districts focused on standards-based reform may also provide tutoring or other proven gap-reduction approaches along with new textbooks to students who need them.  The combination can be powerful. But the tutoring would most likely have worked with the old curriculum, too.

If all struggling students received programs effective enough to bring all of them to current national averages, the U.S. would be the highest-performing national school system in the world.  Social problems due to inequality, frustration, and inadequate skills would disappear. Schools would be happier places for kids and teachers alike.

The gap is a problem we can solve, if we decide to do so.  Given the stakes involved for our economy, society, and future, how could we not?

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Cost-Effectiveness of Small Solutions

Imagine that you were shopping for a reliable new car, one that is proven to last an average of at least 100,000 miles with routine maintenance and repairs. You are looking at a number of options that fit your needs for around $24,000.

You happen to be talking to your neighbor, an economist, about your plans. “$24,000?” she says. “That’s crazy. You can get a motorcycle that would go at least 100,000 miles for only $12,000, and save a lot on gas as well!”

You point out to your neighbor that motorcycles might be nice for some purposes, but you need a car to go to the grocery store, transport the kids, and commute to work, even in rain or snow. “Sure,” says your neighbor, “but you posed a question of cost-effectiveness, and on that basis a motorcycle is the right choice. Or maybe a bicycle.”

In education, school leaders and policy makers are often faced with choices like this. They want to improve their students’ achievement, and they have limited resources. But the available solutions vary in cost, effectiveness, and many other factors.

To help leaders make good choices, economists have devised measures of cost-effectiveness, which means (when educational achievement is the goal) the amount of achievement gain you might expect from purchasing a given product or service divided by all the costs of making that choice. Cost-effectiveness can be very useful in educational policy and practice, helping decision makers weigh the potential benefits of each of the choices available to them. The widespread availability of effect sizes for various programs and practices, easily located in sources such as the What Works Clearinghouse and Evidence for ESSA, makes it much easier to compare the outcomes and costs of available programs. For example, a district might seek to improve high school math performance by adopting software and professional development for a proven technology program, or by adopting a proven professional development approach. All costs need to be considered, as well as all benefits, and the school leaders might make the choice that produces the largest gains at the most affordable cost. Cost-effectiveness might not entirely determine which choice is made, but, one might argue, it should always be a key part of the decision-making process. Quantitative researchers in education and economics would agree. So far, so good.
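In symbols, the idea is just a ratio. A minimal statement, leaving open exactly which costs are counted (training, materials, staff time, and so on), is:

$$\text{cost-effectiveness} \;=\; \frac{\text{achievement gain (in effect size units)}}{\text{total cost per student}}$$

So, to use purely hypothetical numbers, a program adding an effect size of +0.30 at $300 per student and one adding +0.10 at $100 per student have identical ratios, even though the first produces three times the gain.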

But here is where things get a little dodgy. In recent years, a lot of interest has arisen in super-cheap interventions that have super-small impacts, but the ratio between the benefits and the costs makes the super-cheap interventions look cost-effective. Such interventions are sometimes called “nudge strategies,” meaning that simple reminders or minimal actions activate a set of psychological processes that can lead to important impacts. A very popular example right now is Carol Dweck’s Growth Mindset strategy, in which students are asked to write a brief essay stating a belief that intelligence is not a fixed attribute of people, but that learning comes from effort. Her work has found small impacts of this essentially cost-free treatment in several studies, although others have failed to find this effect.

Other examples include sending messages to students or parents on cell phones, or sending postcards to parents on the importance of regular attendance. These strategies can cost next to nothing, yet large-scale experiments often show positive effects in the range of +0.03 to +0.05, averaging across multiple studies.

Approaches of this kind, including Growth Mindset, are notoriously difficult to replicate by others. However, assume for the sake of argument that at least some of them do have reliably positive effects that are very small, but because of their extremely small cost, they appear very cost-effective. Should schools use them?

One might take the view that interventions like Growth Mindset are so inexpensive and so sensible that, what the heck, go ahead. However, other small interventions do take some time and effort on the part of staff.

Schools are charged with a very important responsibility, ensuring the academic success, psychological adjustment, and pro-social character of young people. Their financial resources are always limited, but even more limited is their schoolwide capacity to focus on a small number of essential goals and stick with those goals until they are achieved. The problem is that spending a lot of time on small solutions with small impacts may exhaust a school’s capacity to focus on what truly matters. If a school could achieve an effect size of +0.30 on important achievement measures with one comprehensive program, or (for half the price) could adopt ten small interventions with effect sizes averaging +0.03, which should it do? Any thoughtful educator would say, “Invest in the one program with the big effect.” The little programs are not likely to add up to a big effect, and any collection of unrelated, uncoordinated mini-reforms is likely to deplete the staff’s energy and enthusiasm over a period of time.
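To make the arithmetic in that comparison concrete, here is a minimal sketch in Python. The effect sizes are the ones above; the dollar figures are hypothetical, chosen only to match the “half the price” framing, and the bundle’s combined effect assumes, generously, that ten unrelated +0.03 effects simply add up.

```python
# Hypothetical illustration of the comparison above. The costs are invented
# placeholders; the effect sizes come from the text.

def cost_effectiveness(effect_size, cost_per_student):
    """Achievement gain (in effect size units) per dollar spent per student."""
    return effect_size / cost_per_student

comprehensive = {"effect_size": 0.30, "cost_per_student": 200.0}      # one big program
small_bundle = {"effect_size": 10 * 0.03, "cost_per_student": 100.0}  # ten mini-reforms at
                                                                      # half the price, IF
                                                                      # their effects add up

print(cost_effectiveness(**comprehensive))  # 0.0015 effect-size points per dollar
print(cost_effectiveness(**small_bundle))   # 0.0030 -- looks twice as "cost-effective"
```

On the ratio alone, the bundle of mini-reforms wins, but only under the assumption that ten uncoordinated +0.03 effects stack up to +0.30. Drop that assumption, as the argument above suggests we should, and the ratio advantage evaporates while the drain on staff focus remains.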

This is where the car-motorcycle analogy comes in. A motorcycle may appear more cost-effective than a car, but it just does not do what a car does. Motorcycles are fine for touring in nice weather, but for most people they do not solve essential problems. In school reform, large programs with large effects may be composed of smaller effective components, but because these components are an integrated part of a well-thought-out plan, they add up to something more likely to work and to keep working over time.

Cost-effectiveness is a useful concept for schools seeking to make big differences in achievement, using serious resources. For small interventions with small impacts, don’t bother to calculate cost-effectiveness, or if you do, don’t compare the results to those of big interventions with big impacts. To do so is like bragging about the gas mileage you get on your motorcycle driving Aunt Sally and the triplets to the grocery store. It just doesn’t make sense.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Educational Policies vs. Educational Programs: Evidence from France

Ask any parent what their kids say when asked what they did in school today. Invariably, they respond, “Nuffin,” or some equivalent. My four-year-old granddaughter always says, “I played with my fwends.” All well and good.

However, in educational policy, policy makers often give the very same answer when asked, “What did the schools not using the (insert latest policy darling) do?”

“Nuffin’.” Or they say, “Whatever they usually do.” There’s nothing wrong with the latter answer if it’s true. But given the many programs now known to improve student achievement (see www.evidenceforessa.org), why don’t evaluators compare outcomes of new policy initiatives to those of proven educational programs known to improve the same outcomes the policy innovation is supposed to improve, perhaps at far lower cost per student? The evaluations should also include a “business as usual” comparison, but adding proven programs to evaluations of large policy innovations would help avoid declaring a policy innovation successful when it is in fact only slightly more effective than business as usual, and much less effective or less cost-effective than alternative proven approaches. For example, when evaluating charter schools, why not routinely compare them to whole-school reform models that have similar objectives? When evaluating extending the school day or school year to help high-poverty schools, why not compare these innovations to using the same amount of additional money to hire tutors who use proven tutoring models to help struggling students? In evaluating policies in which students are held back if they do not read at grade level by third grade, why not compare these approaches to intensive phonics instruction and tutoring in grades K-3, which are known to greatly improve student reading achievement?

There is nuffin like a good fwend.

As one example of research comparing a policy intervention to a promising educational intervention, I recently saw a very interesting pair of studies from France. Ecalle, Gomes, Auphan, Cros, & Magnan (2019) compared two interventions applied in special priority areas with high poverty levels. Both interventions focused on reading in first grade.

One of the interventions involved halving class size, from approximately 24 students to 12. The other provided intensive reading instruction in small groups (4-6 children) to students who were struggling in reading, as well as less intensive interventions to larger groups (10-12 students). Low achievers got two 30-minute interventions each day for a year, while the higher-performing readers got one 30-minute intervention each day. In both cases, the focus of instruction was on phonics. In all cases, the additional interventions were provided by the students’ usual teachers.

The students in small classes were compared to students in ordinary-sized classes, while the students in the educational intervention were compared to students in same-sized classes who did not get the group interventions. Similar measures and analyses were used in both comparisons.

The results were nearly identical for the class size policy and the educational intervention. Halving class size had effect sizes of +0.14 for word reading and +0.22 for spelling. Results for the educational intervention were +0.13 for word reading, +0.12 for spelling, +0.14 for a group test of reading comprehension, +0.32 for an individual test of comprehension, and +0.19 for fluency.

These studies are less than perfect in experimental design, but they are nevertheless interesting. Most importantly, the class size policy required an additional teacher for each class of 24. Using Maryland’s average annual teacher salary and benefits ($84,000), the cost in our state would be about $3,500 per student ($84,000 ÷ 24). The educational intervention required one day of training and some materials. There was virtually no difference in outcomes, but the difference in cost was staggering.

The class size policy was mandated by the Ministry of Education. The educational intervention was offered to schools and provided by a university and a non-profit. As is so often the case, the policy intervention was simplistic, easy to describe in the newspaper, and minimally effective. The class size policy reminds me of a Florida program that extended the school schedule by an hour every day in high-poverty schools, mainly to provide more time for reading instruction. The cost per child was about $800 per year. The outcomes were minimal (ES=+0.05).

After many years of watching what schools do and reviewing research on outcomes of innovations, I find it depressing that policies mandated on a substantial scale are so often found to be ineffective. They are usually far more expensive than much more effective, rigorously evaluated programs that are, however, a bit more difficult to describe, and rarely arouse great debate in the political arena. It’s not that anyone is opposed to the educational intervention, but it is a lot easier to carry a placard saying “Reduce Class Size Now!” than to carry one saying “Provide Intensive Phonics in Small Groups with More Supplemental Teaching for the Lowest Achievers Now!” The latter just does not fit on a placard, and though easy to understand if explained, it does not lend itself to easy communication. Actually, there are much more effective first grade interventions than the one evaluated in France (see www.evidenceforessa.org). At a cost much less than $3500 per student, several one-to-one tutoring programs using well-trained teaching assistants as tutors would have been able to produce an effect size of more than +0.50 for all first graders on average. This would even fit on a placard: “Tutoring Now!”

I am all in favor of trying out policy innovations. But when parents of kids in a proven-program comparison group are asked what they did in school today, they shouldn’t say “nuffin’”. They should say, “My tooter taught me to read. And I played with my fwends.”

References

Ecalle, J., Gomes, C., Auphan, P., Cros, L., & Magnan, A. (2019). Effects of policy and educational interventions intended to reduce difficulties in literacy skills in grade 1. Studies in Educational Evaluation, 61, 12-20.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Is ES=+0.50 Achievable?: Schoolwide Approaches That Might Meet This Standard

In a recent blog, “Make No Small Plans,” I proposed a system innovators could use to create very effective schoolwide programs.  I defined these as programs capable of making a difference in student achievement large enough to bring entire schools serving disadvantaged students to the levels typical of middle class schools.  On average, that would mean creating school models that could routinely add an effect size of +0.50 for entire disadvantaged schools.  +0.50, or half a standard deviation, is roughly the average difference between students who qualify for free lunch and those who do not, between African American and White students, and between Hispanic and non-Hispanic White students.

Today, I wanted to give some examples of approaches intended to meet the +0.50 goal. From prior work, my colleagues and I have already created a successful schoolwide reform model, Success for All, which, with adequate numbers of tutors (as many as six per school), achieved reading effect sizes in high-poverty Baltimore elementary schools of over +0.50 for all students and +0.75 for the lowest-achieving quarter of students (Madden et al., 1993). These outcomes were maintained through eighth grade, along with substantial reductions in grade retentions and special education placements (Borman & Hewes, 2003). Steubenville, in Ohio’s Rust Belt, uses Success for All in all of its Title I elementary schools, providing several tutors in each. Each year, Steubenville schools score among the highest in Ohio on state tests, exceeding most wealthy suburban schools. Other SFA schools with sufficient tutors are also exemplary in achievement gains. Yet these schools face a dilemma. Most cannot afford significant numbers of tutors. They still get excellent results, but not as strong as those typical of SFA schools that do have sufficient tutors.


We are now planning another approach, also intended to produce schoolwide effect sizes of at least +0.50 in schools serving disadvantaged students.   However, in this case our emphasis is on tutoring, the most effective strategy known for improving the achievement of struggling readers (Inns et al., 2019).  We are calling this approach the Reading Safety Net.  Main components of this plan are as follows:

Tutoring

Like the most successful forms of Success for All, the Reading Safety Net places a substantial emphasis on tutoring.  Tutors will be well-qualified teaching assistants with BAs but not teaching certificates, extensively trained to provide one-to-four tutoring.   Tutors will use a proven computer-assisted model in which students do a lot of pair teaching.  This is what we now call our Tutoring With the Lightning Squad model, which achieved outcomes of +0.40 and +0.46 in two studies in the Baltimore City Public Schools (Madden & Slavin, 2017).  A high-poverty school of 500 students might engage about five tutors, providing extensive tutoring to the majority of students, for as many years as necessary.  One additional tutor or teacher will supervise the tutors and personally work with students having the most serious problems.   We will provide significant training and follow-up coaching to ensure that all tutors are effective.


Attendance and Health

Many students fail in reading or other outcomes because they have attendance problems or certain common health problems. We propose to provide a health aide to help solve these problems.

Attendance

Many students, especially those in high-poverty schools, fail because they do not attend school regularly. Yet there are several proven approaches for increasing attendance and reducing chronic truancy (Shi, Inns, Lake, & Slavin, 2019). Health aides will help teachers and other staff organize and manage effective attendance improvement approaches.

Vision Services

My colleagues and I have designed strategies to help ensure that all students who need eyeglasses receive them. A key problem in this work is ensuring that students who receive glasses use them, keep them safe, and replace them if they are lost or broken. Health aides will coordinate use of proven strategies to increase regular use of needed eyeglasses.


Asthma and other health problems

Many students in high-poverty schools suffer from chronic illnesses. Effective treatments or preventive measures are known for most of these, but they may not work if medications are not taken daily. For example, asthma is common in high-poverty schools, where it is the top cause of hospital referrals and a leading cause of death for school-age children. Inexpensive inhalers can substantially improve children’s health, yet many children do not regularly take their medicine. Studies suggest that having trained staff ensure that students take their medicine, and watch them doing so, can make a meaningful difference. The same may be true of other chronic, easily treated diseases that are common among children but often not consistently treated in inner-city schools. Health aides with special supplemental training may be able to play a key on-the-ground role in helping ensure effective treatment for asthma and other diseases.

Potential Impact

The Reading Safety Net is only a concept at present.  We are seeking funding to support its further development and evaluation.  As we work with front line educators, colleagues, and others to further develop this model, we are sure to find ways to make the approach more effective and cost-effective, and perhaps extend it to solve other key problems.

We cannot yet claim that the Reading Safety Net has been proven effective, although many of its components have been.  But we intend to do a series of pilots and component evaluations to progressively increase the impact, until that impact attains or surpasses the goal of ES=+0.50.  We hope that many other research teams will mobilize and obtain resources to find their own ways to +0.50.  A wide variety of approaches, each of which would be proven to meet this ambitious goal, would provide a range of effective choices for educational leaders and policy makers.  Each would be a powerful, replicable tool, capable of solving the core problems of education.

We know that with sufficient investment and encouragement from funders, this goal is attainable.  If it is in fact attainable, how could we accept anything less?

References

Borman, G., & Hewes, G. (2003).  Long-term effects and cost effectiveness of Success for All.  Educational Evaluation and Policy Analysis, 24 (2), 243-266.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2019). A synthesis of quantitative research on programs for struggling readers in elementary schools. Manuscript submitted for publication.

Madden, N. A., & Slavin, R. E. (2017). Evaluations of technology-assisted small-group tutoring for struggling readers. Reading & Writing Quarterly, 1-8.

Madden, N. A., Slavin, R. E., Karweit, N. L., Dolan, L., & Wasik, B. (1993). Success for All: Longitudinal effects of a schoolwide elementary restructuring program. American Educational Research Journal, 30, 123-148.

Shi, C., Inns, A., Lake, C., & Slavin, R. E. (2019). Effective school-based programs for K-12 students’ attendance: A best-evidence synthesis. Baltimore, MD: Center for Research and Reform in Education, Johns Hopkins University.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Effect Sizes and Additional Months of Gain: Can’t We Just Agree That More is Better?

In the 1984 mockumentary This is Spinal Tap, there is a running joke about a hapless band, Spinal Tap, which proudly bills itself “Britain’s Loudest Band.”  A pesky reporter keeps asking the band’s leader, “But how can you prove that you are Britain’s loudest band?” The band leader explains, with declining patience, that while ordinary amplifiers’ sound controls only go up to 10, Spinal Tap’s go up to 11.  “But those numbers are arbitrary,” says the reporter.  “They don’t mean a thing!”  “Don’t you get it?” asks the band leader.  “ELEVEN is more than TEN!  Anyone can see that!”

In educational research, we have an ongoing debate reminiscent of Spinal Tap.  Educational researchers speaking to other researchers invariably express the impact of educational treatments as effect sizes (the difference in adjusted means for the experimental and control groups divided by the unadjusted standard deviation).  All else being equal, higher effect sizes are better than lower ones.
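In symbols, using the definition just given (means adjusted for pretest differences, standard deviation left unadjusted):

$$ES = \frac{\bar{X}_{\text{experimental}} - \bar{X}_{\text{control}}}{SD_{\text{unadjusted}}}$$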

However, educators who are not trained in statistics often despise effect sizes.  “What do they mean?” they ask.  “Tell us how much difference the treatment makes in student learning!”

Researchers want to be understood, so they try to translate effect sizes into more educator-friendly equivalents.  The problem is that the friendlier the units, the more statistically problematic they are.  The friendliest of all is “additional months of learning.”  Researchers or educators can look on a chart and, for any particular effect size, they can find the number of “additional months of learning.”  The Education Endowment Foundation in England, which funds and reports on rigorous experiments, reports both effect sizes and additional months of learning, and provides tables to help people make the conversion.  But here’s the rub.  A recent article by Baird & Pane (2019) compared additional months of learning to three other translations of effect sizes.  Additional months of learning was rated highest in ease of use, but lowest in four other categories, such as transparency and consistency. For example, a month of learning clearly has a different meaning in kindergarten than it does in tenth grade.

The other translations rated higher by Baird and Pane were, at least to me, just as hard to understand as effect sizes.  For example, the What Works Clearinghouse presents, along with effect sizes, an “improvement index” that has the virtue of being equally incomprehensible to researchers and educators alike.

On one hand, arguing about outcome metrics is as silly as arguing the relative virtues of Fahrenheit and Celsius. If one can be directly converted into the other, who cares?

However, additional months of learning is often used to cover up very low effect sizes. I recently ran into an example of this in a series of studies by the Stanford Center for Research on Education Outcomes (CREDO), in which disadvantaged urban African American students gained 59 more “days of learning” than matched students not in charters in math, and 44 more days in reading. These numbers were cited in an editorial praising charter schools in the May 29 Washington Post.

However, these “days of learning” are misleading. The effect size for this same comparison was only +0.08 for math, and +0.06 for reading. Any researcher will tell you that these are very small effects. They were only made to look big by reporting the gains in days. These numbers not only magnify the apparent differences, but they also make them unstable. Would it interest you to know that White students in urban charter schools performed 36 days a year worse than matched students in math (ES = -0.05) and 14 days worse in reading (ES = -0.02)? How about Native American students in urban charter schools, whose scores were 70 days worse than matched students in non-charters in math (ES = -0.10), and equal in reading? I wrote about charter school studies in a recent blog. In that blog, I did not argue that charter schools are effective for disadvantaged African Americans but harmful for Whites and Native Americans. That seems unlikely. What I did argue is that the effects of charter schools are so small that the directions of the effects are unstable. The overall effects across all urban schools studied were only 40 days (ES = +0.055) in math and 28 days (ES = +0.04) in reading. These effects look big because of the “days of learning” transformation, but they are not.
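For readers who want to see how small effect sizes become impressive-sounding day counts, here is a minimal sketch of this kind of conversion in Python. The 180-day school year and the assumption that one year of typical growth corresponds to an effect size of about +0.25 are illustrative figures that happen to be consistent with the numbers quoted above; CREDO’s actual conversion may differ.

```python
# Illustrative only: converts an effect size into "days of learning," assuming
# (as stated above) that one 180-day school year of typical growth corresponds
# to an effect size of roughly +0.25.

DAYS_PER_SCHOOL_YEAR = 180
ASSUMED_ANNUAL_GROWTH_ES = 0.25

def effect_size_to_days(effect_size):
    return effect_size / ASSUMED_ANNUAL_GROWTH_ES * DAYS_PER_SCHOOL_YEAR

for es in (0.08, 0.06, 0.055, 0.04, -0.05):
    print(f"ES {es:+.3f} -> about {effect_size_to_days(es):+.0f} days of learning")
# e.g., ES +0.080 -> about +58 days; ES -0.050 -> about -36 days
```

The effect sizes barely move, but the day counts sound like real progress, which is exactly how a trivial difference gets dressed up as a meaningful one.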

In This is Spinal Tap, the argument about whether or not Spinal Tap is Britain’s loudest band is absurd. Any band can turn its amplifiers to the top and blow out everyone’s eardrums, whether the top is marked eleven or ten. In education, however, it does matter a great deal that educators take evidence into account in their decisions about educational programs. Using effect sizes, perhaps supplemented by additional months of learning, is one way to help readers understand outcomes of educational experiments. Using “days of learning,” however, is misleading, making very small impacts look important. Why not additional hours or minutes of learning, while we’re at it? Spinal Tap would be proud.

References

Baird, M., & Pane, J. F. (2019). Translating standardized effects of education programs into more interpretable metrics. Educational Researcher. Advance online publication. https://doi.org/10.3102/0013189X19848729

CREDO (2015). Overview of the Urban Charter School Study. Stanford, CA: Author.

Denying poor children a chance [Editorial]. (2019, May 29). The Washington Post, p. A16.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Charter Schools? Smarter Schools? Why Not Both?

I recently saw an editorial in the May 29 Washington Post, entitled “Denying Poor Children a Chance,” a pro-charter school opinion piece that makes dire predictions about the damage to poor and minority students that would follow if charter expansion were to be limited.  In education, it is common to see evidence-free opinions for and against charter schools, so I was glad to see actual data in the Post editorial.   In my view, if charter schools could routinely and substantially improve student outcomes, especially for disadvantaged students, I’d be a big fan.  My response to charter schools is the same as my response to everything else in education: Show me the evidence.

The Washington Post editorial cited a widely known 2015 Stanford CREDO study comparing urban charter schools to matched traditional public schools (TPS) in the same districts.  Evidence always attracts my attention, so I decided to look into this and other large, multi-district studies. Despite the Post’s enthusiasm for the data, the average effect size was only +0.055 for math and +0.04 for reading.  By anyone’s standards, these are very, very small outcomes.  Outcomes for poor, urban, African American students were somewhat higher, at +0.08 for math and +0.06 for reading, but on the other hand, average effect sizes for White students were negative, averaging -0.05 for math and -0.02 for reading.  Outcomes were also negative for Native American students: -0.10 for math, zero for reading.  With effect sizes so low, these small differences are probably just different flavors of zero.  A CREDO (2013) study of charter schools in 27 states, including non-urban as well as urban schools, found average effect sizes of +0.01 for math and -0.01 for reading. How much smaller can you get?

In fact, the CREDO studies have been widely criticized for using techniques that inflate the apparent effects of charter schools. They compare students in charter schools to students in traditional public schools, matching on pretests and ethnicity. This ignores the obvious fact that students in charter schools chose to go there, or their parents chose for them. There is every reason to believe that students who choose to attend charter schools are, on average, higher-achieving, more highly motivated, and better behaved than students who stay in traditional public schools. Gleason et al. (2010) found that students who applied to charter schools started off 16 percentage points higher in reading and 13 percentage points higher in math than others in the same schools who did not apply. Applicants were more likely to be White and less likely to be African American or Hispanic, and they were less likely to qualify for free lunch. Self-selection is a particular problem in studies of students who choose or are sent to “no-excuses” charters, such as KIPP or Success Academies, because the students or their parents know that students will be held to very high standards of behavior and accomplishment, and may be encouraged to leave the school if they do not meet those standards. (This is not a criticism of KIPP or Success Academies, but when such charter systems use lotteries to select students, the students who show up for the lotteries were at least motivated to participate in a lottery to attend a very demanding school.)

Well-designed studies of charter schools usually focus on schools that use lotteries to select students, and then they compare the students who were successful in the lottery to those who were not so lucky.  This eliminates the self-selection problem, as students were selected by a random process.  The CREDO studies do not do this, and this may be why their studies report higher (though still very small) effect sizes than those reported by syntheses of studies of students who all applied to charters, but may have been “lotteried in” or “lotteried out” at random.  A very rigorous WWC synthesis of such studies by Gleason et al. (2010) found that middle school students who were lotteried into charter schools in 32 states performed non-significantly worse than those lotteried out, in math (ES=-0.06) and in reading (ES=-0.08).  A 2015 update of the WWC study found very similar, slightly negative outcomes in reading and math.

It is important to note that “no-excuses” charter schools, mentioned earlier, have had more positive outcomes than other charters.  A recent review of lottery studies by Cheng et al. (2017) found effect sizes of +0.25 for math and +0.17 for reading.  However, such “no-excuses” charters are a tiny percentage of all charters nationwide.


Other meta-analyses of studies of achievement outcomes of charter schools also exist, but none found effect sizes as high as those in the CREDO urban study. The means of +0.055 for math and +0.04 for reading therefore represent upper bounds for effects of urban charter schools.

Charter Schools or Smarter Schools?

So far, every study of achievement effects of charters has focused on impacts of charters on achievement compared to those of traditional public schools.  However, this should not be the only question.  “Charters” and “non-charters” do not exhaust the range of possibilities.

What if we instead ask this question: Among the range of programs available, which are most likely to be most effective at scale?

To illustrate the importance of this question, consider a study in England, which evaluated a program called Engaging Parents Through Mobile Phones. The program involves texting parents on cell phones to alert them to upcoming tests, inform them about whether students are completing their homework, and tell them what students are being taught in school. A randomized evaluation (Miller et al., 2016) found effect sizes of +0.06 for math and +0.03 for reading, remarkably similar to the urban charter school effects reported by CREDO (2015). The cost of the mobile phone program was £6 per student per year, or about $7.80. If you like the outcomes of charter schools, might you prefer to get the same outcomes for $7.80 per child per year, without all the political, legal, and financial stresses of charter schools?

The point here is that rather than arguing about the size of small charter effects, one could consider charters a “treatment” and compare them to other proven approaches. On our Evidence for ESSA website, we list 112 reading and math programs that meet ESSA standards for “Strong,” “Moderate,” or “Promising” evidence of effectiveness. Of these, 107 had effect sizes larger than those CREDO (2015) reports for urban charter schools. In both math and reading, there are many programs with average effect sizes of +0.20, +0.30, up to more than +0.60. If applied as they were in the research, the best of these programs could, for example, entirely overcome Black-White and Hispanic-White achievement gaps in one or two years.

A few charter school networks have their own proven educational approaches, but the many charters that do not have proven programs should be looking for them.  Most proven programs work just as well in charter schools as they do in traditional public schools, so there is no reason existing charter schools should not proactively seek proven programs to increase their outcomes.  For new charters, wouldn’t it make sense for chartering agencies to encourage charter applicants to systematically search for and propose to adopt programs that have strong evidence of effectiveness?  Many charter schools already use proven programs.  In fact, there are several that specifically became charters to enable them to adopt or maintain our Success for All whole-school reform program.

There is no reason for any conflict between charter schools and smarter schools.  The goal of every school, regardless of its governance, should be to help students achieve their full potential, and every leader of a charter or non-charter school would agree with this. Whatever we think about governance, all schools, traditional or charter, should get smarter, using proven programs of all sorts to improve student outcomes.

References

Cheng, A., Hitt, C., Kisida, B., & Mills, J. N. (2017). “No excuses” charter schools: A meta-analysis of the experimental evidence on student achievement. Journal of School Choice, 11 (2), 209-238.

Clark, M.A., Gleason, P. M., Tuttle, C. C., & Silverberg, M. K., (2015). Do charter schools improve student achievement? Educational Evaluation and Policy Analysis, 37 (4), 419-436.

Gleason, P. M., Clark, M. A., Tuttle, C. C., & Dwoyer, E. (2010). The evaluation of charter school impacts. Washington, DC: What Works Clearinghouse.

Miller, S., Davison, J., Yohanis, J., Sloan, S., Gildea, A., & Thurston, A. (2016). Texting parents: Evaluation report and executive summary. London: Education Endowment Foundation.

Denying poor children a chance [Editorial]. (2019, May 29). The Washington Post, p. A16.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.