Why Not the Best?

In 1879, Thomas Edison invented the first practical lightbulb. The main problem he faced was finding a filament that would glow but not burn out too quickly. To find it, he tried more than 6,000 different substances that had some promise as filaments. The one he found was carbonized cotton, which worked far better than all the others (tungsten, which we use now, came much later).

Of course, the incandescent light changed the world. It replaced far more expensive gas lighting systems, and was much more versatile. The lightbulb captured the evening and nighttime hours for every kind of human activity.

Yet if the lightbulb had been an educational innovation, it probably would have been proclaimed a dismal failure. Skeptics would have noted that only one out of six thousand filaments worked. Meta-analysts would have averaged the effect sizes for all 6,000 experiments and concluded that the average effect size across the 6,000 filaments was only +0.000000001. Hardly worthwhile. If Edison’s experiments had been funded by government, politicians would have complained that 5,999 of Edison’s filaments were a total waste of taxpayers’ money. Economists would have computed benefit-cost ratios and concluded that even if Edison’s light worked, the cost of making the first one was astronomical, not to mention the untold cost of setting up electrical generation and wiring systems.
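To make the satire concrete, here is a toy calculation (a sketch with invented effect sizes; nobody actually computed statistics on Edison’s filaments) showing how a simple average buries the one filament that worked:

```python
# Toy illustration with invented numbers: one success among 6,000 trials
# all but disappears in a simple average.
effect_sizes = [0.0] * 5999 + [0.5]  # 5,999 duds, one carbonized-cotton success

average = sum(effect_sizes) / len(effect_sizes)
print(f"Average effect size: {average:.5f}")             # about +0.00008: "hardly worthwhile"
print(f"Best effect size:    {max(effect_sizes):+.2f}")  # the one that lit the world
```

The average says give up; the maximum says Edison found it.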

This is all ridiculous, you must be saying. But in the world of evidence-based education, comparable things happen all the time. In 2003, Borman et al. did a meta-analysis of 300 studies of 29 comprehensive (whole-school) reform designs. They identified three as having solid evidence of effectiveness. Rather than celebrating and disseminating those three (and continuing research and development to identify more of them), the U.S. Congress ended its funding for dissemination of comprehensive school reform programs. Turn out the light before you leave, Mr. Edison!

Another common practice in education is to do meta-analyses averaging outcomes across an entire category of programs or policies, ignoring the fact that some distinctively different and far more effective programs are swallowed up in the averages. A good example is charter schools. Large-scale meta-analyses by Stanford’s CREDO (2013) found that the average effect sizes for charter schools are effectively zero. A 2015 analysis found better, but still very small, effect sizes in urban districts (ES = +0.04 in reading, +0.05 in math). The What Works Clearinghouse published a 2010 review that found slight negative effects of middle school charters. These findings are useful in disabusing us of the idea that charter schools are magic and get positive outcomes just because they are charter schools. However, they do nothing to tell us about extraordinary charter schools using methods that other schools (perhaps including non-charters) could also use. There is more positive evidence relating to “no-excuses” schools, such as KIPP and Success Academies, but among the thousands of charters that now exist, is this the only type of charter worth replicating? There must be some bright lights among all these bulbs.

As a third example, there are now many tutoring programs used in elementary reading and math with struggling learners. Effect sizes across all forms of tutoring average about +0.30 in both reading and math, but there are reading tutoring approaches with effect sizes of +0.50 or more. If these programs are readily available, why would schools adopt programs less effective than the best? The average is useful for research purposes, and there are always considerations of cost and availability, but I would think any school would want to ignore the average for all types of programs and look into the ones that can do the most for their kids, at a reasonable cost.

I’ve often heard teachers and principals point out that “parents send us the best kids they have.” Yes they do, and for this reason it is our responsibility as educators to give those kids the best programs we can. We often describe educating students as enlightening them, or lifting the lamp of learning, or fiat lux. Perhaps the best way to fiat a little more lux is to take a page from Edison, the great luxmeister: Experiment tirelessly until we find what works. Then use the best we have.

Reference

Borman, G.D., Hewes, G. M., Overman, L.T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73(2), 125-230.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 


The Gap

Recently, Maryland released its 2019 state PARCC scores.  I read an article about the scores in the Baltimore Sun.  The pattern of scores was the same as usual, some up, some down. Baltimore City was in last place, as usual.  The Sun helpfully noted that this was probably due to high levels of poverty in Baltimore.  Then the article noted that there was a serious statewide gap between African American and White students, followed by the usual shocked but resolute statements about closing the gap from local superintendents.

Some of the superintendents said that in order to combat the gap, they were going to take a careful look at the curriculum.  There is nothing wrong with looking at curriculum.  All students should receive the best curriculum we can provide them.  However, as a means of reducing the gap, changing the curriculum is not likely to make much difference.

First, there is plentiful evidence from rigorous studies showing that changing from one curriculum to another, one textbook to another, or one set of standards to another makes little difference in student achievement. Some curricula have more interesting or up-to-date content than others. Some meet currently popular standards better than others. But actual, meaningful increases in achievement compared to a control group using the old curriculum? This hardly ever happens. We once examined all of the textbooks rated “green” (the top ranking on EdReports, which reviews textbooks for alignment with college- and career-ready standards). Out of dozens of reading and math texts with this top rating, only two had small positive impacts on learning compared to control groups. In contrast, we have found more than 100 reading and math programs that are not textbooks or curricula but have been shown to increase student achievement significantly more than control groups using current methods (see www.evidenceforessa.org).

But remember that at the moment, I am talking about reducing gaps, not increasing achievement overall. I am unaware of any curriculum, textbook, or set of standards that is proven to reduce gaps. Why should it? By definition, a curriculum or set of standards is for all students. In the rare cases when a curriculum does improve achievement overall, there is little reason to expect it to increase performance for one specific group more than another.

The way to actually reduce gaps is to provide something extremely effective for struggling students. For example, the Sun article on the PARCC scores highlighted Lakeland Elementary/Middle, a Baltimore City school that gained 20 points on PARCC since 2015. How did they do it? The University of Maryland, Baltimore County (UMBC) sent groups of undergraduate education majors to Lakeland to provide tutoring and mentoring.  The Lakeland kids were very excited, and apparently learned a lot. I can’t provide rigorous evidence for the UMBC program, but there is quite a lot of evidence for similar programs, in which capable and motivated tutors without teaching certificates work with small groups of students in reading or math.

Tutoring programs and other initiatives that focus on the specific kids who are struggling have an obvious link to reducing gaps, because they go straight to where the problem is rather than doing something less targeted and less intensive.


Serious gap-reduction approaches can be used with any curriculum or set of standards. Districts focused on standards-based reform may also provide tutoring or other proven gap-reduction approaches along with new textbooks to students who need them.  The combination can be powerful. But the tutoring would most likely have worked with the old curriculum, too.

If all struggling students received programs effective enough to bring all of them to current national averages, the U.S. would be the highest-performing national school system in the world.  Social problems due to inequality, frustration, and inadequate skills would disappear. Schools would be happier places for kids and teachers alike.

The gap is a problem we can solve, if we decide to do so.  Given the stakes involved for our economy, society, and future, how could we not?

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Superman and Statistics

In the 1978 movie “Superman,” Lois Lane, star journalist, crash-lands in a helicopter on top of a 50-story skyscraper. The helicopter is left hanging from the edge of the roof by a strut, and Lois is hanging on to a microphone cord. Finally, the cord breaks, and Lois falls 45 floors before (of course) she is swooped up by Superman, who flies her back to the roof and sets her down gently. Then he says to her:

“I hope this doesn’t put you off of flying. Statistically speaking, it is the safest form of travel.”

She faints.

Don’t let the superhero thing fool you: The “S” is for “statistics.”

I’ve often had the very same problem whenever I do public speaking.  As soon as I mention statistics, some of the audience faints dead away. Or perhaps they are falling asleep. But either way, saying the word “statistics” is not usually a good way to make friends and influence people.

 

The fact is, most people don’t like statistics. Or more accurately, people don’t like statistics except when the statistical findings agree with their prejudices. At an IES meeting several years ago, a well-respected superintendent was invited to speak to what is perhaps the nerdiest, most statistically minded group in all of education, except for an SREE conference. He actually said, without the slightest indication of humor or irony, that “GOOD research is that which confirms what I have always believed. BAD research is that which disagrees with what I have always believed.” I’d guess that the great majority of superintendents and other educational leaders would agree, even if few would say so out loud at an IES meeting.

If educational leaders only attend to statistics that confirm their prior beliefs, one might argue that, well, at least they do attend to SOME research.  But research in an applied field like education is of value only if it leads to positive changes in practice.  If influential educators only respect research that confirms their previous beliefs, then they never change their practices or policies because of research, and policies and practices stay the same forever, or change only due to politics, marketing, and fads. Which is exactly how most change does in fact happen in education.  If you wonder why educational outcomes change so slowly, if at all, you need look no further than this.

Why is it that educators pay so little attention to research, whatever its outcomes, much in contrast to the situation in many other fields?  Some people argue that, unlike medicine, where doctors are well trained in research, educators lack such training.  Yet agriculture makes far more practical use of evidence than education does, and most farmers, while outstanding in their fields, are not known for their research savvy.

Farmers are, however, very savvy business owners, and they can clearly see that their financial success depends on using seeds, stock, methods, fertilizers, and insecticides proven to be effective, cost-effective, and sustainable.  Similarly, research plays a crucial role in technology, engineering, materials science, and every applied field in which better methods, with proven outcomes, lead to increased profits.

So one major reason for limited use of research in education is that adopting proven methods rarely leads to enhanced profit. Even in parts of the educational enterprise where profit is involved, economic success still depends far more on politics, marketing, and fads than on evidence. Outcomes of adopting proven programs or practices may not have an obvious impact on overall school outcomes, because achievement is invariably tangled up with factors such as the social class of children and schools’ abilities to attract skilled teachers and principals. Ask parents whether they would rather have their child go to a school in which all students have educated, upper-middle class parents, or to a school that uses proven instructional strategies in every subject and grade level. The problem is that there are only so many educated, upper-middle class parents to go around, so schools and parents often focus on getting the best possible demographics in their school rather than on adopting proven teaching methods.

How can education begin to make the rapid, irreversible improvements characteristic of agriculture, technology, and medicine?  The answer has to take into account the fundamental fact that education is a government monopoly.  I’m not arguing whether or not this is a good thing, but it is certain to be true for many years, perhaps forever.  The parts of education that are not part of government are private schools, and these are very few in number (charter schools are funded by government, of course).

Because government funds nearly all schools, it has both the responsibility and the financial capacity to do whatever is feasible to make schools as effective as it possibly can.  This is true of all levels of government, federal, state, and local.  Because it is in charge of all federal research funding, the federal government is the most logical organization to lead any efforts to increase use of proven programs and practices in education, but forward-looking state and local government could also play a major role if they chose to do so.

Government can and must take on the role that profit plays in other research-focused fields, such as agriculture, medicine, and engineering.   As I’ve argued many times, government should use national funding to incentivize schools to adopt proven programs.  For example, the federal government could provide funding to schools to enable them to pay the costs of adopting programs found to be effective in rigorous research.  Under ESSA, it is already doing this, but right now the main focus is only on Title I school improvement grants.   These go to schools that are among the lowest performers in their states.  School improvement is a good place to start, but it affects a modest number of extremely disadvantaged schools.  Such schools do need substantial funding and expertise to make the substantial gains they are asked to make, but they are so unlike the majority of Title I schools that they are not sufficient examples of what evidence-based reform could achieve.  Making all Title I schools eligible for incentive funding to implement proven programs, or at least working toward this goal over time, would arouse the interest and enthusiasm of a much greater set of schools, virtually all of which need major changes in practices to reach national standards.

To make this policy work, the federal government would need to add considerably to the funding it provides for educational research and development, and it would need to rigorously evaluate programs that show the greatest promise to make large, pragmatically important differences in schools’ outcomes in key areas, such as reading, mathematics, science, and English for English learners.  One way to do this cost-effectively would be to allow districts (or consortia of districts) to put forward pairs of matched schools for potential funding.   Districts or consortia awarded grants might then be evaluated by federal contractors, who would randomly assign one school in each pair to receive the program, while the pair members not selected would serve as a control group.  In this way, programs that had been found effective in initial research might have their evaluations replicated many times, at a very low evaluation cost.  This pair evaluation design could greatly increase the number of schools using proven programs, and could add substantially to the set of programs known to be effective.  This design could also give many more districts experience with top-quality experimental research, building support for the idea that research is of value to educators and students.
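To make the matched-pair design concrete, here is a minimal sketch of the assignment and analysis steps (the school names, data structures, and simplified analysis below are my own illustration, not anything specified in federal policy; a real cluster randomized trial would analyze outcomes with multilevel models):

```python
import random
import statistics

def assign_pairs(pairs, seed=None):
    """Within each district-nominated pair of matched schools, randomly
    assign one school to the program and the other to the control group."""
    rng = random.Random(seed)
    assignments = []
    for school_a, school_b in pairs:
        if rng.random() < 0.5:
            assignments.append({"program": school_a, "control": school_b})
        else:
            assignments.append({"program": school_b, "control": school_a})
    return assignments

def effect_size(program_means, control_means):
    """Standardized mean difference across school mean scores
    (a simplification for illustration)."""
    diff = statistics.mean(program_means) - statistics.mean(control_means)
    pooled_sd = statistics.stdev(program_means + control_means)
    return diff / pooled_sd

# Hypothetical example: three matched pairs put forward by a district.
pairs = [("Lincoln ES", "Douglass ES"),
         ("Carver ES", "Tubman ES"),
         ("King ES", "Parks ES")]
for pair in assign_pairs(pairs, seed=1):
    print(pair)

# Hypothetical post-test school means (illustrative numbers only):
program_means = [52.0, 49.5, 51.0]
control_means = [48.0, 47.5, 49.0]
print(f"Illustrative effect size: {effect_size(program_means, control_means):+.2f}")
```

Because randomization happens within pairs of already-similar schools, each replication is cheap to evaluate, which is exactly what makes the design attractive at scale.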

Getting back to Superman and Lois Lane, it is only natural to expect that Lois might be reluctant to get on another helicopter anytime soon, no matter what the evidence says.  However, when we are making decisions on behalf of children, it’s not enough to just pay attention to our own personal experience.  Listen to Superman.  The evidence matters.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Educational Policies vs. Educational Programs: Evidence from France

Ask any parent what their kids say when they ask them what they did in school today. Invariably, they respond, “Nuffin,” or some equivalent. My four-year-old granddaughter always says, “I played with my fwends.” All well and good.

However, in educational policy, policy makers often give the very same answer when asked, “What did the schools not using the (insert latest policy darling) do?”

“Nuffin’.” Or they say, “Whatever they usually do.” There’s nothing wrong with the latter answer if it’s true. But given the many programs now known to improve student achievement (see www.evidenceforessa.org), why don’t evaluators compare outcomes of new policy initiatives to those of proven educational programs known to improve the same outcomes the policy innovation is supposed to improve, perhaps at far lower cost per student? The evaluations should also compare to “business as usual,” but adding proven programs to evaluations of large policy innovations would help avoid declaring policy innovations successful when they are in fact just slightly more effective than “business as usual,” and much less effective or less cost-effective than alternative proven approaches. For example, when evaluating charter schools, why not routinely compare them to whole-school reform models that have similar objectives? When evaluating extending the school day or school year to help high-poverty schools, why not compare these innovations to using the same amount of additional money to hire tutors who use proven tutoring models to help struggling students? In evaluating policies in which students are held back if they do not read at grade level by third grade, why not compare these approaches to intensive phonics instruction and tutoring in grades K-3, which are known to greatly improve student reading achievement?

There is nuffin like a good fwend.

As one example of research comparing a policy intervention to a promising educational intervention, I recently saw a very interesting pair of studies from France. Ecalle, Gomes, Auphan, Cros, & Magnan (2019) compared two interventions applied in special priority areas with high poverty levels. Both interventions focused on reading in first grade.

One of the interventions involved halving class size, from approximately 24 students to 12. The other provided intensive reading instruction in small groups (4-6 children) to students who were struggling in reading, as well as less intensive instruction in larger groups (10-12 students). Low achievers received two 30-minute sessions each day for a year, while higher-performing readers received one 30-minute session each day. In both cases, the focus of instruction was on phonics. In all cases, the additional instruction was provided by the students’ usual teachers.

The students in small classes were compared to students in ordinary-sized classes, while the students in the educational intervention were compared to students in same-sized classes who did not get the group interventions. Similar measures and analyses were used in both comparisons.

The results were nearly identical for the class size policy and the educational intervention. Halving class size had effect sizes of +0.14 for word reading and +0.22 for spelling. Results for the educational intervention were +0.13 for word reading, +0.12 for spelling, +0.14 for a group test of reading comprehension, +0.32 for an individual test of comprehension, and +0.19 for fluency.

These studies are less than perfect in experimental design, but they are nevertheless interesting. Most importantly, the class size policy required an additional teacher for each class of 24. Using Maryland annual teacher salaries and benefits ($84,000), that means the cost in our state would be about $3500 per student. The educational intervention required one day of training and some materials. There was virtually no difference in outcomes, but the differences in cost were staggering.
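For readers who want to check the arithmetic behind that estimate (using the figures given above), halving a class of 24 requires roughly one added teacher per 24 students, so:

$$\frac{\$84{,}000 \text{ per added teacher}}{24 \text{ students}} \approx \$3{,}500 \text{ per student per year}$$

The small-group intervention, by contrast, required mainly one day of teacher training and some materials.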

The class size policy was mandated by the Ministry of Education. The educational intervention was offered to schools and provided by a university and a non-profit. As is so often the case, the policy intervention was simplistic, easy to describe in the newspaper, and minimally effective. The class size policy reminds me of a Florida program that extended the school schedule by an hour every day in high-poverty schools, mainly to provide more time for reading instruction. The cost per child was about $800 per year. The outcomes were minimal (ES=+0.05).

After many years of watching what schools do and reviewing research on outcomes of innovations, I find it depressing that policies mandated on a substantial scale are so often found to be ineffective. They are usually also far more expensive than rigorously evaluated programs that work much better, but that are a bit more difficult to describe and rarely arouse great debate in the political arena. It’s not that anyone is opposed to the educational intervention, but it is a lot easier to carry a placard saying “Reduce Class Size Now!” than to carry one saying “Provide Intensive Phonics in Small Groups with More Supplemental Teaching for the Lowest Achievers Now!” The latter just does not fit on a placard, and though easy to understand if explained, it does not lend itself to easy communication. Actually, there are much more effective first grade interventions than the one evaluated in France (see www.evidenceforessa.org). At a cost much less than $3500 per student, several one-to-one tutoring programs using well-trained teaching assistants as tutors would have been able to produce an effect size of more than +0.50 for all first graders on average. This would even fit on a placard: “Tutoring Now!”

I am all in favor of trying out policy innovations. But when parents of kids in a proven-program comparison group are asked what they did in school today, they shouldn’t say “nuffin’”. They should say, “My tooter taught me to read. And I played with my fwends.”

Reference

Ecalle, J., Gomes, C., Auphan, P., Cros, L., & Magnan, A. (2019). Effects of policy and educational interventions intended to reduce difficulties in literacy skills in grade 1. Studies in Educational Evaluation, 61, 12-20.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Is ES=+0.50 Achievable? Schoolwide Approaches That Might Meet This Standard

In a recent blog, “Make No Small Plans,” I proposed a system innovators could use to create very effective schoolwide programs.  I defined these as programs capable of making a difference in student achievement large enough to bring entire schools serving disadvantaged students to the levels typical of middle class schools.  On average, that would mean creating school models that could routinely add an effect size of +0.50 for entire disadvantaged schools.  +0.50, or half a standard deviation, is roughly the average difference between students who qualify for free lunch and those who do not, between African American and White students, and between Hispanic and non-Hispanic White students.
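For readers newer to this metric, the effect size used throughout this blog is the standardized mean difference:

$$ES = \frac{\bar{X}_{\text{program}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}$$

so +0.50 means that the average student in the program group scores half a standard deviation higher than the average student in the control group, enough, on average, to close the gaps just described.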

Today, I wanted to give some examples of approaches intended to meet the +0.50 goal. From prior work, my colleagues and I have already created a successful schoolwide reform model, Success for All, which, with adequate numbers of tutors (as many as six per school), achieved reading effect sizes in high-poverty Baltimore elementary schools of over +0.50 for all students and +0.75 for the lowest-achieving quarter of students (Madden et al., 1993). These outcomes were maintained through eighth grade, along with substantial reductions in grade retentions and special education placements (Borman & Hewes, 2003). Steubenville, in Ohio’s Rust Belt, uses Success for All in all of its Title I elementary schools, providing several tutors in each. Each year, Steubenville schools score among the highest in Ohio on state tests, exceeding most wealthy suburban schools. Other SFA schools with sufficient tutors are also exemplary in achievement gains. Yet these schools face a dilemma. Most cannot afford significant numbers of tutors. They still get excellent results, but not as strong as those typical of SFA schools that do have sufficient tutors.


We are now planning another approach, also intended to produce schoolwide effect sizes of at least +0.50 in schools serving disadvantaged students.   However, in this case our emphasis is on tutoring, the most effective strategy known for improving the achievement of struggling readers (Inns et al., 2019).  We are calling this approach the Reading Safety Net.  Main components of this plan are as follows:

Tutoring

Like the most successful forms of Success for All, the Reading Safety Net places a substantial emphasis on tutoring.  Tutors will be well-qualified teaching assistants with BAs but not teaching certificates, extensively trained to provide one-to-four tutoring.   Tutors will use a proven computer-assisted model in which students do a lot of pair teaching.  This is what we now call our Tutoring With the Lightning Squad model, which achieved outcomes of +0.40 and +0.46 in two studies in the Baltimore City Public Schools (Madden & Slavin, 2017).  A high-poverty school of 500 students might engage about five tutors, providing extensive tutoring to the majority of students, for as many years as necessary.  One additional tutor or teacher will supervise the tutors and personally work with students having the most serious problems.   We will provide significant training and follow-up coaching to ensure that all tutors are effective.


Attendance and Health

Many students fail in reading or other areas because they have attendance problems or certain common health problems. We propose to provide a health aide to help solve these problems.

Attendance

Many students, especially those in high-poverty schools, fail because they do not attend school regularly. Yet there are several proven approaches for increasing attendance and reducing chronic truancy (Shi, Inns, Lake, & Slavin, 2019). Health aides will help teachers and other staff organize and manage effective attendance improvement approaches.

Vision Services

My colleagues and I have designed strategies to help ensure that all students who need eyeglasses receive them. A key problem in this work is ensuring that students who receive glasses use them, keep them safe, and replace them if they are lost or broken. Health aides will coordinate use of proven strategies to increase regular use of needed eyeglasses.


Asthma and Other Health Problems

Many students in high-poverty schools suffer from chronic illnesses. Prevention or treatment is known for most of these, but treatments may not work if medications are not taken daily. For example, asthma is common in high-poverty schools, where it is the top cause of hospital referrals and a leading cause of death for school-age children. Inexpensive inhalers can substantially improve children’s health, yet many children do not regularly take their medicine. Studies suggest that having trained staff ensure that students take their medicine, and watch them doing so, can make a meaningful difference. The same may be true of other chronic, easily treated diseases that are common among children but often not consistently treated in inner-city schools. Health aides with special supplemental training may be able to play a key on-the-ground role in helping ensure effective treatment for asthma and other diseases.

Potential Impact

The Reading Safety Net is only a concept at present.  We are seeking funding to support its further development and evaluation.  As we work with front line educators, colleagues, and others to further develop this model, we are sure to find ways to make the approach more effective and cost-effective, and perhaps extend it to solve other key problems.

We cannot yet claim that the Reading Safety Net has been proven effective, although many of its components have been.  But we intend to do a series of pilots and component evaluations to progressively increase the impact, until that impact attains or surpasses the goal of ES=+0.50.  We hope that many other research teams will mobilize and obtain resources to find their own ways to +0.50.  A wide variety of approaches, each of which would be proven to meet this ambitious goal, would provide a range of effective choices for educational leaders and policy makers.  Each would be a powerful, replicable tool, capable of solving the core problems of education.

We know that with sufficient investment and encouragement from funders, this goal is attainable.  If it is in fact attainable, how could we accept anything less?

References

Borman, G., & Hewes, G. (2003). Long-term effects and cost effectiveness of Success for All. Educational Evaluation and Policy Analysis, 24(2), 243-266.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2019). A synthesis of quantitative research on programs for struggling readers in elementary schools. Manuscript submitted for publication.

Madden, N. A., & Slavin, R. E. (2017). Evaluations of technology-assisted small-group tutoring for struggling readers. Reading & Writing Quarterly, 1-8.

Madden, N. A., Slavin, R. E., Karweit, N. L., Dolan, L. J., & Wasik, B. A. (1993). Success for All: Longitudinal effects of a restructuring program for inner-city elementary schools. American Educational Research Journal, 30, 123-148.

Shi, C., Inns, A., Lake, C., & Slavin, R. E. (2019). Effective school-based programs for K-12 students’ attendance: A best-evidence synthesis. Baltimore, MD: Center for Research and Reform in Education, Johns Hopkins University.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Make No Small Plans

“Make no little plans; they have no magic to stir men’s blood, and probably themselves will not be realized. Make big plans, aim high in hope and work, remembering that a noble, logical diagram, once recorded, will never die…”

-Daniel Burnham, American architect, 1910

More than 100 years ago, architect Daniel Burnham expressed an important insight. “Make no little plans,” he said. Many people have said that, one way or another. But Burnham’s insight was that big plans matter because they “have magic to stir men’s blood.” Small plans do not, and for this reason may never even be implemented. Burnham believed that even if big plans fail, they have influence into the future, as little plans do not.

Make no small plans.

In education, we sometimes have big plans. Examples include comprehensive school reform in the 1990s, charter schools in the 2000s, and evidence-based reform today. None of these have yet produced revolutionary positive outcomes, but all of them have captured the public imagination. Even if you are not an advocate of any of these, you cannot ignore them, as they take on a life of their own. When conditions are right, they will return many times, in many forms, and may eventually lead to substantial impacts. In medicine, it was demonstrated in the mid-1800s that germs cause disease and that medicine could advance through rigorous experimentation (think Lister and Pasteur, for example). Yet sterile procedures in operations and disciplined research on practical treatments took 100 years to prevail; the medical profession resisted both for many years. But sterile procedures and evidence-based medicine were big ideas, and they remained big ideas through all the time it took them to win out.

Big Plans in Education

In education, as in medicine long ago, we have thousands of important problems, and good work continues (and needs to continue) on most of them. However, at least in American education, there is one crucial problem that dwarfs all others and lends itself to truly big plans. This is the achievement gap between students from middle class backgrounds and those from disadvantaged backgrounds. As noted in my April 25 blog, the achievement gaps between students who qualify for free lunch and those who do not, between African American and White students, and between Hispanic students and non-Hispanic White students, all average an effect size of about +0.50. This presents a serious challenge. However, as I pointed out in that blog, there are several programs in existence today capable of adding an effect size of +0.50 to the reading or math achievement of students at risk. All programs that can do this involve one-to-one or one-to-small group tutoring. Tutoring is expensive, but recent research has found that well-trained and well-supervised tutors with BAs, but not necessarily teaching certificates, can obtain the same outcomes as certified teachers do, at half the cost.

Using our own Success for All program with six tutors per school (K-5), high-poverty African American elementary schools in Baltimore obtained effect sizes averaging +0.50 for all students and +0.75 for students in the lowest 25% of their grades (Madden et al., 1993). A follow-up to eighth grade found that achievement outcomes were maintained, and both retentions and special education placements were cut in half (Borman & Hewes, 2003). We have not had the opportunity to once again implement Success for All with so much tutoring included, but even with fewer tutors, Success for All has had substantial impacts. Cheung et al. (2019) found an average effect size of +0.27 across 28 randomized and matched studies, a more than respectable outcome for a whole-school intervention. For the lowest-achieving students, the average was +0.56.

Knowing that Success for All can achieve these outcomes is important in itself, but it is also an indication that substantial positive effects can be achieved for whole schools, and with sufficient tutors, can equal the entire achievement gaps according to socio-economic status and race. If one program can do this, why not many others?

Imagine that the federal government or other large funders decided to support the development and evaluation of several different ideas. Funders might establish a goal of increasing reading achievement by an effect size of +0.50, or as close as possible to this level, working with high-poverty schools. Funders would seek organizations that have already demonstrated success at an impressive level, but not yet +0.50, who could describe a compelling strategy to increase their impact to +0.50 or more. Depending on the programs’ accomplishments and needs, they might be funded to experiment with enhancements to their promising model. For example, they might add staff, add time (e.g., continue for multiple years), or add additional program components likely to strengthen the overall model. Once programs could demonstrate substantial outcomes in pilots, they might be funded to do a cluster randomized trial. If this experiment shows positive effects approaching +0.50 or more, the developers might receive funding for scale-up. If the outcomes are substantially positive but significantly less than +0.50, the funders might decide to help the developers make changes leading up to a second randomized experiment.

There are many details to be worked out, but the core idea could capture the imagination and energy of educators and public-spirited citizens alike. This time, we are not looking for marginal changes that can be implemented cheaply. This time, we will not quit until we have many proven, replicable programs, each of which is so powerful that it can, over a period of years, remedy the entire achievement gap. This time, we are not making changes in policy or governance and hoping for the best. This time, we are going directly to the schools where the disadvantaged kids are, and we are not declaring victory until we can guarantee such students gains that will give them the same outcomes as those of the middle class kids in the suburbs.

Perhaps the biggest idea of all is the idea that we need big ideas with big outcomes!

Anyway, this is my big plan. What’s yours?

————

Note: Just as I was starting on this blog, I got an email from Ulrich Boser at the Center for American Progress. CAP and the Thomas B. Fordham Institute are jointly sponsoring an “Education Moonshot,” including a competition with a grand prize of $10,000 for a “moonshot idea that will revolutionize schooling and dramatically improve student outcomes.” For more on this, please visit the announcement site. Submissions are due August 1st at this online portal and involve telling them in 500 words your, well, big plan.

 

References

Borman, G., & Hewes, G. (2003). Long-term effects and cost effectiveness of Success for All. Educational Evaluation and Policy Analysis, 24(2), 243-266.

Cheung, A., Xie, C., Zhuang, T., & Slavin, R. E. (2019). Success for All: A quantitative synthesis of evaluations. Manuscript submitted for publication.

Madden, N.A., Slavin, R.E., Karweit, N.L., Dolan, L.J., & Wasik, B.A. (1993).  Success for All:  Longitudinal effects of a restructuring program for inner-city elementary schools.  American Educational Research Journal, 30, 123-148.

 

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Evidence For Revolution

In the 1973 movie classic “Sleeper,” Woody Allen plays a New York health food store owner who wakes up 200 years in the future, in a desolate environment.

“What happened to New York?” he asks the character played by Diane Keaton.  She replies, “It was destroyed.  Some guy named Al Shanker got hold of a nuclear weapon.”

I think every member of the American Federation of Teachers knows this line. Firebrand educator Al Shanker, longtime president of the AFT, would never have hurt anyone. But short of that, he would do whatever it took to fight for teachers’ rights, and most importantly, for the rights of students to receive a great education. In fact, he saw that the only way for teachers to receive the respect, fair treatment, and adequate compensation they deserved, and still deserve, was to demonstrate that they had skills, not possessed by the general public, that could have powerful impacts on students’ learning. Physicians are much respected and well paid because they have special knowledge of how to prevent and cure disease, and to do this they have available a vast armamentarium of drugs, devices, and procedures, all proven to work in rigorous research.

Shanker was a huge fan of evidence in education: first because evidence-based practice helps students succeed, but also because teachers who use proven programs and practices show that they have specialized knowledge able to ensure students’ success, and therefore deserve respect and fair compensation.

The Revolutionary Potential of Evidence in Education

The reality is that in most school districts, especially large ones, most power resides in the central office, not in individual schools.  The district chooses textbooks, computer technology, benchmark assessments, and much more.  There are probably principals and teachers on the committees that make these decisions, but once the decisions are made, the building-level staff is supposed to fall in line and do as they are told.  When I speak to principals and teachers, they are astonished to learn that they can easily look up on www.evidenceforessa.org just about any program their district is using and find out what the evidence base for that program is.  Most of the time, the programs they have been required to use by their school administrations either have no valid evidence of effectiveness, or they have concrete evidence that they do not work.  Further, in almost all categories, effective programs or approaches do exist, and could have been selected as practical alternatives to the ones that were adopted.  Individual schools could have been allowed to choose proven programs, instead of being required to use programs they know not to be proven effective.

Perhaps schools should always be given the freedom to select and implement programs other than those mandated by the district, as long as the programs they want to implement have stronger evidence of effectiveness than the district’s programs.


How the Revolution Might Happen

Imagine that principals, teachers, parent activists, enlightened school board members, and others in a given district were all encouraged to use Evidence for ESSA or other reviews of evaluations of educational programs. Imagine that many of these people wrote letters to the editor, to district leaders, or to education reporters, or perhaps, if these were not sufficient, marched on the district offices with placards reading something like “Use What Works” or “Our Children Deserve Proven Programs.” Who could be against that?

One of three things might happen.  First, the district might allow individual schools to use proven programs in place of the standard programs, and encourage any school to come forward with evidence from a reliable source if its staff or leadership wants to use a proven program not already in use.  That would be a great outcome.  Second, the district leadership might start using proven programs districtwide, and working with school leaders and teachers to ensure successful implementation.  This retains the top-down structure, but it could greatly improve student outcomes.  Third, the district might ignore the protesters and the evidence, or relegate the issue to a very slow study committee, which may be the same thing.  That would be a distressing outcome, though no worse than what probably happens now in most places.  It could still be the start of a positive process, if principals, teachers, school board members, and parent activists keep up the pressure, helpfully informing the district leaders about proven programs they could select when they are considering a change.

If this process took place around the country, it could have a substantial positive impact beyond the individual districts involved, because it could scare the bejabbers out of publishers, who would immediately see that if they are going to succeed in the long run, they need to design programs that will likely work in rigorous evaluations, and then market them based on real evidence.  That would be revolutionary indeed.  Until the publishers get firmly on board, the evidence movement is just tapping at the foundations of a giant fortress with a few ball peen hammers.  But there will come a day when that fortress will fall, and all will be beautiful. It will not require a nuclear weapon, just a lot of committed and courageous educators and advocates, with a lot of persistence, a lot of information on what works in education, and a lot of ball peen hammers.

Picture Credit: Liberty Leading the People, Eugène Delacroix [Public domain]

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.