What’s the Evidence that Evidence Works?

I recently gave a couple of speeches on evidence-based reform in education in Barcelona. While I was preparing for them, one of the organizers asked me an interesting question: “What is your evidence that evidence works?”

At one level, this is a trivial question. If schools select proven programs and practices aligned with their needs and implement them with fidelity and intelligence, with levels of resources similar to those used in the original successful research, then of course they’ll work, right? And if a school district adopts proven programs, encourages and funds them, and monitors their implementation and outcomes, then of course the appropriate use of all these programs is sure to enhance achievement district-wide, right?

Although logic suggests that a policy of encouraging and funding proven programs is sure to increase achievement on a broad scale, I like to be held to a higher standard: evidence. And it so happens that I have some evidence on this very topic. It comes from a large-scale evaluation of an ambitious national effort to increase the use of proven and promising schoolwide programs in elementary and middle schools, carried out by a research center funded by the Institute of Education Sciences (IES) called the Center for Data-Driven Reform in Education, or CDDRE (see Slavin, Cheung, Holmes, Madden, & Chamberlain, 2013). The program the experimental schools used was called Raising the Bar.

How Raising the Bar Raised the Bar

The idea behind Raising the Bar was to help schools analyze their own needs and strengths, and then select whole-school reform models likely to help them meet their achievement goals. CDDRE consultants provided about 30 days of on-site professional development to each district over a two-year period. The PD focused on review of data, effective use of benchmark assessments, school walk-throughs by district leaders to see the degree to which schools were actually using the programs they claimed to be using, and exposure of district and school leaders to information and data on schoolwide programs available to them from several providers. If a district selected a program to implement, district and school leaders received PD on ensuring effective implementation, and principals and teachers received PD on the programs they chose.


Evaluating Raising the Bar

In the study of Raising the Bar we recruited a total of 397 elementary and 225 middle schools in 59 districts in seven states, including AL, AZ, IN, MS, OH, and TN. All schools were Title I schools in rural and mid-sized urban districts. Overall, 30% of students were African-American, 20% were Hispanic, and 47% were White. Across three cohorts, starting in 2005, 2006, or 2007, schools were randomly assigned either to use Raising the Bar or to continue with what they were doing. The study ended in 2009, so schools could have been in the Raising the Bar group for two, three, or four years.

Did We Raise the Bar?

State test scores were obtained from all schools and transformed to z-scores so they could be combined across states. The analyses focused on grades 5 and 8, as these were the only grades tested in some states at the time. Hierarchical linear modeling, with schools nested within districts, was used for analysis.
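For readers who like to see the mechanics, here is a minimal sketch in Python of the two analytic steps just described: standardizing scores within each state, then fitting a hierarchical model with schools nested within districts. This is an illustration, not the study’s actual analysis code, and the file and column names are hypothetical.

```python
# Illustrative sketch only -- not the CDDRE study's actual analysis code.
# Assumes one row per school, with hypothetical columns:
#   state, district, score (mean state test score), treatment (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("school_scores.csv")  # hypothetical input file

# Step 1: transform raw scores to z-scores within each state, so results
# from different state tests can be pooled on a common scale.
df["z_score"] = df.groupby("state")["score"].transform(
    lambda s: (s - s.mean()) / s.std()
)

# Step 2: hierarchical model with schools (rows) nested within districts
# (groups); the coefficient on `treatment` estimates the program effect.
model = smf.mixedlm("z_score ~ treatment", data=df, groups=df["district"])
result = model.fit()
print(result.summary())
```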

For reading in fifth grade, outcomes were very good. Effects were statistically significant by Year 3, with individual-level effect sizes of +0.10 in Year 3 and +0.19 in Year 4. In middle school reading, the effect size reached +0.10 by Year 4.

Effects were also very good in fifth grade math, with significant effect sizes of +0.10 in Year 3 and +0.13 in Year 4. Effects in middle school math were also significant in Year 4 (ES = +0.12).
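A note on the metric: an effect size here is a standardized mean difference, that is, the treatment-control difference in average scores divided by the standard deviation. A trivial sketch, with made-up numbers:

```python
# Effect size as a standardized mean difference (an illustration; the
# study's exact computation may differ in detail).
def effect_size(treatment_mean, control_mean, pooled_sd):
    """(M_treatment - M_control) / SD, in standard deviation units."""
    return (treatment_mean - control_mean) / pooled_sd

# With scores already expressed as z-scores (SD = 1.0), an effect size
# of +0.19 means treated students averaged 0.19 SD higher.
print(effect_size(0.19, 0.0, 1.0))  # -> 0.19
```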

Note that these effects are for all schools assigned to Raising the Bar, whether they adopted a program or not. Non-experimental analyses found that by Year 4, elementary schools that had chosen and implemented a reading program (33% of schools by Year 3, 42% by Year 4) scored better than matched controls in reading. Schools that chose any reading program usually chose our Success for All reading program, but some chose other models. Even Raising the Bar schools that did not adopt a reading or math program scored higher, on average (though not always significantly higher), than control schools.

How Much Did We Raise the Bar?

The CDDRE project was exceptional because of its size and scope. The 622 schools, in 59 districts in seven states, were collectively equivalent to a medium-sized state. So if anyone asks what evidence-based reform could do to help an entire state, this study provides one estimate. The student-level outcome in elementary reading, an effect size of +0.19, applied to NAEP scores, would be enough to move 43 states to the scores now attained only by the top 10. If applied successfully to schools serving mostly African American and Hispanic students, or to students receiving free- or reduced-price lunches regardless of ethnicity, it would reduce the achievement gap between these students and White or middle-class students by about 38%. All in four years, at very modest cost.
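Readers may wonder where the 38% figure comes from. It is consistent with an achievement gap of roughly half a standard deviation, since +0.19 is about 38% of 0.50. Here is that back-of-envelope check; the 0.50 SD gap is an assumption inferred from the arithmetic, not a figure reported in the study:

```python
# Back-of-envelope check of the "about 38%" claim. The 0.50 SD gap is an
# assumed value implied by the arithmetic, not taken from the study.
gap_sd = 0.50    # assumed achievement gap, in standard deviation units
effect = 0.19    # Year 4 elementary reading effect size from the study
print(f"{effect / gap_sd:.0%}")  # -> 38%
```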

Actually, implementing something like Raising the Bar could be done much more easily and effectively today than it could in 2005-2009. First, there are a lot more proven programs to choose from than there were then. Second, the U.S. Congress, in the Every Student Succeeds Act (ESSA), has now defined strong, moderate, and promising levels of evidence, and restricts school improvement grants to schools that choose programs meeting one of those standards. The reason only 42% of Raising the Bar schools selected a program is that they had to pay for it, and many could not afford to do so. Today, there are resources to help with this.

The evidence is both logical and clear: Evidence works.

Reference

Slavin, R. E., Cheung, A., Holmes, G., Madden, N. A., & Chamberlain, A. (2013). Effects of a data-driven district reform model on state assessment outcomes. American Educational Research Journal, 50(2), 371–396.

Photo by Sebastian Mary/Gio JL [CC BY-SA 2.0  (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


First There Must be Love. Then There Must be Technique.

I recently went to Barcelona. This was my third time in this wonderful city, and for the third time I visited La Sagrada Familia, Antoni Gaudi’s breathtaking church. It was begun in the 1880s, and Gaudi worked on it from the time he was 31 until he died in 1926 at 73. It is due to be completed in 2026.

Every time I go, La Sagrada Familia has grown even more astonishing. In the nave, massive columns branching into tree shapes hold up the spectacular roof. The architecture is extremely creative, and wonders lie around every corner.


I visited a new museum under the church. At the entrance, it had a Gaudi quote:

First there must be love.

Then there must be technique.

This quote sums up La Sagrada Familia. Gaudi used complex mathematics to plan his constructions. He was a master of technique. But he knew that it all meant nothing without love.

In writing about educational research, I try to remind my readers of this from time to time. There is much technique to master in creating educational programs, evaluating them, and fairly summarizing their effects. There is even more technique in implementing proven programs in schools and classrooms, and in creating policies to support use of proven programs. But what Gaudi reminds us of is just as essential in our field as it was in his. We must care about technique because we care about children. Caring about technique just for its own sake is of little value. Too many children in our schools are failing to learn adequately. We cannot say, “That’s not my problem, I’m a statistician,” or “That’s not my problem, I’m a policymaker,” or “That’s not my problem, I’m an economist.” If we love children and we know that our research can help them, then it is a problem for all of us. All of us go into education to solve real problems in real classrooms. That is the structure we are all building together over many years. Building this structure takes technique, and the skilled efforts of many researchers, developers, statisticians, superintendents, principals, and teachers.

Each of us brings his or her own skills and efforts to this task. None of us will live to see our structure completed, because education keeps growing in techniques and capability. But as Gaudi reminds us, it’s useful to stop from time to time and remember why we do what we do, and for whom.

Photo credit: By Txllxt TxllxT [CC BY-SA 4.0  (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Fads and Evidence in Education

York, England, has a famous racecourse. When I lived there I never saw a horse race, but I did see women in town for the races, all dressed up and wearing very strange contraptions in their hair, called fascinators. The picture below shows a couple of examples. They could be twisted pieces of metal or wire or feathers or just about anything, as long as they were . . . well, fascinating. The women paraded down Micklegate, York’s main street, showing off their fancy clothes and especially, I’d guess, their fascinators.

[Photo: women wearing fascinators]

The reason I bring up fascinators is to contrast the world of fashion with the world of science. In fashion, change happens constantly, but it is usually change for the sake of change. Fascinators, I’d assume, derived from hats, which women have been wearing to fancy horse races as long as there have been fancy horse races. Hats themselves change all the time. I’m guessing that what’s fascinating about a fascinator is that it maintains the concept of a racing-day hat in the most minimalist way possible, almost mocking the hat tradition while at the same time honoring it. The point is, fascinators keep getting thinner because hats used to be giant, floral contraptions. In art, there was realism, and then there were all sorts of non-realism. In music there was Frank Sinatra, then Elvis, then the Beatles, then disco. Eventually there was hip hop. Change happens, but it’s all about taste. People get tired of what once was popular, so something new comes along.

Science-based fields have a totally different pattern of change. In medicine, engineering, agriculture, and other fields, evidence guides changes. These fields are not 100% fad-free, but ultimately, on big issues, evidence wins out. In these fields, there is plenty of high-quality evidence, and there are very serious consequences for making or not making evidence-based policies and practices. If someone develops an artificial heart valve that is 2% more effective than the existing valves, with no more side effects, surgeons will move toward that valve to save lives (and avoid lawsuits).

In education, which model do we follow? Very, very slowly we are beginning to consider evidence. But most often, our model of change is more like the fascinators. New trends in education take the schools by storm, and often a few years later, the opposite policy or practice will become popular. Over long periods, very similar policies and practices keep appearing, disappearing, and reappearing, perhaps under a different name.

It’s not that we don’t have evidence. We do, and more keeps coming every year. Yet our profession, by and large, prefers to rush from one enthusiasm to another, without the slightest interest in evidence.

Here’s an exercise you might enjoy. List the top ten things schools and districts are emphasizing right now. Put your list into a “time capsule” envelope and file it somewhere. Then take it out in five years, and again in ten. Will those same things be the emphasis in schools and districts then? To really punish yourself, write down the NAEP reading and math scores, overall and by ethnic group, at fourth and eighth grade. Will those scores be a lot better in five or ten years? Will gaps be diminishing? Not if current trends continue and we continue to give only lip service to evidence.

Change + no evidence = fashion

Change + evidence = systematic improvement

We can make a different choice. But it will take real leadership. Until that leadership appears, we’ll be doing what we’ve always done, and the results will not change.

Isn’t that fascinating?

Photo credit: Both photos by Chris Phutully [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Ensuring That Proven Programs Stay Effective in Practice

On a recent trip to Scotland, I visited a ruined abbey. There, in what remained of its ancient cloister, was a sign containing a rule from the 1459 Statute of the Strasbourg Stonecutters’ Guild:

If a master mason has agreed to build a work and has made a drawing of the work as it is to be executed, he must not change this original. But he must carry out the work according to the plan that he has presented to the lords, towns, or villages, in such a way that the work will not be diminished or lessened in value.

Although the Stonecutters’ Guild was writing more than five centuries ago, it touched on an issue we face right now in evidence-based reform in education. Providers of educational programs may have excellent evidence that meets ESSA standards and demonstrates positive effects on educational outcomes. That’s terrific, of course. But the problem is that after a program has gone into dissemination, its developers may find that schools are not willing or able to pay for all of the professional development or software or materials used in the experiments that validated the program. So they may provide less, sometimes much less, to make the program cheaper or easier to adopt. This is the problem that concerned the Stonecutters of Strasbourg: Grand plans followed by inadequate construction.


In our work on Evidence for ESSA, we see this problem all the time. A study or studies show positive effects for a program. In writing up information on costs, personnel, and other factors, we usually look at the program’s website. All too often, we find that the program on the website provides much less than the program that was evaluated.  The studies might have provided weekly coaching, but the website promises two visits a year. A study of a tutoring program might have involved one-to-two tutoring, but the website sells or licenses the materials in sets of 20 for use with groups of that size. A study of a technology program may have provided laptops to every child and a full-time technology coordinator, while the website recommends one device for every four students and never mentions a technology coordinator.

Whenever we see this, we take on the role of the Stonecutters’ Guild, and we have to be as solid as a rock. We tell developers that we are planning to describe their program as it was implemented in their successful studies. This sometimes causes a ruckus, with vendors arguing that providing what they did in the study would make the program too expensive. “So would you like us to list your program (as it is in your website) as unevaluated?” we say. We are not unreasonable, but we are tough, because we see ourselves as helping schools make wise and informed choices, not helping vendors sell programs that may have little resemblance to the programs that were evaluated.

This is hard work, and I’m sure we do not get it right 100% of the time. And a developer may agree to an honest description but then quietly give discounts and provide less than what our descriptions say. All we can do is state the truth on our website, as best we can, about what was provided in the successful studies; the schools have to insist that they receive the program as described.

The Stonecutters’ Guild, and many other medieval guilds, represented the craftsmen, not the customers. Yet part of their function was to uphold high standards of quality. It was in the collective interest of all members of the guild to create and maintain a “brand,” indicating that any product of the guild’s members met the very highest standards. Someday, we hope publishers, software developers, professional development providers, and others who work with schools will themselves insist on an evidence base for their products, and then demand that providers ensure that their programs continue to be implemented in ways that maximize the probability that they will produce positive outcomes for children.

Stonecutters only build buildings. Educators affect the lives of children, which in turn affect families, communities, and societies. Long after a stonecutter’s work has fallen into ruin, well-educated people and their descendants and communities will still be making a difference. As researchers, developers, and educators, we have to take this responsibility at least as seriously as did the stone masons of long ago.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Lessons from China

Recently I gave a series of speeches in China, organized by the Chinese University of Hong Kong and Nanjing Normal University. I had many wonderful and informative experiences, but one evening stood out.

I was in Nanjing, the ancient capital, and it was celebrating the weeks after the Chinese New Year. The center of the celebration was the Temple of Confucius. In and around it were lighted displays exhorting Chinese youth to excel on their exams. Children stood in front of these displays to have their pictures taken next to characters saying “first in class,” never second. A woman with a microphone recited blessings and hopes that students would do well on exams. After each one, students hit a huge drum with a long stick, as an indication of accepting the blessing. Inside the temple were thousands of small silk messages, bright red, expressing the wishes of parents and students for success on exams. Chinese friends explained what was going on, and told me how pervasive this spirit was. Children all know a saying to the effect that the path to riches and a beautiful wife is through books. I heard that perhaps 70% of urban Chinese students go to after-school cram schools to ensure their performance on exams.

The reason Chinese parents and students take test scores so seriously is obvious in every aspect of Chinese culture. On an earlier trip to China I toured a beautiful house, hundreds of years old, in a big city. The only purpose of the house was to provide a place for young men of a large clan to stay while they prepared for their exams, which determined their place in the Confucian hierarchy.

As everyone knows, Chinese students do, in fact, do very well on their exams. I would note that these data come in particular from urban Eastern China, such as Shanghai. I’d heard about, but did not fully understand, policies that contribute to these outcomes. In all big cities in China, students can attend the city’s schools, which include the best schools in the country, only if they were born there or their families own apartments there. In a country where a small apartment in a big city can easily cost a half million dollars (U.S.), this is no small selection factor. If parents work in the city but do not own an apartment, their children may have to remain in the village or small city they came from, living with grandparents and attending non-elite schools. Chinese cities are growing so fast that the majority of their inhabitants come from the rest of China. This matters because admirers of Chinese education often cite the amazing statistics from the rich and growing Eastern Chinese cities, not the whole country. It is as though the U.S. reported test scores on international comparisons only from suburbs in the Northeastern states from Maryland to New England, the wealthiest and highest-achieving part of our country.

I do not want to detract in any way from the educational achievements of the Chinese, but only to put them in context. First, the Chinese themselves have doubts about test scores as the only important indicators, and admire Western education for its broader focus. But just sticking to test scores: China and other Confucian cultures, such as Japan, South Korea, and Singapore, have been building a culture that values test scores since Confucius, about 2,500 years ago. It would be a central focus of Chinese culture even if PISA and TIMSS did not exist to show it off to the world.

My only point is that when American or European observers hold up East Asian achievements as a goal to aspire to, these achievements do not exist in a cultural vacuum. Other countries can potentially achieve what China has achieved, in terms of test scores and other indicators, but they cannot achieve it in the same way. Western culture is just not going to spend the next 2500 years raising its children the way the Chinese do. What we can do, however, is to use our own strengths, in research, development, and dissemination, to progressively enhance educational outcomes. The Chinese can and will do this, too; that’s what I was doing traveling around China speaking about evidence-based reform. We need not be in competition with any nation or society, as expanding educational opportunity and success throughout the world is in the interests of everyone on Earth. But engaging in fantasies about how we can move ahead by emulating parts of Chinese culture that they have been refining since Confucius is not sensible.

Precisely because of their deep respect for scholarship and learning and their eagerness to continue to improve their educational achievements, the Chinese are ideal collaborators in the worldwide movement toward evidence-based reform in education. Colleagues at the Chinese University of Hong Kong and Nanjing Normal University are launching Chinese-language and Asian-focused versions of our newsletter on evidence in education, Best Evidence in Brief (BEiB). We and our U.K. colleagues have been distributing BEiB for several years. We welcome the opportunity to share ideas and resources with our Chinese colleagues to enrich the evidence base for education for children everywhere.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

CDC Told to Avoid Use of “Evidence-Based”: Is the Earth Flat Again?

In this blog series, I generally try to stay non-partisan, avoiding issues that, though important, do not relate to evidence-based reform in education. However, the current administration has just crossed that line.

In a December 16 article in the Washington Post, Lena Sun and Juliet Eilperin reported that the Trump Administration has prohibited employees of the Centers for Disease Control and Prevention (CDC) from using seven words or phrases in their reports. Two of these are “evidence-based” and “science-based.” Admittedly, this relates to health, not education, but who could imagine that education will not be next?

I’m not sure exactly why “evidence-based” and “science-based” are included among a set of banned words that otherwise consists of words such as “fetus,” “transgender,” and “diversity” that have more obvious political overtones. The banning of “evidence-based” and “science-based” is particularly upsetting because evidence, especially in medicine, has up to now been such a non-partisan, good-government concept. Ultimately, Republicans and Democrats and their family members and friends get sick or injured, or want to prevent disease, and perhaps as a result, evidence-based health care has been popular on both sides of the aisle. In education, Republican House Speaker Paul Ryan and Democratic Senator Patty Murray have worked together as forceful advocates for evidence-based reform, as have many others. George W. Bush and Barack Obama both took personal and proactive roles in advancing evidence in education.

You have to go back a long time to find governments banning evidence itself. Perhaps you have to go back to Pope Paul V, on whose behalf Cardinal Bellarmine ordered Galileo in 1616 to: “…abandon completely the opinion that the sun stands still at the center of the world and the Earth moves…”

In fear for his life, Galileo agreed, but in 1633 he was accused of breaking his promise. He was threatened with torture, and had to agree again to the Pope’s demand. He was placed under house arrest for the rest of his life.

After his 1633 trial, Galileo was said to have muttered, “E pur si muove” (And yet it moves). If he did (historians doubt it), he was expressing defiance, but also a key principle of science: proven principles remain true even if we are no longer allowed to speak of them.

The CDC officials were offered a new formulation to use instead of “evidence-based” and “science-based.” It was: “CDC bases its recommendations on science in consideration with community standards and wishes.”

This is of course the antithesis of evidence or science. Does the Earth circle the sun in some states or counties, but it’s the other way around in others? Who decides which scientific principles apply in a given location? Does objective science have any role at all or are every community’s beliefs as valid as every other’s? Adopting the ban would hold back research and applications of settled research, harming millions of potential beneficiaries and making the U.S. a laughingstock among advanced nations. Banning the words “evidence-based” and “science-based” will not change scientific reality. Yet it will perhaps slow down funding for research and dissemination of proven treatments, and that would be disastrous, both in medicine and in education. I hope and expect that scientists in both fields will continue to find the truth and make it known whatever the consequences, and that our leaders of both parties see the folly of this action and reverse it immediately.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

Title I: A 20% Solution

Here’s an idea that would cost nothing and profoundly shift education funding and the interest of educators and policy makers toward evidence-proven programs. Simply put, the idea is to require that schools receiving Title I funds use 20% of the total on programs that meet at least a moderate standard of evidence. Two thin dimes on the dollar could make a huge difference in all of education.

In terms of federal education policy, Title I is the big kahuna. At $15 billion per year, it is the largest federal investment in elementary and secondary education, and it has been very politically popular on both sides of the aisle since the Johnson administration in 1965, when the Elementary and Secondary Education Act (ESEA) was first passed. Title I has been so popular because it goes to every congressional district, and provides much-needed funding by formula to help schools serving children living in poverty. Since the reauthorization of ESEA as the Every Student Succeeds Act in 2015, Title I remains the largest expenditure.

In ESSA and other federal legislation, there are two kinds of funding. One is formula funding, like Title I, where money usually goes to states and is then distributed to districts and schools. The formula may adjust for levels of poverty and other factors, but every eligible school gets its share. The other kind of funding is called competitive, or discretionary funding. Schools, districts, and other entities have to apply for competitive funding, and no one is guaranteed a share. In many cases, federal funds are first given to states, and then schools or districts apply to their state departments of education to get a portion of it, but the state has to follow federal rules in awarding the funds.

Getting proven programs into widespread use can be relatively easy in competitive grants. Competitive grants are usually evaluated on a 100-point scale, with all sorts of “competitive preference points” for certain categories of applicants, such as for rural locations, inclusion of English language learners or children of military families, and so on. These preferences add perhaps one to five points to a proposal’s score, giving such applicants a leg up but not a sure thing. In the same way, I and others have proposed adding competitive preference points in competitive proposals for applicants who propose to adopt programs that meet established standards of evidence. For example, Title II SEED grants for professional development now require that applicants propose to use programs found to be effective in at least one rigorous study, and give five points if the programs have been proven effective in at least two rigorous studies. Schools qualifying for school improvement funding under ESSA are now required to select programs that meet ESSA evidence standards.
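To make the mechanics concrete, here is a toy illustration of how preference points enter a proposal’s score. The numbers and point values are invented; every competition defines its own rubric:

```python
# Toy illustration of competitive preference points (invented numbers;
# real competitions define their own rubrics and point values).
base_score = 88       # reviewers' score on the 100-point rubric
evidence_points = 5   # e.g., proposing a program proven in 2+ rigorous studies
rural_points = 2      # another common competitive preference
total = base_score + evidence_points + rural_points
print(total)  # 95 -- a leg up in the rankings, but not a sure thing
```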

Adding competitive preference points for using proven programs in competitive grants is entirely sensible and pain-free. It costs nothing, and does not require applicants to use any particular program. In fact, applicants can forgo the preference points entirely and hope to win without them. Preference points for proven programs are an excellent way to nudge the field toward evidence-based reform without top-down mandates or micromanagement. The federal government states a preference for proven programs, which at least raises their profile among grant writers, but no school or district has to do anything different.

The much more difficult problem is how to get proven programs into formula funding (such as Title I). The great majority of federal funds are awarded by formula, so restricting evidence-based reform to competitive grants is only nibbling at the edges of practice. One solution to this would be to allocate incentive grants to districts if they agree to use formula funds to adopt and implement proven programs.

However, incentives cost money. Instead, imagine that districts and schools get their Title I formula funds, as they have since 1965, but that Congress requires districts to use at least 20% of their Title I, Part A funding to adopt and implement programs that meet a modest standard of evidence, similar to the “moderate” level in ESSA (which requires at least one quasi-experimental study with positive effects). The adopted program could be anything that meets other Title I requirements (reading, math, tutoring, technology), except that the program has to have evidence of effectiveness. The funds could pay for necessary staffing, professional development, materials, software, hardware, and so on. Obviously, schools could devote more than 20% if they chose to do so.
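To put rough dollar figures on the idea: nationally, 20% of Title I is about $3 billion per year, and for an individual district the set-aside scales with its grant. The sketch below uses the $15 billion figure cited earlier; the district allocation is hypothetical:

```python
# Back-of-envelope arithmetic for the 20% proposal. The national total
# comes from the figure cited above; the district allocation is invented.
title_i_total = 15_000_000_000   # ~ $15 billion annual Title I funding
print(f"${0.20 * title_i_total / 1e9:.0f} billion/year nationally")  # -> $3 billion

district_allocation = 2_000_000  # hypothetical district's Title I, Part A grant
print(f"${0.20 * district_allocation:,.0f} for proven programs")     # -> $400,000
```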

There are several key advantages to this 20% solution. First, of course, children would immediately benefit from receiving programs with at least moderate evidence of effectiveness. Second, the process would instantly make leaders of the roughly 55,000 Title I schools intensely interested in evidence. Third, the process could gradually shift discussion about Title I away from its historical focus on “how much?” to an additional focus on “for what purpose?” Publishers, software developers, academics, philanthropy, and government itself would perceive the importance of evidence, and would commission or carry out far more high-quality studies to meet the new standards. Over time, the standards of evidence might increase.

All of this would happen at no additional cost, and with a minimum of micromanagement. There are now many programs that would meet the “moderate” standards of evidence in reading, math, tutoring, whole-school reform, and other approaches, so schools would have a wide choice. No Child Left Behind required that low-performing schools devote 20% of their Title I funding to after-school tutoring programs and student transfer policies that research later showed to make little or no difference in outcomes. Why not spend the same on programs that are proven to work in advance, instead of once again rolling the dice with the educational futures of at-risk children?

20% of Title I is a lot of money, but if it can make 100% of Title I more impactful, it is more than worthwhile.