Ensuring That Proven Programs Stay Effective in Practice

On a recent trip to Scotland, I visited a ruined abbey. There, in what remained of its ancient cloister, was a sign containing a rule from the 1459 Statute of the Strasbourg Stonecutters’ Guild:

If a master mason has agreed to build a work and has made a drawing of the work as it is to be executed, he must not change this original. But he must carry out the work according to the plan that he has presented to the lords, towns, or villages, in such a way that the work will not be diminished or lessened in value.

Although the Stonecutters’ Guild was writing more than five centuries ago, it touched on an issue we face right now in evidence-based reform in education. Providers of educational programs may have excellent evidence that meets ESSA standards and demonstrates positive effects on educational outcomes. That’s terrific, of course. But the problem is that after a program has gone into dissemination, its developers may find that schools are not willing or able to pay for all of the professional development or software or materials used in the experiments that validated the program. So they may provide less, sometimes much less, to make the program cheaper or easier to adopt. This is the problem that concerned the Stonecutters of Strasbourg: Grand plans followed by inadequate construction.


In our work on Evidence for ESSA, we see this problem all the time. A study or studies show positive effects for a program. In writing up information on costs, personnel, and other factors, we usually look at the program’s website. All too often, we find that the program on the website provides much less than the program that was evaluated.  The studies might have provided weekly coaching, but the website promises two visits a year. A study of a tutoring program might have involved one-to-two tutoring, but the website sells or licenses the materials in sets of 20 for use with groups of that size. A study of a technology program may have provided laptops to every child and a full-time technology coordinator, while the website recommends one device for every four students and never mentions a technology coordinator.

Whenever we see this, we take on the role of the Stonecutters’ Guild, and we have to be as solid as a rock. We tell developers that we plan to describe their program as it was implemented in their successful studies. This sometimes causes a ruckus, with vendors arguing that providing what they did in the study would make the program too expensive. “So would you like us to list your program (as it is on your website) as unevaluated?” we say. We are not unreasonable, but we are tough, because we see ourselves as helping schools make wise and informed choices, not helping vendors sell programs that may bear little resemblance to the programs that were evaluated.

This is hard work, and I’m sure we do not get it right 100% of the time. A developer may also agree to an honest description but then quietly offer discounts and provide less than what our descriptions say. All we can do is state the truth on our website, as best we can, about what was provided in the successful studies; schools then have to insist that they receive the program as described.

The Stonecutters’ Guild, and many other medieval guilds, represented the craftsmen, not the customers. Yet part of their function was to uphold high standards of quality. It was in the collective interest of all members of the guild to create and maintain a “brand,” indicating that any product of the guild’s members met the very highest standards. Someday, we hope, publishers, software developers, professional development providers, and others who work with schools will themselves insist on an evidence base for their products, and will ensure that their programs continue to be implemented in ways that maximize the probability of positive outcomes for children.

Stonecutters only build buildings. Educators affect the lives of children, which in turn affect families, communities, and societies. Long after a stonecutter’s work has fallen into ruin, well-educated people and their descendants and communities will still be making a difference. As researchers, developers, and educators, we have to take this responsibility at least as seriously as did the stone masons of long ago.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Lessons from China

Recently I gave a series of speeches in China, organized by the Chinese University of Hong Kong and Nanjing Normal University. I had many wonderful and informative experiences, but one evening stood out.

I was in Nanjing, the ancient capital, and it was celebrating the weeks after the Chinese New Year. The center of the celebration was the Temple of Confucius. In and around it were lighted displays exhorting Chinese youth to excel on their exams. Children stood in front of these displays to have their pictures taken next to characters saying “first in class,” never second. A woman with a microphone recited blessings and hopes that students would do well on exams. After each one, students hit a huge drum with a long stick, as an indication of accepting the blessing. Inside the temple were thousands of small silk messages, bright red, expressing the wishes of parents and students that the students would do well on their exams. Chinese friends explained what was going on, and told me how pervasive this spirit was. Children all know a saying to the effect that the path to riches and a beautiful wife is through books. I heard that perhaps 70% of urban Chinese students go to after-school cram schools to ensure their performance on exams.

The reason Chinese parents and students take test scores so seriously is obvious in every aspect of Chinese culture. On an earlier trip to China I toured a beautiful house, hundreds of years old, in a big city. The only purpose of the house was to provide a place for young men of a large clan to stay while they prepared for their exams, which determined their place in the Confucian hierarchy.

As everyone knows, Chinese students do, in fact, do very well on their exams. I would note that these data come in particular from urban Eastern China, such as Shanghai. I’d heard about but did not fully understand policies that contribute to these outcomes. In all big cities in China, students can attend neighborhood schools, where the best schools in the country are, only if they were born there or their families own apartments there. In a country where a small apartment in a big city can easily cost a half million dollars (U.S.), this is no small selection factor. If parents work in the city but do not own an apartment, their children may have to remain in the village or small city they came from, living with grandparents and attending non-elite schools. Chinese cities are growing so fast that the majority of their inhabitants come from the rest of China. This matters because admirers of Chinese education often cite the amazing statistics from the rich and growing Eastern Chinese cities, not the whole country. It’s as though the U.S. reported test scores on international comparisons only from suburbs in the Northeastern states from Maryland to New England, the wealthiest and highest-achieving part of our country.

I do not want to detract in any way from the educational achievements of the Chinese, but only to put them in context. First, the Chinese themselves have doubts about test scores as the only important indicators, and admire Western education for its broader focus. But even sticking to test scores, China and other Confucian cultures such as Japan, South Korea, and Singapore have been building a culture that values test scores since Confucius, about 2500 years ago. It would be a central focus of Chinese culture even if PISA and TIMSS did not exist to show it off to the world.

My only point is that when American or European observers hold up East Asian achievements as a goal to aspire to, these achievements do not exist in a cultural vacuum. Other countries can potentially achieve what China has achieved, in terms of test scores and other indicators, but they cannot achieve it in the same way. Western culture is just not going to spend the next 2500 years raising its children the way the Chinese do. What we can do, however, is to use our own strengths, in research, development, and dissemination, to progressively enhance educational outcomes. The Chinese can and will do this, too; that’s what I was doing traveling around China speaking about evidence-based reform. We need not be in competition with any nation or society, as expanding educational opportunity and success throughout the world is in the interests of everyone on Earth. But engaging in fantasies about how we can move ahead by emulating parts of Chinese culture that they have been refining since Confucius is not sensible.

Precisely because of their deep respect for scholarship and learning and their eagerness to continue to improve their educational achievements, the Chinese are ideal collaborators in the worldwide movement toward evidence-based reform in education. Colleagues at the Chinese University of Hong Kong and Nanjing Normal University are launching Chinese-language and Asian-focused versions of our newsletter on evidence in education, Best Evidence in Brief (BEiB). We and our U.K. colleagues have been distributing BEiB for several years. We welcome the opportunity to share ideas and resources with our Chinese colleagues to enrich the evidence base for education for children everywhere.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

“Substantively Important” Isn’t Substantive. It Also Isn’t Important

Since it began in 2002, the What Works Clearinghouse has played an important role in finding, rating, and publicizing findings of evaluations of educational programs. It performs a crucial function for evidence-based reform. For this very reason, it needs to be right. But in several important ways, it uses procedures that are indefensible and have a big impact on its conclusions.

One of these relates to a study rating called “substantively important-positive.” This refers to study outcomes with an effect size of at least +0.25, but that are not statistically significant. I’ve written about this before, but the WWC has recently released a database of information on its studies that makes it easy to analyze WWC data on a large scale, and we have learned a lot more about this topic.

Study outcomes rated as “substantively important-positive” can qualify a study as “potentially positive,” the second-highest WWC rating. “Substantively important-negative” findings (non-significant effect sizes less than -0.25) can cause a study to be rated as potentially negative, which can keep a program from ever getting a positive rating: under current rules, a single “potentially negative” study ensures that a program can never receive a rating better than “mixed,” even if other studies found hundreds of significant positive effects.
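To make the rule concrete, here is a minimal sketch in Python of the rating logic as I have described it. It is illustrative only; the function and its thresholds paraphrase the rules discussed in this post, not the WWC’s actual procedures or code.

```python
# Minimal, illustrative sketch (not WWC code) of the rating rule described above:
# a non-significant outcome can still count toward "potentially positive" (or
# "potentially negative") purely because its effect size crosses +/-0.25.

def rate_outcome(effect_size: float, significant: bool) -> str:
    """Classify a single study outcome under the rules described in this post."""
    if significant:
        return "positive" if effect_size > 0 else "negative"
    if effect_size >= 0.25:
        return "substantively important-positive"   # counts toward "potentially positive"
    if effect_size <= -0.25:
        return "substantively important-negative"   # counts toward "potentially negative"
    return "indeterminate"

print(rate_outcome(0.30, significant=False))   # substantively important-positive
print(rate_outcome(0.30, significant=True))    # positive
print(rate_outcome(-0.40, significant=False))  # substantively important-negative
```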

People who follow the WWC and know about “substantively important” may assume that, strange as the rule is, it is relatively rare in practice. But that is not true.

My graduate student, Amanda Inns, has just done an analysis of WWC data from their own database, and if you are a big fan of the WWC, this is going to be a shock. Amanda has looked at all WWC-accepted reading and math studies. Among these, she found a total of 339 individual outcomes rated “positive” or “potentially positive.” Of these, 155 (46%) reached the “potentially positive” level only because they had effect sizes over +0.25, but were not statistically significant.

Another 36 outcomes were rated “negative” or “potentially negative.” Of these, 26 (72%) were categorized as “potentially negative” only because they had effect sizes less than -0.25 and were not significant. I’m sure the patterns would be similar for subjects other than reading and math.

Put another way, almost half (48%) of outcomes rated positive/potentially positive or negative/potentially negative by the WWC were not statistically significant. As one example of what I’m talking about, consider a program called The Expert Mathematician. It had just one study with only 70 students in 4 classrooms (2 experimental and 2 control). The WWC re-analyzed the data to account for clustering, and the outcomes were nowhere near statistically significant, though they were greater than +0.25. This tiny study, and this study alone, caused The Expert Mathematician to receive the WWC “potentially positive” rating and to be ranked seventh among all middle school math programs. Similarly, Waterford Early Learning received a “potentially positive” rating based on a single tiny study with only 70 kindergarteners in 6 schools. The outcomes ranged from -0.71 to +1.11, and though the mean was more than +0.25, the outcome was far from significant. Yet this study alone put Waterford on the WWC list of proven kindergarten programs.
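To see why the WWC’s cluster re-analysis left outcomes like these nowhere near significance, here is a rough back-of-the-envelope calculation. The intraclass correlation of 0.20 is an assumed, illustrative value, not a figure reported for either study.

```python
# Rough illustration (with an assumed intraclass correlation) of why 70 students
# in 4 classrooms provide far less information than 70 independent students.
n_students = 70
n_clusters = 4
icc = 0.20                        # assumed intraclass correlation for achievement data
m = n_students / n_clusters       # average cluster size: 17.5 students per classroom

design_effect = 1 + (m - 1) * icc          # Kish design effect, roughly 4.3
effective_n = n_students / design_effect   # roughly 16 "effective" independent students

print(round(design_effect, 2), round(effective_n, 1))
```

With only two classrooms per condition, the comparison effectively rests on a handful of independent units, so even a large effect size cannot be distinguished from chance.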

I’m not taking any position on whether these particular programs are in fact effective. All I am saying is that these very small studies with non-significant outcomes say absolutely nothing of value about that question.

I’m sure that some of you nerdier readers who have followed me this far are saying to yourselves, “well, sure, these substantively important studies may not be statistically significant, but they are probably unbiased estimates of the true effect.”

More bad news. They are not. Not even close.

The problem, also revealed in Amanda Inns’ data, is that studies with large effect sizes but without statistical significance tend to have very small sample sizes (otherwise, they would have been significant). Across WWC reading and math studies that used individual-level assignment, median sample sizes were 48, 74, and 86 for substantively important, significant, and indeterminate (non-significant with ES < +0.25) outcomes, respectively. For cluster studies, the medians were 10, 17, and 33 clusters, respectively. In other words, “substantively important” outcomes came from studies with far smaller samples than other outcomes.
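A quick calculation shows the arithmetic behind that parenthetical. Using the standard large-sample approximation for the standard error of a standardized mean difference (with illustrative sample sizes), an effect size of +0.25 is far from significant in a 48-student study but comfortably significant in a 500-student study:

```python
# Back-of-the-envelope check of how sample size, not effect size, drives significance.
# The sample sizes are illustrative; 48 matches the median noted above.
import math

def z_for_effect_size(d: float, n1: int, n2: int) -> float:
    """Approximate z statistic for a standardized mean difference d (Cohen's d)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d / se

for n_per_group in (24, 250):                # 48 vs. 500 students in total
    z = z_for_effect_size(0.25, n_per_group, n_per_group)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(2 * n_per_group, round(z, 2), verdict)
# 48  0.86 not significant
# 500 2.78 significant
```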

And small-sample studies greatly overstate effect sizes. Among all factors that bias effect sizes, small sample size is the most important (only use of researcher/developer-made measures comes close). So a non-significant positive finding in a small study is not an unbiased point estimate that just needs a larger sample to show its significance. It is probably biased, in a consistent, positive direction. Studies with sample sizes less than 100 have about three times the mean effect sizes of studies with sample sizes over 1000, for example.

But “substantively important” ratings can throw a monkey wrench into current policy. The ESSA evidence standards require statistically significant effects for all of its top three levels (strong, moderate, and promising). Yet many educational leaders are using the What Works Clearinghouse as a guide to which programs will meet ESSA evidence standards. They may logically assume that if the WWC says a program is effective, then the federal government stands behind it, regardless of what the ESSA evidence standards actually say. Yet in fact, based on the data analyzed by Amanda Inns for reading and math, 46% of the outcomes rated as positive/potentially positive by WWC (taken to correspond to “strong” or “moderate,” respectively, under ESSA evidence standards) are non-significant, and therefore do not qualify under ESSA.

The WWC needs to remove “substantively important” from its ratings as soon as possible, to avoid a collision with ESSA evidence standards, and to avoid misleading educators any further. Doing so would help make the WWC’s impact on ESSA substantive. And important.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

CDC Told to Avoid Use of “Evidence-Based”: Is the Earth Flat Again?

In this blog series, I generally try to stay non-partisan, avoiding issues that, though important, do not relate to evidence-based reform in education. However, the current administration has just crossed that line.

In a December 16 article in the Washington Post, Lena Sun and Juliet Eilperin reported that the Trump Administration has prohibited employees of the Centers for Disease Control and Prevention (CDC) from using seven words or phrases in their reports. Two of these are “evidence-based” and “science-based.” Admittedly, this relates to health, not education, but who could imagine that education will not be next?

I’m not sure exactly why “evidence-based” and “science-based” are included among a set of banned words that otherwise consist of words such as “fetus,” “transgender,” and “diversity” that have more obvious political overtones. The banning of “evidence-based” and “science-based” is particularly upsetting because evidence, especially in medicine, has up to now been such a non-partisan, good-government concept. Ultimately, Republicans and Democrats and their family members and friends get sick or injured, or want to prevent disease, and perhaps as a result, evidence-based health care has been popular on both sides of the aisle. In education, Republican House Speaker Paul Ryan and Democratic Senator Patty Murray have worked together as forceful advocates for evidence-based reform, as have many others. George W. Bush and Barack Obama both took personal and proactive roles in advancing evidence in education.

You have to go back a long time to find governments banning evidence itself. Perhaps you have to go back to Pope Paul V, whose Cardinal Bellarmine ordered Galileo in 1616 to: “…abandon completely the opinion that the sun stands still at the center of the world and the Earth moves…”

In fear for his life, Galileo agreed, but in 1633 he was accused of breaking his promise. He was threatened with torture, and had to agree again to the Pope’s demand. He was placed under house arrest for the rest of his life.

After his 1633 conviction, Galileo was said to have muttered, “E pur si muove” (And yet it moves). If he did (historians doubt it), he was expressing defiance, but also a key principle of science: proven principles remain true even if we are no longer allowed to speak of them.

The CDC officials were offered a new formulation to use instead of “evidence-based” and “science-based.” It was: “CDC bases its recommendations on science in consideration with community standards and wishes.”

This is of course the antithesis of evidence or science. Does the Earth circle the sun in some states or counties, but it’s the other way around in others? Who decides which scientific principles apply in a given location? Does objective science have any role at all, or are every community’s beliefs as valid as every other’s? Adopting the ban would hold back research and the application of settled findings, harming millions of potential beneficiaries and making the U.S. a laughingstock among advanced nations. Banning the words “evidence-based” and “science-based” will not change scientific reality. Yet it may slow down funding for research and dissemination of proven treatments, and that would be disastrous, both in medicine and in education. I hope and expect that scientists in both fields will continue to find the truth and make it known whatever the consequences, and that our leaders of both parties will see the folly of this action and reverse it immediately.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

Title I: A 20% Solution

Here’s an idea that would cost nothing and profoundly shift education funding and the interest of educators and policy makers toward evidence-proven programs. Simply put, the idea is to require that schools receiving Title I funds use 20% of the total on programs that meet at least a moderate standard of evidence. Two thin dimes on the dollar could make a huge difference in all of education.

In terms of federal education policy, Title I is the big kahuna. At $15 billion per year, it is the largest federal investment in elementary and secondary education, and it has been politically popular on both sides of the aisle since the Johnson administration in 1965, when the Elementary and Secondary Education Act (ESEA) was first passed. Title I has been so popular because it goes to every congressional district, and provides much-needed funding by formula to help schools serving children living in poverty. Since the reauthorization of ESEA as the Every Student Succeeds Act in 2015, Title I has remained the largest expenditure.

In ESSA and other federal legislation, there are two kinds of funding. One is formula funding, like Title I, where money usually goes to states and is then distributed to districts and schools. The formula may adjust for levels of poverty and other factors, but every eligible school gets its share. The other kind of funding is called competitive, or discretionary funding. Schools, districts, and other entities have to apply for competitive funding, and no one is guaranteed a share. In many cases, federal funds are first given to states, and then schools or districts apply to their state departments of education to get a portion of it, but the state has to follow federal rules in awarding the funds.

Getting proven programs into widespread use can be relatively easy in competitive grants. Competitive grants are usually evaluated on a 100-point scale, with all sorts of “competitive preference points” for certain categories of applicants, such as for rural locations, inclusion of English language learners or children of military families, and so on. These preferences add perhaps one to five points to a proposal’s score, giving such applicants a leg up but not a sure thing. In the same way, I and others have proposed adding competitive preference points in competitive proposals for applicants who propose to adopt programs that meet established standards of evidence. For example, Title II SEED grants for professional development now require that applicants propose to use programs found to be effective in at least one rigorous study, and give five points if the programs have been proven effective in at least two rigorous studies. Schools qualifying for school improvement funding under ESSA are now required to select programs that meet ESSA evidence standards.

Adding competitive preference points for using proven programs in competitive grants is entirely sensible and pain-free. It costs nothing, and does not require applicants to use any particular program. In fact, applicants can forgo the preference points entirely, and hope to win without them. Preference points for proven programs are an excellent way to nudge the field toward evidence-based reform without top-down mandates or micromanagement. The federal government states a preference for proven programs, which will at least raise their profile among grant writers, but no school or district has to do anything different.

The much more difficult problem is how to get proven programs into formula funding (such as Title I). The great majority of federal funds are awarded by formula, so restricting evidence-based reform to competitive grants is only nibbling at the edges of practice. One solution to this would be to allocate incentive grants to districts if they agree to use formula funds to adopt and implement proven programs.

However, incentives cost money. Instead, imagine that districts and schools get their Title I formula funds, as they have since 1965, but that Congress requires districts to use at least 20% of their Title I, Part A funding to adopt and implement programs that meet a modest standard of evidence, similar to the “moderate” level in ESSA (which requires at least one quasi-experimental study with positive effects). The adopted program could be anything that meets other Title I requirements—reading, math, tutoring, technology—except that the program has to have evidence of effectiveness. The funds could pay for necessary staffing, professional development, materials, software, hardware, and so on. Obviously, schools could devote more than 20% if they chose to do so.

There are several key advantages to this 20% solution. First, of course, children would immediately benefit from receiving programs with at least moderate evidence of effectiveness. Second, the process would instantly make leaders of the roughly 55,000 Title I schools intensely interested in evidence. Third, the process could gradually shift discussion about Title I away from its historical focus on “how much?” to an additional focus on “for what purpose?” Publishers, software developers, academics, philanthropy, and government itself would perceive the importance of evidence, and would commission or carry out far more high-quality studies to meet the new standards. Over time, the standards of evidence might increase.

All of this would happen at no additional cost, and with a minimum of micromanagement. There are now many programs that would meet the “moderate” standards of evidence in reading, math, tutoring, whole-school reform, and other approaches, so schools would have a wide choice. No Child Left Behind required that low-performing schools devote 20% of their Title I funding to after-school tutoring programs and student transfer policies that research later showed to make little or no difference in outcomes. Why not spend the same on programs that are proven to work in advance, instead of once again rolling the dice with the educational futures of at-risk children?

Twenty percent of Title I is a lot of money, but if it can make 100% of Title I more impactful, it is more than worthwhile.

What the Election Might Mean for Evidence-Based Education

Like everyone else in America, I awoke on Wednesday to a new era. Not only was Donald Trump elected president, but the Republicans retained control of both houses of Congress. This election will surely have a powerful impact on issues that the president-elect and other Republicans campaigned on, but education was hardly discussed. The New York Times summarized Mr. Trump’s education positions in an October 31 article. Mr. Trump has spoken in favor of charters and other school choice plans, incentive pay for teachers, and not much else. A Trump administration will probably appoint a conservative Secretary of Education, and that person would have considerable influence on what happens next.

What might this mean for evidence-based reform in education? Hopefully, the new administration will embrace evidence, as embodied in the Every Student Succeeds Act (ESSA). Why? Because the Congress that passed ESSA less than a year ago is more or less the same Congress that was just elected. Significantly, Senators Rob Portman (R-Ohio), Michael Bennet (D-Colorado), and Patty Murray (D-Washington), some of the major champions of evidence in the Senate, were all just re-elected. Senator Lamar Alexander (R-Tennessee), a key architect of ESSA, is still in office. In the absence of a major push from the new executive branch, the Congress seems likely to continue its bipartisan support for the ESSA law.

Or so I fervently hope.

Evidence has not been a partisan issue and it will hopefully remain bipartisan. Everyone has an interest in seeing that education dollars are spent wisely to benefit children. The evidence movement has advanced far enough to offer real hope that step-by-step progress can take place in education as increasingly effective methods, materials, and software become available, as a direct outcome of research and development. Evidence-based reform has strengthened through red and blue administrations. It should continue to grow through the new administration.

Or so I fervently hope.

What if Evidence Doesn’t Match Ideology?

Several years ago when the Conservative Party was first coming into office in the U.K., I had an opportunity to meet with a High Government Official. He had been told that I was a supporter of phonics in early reading, and that was what he wanted to talk about. We chatted amicably for some time about our agreement on this topic.

Then the Great Man turned to another topic. What did I think about the evidence on ability grouping?

I explained that the evidence did not favor ability grouping, and was about to explain why when he cut me off with the internationally understood gesture meaning, “I’m a very busy and important person. Get out of my office immediately.” Ever since then, the British government has gotten along just fine without my advice.

What the Great Man was telling me, of course, is the depressing reality of why it is so difficult to change policy or practice with evidence. Most people value research when it supports the ideological position they already had, and reject research when it does not. The result is that policy and practice remain an ideological struggle, little influenced by the actual findings of research. Advocates of a given position seek evidence to throw at their opponents or to defend themselves from evidence thrown at them by the “other side.” And we all too often evaluate evidence based on the degree to which it corresponds to our pre-existing beliefs rather than re-evaluating our beliefs in light of evidence. I recall that at a meeting of Institute of Education Sciences (IES) grantees, a respected superintendent spoke to the whole assemblage and, entirely without irony or humor, defined good research as that which confirms his beliefs, and bad research as that which contradicts his beliefs.

A scientific field only begins to move forward when researchers and users of the research come to accept research findings whether or not they support their previous beliefs. Not that this is easy. Even in the most scientific of fields, it usually takes a great deal of research over an extended time period to replace a widely accepted belief with a contrary set of findings. In the course of unseating the old belief, researchers who dare to go against the current orthodoxy have difficulty finding an audience, funding, promotions, or respect, so it’s a lot easier to go with the flow. Yet true sciences do change their minds based on evidence, even if they must often be dragged kicking and screaming to the altar of knowledge. One classic example I’ve heard of involved the bacterial origin of gastric ulcers. Ulcers were once thought to be caused by stress, until an obscure Australian researcher deliberately gave himself an ulcer by drinking a solution swarming with gastric bacteria. He then cured himself with a drug known to kill those bacteria. Today, the stress theory is gone and the bacteria theory is dominant, but it wasn’t easy.

Education researchers are only just beginning to have enough confidence in our own research to expect policy makers, practitioners, and other researchers to change their beliefs on the basis of evidence. Yet education will not be an evidence-driven field until evidence begins to routinely change beliefs about what works for students and what does not. We need to change thinking not only about individual programs or principles, but about the role of evidence itself. This is one reason that it is so important that research in education be of impeccable quality, so that we can have confidence that findings will replicate in future studies and will generalize to many practical applications.

A high government official in health would never dismiss research on gastric ulcers because he or she still believed that ulcers are caused by stress. A high government official in agriculture would never dismiss research on the effects of certain farming methods on soil erosion. In the U.S., at least, our Department of Education has begun to value evidence and to encourage schools to adopt proven programs and practices, but there is a long way to go before education joins medicine and agriculture in willingness to recognize and promote findings of rigorous and replicated research. We’re headed in the right direction, but I have to admit that the difficulties getting there are giving me one heck of an ulcer.*

*Just kidding. I’m fine.