Do School Districts Really Have Difficulty Meeting ESSA Evidence Standards?

The Center on Education Policy (CEP) recently released a report on how school districts are responding to the Every Student Succeeds Act (ESSA) requirement that schools seeking school improvement grants select programs that meet ESSA's strong, moderate, or promising standards of evidence. Education Week ran a story on the CEP report.

The report noted that many states, districts, and schools are taking the evidence requirements seriously, and are looking at websites and consulting with researchers to help them identify programs that meet the standards. This is all to the good.

However, the report also notes continuing problems districts and schools are having finding out “what works.” Two particular problems were cited. One was that districts and schools were not equipped to review research to find out what works. The other was that rural districts and schools found few programs proven effective in rural schools.

I find these concerns astounding. The same concerns were expressed when ESSA was first passed, in 2015. But that was almost four years ago. Since 2015, the What Works Clearinghouse (WWC) has added information to help schools identify programs that meet the top two ESSA evidence categories, strong and moderate. Our own Evidence for ESSA, launched in February 2017, has up-to-date information on virtually all PK-12 reading and math programs currently in dissemination. Among hundreds of programs examined, 113 meet ESSA standards for strong, moderate, or promising evidence of effectiveness. The WWC, Evidence for ESSA, and other sources are available online at no cost. The contents of the entire Evidence for ESSA website were imported into Ohio's own website on this topic, and dozens of states, perhaps all of them, have informed their districts and schools about these sources.

The idea that districts and schools could not find information on proven programs if they wanted to do so is difficult to believe, especially among schools eligible for school improvement grants. Such schools, and the districts in which they are located, write a lot of grant proposals for federal and state funding. The application forms for school improvement grants always explain the evidence requirements, because that is the law. Someone in every state involved with federal funding knows about the WWC and Evidence for ESSA websites. More than 90,000 unique users have used Evidence for ESSA, and more than 800 more sign on each week.


As to rural schools, it is true that many studies of educational programs have taken place in urban areas. However, 47 of the 113 programs qualified by Evidence for ESSA were validated in at least one rural study, or a study including a large enough rural sample to enable researchers to separately report program impacts for rural students. Also, almost all widely disseminated programs have been used in many rural schools. So rural districts and schools that care about evidence can find programs that have been evaluated in rural locations, or at least that were evaluated in urban or suburban schools but widely disseminated in rural schools.

Also, it is important to note that if a program was successfully evaluated only in urban or suburban schools, it still meets the ESSA evidence standards. If no studies of a given outcome were done in rural locations, a rural school in need of better outcomes is, in effect, being asked to choose between a program proven to work somewhere, and probably already disseminated in rural schools, and a program not proven to work anywhere. Every school and district has to make the best choices for its kids, but if I were a rural superintendent or principal, I'd read up on proven programs, and then go visit some nearby rural schools using those programs. Wouldn't you?

I have no reason to suspect that the CEP survey is incorrect. There are many indications that district and school leaders often do feel that the ESSA evidence rules are too difficult to meet. So what is really going on?

My guess is that many district and school leaders do not want to know about evidence on proven programs. For example, they may have longstanding, positive relationships with representatives of publishers or software developers, or they may be comfortable and happy with the materials and services they are already using, evidence-proven or not. If publishers and software developers lack evidence of effectiveness that would pass muster with the WWC or Evidence for ESSA, they may push hard on state and district officials, put forward dubious claims of evidence (such as studies with no control groups), and do their best to get by in a system that increasingly demands evidence they do not have. In my experience, district and state officials often complain about having inadequate staff to review evidence of effectiveness, but their concern may be less about finding out what works than about defending themselves from publishers, software developers, or current district or school users of programs who maintain that they have been unfairly rated by the WWC, Evidence for ESSA, or other reviews. State and district leaders who stand up to this pressure may have to spend a lot of time reviewing evidence or hearing arguments.

On the plus side, at the same time that publishers and software producers may be seeking recognition for their current products, many are also sponsoring evaluations of the products they feel are most likely to perform well in rigorous evaluations. Some may be creating new programs that resemble programs that have met evidence standards. If the federal ESSA law continues to demand evidence for certain federal funding purposes, or even expands this requirement to additional parts of federal grant-making, then over time the law will have its desired effect, rewarding the creation and evaluation of programs that do meet standards by making such programs easier to disseminate. The difficulties the evidence movement is experiencing are likely to diminish over time as more proven programs appear, and as federal, state, district, and school leaders get comfortable with evidence.

Evidence-based reform was always going to be difficult, because of the amount of change it entails and the stakes involved. But it is the right thing to do, and leaders who insist on evidence will see increasing levels of learning among their students, at minimal cost beyond what they already spend on untested or ineffective approaches. Medicine went through a similar transition starting in 1962, when the U.S. Congress first required that medicines be rigorously evaluated for effectiveness and safety. At first, many leaders in the medical profession resisted the changes, but after a while, they came to insist on them. The key is political leadership willing to support the evidence requirement strongly and permanently, so that educators and vendors alike will see that the best way forward is to embrace evidence and make it work for kids.

Photo courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Send Us Your Evaluations!

In last week’s blog, I wrote about reasons that many educational leaders are wary of the ESSA evidence standards, and the evidence-based reform movement more broadly. Chief among these concerns was a complaint that few educational leaders had the training in education research methods to evaluate the validity of educational evaluations. My response to this was to note that it should not be necessary for educational leaders to read and assess individual evaluations of educational programs, because free, easy-to-interpret review websites, such as the What Works Clearinghouse and Evidence for ESSA, already do such reviews. Our Evidence for ESSA website (www.evidenceforessa.org) lists reading and math programs available for use anywhere in the U.S., and we are constantly on the lookout for any we might have missed. If we have done our job well, you should be able to evaluate the evidence base for any program, in perhaps five minutes.

Other evidence-based fields rely on evidence reviews. Why not education? Your physician may or may not know about medical research, but most rely on websites that summarize the evidence. Farmers may be outstanding in their fields, but they rely on evidence summaries. When you want to know about the safety and reliability of cars you might buy, you consult Consumer Reports. Do you understand exactly how they get their ratings? Neither do I, but I trust their expertise. Why should this not be the same for educational programs?

At Evidence for ESSA, we are aiming to provide information on every program available to you, if you are a school or district leader. At the moment, we cover reading and mathematics, grades pre-K to 12. We want to be sure that if a sales rep or other disseminator offers you a program, you can look it up on Evidence for ESSA and it will be there. If there are no studies of the program that meet our standards, we will say so. If there are qualifying studies, we will report whether or not they show positive outcomes that meet ESSA evidence standards. On our website, there is a white box on the homepage. If you type in the name of any reading or math program, the website should show you what we have been able to find out.

What we do not want to happen is that you type in a program title and find nothing. In our website, “nothing” has no useful meaning. We have worked hard to find every program anyone has heard of, and we have found hundreds. But if you know of any reading or math program that does not appear when you type in its name, please tell us. If you have studies of that program that might meet our inclusion criteria, please send them to us, or citations to them. We know that there are always additional programs entering use, and additional research on existing programs.

Why is this so important to us? The answer is simple: Evidence for ESSA exists because we believe it is essential for the progress of evidence-based reform for educators and policy makers to be confident that they can easily find the evidence on any program, not just the most widely used. Our vision is that someday, it will be routine for educators thinking of adopting educational programs to quickly consult Evidence for ESSA (or other reviews) to find out what has been proven to work, and what has not. I heard about a superintendent who, before meeting with any sales rep, asked them to show her the evidence for the effectiveness of their program on Evidence for ESSA or the What Works Clearinghouse. If they had it, "Come on in," she'd say. If not, "Maybe later."

Only when most superintendents and other school officials do this will program publishers and other providers know that it is worth their while to have high-quality evaluations done of each of their programs. Further, they will find it worthwhile to invest in the development of programs likely to work in rigorous evaluations, to provide enough quality professional development to give their programs a chance to succeed, and to insist that schools that adopt their proven programs incorporate the methods, materials, and professional development that their own research has told them are needed for success. Insisting on high-quality PD, for example, adds cost to a program, and providers may worry that demanding sufficient PD will price them out of the market. But if all programs are judged on their proven outcomes, they all will require adequate PD, to be sure that the programs will work when evaluated. That is how evidence will transform educational practice and outcomes.

So our attempt to find and fairly evaluate every program in existence is not due to our being nerds or obsessive-compulsive neurotics (though these may be true, too). Rather, thorough, rigorous review of the whole body of evidence in every subject and grade level, and for attendance, social emotional learning, and other non-academic outcomes, is part of a plan.

You can help us on this part of our plan. Tell us about anything we have missed, or any mistakes we have made. You will be making an important contribution to the progress of our profession, and to the success of all children.

Send us your evaluations!
Photo credit: George Grantham Bain Collection, Library of Congress [Public domain]


Why Do Some Educators Push Back Against Evidence?

In December 2015, the U.S. Congress passed the Every Student Succeeds Act, or ESSA. Among many other provisions, ESSA defined levels of evidence supporting educational programs: strong (at least one randomized experiment with positive outcomes), moderate (at least one quasi-experimental study with positive outcomes), and promising (at least one correlational study with positive outcomes). For various forms of federal funding, schools are required (in school improvement) or encouraged (in seven other funding streams) to use programs falling into one of these top three categories. There is also a fourth category, "demonstrates a rationale," but this one has few practical consequences.

Three and a half years later, the ESSA evidence standards are increasing interest in evidence of effectiveness for educational programs, especially among schools applying for school improvement funding and in state departments of education, which are responsible for managing the school improvement grant process. All of this is to the good, in my view.

On the other hand, evidence is not yet transforming educational practice. Even in portions of ESSA that encourage or require use of proven programs among schools seeking federal funding, schools and districts often try to find ways around the evidence requirements rather than truly embracing them. Even when schools do say they used evidence in their proposals, they may have just accepted assurances from publishers or developers stating that their programs meet ESSA standards, even when this is clearly not so.

Why are these children in India pushing back on a car?  And why do many educators in our country push back on evidence?

Educators care a great deal about their children’s achievement, and they work hard to ensure their success. Implementing proven, effective programs does not guarantee success, but it greatly increases the chances. So why has evidence of effectiveness played such a limited role in program selection and implementation, even when ESSA, the national education law, defines evidence and requires use of proven programs under certain circumstances?

The Center on Education Policy Report

Not long ago, the Center on Education Policy (CEP) at George Washington University published a report of telephone interviews of state leaders in seven states. The interviews focused on problems states and districts were having with implementation of the ESSA evidence standards. Six themes emerged:

  1. Educational leaders are not comfortable with educational research methods.
  2. State leaders feel overwhelmed serving large numbers of schools qualifying for school improvement.
  3. Districts have to seriously re-evaluate longstanding relationships with vendors of education products.
  4. State and district staff are confused about the prohibition on using Title I school improvement funds on “Tier 4” programs (ones that demonstrate a rationale, but have not been successfully evaluated in a rigorous study).
  5. Some state officials complained that the U.S. Department of Education had not been sufficiently helpful with implementation of ESSA evidence standards.
  6. State leaders had suggestions to make education research more accessible to educators.

What is the Reality?

I’m sure that the concerns expressed by the state and district leaders in the CEP report are sincerely felt. But most of them raise issues that have already been solved at the federal, state, and/or district levels. If these concerns are as widespread as they appear to be, then we have serious problems of communication.

  1. The first theme in the CEP report is one I hear all the time. I find it astonishing, in light of the reality.

No educator needs to be a research expert to find evidence of effectiveness for educational programs. The federal What Works Clearinghouse (https://ies.ed.gov/ncee/wwc/) and our Evidence for ESSA (www.evidenceforessa.org) provide free information on the outcomes of programs, at least in reading and mathematics, that is easy to understand and interpret. Evidence for ESSA provides information on programs that do meet ESSA standards as well as those that do not. We are constantly scouring the literature for studies of replicable programs, and when asked, we review entire state and district lists of adopted programs and textbooks, at no cost. The What Works Clearinghouse is not as up-to-date and has little information on programs lacking positive findings, but it also provides easily interpreted information on what works in education.

In fact, few educational leaders anywhere are evaluating the effectiveness of individual programs by reading research reports one at a time. The What Works Clearinghouse and Evidence for ESSA employ experts who know how to find and evaluate outcomes of valid research and to describe the findings clearly. Why would every state and district re-do this job for themselves? It would be like having every state do its own version of Consumer Reports, or its own reviews of medical treatments. It just makes no sense. In fact, at least in the case of Evidence for ESSA, we know that more than 80,000 unique readers have used Evidence for ESSA since it launched in 2017. I’m sure even larger numbers have used the What Works Clearinghouse and other reviews. The State of Ohio took our entire Evidence for ESSA website and put it on its own state servers with some other information. Several other states have strongly promoted the site. The bottom line is that educational leaders do not have to be research mavens to know what works, and tens of thousands of them know where to find fair and useful information.

  2. State leaders are overwhelmed. I'm sure this is true, but most state departments of education have long been understaffed. This problem is not unique to ESSA.
  3. Districts have to seriously re-evaluate longstanding relationships with vendors. I suspect that this concern is at the core of the problem on evidence. The fact is that most commercial programs do not have adequate evidence of effectiveness. Either they have no qualifying studies (by far the largest number), or they do have qualifying evidence that is not significantly positive. A vendor with programs that do not meet ESSA standards is not going to be a big fan of evidence, or ESSA. These are often powerful organizations with deep personal relationships with state and district leaders. When state officials adhere to the strict definition of evidence in ESSA, local vendors push back hard. Understaffed state departments are poorly placed to fight with vendors and their friends in district offices, so they may be forced to accept weak or no evidence.
  4. Confusion about Tier 4 evidence. ESSA is clear that to receive certain federal funds schools must use programs with evidence in Tiers 1, 2, or 3, but not 4. The reality is that definitions of Tier 4 are so weak that any program on Earth can meet this standard. What program anywhere does not have a rationale? The problem is that districts, states, and vendors have used confusion about Tier 4 to justify any program they wish. Some states are more sophisticated than others and do not allow this, but the very existence of Tier 4 in ESSA language creates a loophole that any clever sales rep or educator can use, or at least try to get away with.
  5. The U.S. Department of Education is not helpful enough. In reality, USDoE is understaffed and overwhelmed on many fronts. In any case, ESSA puts a lot of emphasis on state autonomy, so the feds feel unwelcome in performing oversight.

The Future of Evidence in Education

Despite the serious problems in implementation of ESSA, I still think it is a giant step forward. Every successful field, such as medicine, agriculture, and technology, has started its own evidence revolution fighting entrenched interests and anxious stakeholders. As late as the 1920s, surgeons refused to wash their hands before operations, despite substantial evidence going back to the 1800s that handwashing was essential. Evidence eventually triumphs, though it often takes many years. Education is just at the beginning of its evidence revolution, and it will take many years to prevail. But I am unaware of any field that embraced evidence, only to retreat in the face of opposition. Evidence eventually prevails because it is focused on improving outcomes for people, and people vote. Sooner or later, evidence will transform the practice of education, as it has in so many other fields.

Photo credit: Roger Price from Hong Kong, Hong Kong [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]


Measuring Social Emotional Skills in Schools: Return of the MOOSES

Throughout the U.S., there is huge interest in improving students' social emotional learning (SEL) skills and related behaviors. This is indeed important as a means of building tomorrow's society. However, measuring SEL skills is terribly difficult. Not that measuring reading, math, or science learning is easy, but there are at least accepted measures in those areas. In SEL, almost anything goes, and measures cover an enormous range. Some measures might be fine for theoretical research, and some would be all right if they were given independently of the teachers who administered the treatment, but SEL measures are inherently squishy.

A few months ago, I wrote a blog on measurement of social emotional skills. In it, I argued that social emotional skills should be measured in pragmatic school research as objectively as possible, especially to avoid measures that merely reflect students in experimental groups repeating back attitudes or terminology they learned in the program. I expressed the ideal for social emotional measurement in school experiments as MOOSES: Measurable, Observable, Objective Social Emotional Skills.

Since that time, our group at Johns Hopkins University has received a generous grant from the Gates Foundation to add research on social emotional skills and attendance to our Evidence for ESSA website. This has enabled our group to dig a lot deeper into measures for social emotional learning. In particular, JHU graduate student Sooyeon Byun created a typology of SEL measures, arrayed from least to most MOOSES-like, as follows.

  1. Cognitive Skills or Low-Level SEL Skills.

Examples include executive functioning tasks such as pencil tapping, the Stroop test, and other measures of cognitive regulation, as well as recognition of emotions. These skills may be of importance as part of theories of action leading to social emotional skills of importance to schools, but they are not goals of obvious importance to educators in themselves.

  2. Attitudes toward SEL (non-behavioral).

These include agreement with statements such as “bullying is wrong,” and statements about why other students engage in certain behaviors (e.g., “He spilled the milk because he was mean.”).

  3. Intention for SEL behaviors (quasi-behavioral).

Scenario-based measures (e.g., what would you do in this situation?).

  4. SEL behaviors based on self-report (semi-behavioral).

Reports of actual behaviors of self, or observations of others, often with frequencies (e.g., "How often have you seen bullying in this school during this school year?" or "How often do you feel anxious or afraid in class in this school?").

This category was divided according to who is reporting:

4a. Interested party (e.g., report by teachers or parents who implemented the program and may have reason to want to give a positive report)

4b. Disinterested party (e.g., report by students or by teachers or parents who did not administer the treatment)

  5. MOOSES (Measurable, Observable, Objective Social Emotional Skills)
  • Behaviors observed by independent observers: either researchers, ideally unaware of treatment assignment, or school officials reporting on behaviors as they always would, not as part of a study (e.g., regular reports of office referrals for various infractions, suspensions, or expulsions)
  • Standardized tests
  • Other school records
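Because the five levels above form an ordered scale, a review team can encode them and filter studies' measures mechanically. The sketch below is purely illustrative and hypothetical: the class, level names, and default threshold are our own shorthand for the typology, not part of any published instrument or of the Evidence for ESSA codebase.

```python
from enum import IntEnum

class MoosesLevel(IntEnum):
    """Sooyeon Byun's typology, ordered from least to most MOOSES-like."""
    COGNITIVE = 1    # low-level skills: pencil tapping, Stroop test
    ATTITUDE = 2     # non-behavioral: agreement with statements
    INTENTION = 3    # quasi-behavioral: scenario-based responses
    SELF_REPORT = 4  # semi-behavioral: reported frequencies (4a/4b)
    MOOSES = 5       # independent observation, standardized tests, records

def meets_threshold(level: MoosesLevel,
                    minimum: MoosesLevel = MoosesLevel.INTENTION) -> bool:
    """True if a measure is at or above the chosen MOOSES level.

    The default of level 3 reflects the plan to downplay levels 1 and 2
    for school audiences; the cutoff itself is an assumption here.
    """
    return level >= minimum

# Hypothetical measures classified by level, then filtered.
measures = {
    "Stroop test": MoosesLevel.COGNITIVE,
    "bullying-is-wrong survey": MoosesLevel.ATTITUDE,
    "office referrals": MoosesLevel.MOOSES,
}
kept = {name for name, lvl in measures.items() if meets_threshold(lvl)}
# Only "office referrals" survives the level-3 cutoff.
```

Using `IntEnum` makes the ordering explicit, so comparisons like `MoosesLevel.MOOSES > MoosesLevel.ATTITUDE` read the same way the typology does.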


Uses for MOOSES

All other things being equal, school researchers and educators should want to know about measures as high as possible on the MOOSES scale. However, all things are never equal, and in practice, some measures lower on the MOOSES scale may be all that exists or ever could exist. For example, it is unlikely that school officials or independent observers could determine students' anxiety or fear, so self-report (level 4b) may be essential. MOOSES measures (level 5) may be objectively reported by school officials, but limiting attention to such measures may restrict SEL measurement to readily observable behaviors, such as aggression, truancy, and other behaviors of importance to school management, while missing difficult-to-observe behaviors such as bullying.

Still, we expect to find in our ongoing review of the SEL literature that there will be enough research on outcomes measured at level 3 or above to enable us to downplay levels 1 and 2 for school audiences, and in many cases to downplay reports by interested parties in level 4a, where teachers or parents who implement a program then rate the behavior of the children they served.

Social emotional learning is important, and we need measures that reflect its importance, minimizing potential bias and staying as close as possible to independent, meaningful measures of behaviors that are of the greatest importance to educators. In our research team, we have very productive arguments about these measurement issues in the course of reviewing individual articles. I placed a cardboard cutout of a "principal" called "Norm" in our conference room. Whenever things get too theoretical, we consult "Norm" for his advice. For example, "Norm" is not too interested in pencil tapping and Stroop tests, but he sure cares a lot about bullying, aggression, and truancy. Of course, as part of our review we will be discussing our issues and initial decisions with real principals and educators, as well as other experts on SEL.

The growing number of studies of SEL in recent years enables reviewers to set higher standards than would have been feasible even just a few years ago. We still have to maintain a balance in which we can be as rigorous as possible but not end up with too few studies to review. We can all aspire to be MOOSES, but that is not practical for some measures. Instead, it is useful to have a model of the ideal and what approaches the ideal, so we can make sense of the studies that exist today, with all due recognition of when we are accepting measures that are nearly MOOSES but not quite the real Bullwinkle.


Replication

The holy grail of science is replication. If a finding cannot be repeated, then it did not happen in the first place. There is a reason that the humor journal in the hard sciences is called the Journal of Irreproducible Results. For scientists, results that are irreproducible are inherently laughable, therefore funny. In many hard science experiments, replication is pretty much guaranteed. If you heat an iron bar, it gets longer. If you cross two parents who each carry the same recessive gene, one quarter of their progeny will express the recessive trait (think blue eyes).


In educational research, we care about replication just as much as our colleagues in the lab coats across campus. However, when we're talking about evaluating instructional programs and practices, replication is a lot harder, because students and schools differ. Positive outcomes obtained in one experiment may or may not replicate in a second trial. Sometimes this is true because the first experiment had features known to contribute to bias: small sample sizes, brief study durations, extraordinary amounts of resources or expert time for the experimental schools or classes, measures made by the developers or researchers (or otherwise overaligned with the experimental group but not the control group), or matched rather than randomized assignment to conditions. All of these can contribute to successful-appearing outcomes in a first experiment. Second or third experiments are more likely to be larger, longer, and more stringent than the first study, and therefore may not replicate. Even when the first study has none of these problems, it may not replicate because of differences in the samples of schools, teachers, or students, or for other, perhaps unknowable reasons.

A change in the conditions of education may also cause a failure to replicate. Our Success for All whole-school reform model has been found to be effective many times, mostly by third-party evaluators. However, Success for All has always specified a full-time facilitator and at least one tutor for each school. An MDRC i3 evaluation happened to fall in the middle of the recession, and schools, which were struggling to afford classroom teachers, could not afford facilitators or tutors. The results were still positive on some measures, especially for low achievers, but the effect sizes were less than half of what others had found in many studies. Stuff happens.

Replication has taken on more importance recently because the ESSA evidence standards require only a single positive study. To meet the strong, moderate, or promising standards, programs must have at least one "well-designed and well-implemented" study using a randomized (strong), matched (moderate), or correlational (promising) design and finding significantly positive outcomes. Based on the "well-designed and well-implemented" language, our Evidence for ESSA website requires features of experiments similar to those required by the What Works Clearinghouse (WWC). These requirements make it difficult to be approved, but they remove many of the experimental design features that typically cause first studies to greatly overstate program impacts: small samples, brief durations, overinvolved experimenters, and developer-made measures. They put (less rigorous) matched and correlational studies in lower categories. So one study that meets ESSA or Evidence for ESSA requirements is at least likely to be a very good study. But many researchers have expressed discomfort with the idea that a single study could qualify a program for one of the top ESSA categories, especially if (as sometimes happens) there is one study with positive outcomes and many with zero or at least nonsignificant outcomes.

The pragmatic problem is that if ESSA had required even two studies showing positive outcomes, this would have wiped out a very large proportion of current programs. If research continues to identify effective programs, it should only be a matter of time before ESSA (or its successors) requires more than one study with positive outcomes.

However, in the current circumstance, there is a way researchers and educators might at least estimate the replicability of a given program when it has only a single study with a significant positive outcome. This would involve looking at the findings for the entire genre of programs to which it belongs. The logic here is that if a program has only one ESSA-qualifying study, but it closely resembles other programs that also have positive outcomes, that program should be taken a lot more seriously than a program whose positive outcome differs considerably from the outcomes of very similar programs.
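This "judge the single study by its genre" logic can even be given a rough quantitative form: blend the study's effect size with the average effect of similar programs, weighting each by its precision. A minimal sketch, in which every effect size and standard error is invented purely for illustration (not taken from any actual review):

```python
def shrink_toward_genre(study_effect, study_se, genre_mean, genre_se):
    """Blend one study's effect size with the average effect of similar
    programs, weighting each by its precision (1 / squared standard error)."""
    w_study = 1.0 / study_se ** 2
    w_genre = 1.0 / genre_se ** 2
    return (w_study * study_effect + w_genre * genre_mean) / (w_study + w_genre)

# A lone study of a tutoring-style program (d = 0.45), in a genre whose
# many studies average about d = 0.30: the blended estimate stays encouraging.
tutoring = shrink_toward_genre(0.45, 0.20, 0.30, 0.05)

# The same lone result in a genre whose many studies average about
# d = 0.00 (say, textbooks): the blended estimate collapses toward zero.
textbook = shrink_toward_genre(0.45, 0.20, 0.00, 0.05)

print(round(tutoring, 2), round(textbook, 2))   # 0.31 0.03
```

The same single finding is read very differently depending on what similar programs have shown, which is precisely the intuition argued for here.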

As one example, there is much evidence from many studies by many researchers indicating positive effects of one-to-one and one-to-small group tutoring, in reading and mathematics. If a tutoring program has only one study, but this one study has significant positive findings, I’d say thumbs up. I’d say the same about cooperative learning approaches, classroom management strategies using behavioral principles, and many others, where a whole category of programs has had positive outcomes.

In contrast, if a program has a single positive outcome and few if any similar approaches have obtained positive outcomes, I'd be much more cautious. An example might be textbooks in mathematics, which rarely make any difference, because control groups are also likely to be using textbooks, and textbooks considerably resemble each other. In our recent elementary mathematics review (Pellegrini, Lake, Inns, & Slavin, 2018), only one textbook program available in the U.S. had positive outcomes (out of 16 studies). As another example, there have been several large randomized evaluations of the use of interim assessments. Only one of them found positive outcomes. I'd be very cautious about putting much faith in interim assessments based on this single anomalous finding.

Looking for findings from similar studies is made easier by the reviews we make available at www.bestevidence.org, which organize research by categories of programs. Checking similar programs won't help with the ESSA law itself, which often assigns its ratings based on a single study, regardless of other findings on the same program or on similar programs. However, for educators and researchers who really want to find out what works, checking similar programs, while not quite as good as direct replication of positive findings on the same program, is perhaps, as we like to say, close enough for social science.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Evidence, Standards, and Chicken Feathers

In 1509, John Damian, an alchemist in the court of James IV of Scotland, proclaimed that he had developed a way for humans to fly. He made himself wings from chicken feathers and jumped from the battlements of Stirling Castle, the Scottish royal residence at the time. His flight was brief but not fatal. He landed in a pile of manure and only broke his thigh. Afterward, he explained that the problem was that he had used the wrong kind of feathers. If only he had used eagle feathers, he asserted, he could have flown. Fortunately for him, he never tried flying again, with any kind of feathers.


The story of John Damian’s downfall is humorous, and in fact the only record of it is a contemporary poem making fun of it. Yet there are important analogies to educational policy today from this incident in Scottish history. These are as follows:

  1. Damian proclaimed the success of his plan for human flight before he or anyone else had tried it and found it effective.
  2. After his flight ended in the manure pile, he proclaimed (again without evidence) that if only he’d used eagle feathers, he would have succeeded. This makes sense, of course, because eagles are much better flyers than chickens.
  3. He was careful never to actually try flying with eagle feathers.

All of this is more or less what we do all the time in educational policy, with one big exception. In education, based on Damian's experience, we might have put forward policies stating that from now on, human-powered flight must only be attempted with eagle feathers, not chicken feathers.

What I am referring to in education is our obsession with standards as a basis for selecting textbooks, software, and professional development, and the relative lack of interest in evidence. Whole states and districts spend a lot of time devising standards and then reviewing materials and services to be sure that they align with these standards. In contrast, the idea of checking to see that texts, software, and PD have actually been evaluated and found to be effective in real classrooms with real teachers and students has been a hard slog.

Shouldn’t textbooks and programs that meet modern standards also produce higher student performance on tests closely aligned with those standards? This cannot be assumed. Not long ago, my colleagues and I examined every reading and math program rated “meets expectations” (the highest level) on EdReports, a website that rates programs in terms of their alignment with college- and career-ready standards. A not-so-grand total of two programs had any evidence of effectiveness on any measure not made by the publishers. Most programs rated “meets expectations” had no evidence at all, and a smaller number had been evaluated and found to make no difference.

I am not in any way criticizing EdReports. They perform a very valuable service in helping schools and districts know which programs meet current standards. It makes no sense for every state and district to do this for themselves, especially where there are few or no proven programs. It is useful to at least know which programs are aligned with standards.

There is a reason that so few products favorably reviewed on EdReports have any positive outcomes in rigorous research. Most are textbooks, and very few textbooks have evidence of effectiveness. Why? The fact is that standards or no standards, EdReports or no EdReports, textbooks do not differ very much from each other in aspects that matter for student learning. Textbooks differ (somewhat) in content, but if there is anything we have learned from our many reviews of research on what works in education, what matters is pedagogy, not content. Yet since decisions about textbooks and software depend on standards and content, decision makers almost invariably select textbooks and software that have never been successfully evaluated.

Even crazy John Damian did better than we do. Yes, he claimed success in flying before actually trying it, but at least he did try it. He concluded that his flying plan would have worked if he’d used eagle feathers, but he never imposed this untested standard on anyone.

Untested textbooks and software probably don’t hurt anyone, but millions of students desperately need higher achievement, and focusing resources on untested or ineffective textbooks, software, and PD does not move them forward. The goal of education is to help all students succeed, not to see that they use aligned materials. If a program has been proven to improve learning, isn’t that a lot more important than proving that it aligns with standards? Ideally, we’d want schools and districts to use programs that are both proven effective and aligned with standards, but if no programs meet both criteria, shouldn’t those that are proven effective be preferred? Without evidence, aren’t we just giving students and teachers eagle feathers and asking them to take a leap of faith?

Photo credit: Humorous portrayal of a man who flies with wings attached to his tunic, Unknown author [Public domain], via Wikimedia Commons/Library of Congress

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

Miss Evers’ Boys (And Girls)

Most people who have ever been involved with human subjects’ rights know about the Tuskegee Syphilis Study. This was a study of untreated syphilis, in which 622 poor, African American sharecroppers, some with syphilis and some without, were evaluated over 40 years.

The study, funded and overseen by the U.S. Public Health Service, started in 1932. In 1940, researchers elsewhere discovered that penicillin cured syphilis. By 1947, penicillin was “standard of care” for syphilis, meaning that patients with syphilis received penicillin as a matter of course, anywhere in the U.S.

But not in Tuskegee. Not in 1940. Not in 1947. Not until 1972, when a whistle-blower made the press aware of what was happening. In the meantime, many of the men died of syphilis, 40 of their wives contracted the disease, and 19 of their children were born with congenital syphilis. The men had never even been told the nature of the study, they were not informed in 1940 or 1947 that there was now a cure, and they were not offered that cure. Leaders of the U.S. Public Health Service were well aware that there was a cure for syphilis, but for various reasons, they did not stop the study. Not in 1940, not in 1947, not even when whistle-blowers told them what was going on. They stopped it only when the press found out.


In 1997 a movie on the Tuskegee Syphilis Study was released. It was called Miss Evers’ Boys. Miss Evers (actually, Eunice Rivers) was the African-American public health nurse who was the main point of contact for the men over the whole 40 years. She deeply believed that she, and the study, were doing good for the men and their community, and she formed close relationships with them. She believed in the USPHS leadership, and thought they would never harm her “boys.”

The Tuskegee study was such a crime and scandal that it utterly changed procedures for medical research in the U.S. and most of the world. Today, participants in research with any level of risk, or their parents if they are children, must give informed consent for participation in research, and even if they are in a control group, they must receive at least “standard of care”: currently accepted, evidence-based practices.

If you’ve read my blogs, you’ll know where I’m going with this. Failure to use proven educational treatments, unlike medical ones, is rarely fatal, at least not in the short term. But otherwise, our profession carries out Tuskegee crimes all the time. It condemns failing students to ineffective programs and practices when effective ones are known. It fails to even inform parents or children, much less teachers and principals, that proven programs exist: Proven, practical, replicable solutions for the problems they face every day.

Like Miss Rivers, front-line educators care deeply about their charges. Most work very hard and give their absolute best to help all of their children to succeed. Teaching is too much hard work and too little money for anyone to do it for any reason but for the love of children.

But somewhere up the line, where the big decisions are made, where the people are who know or who should know which programs and practices are proven to work and which are not, this information just does not matter. There are exceptions, real heroes, but in general, educational leaders who believe that schools should use proven programs have to fight hard for this position. The problem is that the vast majority of educational expenditures—textbooks, software, professional development, and so on—lack even a shred of evidence. Not a scintilla. Some have evidence that they do not work. Yet advocates for those expenditures (such as sales reps and educators who like the programs) argue strenuously for programs with no evidence, and it’s just easier to go along. Whole states frequently adopt or require textbooks, software, and services of no known value in terms of improving student achievement. The ESSA evidence standards were intended to focus educators on evidence and incentivize use of proven programs, at least for the lowest-achieving 5% of schools in each state, but so far it’s been slow going.

Yet there are proven alternatives. Evidence for ESSA (www.evidenceforessa.org) lists more than 100 PK-12 reading and math programs that meet the top three ESSA evidence standards. The majority meet the top level, “Strong.” And most of the programs were researched with struggling students. Yet I am not perceiving a rush to find out about proven programs. I am hearing a lot of new interest in evidence, but my suspicion, growing every day, is that many educational leaders do not really care about the evidence, but are instead just trying to find a way to keep using the programs and providers they already have and already like, and are looking for evidence to justify keeping things as they are.

Every school has some number of struggling students. If these children are provided with the same approaches that have not worked with them or with millions like them, it is highly likely that most will fail, with all the consequences that flow from school failure: Retention. Assignment to special education. Frustration. Low expectations. Dropout. Limited futures. Poverty. Unemployment. There are 50 million children in grades PK to 12 in the U.S. This is the grinding reality for perhaps 10 to 20 million of them. Solutions are readily available, but not known or used by caring and skilled front-line educators.

In what way is this situation unlike Tuskegee in 1940?

Photo credit: National Archives Atlanta, GA (U.S. government), originally from the National Archives [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.