Most people who have ever been involved with human subjects’ rights know about the Tuskegee Syphilis Study. This was a study of untreated syphilis, in which 622 poor, African American sharecroppers, some with syphilis and some without, were evaluated over 40 years.
The study, funded and overseen by the U.S. Public Health Service, started in 1932. In 1940, researchers elsewhere discovered that penicillin cured syphilis. By 1947, penicillin was “standard of care” for syphilis, meaning that patients with syphilis received penicillin as a matter of course, anywhere in the U.S.
But not in Tuskegee. Not in 1940. Not in 1947. Not until 1972, when a whistle-blower made the press aware of what was happening. In the meantime, many of the men died of syphilis, 40 of their wives contracted the disease, and 19 of their children were born with congenital syphilis. The men had never even been told the nature of the study; they were not informed in 1940 or 1947 that a cure now existed, and they were not offered that cure. Leaders of the U.S. Public Health Service were well aware that syphilis could be cured, but for various reasons they did not stop the study. Not in 1940, not in 1947, not even when whistle-blowers told them what was going on. They stopped it only when the press found out.
In 1997 a movie on the Tuskegee Syphilis Study was released. It was called Miss Evers’ Boys. Miss Evers (actually, Eunice Rivers) was the African-American public health nurse who was the main point of contact for the men over the whole 40 years. She deeply believed that she, and the study, were doing good for the men and their community, and she formed close relationships with them. She believed in the USPHS leadership, and thought they would never harm her “boys.”
The Tuskegee study was such a crime and scandal that it utterly changed procedures for medical research in the U.S. and most of the world. Today, participants in research with any level of risk, or their parents if they are children, must give informed consent for participation in research, and even if they are in a control group, they must receive at least “standard of care”: currently accepted, evidence-based practices.
If you’ve read my blogs, you’ll know where I’m going with this. Unlike the failure to use proven medical treatments, the failure to use proven educational treatments is rarely fatal, at least not in the short term. But otherwise, our profession carries out Tuskegee crimes all the time. It condemns failing students to ineffective programs and practices when effective ones are known. It fails even to inform parents or children, much less teachers and principals, that proven programs exist: proven, practical, replicable solutions for the problems they face every day.
Like Miss Rivers, front-line educators care deeply about their charges. Most work very hard and give their absolute best to help all of their children to succeed. Teaching is too much hard work and too little money for anyone to do it for any reason but for the love of children.
But somewhere up the line, where the big decisions are made, where the people are who know or who should know which programs and practices are proven to work and which are not, this information just does not matter. There are exceptions, real heroes, but in general, educational leaders who believe that schools should use proven programs have to fight hard for this position. The problem is that the vast majority of educational expenditures—textbooks, software, professional development, and so on—lack even a shred of evidence. Not a scintilla. Some have evidence that they do not work. Yet advocates for those expenditures (such as sales reps and educators who like the programs) argue strenuously for programs with no evidence, and it’s just easier to go along. Whole states frequently adopt or require textbooks, software, and services of no known value in terms of improving student achievement. The ESSA evidence standards were intended to focus educators on evidence and incentivize use of proven programs, at least for the lowest-achieving 5% of schools in each state, but so far it’s been slow going.
Yet there are proven alternatives. Evidence for ESSA (www.evidenceforessa.org) lists more than 100 PK-12 reading and math programs that meet the top three ESSA evidence standards. The majority meet the top level, “Strong,” and most of the programs were researched with struggling students. Yet I see no rush to find out about proven programs. I am hearing a lot of new interest in evidence, but my suspicion, growing every day, is that many educational leaders do not really care about the evidence. Instead, they are just trying to find a way to keep using the programs and providers they already have and already like, and are looking for evidence to justify keeping things as they are.
Every school has some number of struggling students. If these children are provided with the same approaches that have not worked with them or with millions like them, it is highly likely that most will fail, with all the consequences that flow from school failure: Retention. Assignment to special education. Frustration. Low expectations. Dropout. Limited futures. Poverty. Unemployment. There are 50 million children in grades PK to 12 in the U.S. This is the grinding reality for perhaps 10 to 20 million of them. Solutions are readily available, but not known or used by caring and skilled front-line educators.
In what way is this situation unlike Tuskegee in 1940?
Photo credit: National Archives Atlanta, GA (U.S. government), originally from the National Archives [Public domain], via Wikimedia Commons
This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.
6 thoughts on “Miss Evers’ Boys (And Girls)”
Dude, I appreciate your writings, but this is just an egregiously BAD false equivalence.
You’re going to equate a clear biological/medicinal finding with educational/sociological suspicions?
You may have a point that there are plenty of educational practices out there that have little to no research support. But none of these practices has a) the impact on its subjects, b) the severity of that impact, or c) the research certainty. (One CANNOT claim that psych/Ed/sociological findings have the clarity of physics/chem/biological findings. And I am a psych major.) Finally, d) no flawed educational practice ever killed its subjects or transmitted a deadly disease to them.
Thanks for your response. I agree that a failure to use proven programs is rarely fatal, and I said as much in the blog. However, in another sense, the situation is more dire in education, because so many more children are affected. Millions of children are consigned to special education or other forms of failure due to the lack of tutors, for example. Tutoring programs have been rigorously evaluated and found to be effective many times, and they are readily available. Other approaches have also been rigorously evaluated and found to be effective. Yet these proven approaches are widely withheld from children, and the children predictably fail as a result. Even if medical and educational treatments differ, as they do, the moral situation is the same: Children fail in very large numbers, proven approaches that would prevent failure are available but not used, and life goes on.
Juxtaposition of the Tuskegee “crime” to the use of ineffective educational programs got my attention. However, I don’t buy the comparison. The crime in the Tuskegee case was to deliberately WITHHOLD treatment, not the use of INEFFECTIVE treatments. For example, could not doctors have attempted to treat those black men’s syphilitic condition with some form of arsenic, albeit an INEFFECTIVE treatment? In education, a “Tuskegee crime” would be, for example, to have poor black kids sit in the classroom every day with a researcher instead of a teacher for the purpose of observing what happens to UNEDUCATED poor children over time. Yes, many schools (public and private) use ineffective educational programs. However, the reasons are generally not criminal, e.g., (1) insufficient funds to implement the most effective interventions, (2) some effective interventions are deemed too difficult to implement, (3) school leaders and educators believe their “ineffective” treatments are more effective than “proven” interventions, and (4) what’s effective for one student may be ineffective for another. In my experience, the closest dynamic I would liken to a Tuskegee crime is teachers’ unwillingness to work in some inner-city schools. For without effective educators, failure is certain. Keep up the great work, Bob.
Thank you for your comments. I would admit that the Tuskegee case is not an exact parallel to the failure to provide proven educational programs to struggling children. But the core elements are there: Children are struggling in reading and math, and if left untreated (by proven programs), they will suffer life-long damage. Proven solutions are known and readily available. Yes, some of these programs are expensive, and some may be difficult to provide, and some educators may think they do not really work. Yet all of these are just excuses, and all could be overcome. An individual school might find it difficult to implement some proven programs with its current budget, but other programs that are equally effective cost far less, and the cost of providing (for example) paraprofessional tutors using proven programs could be afforded by any school system without greatly raising the cost of education. If educators do not believe the programs work, then they can be educated, or additional studies can be done in every state or region. Of course there are difficulties in implementing any change, but if there is strong reason to believe that certain solutions may be effective, the answer is further development, research, and dissemination, not giving up and sticking with the methods we’ve always used that do not do enough to ensure the success of struggling students.
As always, a compelling read; however, it appears you’re guilty of exactly what you’re criticizing.
I’m disappointed with your endorsement of Evidence for ESSA and how it is set up. The site leads with the strength of the research, “STRONG!” in bright green, but programs with VERY SMALL EFFECT SIZES are included and many educators miss that fact because of the STRONG rating. It leads educators to falsely believe that some of these programs will be effective. Seriously, how will an effect size of 0.13 help struggling children? Especially if it’s an expensive program such as LLI? If instead, the most prominent rating was the effect size, and then you could check the strength of the research, it would be much more helpful. You mention your frustration with people endorsing their favorite programs even though they lack efficacy, but that’s exactly what this site ends up doing. The years-long affiliation with F & P shines through like a beacon.
Thank you for your comments. I agree that the format of Evidence for ESSA gives too much emphasis on programs with small effect sizes. However, the whole idea of the website is to identify programs that do and do not meet the ESSA evidence standards. Those standards focus on statistical significance, not on effect sizes, and only a single significant positive effect with a randomized study is required, as long as there are no studies with significantly negative effects. We cannot arbitrarily ignore the ESSA standards, of course. However, we list the average effect sizes and list programs in order by average effect sizes and number of studies, so any reader can easily select the programs with the largest effect sizes, among those meeting the “Strong” or “Moderate” categories. For this particular blog, my point is that there are many programs that meet the ESSA standards and have large effect sizes, and struggling students need and should receive those programs.
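The qualification rule described above (at least one statistically significant positive effect from a randomized study, no studies with significantly negative effects, and qualifying programs then ordered by average effect size) can be sketched in a few lines. This is a hypothetical illustration only, not the actual Evidence for ESSA code; the function names `qualifies` and `rank_programs` and the study-tuple format are my own invention.

```python
def qualifies(studies):
    """studies: list of (effect_size, is_significant, is_randomized) tuples.

    A program qualifies if at least one randomized study shows a
    significant positive effect and no study shows a significant
    negative effect. (Hypothetical encoding of the rule described above.)
    """
    has_positive_rct = any(es > 0 and sig and rct for es, sig, rct in studies)
    has_negative = any(es < 0 and sig for es, sig, _ in studies)
    return has_positive_rct and not has_negative


def rank_programs(programs):
    """programs: dict mapping program name -> list of study tuples.

    Returns qualifying program names sorted by average effect size,
    largest first, so readers can pick the strongest programs.
    """
    avg = lambda studies: sum(es for es, _, _ in studies) / len(studies)
    eligible = {name: s for name, s in programs.items() if qualifies(s)}
    return sorted(eligible, key=lambda name: avg(eligible[name]), reverse=True)
```

Note that under this rule a program whose average effect size is only 0.13 still qualifies, as long as one randomized study reached significance; that is exactly the tension the commenter raises, and why the listing also reports effect sizes so readers can sort on them.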