Make No Small Plans

“Make no little plans; they have no magic to stir men’s blood, and probably themselves will not be realized. Make big plans, aim high in hope and work, remembering that a noble, logical diagram, once recorded, will never die…”

-Daniel Burnham, American architect, 1910

More than 100 years ago, architect Daniel Burnham expressed an important insight. “Make no little plans,” he said. Many people have said that, one way or another. But Burnham’s insight was that big plans matter because they “have magic to stir men’s blood.” Small plans do not, and for this reason may never even be implemented. Burnham believed that even if big plans fail, they have influence into the future, as little plans do not.

Make no small plans.

In education, we sometimes have big plans. Examples include comprehensive school reform in the 1990s, charter schools in the 2000s, and evidence-based reform today. None of these have yet produced revolutionary positive outcomes, but all of them have captured the public imagination. Even if you are not an advocate of any of these, you cannot ignore them, as they take on a life of their own. When conditions are right, they will return many times, in many forms, and may eventually lead to substantial impacts. In medicine, it was demonstrated in the mid-1800s that germs caused disease and that medicine could advance through rigorous experimentation (think Lister and Pasteur, for example). Yet sterile procedures in operations and disciplined research on practical treatments took 100 years to prevail. The medical profession resisted sterile procedures and evidence-based medicine for many years. Sterile procedures and evidence-based medicine were big ideas. It took a long time for them to take hold, but they did prevail, and remained big ideas through all that time.

Big Plans in Education

In education, as in medicine long ago, we have thousands of important problems, and good work continues (and needs to continue) on most of them. However, at least in American education, there is one crucial problem that dwarfs all others and lends itself to truly big plans. This is the achievement gap between students from middle class backgrounds and those from disadvantaged backgrounds. As noted in my April 25 blog, the achievement gaps between students who qualify for free lunch and those who do not, between African American and White students, and between Hispanic students and non-Hispanic White students all average an effect size of about 0.50. This presents a serious challenge. However, as I pointed out in that blog, there are several programs in existence today capable of adding an effect size of +0.50 to the reading or math achievement of students at risk. All programs that can do this involve one-to-one or one-to-small group tutoring. Tutoring is expensive, but recent research has found that well-trained and well-supervised tutors with BAs, but not necessarily teaching certificates, can obtain the same outcomes as certified teachers do, at half the cost.

Using our own Success for All program with six tutors per school (K-5), high-poverty African American elementary schools in Baltimore obtained effect sizes averaging +0.50 for all students and +0.75 for students in the lowest 25% of their grades (Madden et al., 1993). A follow-up to eighth grade found that achievement outcomes maintained and both retentions and special education placements were cut in half (Borman & Hewes, 2003). We have not had the opportunity to once again implement Success for All with so much tutoring included, but even with fewer tutors, Success for All has had substantial impacts. Cheung et al. (2019) found an average effect size of +0.27 across 28 randomized and matched studies, a more than respectable outcome for a whole-school intervention. For the lowest-achieving students, the average was +0.56.

Knowing that Success for All can achieve these outcomes is important in itself, but it is also an indication that substantial positive effects can be achieved for whole schools, and with sufficient tutors, can equal the entire achievement gaps according to socio-economic status and race. If one program can do this, why not many others?

Imagine that the federal government or other large funders decided to support the development and evaluation of several different ideas. Funders might establish a goal of increasing reading achievement by an effect size of +0.50, or as close as possible to this level, working with high-poverty schools. Funders would seek organizations that have already demonstrated success at an impressive level, but not yet +0.50, who could describe a compelling strategy to increase their impact to +0.50 or more. Depending on the programs’ accomplishments and needs, they might be funded to experiment with enhancements to their promising model. For example, they might add staff, add time (e.g., continue for multiple years), or add additional program components likely to strengthen the overall model. Once programs could demonstrate substantial outcomes in pilots, they might be funded to do a cluster randomized trial. If this experiment shows positive effects approaching +0.50 or more, the developers might receive funding for scale-up. If the outcomes are substantially positive but significantly less than +0.50, the funders might decide to help the developers make changes leading up to a second randomized experiment.

There are many details to be worked out, but the core idea could capture the imagination and energy of educators and public-spirited citizens alike. This time, we are not looking for marginal changes that can be implemented cheaply. This time, we will not quit until we have many proven, replicable programs, each of which is so powerful that it can, over a period of years, remedy the entire achievement gap. This time, we are not making changes in policy or governance and hoping for the best. This time, we are going directly to the schools where the disadvantaged kids are, and we are not declaring victory until we can guarantee such students gains that will give them the same outcomes as those of the middle class kids in the suburbs.

Perhaps the biggest idea of all is the idea that we need big ideas with big outcomes!

Anyway, this is my big plan. What’s yours?

————

Note: Just as I was starting on this blog, I got an email from Ulrich Boser at the Center for American Progress. CAP and the Thomas Fordham Foundation are jointly sponsoring an “Education Moonshot,” including a competition with a grand prize of $10,000 for a “moonshot idea that will revolutionize schooling and dramatically improve student outcomes.” For more on this, please visit the announcement site. Submissions are due August 1st at this online portal and involve telling them in 500 words your, well, big plan.

 

References

Borman, G., & Hewes, G. (2003). Long-term effects and cost effectiveness of Success for All. Educational Evaluation and Policy Analysis, 24(2), 243-266.

Cheung, A., Xie, C., Zhuang, T., & Slavin, R. E. (2019). Success for All: A quantitative synthesis of evaluations. Manuscript submitted for publication.

Madden, N. A., Slavin, R. E., Karweit, N. L., Dolan, L. J., & Wasik, B. A. (1993). Success for All: Longitudinal effects of a restructuring program for inner-city elementary schools. American Educational Research Journal, 30, 123-148.

 

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


Charter Schools? Smarter Schools? Why Not Both?

I recently saw an editorial in the May 29 Washington Post, entitled “Denying Poor Children a Chance,” a pro-charter school opinion piece that makes dire predictions about the damage to poor and minority students that would follow if charter expansion were to be limited.  In education, it is common to see evidence-free opinions for and against charter schools, so I was glad to see actual data in the Post editorial.   In my view, if charter schools could routinely and substantially improve student outcomes, especially for disadvantaged students, I’d be a big fan.  My response to charter schools is the same as my response to everything else in education: Show me the evidence.

The Washington Post editorial cited a widely known 2015 Stanford CREDO study comparing urban charter schools to matched traditional public schools (TPS) in the same districts. Evidence always attracts my attention, so I decided to look into this and other large, multi-district studies. Despite the Post's enthusiasm for the data, the average effect size was only +0.055 for math and +0.04 for reading. By anyone's standards, these are very, very small outcomes. Outcomes for poor, urban, African American students were somewhat higher, at +0.08 for math and +0.06 for reading, but on the other hand, average effect sizes for White students were negative, averaging -0.05 for math and -0.02 for reading. Outcomes for Native American students were also dismal: -0.10 for math, zero for reading. With effect sizes so low, these small differences are probably just different flavors of zero. A CREDO (2013) study of charter schools in 27 states, including non-urban as well as urban schools, found average effect sizes of +0.01 for math and -0.01 for reading. How much smaller can you get?
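One way to grasp how small these effects are is to translate them into percentile terms. Assuming roughly normal score distributions (my simplifying assumption, not CREDO's), an effect size maps to the percentile of the average treated student via the standard normal CDF. A minimal sketch:

```python
from math import erf, sqrt

def percentile_gain(effect_size):
    """Percentile points gained by the average student, assuming
    normally distributed scores (an illustrative assumption)."""
    phi = 0.5 * (1 + erf(effect_size / sqrt(2)))  # standard normal CDF
    return phi * 100 - 50

# CREDO's urban charter effects, in percentile terms:
print(round(percentile_gain(0.055), 1))  # math: ~2.2 percentile points
print(round(percentile_gain(0.04), 1))   # reading: ~1.6 points
# For comparison, an effect size of +0.50:
print(round(percentile_gain(0.50), 1))   # ~19.1 points
```

In other words, an effect size of +0.055 moves the average student from the 50th to roughly the 52nd percentile, while +0.50 would move that student to about the 69th.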

In fact, the CREDO studies have been widely criticized for using techniques that inflate test scores in charter schools. They compare students in charter schools to students in traditional public schools, matching on pretests and ethnicity. This ignores the obvious fact that students in charter schools chose to go there, or their parents chose for them. There is every reason to believe that students who choose to attend charter schools are, on average, higher-achieving, more highly motivated, and better behaved than students who stay in traditional public schools. Gleason et al. (2010) found that students who applied to charter schools started off 16 percentage points higher in reading and 13 percentage points higher in math than others in the same schools who did not apply. Applicants were more likely to be White, less likely to be African American or Hispanic, and less likely to qualify for free lunch. Self-selection is a particular problem in studies of students who choose or are sent to "no-excuses" charters, such as KIPP or Success Academies, because the students or their parents know students will be held to very high standards of behavior and accomplishment, and may be encouraged to leave the school if they do not meet those standards. (This is not a criticism of KIPP or Success Academies, but even when such charter systems use lotteries to select students, the students who show up for the lotteries were at least motivated enough to enter a lottery to attend a very demanding school.)

Well-designed studies of charter schools usually focus on schools that use lotteries to select students, and then they compare the students who were successful in the lottery to those who were not so lucky.  This eliminates the self-selection problem, as students were selected by a random process.  The CREDO studies do not do this, and this may be why their studies report higher (though still very small) effect sizes than those reported by syntheses of studies of students who all applied to charters, but may have been “lotteried in” or “lotteried out” at random.  A very rigorous WWC synthesis of such studies by Gleason et al. (2010) found that middle school students who were lotteried into charter schools in 32 states performed non-significantly worse than those lotteried out, in math (ES=-0.06) and in reading (ES=-0.08).  A 2015 update of the WWC study found very similar, slightly negative outcomes in reading and math.

It is important to note that “no-excuses” charter schools, mentioned earlier, have had more positive outcomes than other charters.  A recent review of lottery studies by Cheng et al. (2017) found effect sizes of +0.25 for math and +0.17 for reading.  However, such “no-excuses” charters are a tiny percentage of all charters nationwide.


Other meta-analyses of studies of achievement outcomes of charter schools also exist, but none found effect sizes as high as the CREDO urban study.  The means of +0.055 for math and +0.04 for reading represent upper bounds for effects of urban charter schools.

Charter Schools or Smarter Schools?

So far, every study of achievement effects of charters has focused on impacts of charters on achievement compared to those of traditional public schools.  However, this should not be the only question.  “Charters” and “non-charters” do not exhaust the range of possibilities.

What if we instead ask this question: Among the range of programs available, which are most likely to be most effective at scale?

To illustrate the importance of this question, consider a study in England, which evaluated a program called Engaging Parents Through Mobile Phones. The program involved texting parents on cell phones to alert them to upcoming tests, inform them about whether students were completing their homework, and tell them what students were being taught in school. A randomized evaluation (Miller et al., 2016) found effect sizes of +0.06 for math and +0.03 for reading, remarkably similar to the urban charter school effects reported by CREDO (2015). The cost of the mobile phone program was £6 per student per year, or about $7.80. If you like the outcomes of charter schools, might you prefer to get the same outcomes for $7.80 per child per year, without all the political, legal, and financial stresses of charter schools?

The point here is that rather than arguing about the size of small charter effects, one could consider charters a “treatment” and compare them to other proven approaches.  In our Evidence for ESSA website, we list 112 reading and math programs that meet ESSA standards for “Strong,” “Moderate,” or “Promising” evidence of effectiveness.  Of these, 107 had effect sizes larger than those CREDO (2015) reports for urban charter schools.  In both math and reading, there are many programs with average effect sizes of +0.20, +0.30, up to more than +0.60.  If applied as they were in the research, the best of these programs could, for example, entirely overcome Black-White and Hispanic-White achievement gaps in one or two years.

A few charter school networks have their own proven educational approaches, but the many charters that do not have proven programs should be looking for them.  Most proven programs work just as well in charter schools as they do in traditional public schools, so there is no reason existing charter schools should not proactively seek proven programs to increase their outcomes.  For new charters, wouldn’t it make sense for chartering agencies to encourage charter applicants to systematically search for and propose to adopt programs that have strong evidence of effectiveness?  Many charter schools already use proven programs.  In fact, there are several that specifically became charters to enable them to adopt or maintain our Success for All whole-school reform program.

There is no reason for any conflict between charter schools and smarter schools.  The goal of every school, regardless of its governance, should be to help students achieve their full potential, and every leader of a charter or non-charter school would agree with this. Whatever we think about governance, all schools, traditional or charter, should get smarter, using proven programs of all sorts to improve student outcomes.

References

Cheng, A., Hitt, C., Kisida, B., & Mills, J. N. (2017). "No excuses" charter schools: A meta-analysis of the experimental evidence on student achievement. Journal of School Choice, 11(2), 209-238.

Clark, M. A., Gleason, P. M., Tuttle, C. C., & Silverberg, M. K. (2015). Do charter schools improve student achievement? Educational Evaluation and Policy Analysis, 37(4), 419-436.

Gleason, P. M., Clark, M. A., Tuttle, C. C., & Dwoyer, E. (2010). The evaluation of charter school impacts. Washington, DC: What Works Clearinghouse.

Miller, S., Davison, J., Yohanis, J., Sloan, S., Gildea, A., & Thurston, A. (2016). Texting parents: Evaluation report and executive summary. London: Education Endowment Foundation.

Washington Post: Denying poor children a chance. [Editorial]. (May 29, 2019). The Washington Post, A16.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Could Proven Programs Eliminate Gaps in Elementary Reading Achievement?

What if every child in America could read at grade level or better? What if the number of students in special education for learning disabilities, or retained in grade, could be cut in half?

What if students who become behavior problems or give up on learning because of nothing more than reading difficulties could instead succeed in reading and no longer be frustrated by failure?

Today these kinds of outcomes are only pipe dreams. Despite decades of effort and billions of dollars directed toward remedial and special education, reading levels have barely increased.  Gaps between middle class and economically disadvantaged students remain wide, as do gaps between ethnic groups. We’ve done so much, you might think, and nothing has really worked at scale.

Yet today we have many solutions to the problems of struggling readers, solutions so effective that if widely and effectively implemented, they could substantially change not only the reading skills, but the life chances of students who are struggling in reading.


How do I know this is possible? The answer is that the evidence is there for all to see.

This week, my colleagues and I released a review of research on programs for struggling readers. The review, written by Amanda Inns, Cynthia Lake, Marta Pellegrini, and myself, uses academic language and rigorous review methods. But you don’t have to be a research expert to understand what we found out. In ten minutes, just reading this blog, you will know what needs to be done to have a powerful impact on struggling readers.

Everyone knows that there are substantial gaps in student reading performance according to social class and race. According to the National Assessment of Educational Progress, or NAEP, here are key gaps in terms of effect sizes at fourth grade:

Gaps in Effect Sizes

No free/reduced-price lunch vs. free/reduced-price lunch   0.56
White vs. African American                                 0.52
White vs. Hispanic                                         0.46

These are big differences. In order to eliminate these gaps, we’d have to provide schools serving disadvantaged and minority students with programs or services sufficient to increase their reading scores by about a half standard deviation. Is this really possible?
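The effect sizes above are standardized mean differences (often called Cohen's d): the gap between two group means divided by their pooled standard deviation. A minimal sketch of the arithmetic, using invented scale scores purely for illustration:

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / sqrt(pooled_var)

# Hypothetical NAEP-like scores (the numbers are invented): a 17.5-point
# mean gap against a 35-point standard deviation is an effect size of 0.50.
print(round(cohens_d(222.5, 35, 1000, 205.0, 35, 1000), 2))  # 0.5
```

So "closing a gap of 0.50" means raising the lower group's mean by half a standard deviation on whatever test is used.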

Can We Really Eliminate Such Big and Longstanding Gaps?

Yes, we can. And we can do it cost-effectively.

Our review examined thousands of studies of programs intended to improve the reading performance of struggling readers. We found 59 studies of 39 different programs that met very high standards of research quality. 73% of the qualifying studies used random assignment to experimental or control groups, just as the most rigorous medical studies do. We organized the programs into response to intervention (RTI) tiers:

Tier 1 means whole-class programs, not just for struggling readers

Tier 2 means targeted services for students who are struggling to read

Tier 3 means intensive services for students who have serious difficulties.

Our categories were as follows:

Multi-Tier (Tier 1 + tutoring for students who need it)

Tier 1:

  • Whole-class programs

Tier 2:

  • Technology programs
  • One-to-small group tutoring

Tier 3:

  • One-to-one tutoring

We are not advocating for RTI itself, because the data on RTI are unclear. But it is just common sense to use proven programs with all students, then proven remedial approaches with struggling readers, then intensive services for students for whom Tier 2 is not sufficient.
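The common-sense sequence just described, proven whole-class instruction for all students, then small-group tutoring for those still struggling, then one-to-one tutoring for those who need even more, amounts to a simple escalation rule. A sketch (the benchmark logic is my illustration, not a prescribed RTI procedure):

```python
def next_tier(current_tier, met_benchmark):
    """Illustrative escalation rule: students who meet the reading
    benchmark return to Tier 1 (whole-class instruction only); those
    who do not move up one tier, capped at Tier 3 (one-to-one)."""
    if met_benchmark:
        return 1
    return min(current_tier + 1, 3)

# A struggling reader who does not respond to Tier 1 gets small-group
# tutoring (Tier 2); a non-responder there gets one-to-one (Tier 3).
print(next_tier(1, met_benchmark=False))  # 2
print(next_tier(2, met_benchmark=False))  # 3
print(next_tier(3, met_benchmark=True))   # 1
```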

Do We Have Proven Programs Able to Overcome the Gaps?

The table below shows average effect sizes for specific reading approaches. Wherever you see effect sizes that approach or exceed +0.50, you are looking at proven solutions to the gaps, or at least programs that could become a component in a schoolwide plan to ensure the success of all struggling readers.

Programs That Work for Struggling Elementary Readers

Program                                     Grades   No. of Studies   Mean Effect Size
Multi-Tier Approaches
   Success for All                          K-5      3                +0.35
   Enhanced Core Reading Instruction        1        1                +0.24
Tier 1 – Classroom Approaches
   Cooperative Integrated Reading
      & Composition (CIRC)                  2-6      3                +0.11
   PALS                                     1        1                +0.65
Tier 2 – One-to-Small Group Tutoring
   Read, Write, & Type (T 1-3)              1        1                +0.42
   Lindamood (T 1-3)                        1        1                +0.65
   SHIP (T 1-3)                             K-3      1                +0.39
   Passport to Literacy (TA 1-4/7)          4        4                +0.15
   Quick Reads (TA 1-2)                     2-3      2                +0.22
Tier 3 – One-to-One Tutoring
   Reading Recovery (T)                     1        3                +0.47
   Targeted Reading Intervention (T)        K-1      2                +0.50
   Early Steps (T)                          1        1                +0.86
   Lindamood (T)                            K-2      1                +0.69
   Reading Rescue (T or TA)                 1        1                +0.40
   Sound Partners (TA)                      K-1      2                +0.43
   SMART (PV)                               K-1      1                +0.40
   SPARK (PV)                               K-2      1                +0.51

Key:    T: Certified teacher tutors

TA: Teaching assistant tutors

PV: Paid volunteers (e.g., AmeriCorps members)

1-X: For small group tutoring, the usual group size for tutoring (e.g., 1-2, 1-4)

(For more information on each program, see www.evidenceforessa.org)

The table is a road map to eliminating the achievement gaps that our schools have wrestled with for so long. It only lists programs that succeeded at a high level, relative to others at the same tier levels. See the full report or www.evidenceforessa.org for information on all programs.

It is important to note that there is little evidence of the effectiveness of tutoring in grades 3-5. Almost all of the evidence is from grades K-2. However, studies done in England in secondary schools have found positive effects of three reading tutoring programs in the English equivalent of U.S. grades 6-7. These findings suggest that when well-designed tutoring programs for grades 3-5 are evaluated, they will also show very positive impacts. See our review on secondary reading programs at www.bestevidence.org for information on these English middle school tutoring studies. On the same website, you can also see a review of research on elementary mathematics programs, which reports that most of the successful studies of tutoring in math took place in grades 2-5, another indicator that reading tutoring is also likely to be effective in these grades.

Some of the individual programs have shown effects large enough to overcome gaps all by themselves if they are well implemented (i.e., ES = +0.50 or more). Others have effect sizes lower than +0.50 but if combined with other programs elsewhere on the list, or if used over longer time periods, are likely to eliminate gaps. For example, one-to-one tutoring by certified teachers is very effective, but very expensive. A school might implement a Tier 1 or multi-tier approach to solve all the easy problems inexpensively, then use cost-effective one-to-small group methods for students with moderate reading problems, and only then use one-to-one tutoring with the small number of students with the greatest needs.

Schools, districts, and states should consider the availability, practicality, and cost of these solutions to arrive at a workable solution. They then need to make sure that the programs are implemented well enough and long enough to obtain the outcomes seen in the research, or to improve on them.

But the inescapable conclusion from our review is that the gaps can be closed, using proven models that already exist. That’s big news, news that demands big changes.

Photo credit: Courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Mislabeled as Disabled

Kenny is a 10th grader in the Baltimore City Public Schools. He is an African American from a disadvantaged neighborhood, attending a high school that requires high grades and test scores. He has good attendance, and has never had any behavior problems. A good kid, by all accounts but one.

Kenny reads at the kindergarten level.

Kenny has spent most of his time in school in special education. He received extensive and expensive services, following an Individualized Education Program (IEP) made and updated over time just for him, tailored to his needs.

Yet despite all of this, he is still reading at the kindergarten level in 10th grade.

Kenny’s story starts off a remarkable book, Mislabeled as Disabled, by my friend Kalman (Buzzy) Hettleman. A lawyer by training, Hettleman has spent many years volunteering in Baltimore City schools to help children being considered for special education obtain the targeted assistance they need to either avoid special education or succeed in it. What he has seen, and describes in detail in his book, is nothing short of heartbreaking. In fact, it makes you furious. Here is a system designed to improve the lives of vulnerable children, spending vast amounts of money to enable talented and hard-working teachers to work with children. Yet the outcomes are appalling. It’s not just Kenny. Thousands of students in Baltimore, and in every other city and state, are failing. These are mostly children with specific learning disabilities or other mild, “high-incidence” categories. Or they are struggling readers not in special education who are not doing much better. Many of the students who are categorized as having mild disabilities are not disabled, and would have done at least as well with appropriate services in the regular classroom. Instead, what they get is an IEP. Such children are “mislabeled as disabled,” and obtain little benefit from the experience.

Buzzy has worked at many levels of this system. He was on the Baltimore school board for many years. He taught social work at the University of Maryland. He has been an activist, fighting relentlessly for the rights of struggling students (and at 84 years of age still is). Most recently, he has served on the Kirwan Commission, appointed to advise the state legislature on reform policies and new funding formulas for the state’s schools. Buzzy has seen it all, from every angle. His book is deeply perceptive and informed, and makes many recommendations for policy and practice. But his message is infuriating. What he documents is a misguided system that is obsessed with rules and policies but pays little attention to what actually works for struggling learners.

What most struggling readers need is proven, well-implemented programs in a Response to Intervention (RTI) framework. Mostly, this boils down to tutoring. Most struggling students can benefit enormously from one-to-small group tutoring by well-qualified teaching assistants (paraprofessionals), so tutoring need not be terribly expensive. Others may need certified teachers or one-to-one tutoring. Some struggling readers can succeed with well-implemented, proven strategies in the regular classroom (Tier 1). Those who do not succeed in Tier 1 should receive proven one-to-small group tutoring approaches (Tier 2). If that is not sufficient, a small number of students may need one-to-one tutoring (Tier 3), although research tells us that one-to-small group is almost as effective as one-to-one, and is a lot less expensive.

Tutoring is the missing dynamic in the special education system for struggling readers, whether or not they have IEPs. Yes, some districts do provide tutoring to struggling readers, and if the tutoring model they implement is proven in rigorous research it is generally effective. The problem is that there are few schools or districts that provide enough tutoring to enough struggling readers to move the needle.

Buzzy described a policy he devised with Baltimore’s then-superintendent, Andres Alonso. They called it “one year plus.” It was designed to ensure that all students with high-incidence disabilities, such as those with specific learning disabilities, must receive instruction sufficient to enable them to make one year’s progress or more every 12 months. If students could do this, they would, over time, close the gap between their reading level and their grade level. This was a radical idea, and its implementation fell far short. But the concept is exactly right. Students with mild disabilities, who are the majority of those with IEPs, can surely make such gains. In recent years, research has identified a variety of tutoring approaches that can ensure one year or more of progress in a year for most students with IEPs, at a cost a state like Maryland could surely afford.

Mislabeled as Disabled is written about Buzzy’s personal experience in Baltimore. However, what he describes is happening in districts and states throughout the U.S., rich as well as poor. This dismal cycle can stop anywhere we choose to stop it. Buzzy Hettleman describes in plain, powerful language how this could happen, and most importantly, why it must.

Reference

Hettleman, K. R. (2019). Mislabeled as disabled. New York: Radius.

 This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Miss Evers’ Boys (And Girls)

Most people who have ever been involved with human subjects’ rights know about the Tuskegee Syphilis Study. This was a study of untreated syphilis, in which 622 poor, African American sharecroppers, some with syphilis and some without, were evaluated over 40 years.

The study, funded and overseen by the U.S. Public Health Service, started in 1932. In 1940, researchers elsewhere discovered that penicillin cured syphilis. By 1947, penicillin was “standard of care” for syphilis, meaning that patients with syphilis received penicillin as a matter of course, anywhere in the U.S.

But not in Tuskegee. Not in 1940. Not in 1947. Not until 1972, when a whistle-blower made the press aware of what was happening. In the meantime, many of the men died of syphilis, 40 of their wives contracted the disease, and 19 of their children were born with congenital syphilis. The men had never even been told the nature of the study, they were not informed in 1940 or 1947 that there was now a cure, and they were not offered that cure. Leaders of the U.S. Public Health Service were well aware that there was a cure for syphilis, but for various reasons, they did not stop the study. Not in 1940, not in 1947, not even when whistle-blowers told them what was going on. They stopped it only when the press found out.


In 1997 a movie on the Tuskegee Syphilis Study was released. It was called Miss Evers’ Boys. Miss Evers (actually, Eunice Rivers) was the African-American public health nurse who was the main point of contact for the men over the whole 40 years. She deeply believed that she, and the study, were doing good for the men and their community, and she formed close relationships with them. She believed in the USPHS leadership, and thought they would never harm her “boys.”

The Tuskegee study was such a crime and scandal that it utterly changed procedures for medical research in the U.S. and most of the world. Today, participants in research with any level of risk, or their parents if they are children, must give informed consent for participation in research, and even if they are in a control group, they must receive at least “standard of care”: currently accepted, evidence-based practices.

If you’ve read my blogs, you’ll know where I’m going with this. Failure to use proven educational treatments, unlike failure to use proven medical treatments, is rarely fatal, at least not in the short term. But otherwise, our profession carries out Tuskegee crimes all the time. It condemns failing students to ineffective programs and practices when effective ones are known. It fails to even inform parents or children, much less teachers and principals, that proven programs exist: Proven, practical, replicable solutions for the problems they face every day.

Like Miss Rivers, front-line educators care deeply about their charges. Most work very hard and give their absolute best to help all of their children to succeed. Teaching is too much hard work and too little money for anyone to do it for any reason but for the love of children.

But somewhere up the line, where the big decisions are made, where the people are who know or who should know which programs and practices are proven to work and which are not, this information just does not matter. There are exceptions, real heroes, but in general, educational leaders who believe that schools should use proven programs have to fight hard for this position. The problem is that the vast majority of educational expenditures—textbooks, software, professional development, and so on—lack even a shred of evidence. Not a scintilla. Some have evidence that they do not work. Yet advocates for those expenditures (such as sales reps and educators who like the programs) argue strenuously for programs with no evidence, and it’s just easier to go along. Whole states frequently adopt or require textbooks, software, and services of no known value in terms of improving student achievement. The ESSA evidence standards were intended to focus educators on evidence and incentivize use of proven programs, at least for the lowest-achieving 5% of schools in each state, but so far it’s been slow going.

Yet there are proven alternatives. Evidence for ESSA (www.evidenceforessa.org) lists more than 100 PK-12 reading and math programs that meet the top three ESSA evidence standards. The majority meet the top level, “Strong.” And most of the programs were researched with struggling students. Yet I see no rush to find out about proven programs. I am hearing a lot of new interest in evidence, but my suspicion, growing every day, is that many educational leaders do not really care about the evidence, but are instead just trying to find a way to keep using the programs and providers they already have and already like, and are looking for evidence to justify keeping things as they are.

Every school has some number of struggling students. If these children are provided with the same approaches that have not worked with them or with millions like them, it is highly likely that most will fail, with all the consequences that flow from school failure: Retention. Assignment to special education. Frustration. Low expectations. Dropout. Limited futures. Poverty. Unemployment. There are 50 million children in grades PK to 12 in the U.S. This is the grinding reality for perhaps 10 to 20 million of them. Solutions are readily available, but not known or used by caring and skilled front-line educators.

In what way is this situation unlike Tuskegee in 1940?

Photo credit: By National Archives Atlanta, GA (U.S. government; originally from National Archives) [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Response to Proven Instruction (RTPI)

Response to Intervention (RTI) is one of those great policy ideas that caring policymakers come up with: carefully crafted and enthusiastically announced, then inconsistently implemented, evaluated at great cost, and found to have minimal impacts, if any. In the case of RTI, the policy is genuinely sensible, but the 2015 MDRC evaluation (Balu et al., 2015) found that implementation was poor and outcomes were nil, at least as measured in a much-criticized regression discontinuity design (see Fuchs & Fuchs, 2017). An improvement on RTI, multi-tier systems of support (MTSS), adds some good ideas, but I don’t think it will be enough.

The problem, I think, relates to something I wrote about when the MDRC study appeared. I gave the phenomenon a name: Bob’s Law, which states that any policy or intervention that is not well defined will not be well implemented, and therefore will not work, no matter how sensible it may be. In the case of RTI/MTSS, everyone has a pretty good idea what “tier 1, tier 2, and tier 3” are in concept, but no one knows what they actually consist of. So each district, school, and teacher makes up its own strategy: general teaching, followed by remediation if needed, followed by intensive services if necessary. Since the actual programs provided in each tier are not specified, everyone does pretty much what they would have done if RTI had not existed. And if RTI and non-RTI teachers are drawing from the same universally accepted basket of teaching methods, there is no reason to expect better outcomes from the RTI group. This is not to say that standard methods are deficient, but why would we expect outcomes to differ if practices don’t?

Response to Proven Instruction (RTPI)

I recently wrote an article proposing a new approach to RTI/MTSS (Slavin, Inns, Pellegrini, & Lake, 2018).  The idea is simple. Why not insist that struggling learners receive tier 1, tier 2, and (if necessary) tier 3 services, each of which is proven to work in rigorous research?  In the article I listed numerous tier 2 and tier 3 services for reading and math that have all been successfully evaluated, with significant outcomes and effect sizes in excess of +0.20.  Every one of these programs involved tutoring, one to one or one to small group, by teachers or paraprofessionals. I also listed tier 1 services found to be very effective for struggling learners.  All of these programs are described at www.evidenceforessa.org.

[Figure 1]

If there are so many effective approaches for struggling learners, these should form the core of RTI/MTSS services. I would argue that tier 1 should be composed of proven whole class or whole school programs; tier 2, one-to-small group tutoring by well-qualified paraprofessionals using proven approaches; and tier 3, one-to-one tutoring by paraprofessionals or teachers using proven approaches (see Figure 1).

The result would be substantial improvements in the achievement of struggling learners, and reductions in special education placements and retentions. These outcomes are highly likely, as long as implementation is strong, because the programs themselves are proven to work. Over time, better and more cost-effective programs would surely appear, but we could do a lot better today with the programs we have now.

Millions of children live in the cruel borderlands between low reading groups and special education. These students are perfectly normal, except from 9:00 to 3:00 on school days. They start school with enthusiasm, but then slide over the years into failure, despair, and then dropout or delinquency.  If we have proven approaches and can use them in a coherent system to ensure success for all of these children, why would we not use them?

Children have a right to have every chance to succeed.  We have a moral imperative to see that they receive what they need, whatever it takes.

References

Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education, Institute of Education Sciences, NCEE 2016-4000.

Fuchs, D., & Fuchs, L.S. (2017). Critique of the National Evaluation of Response to Intervention: A case for simpler frameworks. Exceptional Children, 83 (3), 1-14.

Slavin, R.E., Inns, A., Pellegrini, M., & Lake, C. (2018). Response to proven instruction (RTPI): Enabling struggling learners. Manuscript submitted for publication.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Two Years of Second Grade? Really?

In a recent blog, Mike Petrilli, President of the Fordham Institute, floated an interesting idea. Given the large numbers of students in high-poverty schools who finish elementary school far behind, what if we gave them all a second year of second grade (he calls it “2.5”)? This, he says, would give disadvantaged schools another year to catch kids up, without all the shame and fuss of retaining them.

blog_10-18-18_2ndgrade_500x333

At one level, I love this idea, but not on its merits. One more year of second grade would cost school districts or states the national average per-pupil cost of $11,400. So would I like to have $11,400 more for every child in a school district serving many disadvantaged students? You betcha. But another year of second grade is not in the top hundred things I’d do with it.

Just to give you an idea of what we’re talking about, my state, Maryland, has about 900,000 students in grades K-12. Adding a year of second grade for all of them would cost about $10,260,000,000. If half of them are, say, in Title 1 schools (one indicator of high poverty), that’s roughly $5 billion and change. Thanks, Mike! To be fair, this $5 billion would be spent over a 12-year period, as students go through year 2.5, so let’s say only a half billion a year.
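For readers who want to check my math, here is the back-of-envelope calculation in Python, using only the figures in the paragraph above (the per-year split assumes the same 12-year spread I describe):

```python
# Back-of-envelope check of the "grade 2.5" cost estimate.
# All figures come from the text: ~900,000 Maryland K-12 students,
# $11,400 national average per-pupil cost.
students_md = 900_000
per_pupil = 11_400

all_students = students_md * per_pupil      # cost if every student got grade 2.5
title1_share = all_students / 2             # assume ~half attend Title I schools
per_year = title1_share / 12                # spread over 12 years of cohorts

print(f"All students:  ${all_students:,.0f}")   # $10,260,000,000
print(f"Title I half:  ${title1_share:,.0f}")   # $5,130,000,000
print(f"Per year:      ${per_year:,.0f}")       # $427,500,000
```

So the "only a half billion a year" in the text is, if anything, slightly generous: the strict figure is about $430 million a year.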

What could Maryland’s schools do with a half billion dollars a year? Actually, I wrote them a plan, arguing that if Maryland were realistically planning to ensure the success of every child on the state tests, they could do it, but it would not be cheap.

What Maryland, or any state, could do with serious money would be to spend it on proven programs, especially for struggling learners. As one example, consider tutoring. The well-known Reading Recovery program, for instance, uses a very well-trained tutor working one-to-one with a struggling first grader for about 16 weeks. The cost was estimated by Hollands et al. (2016) at roughly $4600. So Petrilli’s second grade offer could be traded for about three years of tutoring, not just for struggling first graders, but for every single student in a high-poverty school. And there are much less expensive forms of tutoring. It would be easy to figure out how every single student in, say, Baltimore, could receive tutoring every single year of elementary school using paraprofessionals and small groups for students with less serious problems and one-to-one tutoring for those with more serious problems (see Slavin, Inns, & Pellegrini, 2018).

Our Evidence for ESSA website lists many proven, highly effective approaches in reading and math. These are all ready to go; the only reason that they are not universally used is that they cost money, or so I assume. And not that much money, in the grand scheme of things.

I don’t understand why, even in this thought experiment, Mike Petrilli is unwilling to consider the possibility of spending serious money on programs and practices that have actually been proven to work. But in case anyone wants to follow up on his idea, or at least pilot it in Maryland, please mail me $5 billion, and I will make certain that every student in every high-poverty school in the state does in fact reach the end of elementary school performing at or near grade level. Just don’t expect to see double when you check in on our second graders.

References

Hollands, F. M., Kieffer, M. J., Shand, R., Pan, Y., Cheng, H., & Levin, H. M. (2016). Cost-effectiveness analysis of early reading programs: A demonstration with recommendations for future research. Journal of Research on Educational Effectiveness, 9(1), 30-53.

Slavin, R. E., Inns, A., Pellegrini, M., & Lake, C. (2018). Response to proven instruction (RTPI): Enabling struggling learners. Manuscript submitted for publication.

Photo credit: By Petty Officer 1st Class Jerry Foltz (https://www.dvidshub.net/image/383907) [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.