Miss Evers’ Boys (And Girls)

Most people who have ever been involved with human subjects’ rights know about the Tuskegee Syphilis Study. This was a study of untreated syphilis, in which 622 poor, African American sharecroppers, some with syphilis and some without, were evaluated over 40 years.

The study, funded and overseen by the U.S. Public Health Service, started in 1932. By 1943, researchers elsewhere had shown that penicillin cured syphilis. By 1947, penicillin was “standard of care” for syphilis, meaning that patients with syphilis received penicillin as a matter of course, anywhere in the U.S.

But not in Tuskegee. Not in 1943. Not in 1947. Not until 1972, when a whistle-blower made the press aware of what was happening. In the meantime, many of the men died of syphilis, 40 of their wives contracted the disease, and 19 of their children were born with congenital syphilis. The men were never even told the nature of the study; they were not informed in 1943 or 1947 that there was now a cure, and they were not offered that cure. Leaders of the U.S. Public Health Service were well aware that there was a cure for syphilis, but for various reasons they did not stop the study. Not in 1943, not in 1947, not even when whistle-blowers told them what was going on. They stopped it only when the press found out.

In 1997 a movie on the Tuskegee Syphilis Study was released. It was called Miss Evers’ Boys. Miss Evers (actually, Eunice Rivers) was the African-American public health nurse who was the main point of contact for the men over the whole 40 years. She deeply believed that she, and the study, were doing good for the men and their community, and she formed close relationships with them. She believed in the USPHS leadership, and thought they would never harm her “boys.”

The Tuskegee study was such a crime and scandal that it utterly changed procedures for medical research in the U.S. and most of the world. Today, participants in research involving any level of risk (or their parents, if the participants are children) must give informed consent, and even if they are in a control group, they must receive at least “standard of care”: currently accepted, evidence-based practices.

If you’ve read my blogs, you’ll know where I’m going with this. Unlike failure to use proven medical treatments, failure to use proven educational treatments is rarely fatal, at least not in the short term. But otherwise, our profession carries out Tuskegee crimes all the time. It condemns failing students to ineffective programs and practices when effective ones are known. It fails even to inform parents or children, much less teachers and principals, that proven programs exist: proven, practical, replicable solutions for the problems they face every day.

Like Miss Rivers, front-line educators care deeply about their charges. Most work very hard and give their absolute best to help all of their children to succeed. Teaching is too much hard work and too little money for anyone to do it for any reason but for the love of children.

But somewhere up the line, where the big decisions are made, where the people sit who know, or should know, which programs and practices are proven to work and which are not, this information just does not matter. There are exceptions, real heroes, but in general, educational leaders who believe that schools should use proven programs have to fight hard for this position. The problem is that the vast majority of educational expenditures (textbooks, software, professional development, and so on) lack even a shred of evidence. Not a scintilla. Some have evidence that they do not work. Yet advocates for those expenditures, such as sales reps and educators who like the programs, argue strenuously for programs with no evidence, and it’s just easier to go along. Whole states frequently adopt or require textbooks, software, and services of no known value for improving student achievement. The ESSA evidence standards were intended to focus educators on evidence and to incentivize the use of proven programs, at least for the lowest-achieving 5% of schools in each state, but so far it’s been slow going.

Yet there are proven alternatives. Evidence for ESSA (www.evidenceforessa.org) lists more than 100 PK-12 reading and math programs that meet the top three ESSA evidence standards. The majority meet the top level, “Strong.” And most of the programs were researched with struggling students. Yet I do not perceive a rush to find out about proven programs. I am hearing a lot of new interest in evidence, but my suspicion, growing every day, is that many educational leaders do not really care about the evidence, but are instead just trying to keep using the programs and providers they already have and already like, looking for evidence to justify keeping things as they are.

Every school has some number of struggling students. If these children are provided with the same approaches that have not worked with them or with millions like them, it is highly likely that most will fail, with all the consequences that flow from school failure: Retention. Assignment to special education. Frustration. Low expectations. Dropout. Limited futures. Poverty. Unemployment. There are 50 million children in grades PK to 12 in the U.S. This is the grinding reality for perhaps 10 to 20 million of them. Solutions are readily available, but not known or used by caring and skilled front-line educators.

In what way is this situation unlike Tuskegee in 1940?

Photo credit: National Archives Atlanta, GA (U.S. government) [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Response to Proven Instruction (RTPI)

Response to Intervention (RTI) is one of those great policy ideas that caring policymakers always come up with: carefully crafted and enthusiastically announced, then inconsistently implemented, evaluated at great cost, and found to have minimal impacts, if any. In the case of RTI, the policy is genuinely sensible, but the 2015 MDRC evaluation (Balu et al., 2015) found that implementation was poor and outcomes were nil, at least as measured in a much-criticized regression discontinuity design (see Fuchs & Fuchs, 2017). An improvement on RTI, multi-tier systems of support (MTSS), adds some good ideas, but I don’t think it will be enough.

The problem, I think, relates to something I wrote about at the time the MDRC study appeared. In fact, I gave the phenomenon a name: Bob’s Law, which states that any policy or intervention that is not well defined will not be well implemented, and therefore will not work, no matter how sensible it may be. In the case of RTI/MTSS, everyone has a pretty good idea what “tier 1, tier 2, and tier 3” are in concept, but no one knows what they are actually composed of. So each district, school, and teacher makes up its own strategies: general teaching, followed by remediation if needed, followed by intensive services if necessary. Since the actual programs provided in each tier are not specified, everyone does pretty much what they would have done if RTI had not existed. And guess what? If RTI and non-RTI teachers are drawing from the same universally accepted basket of teaching methods, there is no reason to believe that RTI outcomes will be better than ordinary practice. This is not to say that standard methods are deficient, but why would we expect outcomes to differ if practices don’t?

Response to Proven Instruction (RTPI)

I recently wrote an article proposing a new approach to RTI/MTSS (Slavin, Inns, Pellegrini, & Lake, 2018).  The idea is simple. Why not insist that struggling learners receive tier 1, tier 2, and (if necessary) tier 3 services, each of which is proven to work in rigorous research?  In the article I listed numerous tier 2 and tier 3 services for reading and math that have all been successfully evaluated, with significant outcomes and effect sizes in excess of +0.20.  Every one of these programs involved tutoring, one to one or one to small group, by teachers or paraprofessionals. I also listed tier 1 services found to be very effective for struggling learners.  All of these programs are described at www.evidenceforessa.org.

[Figure 1]

If there are so many effective approaches for struggling learners, these should form the core of RTI/MTSS services. I would argue that tier 1 should be composed of proven whole class or whole school programs; tier 2, one-to-small group tutoring by well-qualified paraprofessionals using proven approaches; and tier 3, one-to-one tutoring by paraprofessionals or teachers using proven approaches (see Figure 1).
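
To make the tier logic concrete, here is a minimal sketch, in Python, of the escalation rule such a system implies. The benchmark score and the escalate-one-tier-per-assessment rule are hypothetical placeholders I have invented for illustration; they are not part of the proposal itself.

```python
# A minimal sketch of an RTPI-style escalation rule, assuming periodic
# benchmark assessments. The tier descriptions follow the proposal above;
# the benchmark score and the one-tier-per-period escalation rule are
# hypothetical placeholders, invented for illustration.

PROVEN_TIERS = {
    1: "proven whole-class or whole-school program",
    2: "proven one-to-small-group tutoring by a paraprofessional",
    3: "proven one-to-one tutoring by a paraprofessional or teacher",
}

def assign_tier(period_scores, benchmark=40.0):
    """Escalate one tier after each assessment period in which the
    student scores below benchmark, capping services at tier 3."""
    tier = 1
    for score in period_scores:
        if score < benchmark and tier < 3:
            tier += 1
    return tier

# Example: a student below benchmark in two consecutive periods
tier = assign_tier([32.0, 35.5])
print(f"Tier {tier}: {PROVEN_TIERS[tier]}")
```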

The result would have to be substantial improvements in the achievement of struggling learners, and reductions in special education and retentions.  These outcomes are assured, as long as implementation is strong, because the programs themselves are proven to work.  Over time, better and more cost-effective programs would be sure to appear, but we could surely do a lot better today with the programs we have now.

Millions of children live in the cruel borderlands between low reading groups and special education. These students are perfectly normal, except from 9:00 to 3:00 on school days. They start school with enthusiasm, but slide over the years into failure and despair, and then into dropout or delinquency. If we have proven approaches and can use them in a coherent system to ensure success for all of these children, why would we not use them?

Children have a right to have every chance to succeed.  We have a moral imperative to see that they receive what they need, whatever it takes.

References

Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading (NCEE 2016-4000). Washington, DC: U.S. Department of Education, Institute of Education Sciences.

Fuchs, D., & Fuchs, L. S. (2017). Critique of the National Evaluation of Response to Intervention: A case for simpler frameworks. Exceptional Children, 83(3), 1-14.

Slavin, R.E., Inns, A., Pellegrini, M., & Lake, C. (2018). Response to proven instruction (RTPI): Enabling struggling learners. Manuscript submitted for publication.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Two Years of Second Grade? Really?

In a recent blog post, Mike Petrilli, President of the Fordham Institute, floated an interesting idea. Given the large numbers of students in high-poverty schools who finish elementary school far behind, what if we gave them all a second year of second grade (he calls it “2.5”)? This, he says, would give disadvantaged schools another year to catch kids up, without all the shame and fuss of retaining them.

At one level, I love this idea, but not on its merits. One more year of second grade would cost school districts or states the national average per-pupil cost of $11,400. So would I like to have $11,400 more for every child in a school district serving many disadvantaged students? You betcha. But another year of second grade is not in the top hundred things I’d do with it.

Just to give you an idea of what we’re talking about, my state, Maryland, has about 900,000 students in grades K-12. Adding a year of second grade for all of them would cost about $10,260,000,000. If half of them are, say, in Title I schools (one indicator of high poverty), that’s roughly $5 billion and change. Thanks, Mike! To be fair, this $5 billion would be spent over a 12-year period, as successive cohorts pass through year 2.5, so let’s say only about a half billion a year.
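
For anyone who wants to check my math, here is the back-of-envelope arithmetic as a small Python sketch (all figures are the approximations used above):

```python
# Back-of-envelope check of the Maryland figures cited above.
students_k12 = 900_000        # approximate Maryland K-12 enrollment
per_pupil = 11_400            # national average per-pupil cost

all_students = students_k12 * per_pupil   # an extra year for every student
title1_half = all_students / 2            # roughly half in Title I schools
per_year = title1_half / 12               # spread over a 12-year period

print(f"All students: ${all_students:,.0f}")   # $10,260,000,000
print(f"Title I half: ${title1_half:,.0f}")    # ~$5.1 billion
print(f"Per year:     ${per_year:,.0f}")       # ~$430 million a year
```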

What could Maryland’s schools do with a half billion dollars a year? Actually, I wrote them a plan, arguing that if Maryland were realistically planning to ensure the success of every child on the state tests, it could do so, but it would not be cheap.

What Maryland, or any state, could do with serious money would be to spend it on proven programs, especially for struggling learners. As one example, consider tutoring. The well-known Reading Recovery program uses a very well-trained tutor working one-to-one with a struggling first grader for about 16 weeks, at a cost estimated by Hollands et al. (2016) at roughly $4,600 per child. So Petrilli’s second-grade offer could be traded for about two and a half years of tutoring, not just for struggling first graders, but for every single student in a high-poverty school. And there are much less expensive forms of tutoring. It would be easy to figure out how every student in, say, Baltimore could receive tutoring in every year of elementary school, using paraprofessionals and small groups for students with less serious problems and one-to-one tutoring for those with more serious problems (see Slavin, Inns, Pellegrini, & Lake, 2018).
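
The trade is easy to verify with the two cost figures just cited; a quick sketch:

```python
# How much tutoring one extra school year would buy, per student,
# using the cost figures cited above.
extra_year = 11_400   # national average per-pupil cost of a school year
tutoring = 4_600      # Hollands et al. (2016) estimate for Reading Recovery

print(f"{extra_year / tutoring:.1f} years of one-to-one tutoring")  # ~2.5
```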

Our Evidence for ESSA website lists many proven, highly effective approaches in reading and math. These are all ready to go; the only reason that they are not universally used is that they cost money, or so I assume. And not that much money, in the grand scheme of things.

I don’t understand why, even in this thought experiment, Mike Petrilli is unwilling to consider the possibility of spending serious money on programs and practices that have actually been proven to work. But in case anyone wants to follow up on his idea, or at least pilot it in Maryland, please mail me $5 billion, and I will make certain that every student in every high-poverty school in the state does in fact reach the end of elementary school performing at or near grade level. Just don’t expect to see double when you check in on our second graders.

References

Hollands, F. M., Kieffer, M. J., Shand, R., Pan, Y., Cheng, H., & Levin, H. M. (2016). Cost-effectiveness analysis of early reading programs: A demonstration with recommendations for future research. Journal of Research on Educational Effectiveness, 9(1), 30-53.

Slavin, R. E., Inns, A., Pellegrini, M., & Lake, C. (2018). Response to proven instruction (RTPI): Enabling struggling learners. Manuscript submitted for publication.

Photo credit: By Petty Officer 1st Class Jerry Foltz (https://www.dvidshub.net/image/383907) [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Beyond the Spaghetti Bridge: Why Response to Intervention is Not Enough

I know an engineer at Johns Hopkins University who invented the Spaghetti Bridge Challenge. Teams of students are given dry, uncooked spaghetti and glue, and are challenged to build a bridge over a 500-millimeter gap. The bridge that can support the most weight wins.

Spaghetti Bridge tournaments are now held all over the world, and they are wonderful for building interest in engineering. But I don’t think any engineer would actually build a real bridge based on a winning spaghetti bridge prototype. Much as spaghetti bridges do resemble the designs of real bridges, there are many more factors a real engineer has to take into account: Weight of materials, tensile strength, flexibility (in case of high winds or earthquakes), durability, and so on.

In educational innovation and reform, we have lots of great ideas that resemble spaghetti bridges. That’s because they would probably work great if only their components were ideal. An example like this is Response to Intervention (RTI), or its latest version, Multi-Tiered Systems of Supports (MTSS). Both RTI and MTSS start with a terrific idea: Instead of just testing struggling students to decide whether or not to assign them to special education, provide them with high-quality instruction (Tier 1), supplemented by modest assistance if that is not sufficient (Tier 2), supplemented by intensive instruction if Tier 2 is not sufficient (Tier 3). In law, or at least in theory, struggling readers must have had a chance to succeed in high-quality Tier 1, Tier 2, and Tier 3 instruction before they can be assigned to special education.

The problem is that there is no way to ensure that struggling students truly received high-quality instruction at each tier level. Teachers do their best, but it is difficult to make up effective approaches from scratch. MTSS or RTI is a great idea, but their success depends on the effectiveness of whatever struggling students receive as Tier 1, 2, and 3 instruction.

This is where spaghetti bridges come in. Many bridge designs can work in theory (or in spaghetti), but whether or not a bridge really works in the real world depends on how it is made, and with what materials in light of the demands that will be placed on it.

The best way to ensure that all components of RTI or MTSS policy are likely to be effective is to select approaches for each tier that have themselves been proven to work. Fortunately, there is now a great deal of research establishing the effectiveness of programs for struggling students that use whole-school or whole-class methods (Tier 1), one-to-small-group tutoring (Tier 2), or one-to-one tutoring (Tier 3). Many of these tutoring models are particularly cost-effective because they successfully provide struggling readers with tutoring from well-qualified paraprofessionals, usually ones with bachelor’s degrees but not teaching certificates. Research on both reading and math tutoring has clearly established that the tutees of such paraprofessionals, using structured models, gain at least as much as the tutees of certified teachers. This is important not only because paraprofessionals cost about half as much as teachers, but also because there are chronic teacher shortages in high-poverty areas, such as inner-city and rural locations, so certified teacher tutors may not be available at any cost.

If schools choose proven components for their MTSS/RTI models, and implement them with thought and care, they are sure to see enhanced outcomes for their struggling students. The concept of MTSS/RTI is sound, and the components are proven. How could the outcomes be less than stellar? And in addition to improved achievement for vulnerable learners, hiring many paraprofessionals to serve as tutors in disadvantaged schools could enable schools to attract and identify capable, caring young people with bachelor’s degrees, who could then be offered accelerated certification, enriching the local teaching force.

With a spaghetti bridge, a good design is necessary but not sufficient. The components of that design, its ingredients, and its implementation determine whether the bridge stands or falls in practice. So it is with MTSS and RTI. An approach based on strong evidence of effectiveness is essential to enable these good designs to achieve their goals.

Photo credit: CSUF Photos (CC BY-NC-SA 2.0), via flickr

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

More Chinese Dragons: How the WWC Could Accelerate Its Pace

A few months ago, I wrote a blog entitled “The Mystery of the Chinese Dragon: Why Isn’t the WWC Up to Date?” It really had nothing to do with dragons, but compared the timeliness of the What Works Clearinghouse review of research on secondary reading programs and a Baye et al. (2017) review on the same topic. The graph depicting the difference looked a bit like a Chinese dragon with a long tail near the ground and huge jaws. The horizontal axis was the dates accepted studies had appeared, and the vertical axis was the number of studies. Here is the secondary reading graph.

[Graph: numbers of accepted secondary reading studies by year, WWC vs. Baye et al. (2017)]

What the graph showed is that the WWC and the U.S. studies from the Baye et al. (2017) review were similar in coverage of studies appearing from 1987 to 2009, but after that diverged sharply, because the WWC is very slow to add new studies, in comparison to reviews using similar methods.

In the time since the Chinese Dragon for secondary reading studies appeared on my blog, my colleagues and I have completed two more reviews, one on programs for struggling readers by Inns et al. (2018) and one on programs for elementary math by Pellegrini et al. (2018). We made new Chinese Dragon graphs for each, which appear below.*

[Graphs: numbers of accepted studies by year for the struggling readers (Inns et al., 2018) and elementary math (Pellegrini et al., 2018) reviews]

*Note: In the reading graph, the line for “Inns et al.” added numbers of studies from the Inns et al. (2018) review of programs for struggling readers to additional studies of programs for all elementary students in an unfinished report.

The new dragons look remarkably like the first. Again, what matters is the similar pattern of accepted studies before 2009 (the “tail”) and the sharply diverging rates in more recent years (the “jaws”).

There are two phenomena that cause the dragons’ “jaws” to be so wide open. The upper jaw, especially in secondary reading and elementary math, indicates that many high-quality, rigorous evaluations are appearing in recent years. Both the WWC inclusion standards and those of the Best Evidence Encyclopedia (BEE; www.bestevidence.org) require control groups, clustered analysis for clustered designs, samples that are well-matched at pretest and have similar attrition by posttest, and other features indicating methodological rigor, of the kind expected by the ESSA evidence standards, for example.

The upper jaw of each dragon is rising so rapidly because rigorous research is increasing rapidly in the U.S. (it is also increasing rapidly in the U.K., but the WWC does not include non-U.S. studies, so non-U.S. studies were removed from the graphs for comparability). This increase is due to U.S. Department of Education funding of many rigorous studies in each topic area, through its Institute of Education Sciences (IES) and Investing in Innovation (i3) programs, and special-purpose funding such as Striving Readers and Preschool Curriculum Evaluation Research. These recent studies are not only uniformly rigorous, they are also of great importance to educators, because they evaluate current programs being actively disseminated today. Many of the older programs whose evaluations appear on the dragons’ tails no longer exist, as a practical matter; if educators wanted to adopt them, the programs would have to be revised or reinvented. For example, Daisy Quest, still in the WWC, was evaluated on TRS-80 computers that have not been manufactured since the 1980s. Yet exciting new programs with rigorous evaluations, highlighted in the BEE reviews, do not appear at all in the WWC.

I do not understand why the WWC is so slow to add new evaluations, but I suspect that the answer lies in the painstaking procedures any government has to follow to do, well, anything. Perhaps there are very good reasons for this stately pace of progress. However, the result is clear. The graph below shows the publication dates of every study in every subject and grade level accepted by the WWC and entered on its database. This “half-dragon” graph shows that only 26 studies published or made available after 2013 appear on the entire WWC database. Of these, only two appeared after 2015.

[Graph: publication dates of all studies accepted across the entire WWC database]

The slow pace of the WWC is of particular concern in light of the appearance of the ESSA evidence standards. More educators than ever before must be consulting the WWC, and many must be wondering why programs they know to exist are not listed there, or why recent studies do not appear.

Assuming that there are good reasons for the slow pace of the WWC, or that for whatever reason the pace cannot be greatly accelerated, what can be done to bring the WWC up to date? I have a suggestion.

Imagine that the WWC commissioned someone to do rapid updating of all topics reviewed on the WWC website. The reviews would follow WWC guidelines, but would appear very soon after studies were published or issued. It’s clear that this is possible, because we do it for Evidence for ESSA (www.evidenceforessa.org). Also, the WWC has a number of “quick reviews,” “single study reports,” and so on, scattered around on its site, but not integrated with its main “Find What Works” reviews of various programs. These could be readily integrated with “Find What Works.”

The recent studies identified in this accelerated process might be labeled “provisionally reviewed,” much as the U.S. Patent Office has “patent pending” before inventions are fully patented. Users would have the option to look only at program reports containing fully reviewed studies, or to look at reviews containing both fully and provisionally reviewed studies. If a more time-consuming full review of a study found results different from those of the provisional review, the study report and the program report containing it would be revised, of course.

A process of this kind could bring the WWC up to date and keep it up to date, providing useful, actionable evidence in a timely fashion, while maintaining the current slower process, if there is a rationale for it.

The Chinese dragons we are finding in every subject we have examined indicate the rapid growth and improving quality of evidence on programs for schools and students. The U.S. Department of Education and our whole field should be proud of this, and should make it a beacon on a hill, not hide our light under a bushel. The WWC has the capacity and the responsibility to highlight current, high-quality studies as soon as they appear. When this happens, the Chinese dragons will retire to their caves, and all of us, government, researchers, educators, and students, will benefit.

References

Baye, A., Lake, C., Inns, A., & Slavin, R. E. (2017). Effective reading programs for secondary students. Manuscript submitted for publication. Also see Baye, A., Lake, C., Inns, A., & Slavin, R. E. (2017, August). Effective reading programs for secondary students. Baltimore, MD: Johns Hopkins University, Center for Research and Reform in Education.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2018). Effective programs for struggling readers: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Pellegrini, M., Inns, A., & Slavin, R. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Photo credit: J Bar [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Proven Tutoring Approaches: The Path to Universal Proficiency

There are lots of problems in education that are fundamentally difficult. Ensuring success in early reading, however, is an exception. We know what skills children need in order to succeed in reading. No area of teaching has a better basis in high-quality research. Yet the reading performance of America’s children is not improving at an adequate pace. Reading scores have hardly changed in the past decade, and gaps between white, African-American, and Hispanic students have been resistant to change.

In light of the rapid growth in the evidence base, and of the policy focus on early reading at the federal and state levels, this is shameful. We already know a great deal about how to improve early reading, and we know how to learn more. Yet our knowledge is not translating into improved practice and improved outcomes on a large enough scale.

There are lots of complex problems in education, and complex solutions. But here’s a really simple solution: proven tutoring.

Over the past 30 years researchers have experimented with all sorts of approaches to improve students’ reading achievement. There are many proven and promising classroom approaches, and such programs should be used with all students in initial teaching as broadly as possible. Effective classroom instruction, universal access to eyeglasses, and other proven approaches could surely reduce the number of students who need tutors. But at the end of the day, every child must read well. And the only tool we have that can reliably make a substantial difference at scale with struggling readers is tutors, using proven one-to-one or small-group methods.

I realized again why tutors are so important while writing a proposal to the State of Maryland, which wants to bring all or nearly all students to “proficient” on its state test, the PARCC. “Proficient” on the PARCC is a score of 750, with a standard deviation of about 50. The state mean is currently around 740. I made a colorful chart (below) showing “bands” of scores below 750, to illustrate how far students have to go to reach proficiency.

[Chart: bands of scores below the PARCC proficiency score of 750]

Each band covers an effect size of 0.20. There are several classroom reading programs with effect sizes this large, so if schools adopted them, they could move children scoring at 740 to 750. These programs can be found at www.evidenceforessa.org. But implementing these programs alone still leaves half of the state’s children not reaching “proficient.”
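
Concretely, the conversion between score points and effect sizes is simple arithmetic: the effect size needed is the student’s gap to 750 divided by the standard deviation of about 50. A quick sketch, using these round numbers:

```python
# Converting PARCC score gaps to effect sizes, using the round numbers
# above: "proficient" = 750, SD ~ 50, so each 10 points is an ES of 0.20.
PROFICIENT = 750
SD = 50

def es_needed(score):
    """Effect size required to move a student from `score` to proficient."""
    return (PROFICIENT - score) / SD

for score in (740, 730, 720, 700):
    print(f"score {score}: ES needed = +{es_needed(score):.2f}")
# 740 -> +0.20 (proven classroom programs)
# 720 -> +0.60 (the best one-to-one tutoring)
```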

What about students at 720? They need 30 points, or +0.60. The best one-to-one tutoring can achieve outcomes like this, but these are the only solutions that can.

Here are mean effect sizes for various reading tutoring programs with strong evidence:

[Chart: mean effect sizes for reading tutoring programs with strong evidence]

As this chart shows, one-to-one tutoring, by well-trained teachers or paraprofessionals using proven programs, can potentially have the impacts needed to bring most students scoring 720 (needing 30 points or an effect size of +0.60) to proficiency (750). Three programs have reported effect sizes of at least +0.60, and several others have approached this level. But what about students scoring below 720?

So far I’ve been sticking to established facts: studies of tutoring programs that are, in most cases, already being disseminated. Now I’m entering the region of well-justified supposition. Almost all studies of tutoring cover just one year or less. But what if the lowest achievers could receive multiple years of tutoring, if necessary?

One study, over 2½ years, did find an effect size of +0.68 for one-to-one tutoring. Could we do better than that? Most likely. In addition to providing multiple years of tutoring, it should be possible to design programs to achieve one-year effect sizes of +1.00 or more. These may incorporate technology or personalized approaches specific to the needs of individual children. Using the best programs for multiple years, if necessary, could increase outcomes further. Also, as noted earlier, using proven programs other than tutoring for all students may increase outcomes for students who also receive tutoring.

But isn’t tutoring expensive? Yes it is. But it is not as expensive as the costs of reading failure: Remediation, special education, disappointment, and delinquency. If we could greatly improve the reading performance of low achievers, this would of course reduce inequities across the board. Reducing inequities in educational outcomes could reduce inequities in our entire society, an outcome of enormous importance.

Even providing a substantial amount of teacher tutoring could, by my calculations, increase total state education expenditures (in Maryland) by only about 12%. These costs could be reduced greatly or even eliminated by reducing expenditures on ineffective programs, reducing special education placements, and other savings. Having some tutoring done by part time teachers may reduce costs. Using small-group tutoring (fewer than 6 students at a time) for students with milder problems may save a great deal of money. Even at full cost, the necessary funding could be phased in over a period of 6 years at 2% a year.
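
The phase-in arithmetic is a one-line check (assuming equal annual steps):

```python
# Phasing in a ~12% increase in equal steps over 6 years.
total_increase, years = 0.12, 6
print(f"{total_increase / years:.0%} per year for {years} years")  # 2% per year
```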

The bottom line is that the low levels of achievement, and the large gaps associated with economic and racial differences, could be improved a great deal using methods already proven to be effective and already widely available. Educators and policy makers are always promising policies that bring every child to proficiency: “No Child Left Behind” and “Every Student Succeeds” come to mind. Yet if these outcomes are truly possible, why shouldn’t we be pursuing them with every resource at our disposal?

Love, Hope, and Evidence in Secondary Reading

I am pleased to announce that our article reviewing research on effective secondary reading programs has just been posted on the Best Evidence Encyclopedia, aka the BEE. Written with my colleagues Ariane Baye, Cynthia Lake, and Amanda Inns, our review found 64 studies of 49 reading programs for students in grades 6 to 12, which had to meet very high standards of quality. For example, 55 of the studies used random assignment to conditions.

But before I get all nerdy about the technical standards of the review, I want to reflect on what we learned. I’ve already written about one thing we learned: that simply providing more instructional time made little difference in outcomes. In 22 of the studies, students received, for at least an entire year, an extra period of reading instruction beyond what control students received. Yet programs (other than tutoring) that provided extra time did no better than those that did not.

If time doesn’t help struggling readers, what does? I think I can summarize our findings with three words: love, hope, and evidence.

Love and hope are exactly what students who are reading below grade level are lacking. They are no longer naive. They know exactly what it means to be a poor reader in a high-poverty secondary school (almost all of the schools in our review served disadvantaged adolescents). If you can’t read well, college is out of the question. Decent jobs without a degree are scarce. If you have no hope, you cannot be motivated, or you may be motivated in antisocial directions that give you at least a chance for money and recognition. Every child needs love, but poor readers in secondary schools are too often looking for love in all the wrong places.

The successful programs in our review were ones that give adolescents a chance to earn the hope and love they crave. One category, all studies done in England, involved one-to-one and small group tutoring. How better to build close relationships between students and caring adults than to have individual or very small group time with them? And the one-to-one or small group setting allows tutors to personalize instruction, giving students a sense of hope that this time, their efforts will pay off (as the evidence says it will).

But the largest impacts in our review came from two related programs: The Reading Edge and Talent Development High School (TDHS). Both were developed in our research center at Johns Hopkins University in the 1990s, so I have to be very modest here. But beyond these individual programs, I think there is a larger message.

Both The Reading Edge (for middle schools) and TDHS (for high schools) organize students into mixed-ability cooperative teams. The team members work on activities designed to build reading comprehension and related skills. Students are frequently assessed, and on the basis of those assessments they can earn recognition for their teams. Teachers introduce lessons, and then, as students work with each other on reading activities, teachers can cruise around the class, looking in on students who need encouragement or help, solving problems, and building relationships. Students are on task, eager to learn, and seeing the progress they are making, and at the same time students and teachers are laughing together, sharing easy banter, and encouraging each other. Yes, this really happens. I’ve seen it hundreds of times in secondary schools throughout the U.S. and England.

Many of the most successful programs in our review are also based on principles of love and hope. BARR (Building Assets, Reducing Risks), a high school program, is an excellent example. It uses block scheduling to build positive relationships among a group of students and teachers, adding regular meetings between teachers and students to review their progress in all areas, social as well as academic. The program focuses on building positive social-emotional skills and behaviors, and on helping students describe their desired futures, make plans to get there, and regularly review progress on their plans with their teachers and peers. Love and hope.

California’s Expository Reading and Writing Course helps 12th graders hoping to attend California State Universities prepare to pass the test used to determine whether students have to take remedial English (a key factor in college dropout). The students work in groups, helping each other to build reading, writing, and discussion skills, and helping students to visualize a future for themselves. Love and hope.

A few technology programs showed promising outcomes, especially Achieve3000 and Read 180. These do not replace teachers and peers with technology, but instead cycle students through small group, teacher-led, and computer-assisted activities. Pure technology programs did not work so well, but models taking advantage of relationships as well as personalization did best. Love and hope.

Of course, love and hope are not sufficient. We also need evidence that students are learning more than they might have been. To produce positive achievement effects requires outstanding teaching strategies, professional development, curricular approaches, assessments, and more. Love and hope may be necessary but they are not sufficient.

Our review applied the toughest evidence standards we have ever applied. Most of the studies we reviewed did not show positive impacts on reading achievement, but the ones that did show positive impacts inspire that much more confidence. The very fact that we could apply these standards and still find plenty of studies that meet them shows how much our field is maturing. This in itself fills me with hope.

And love.

Apology

In a recent blog, I wrote about work we are doing to measure the impact on reading and math performance of a citywide campaign to provide assessments and eyeglasses to every child in Baltimore, from pre-k to grade 8. I forgot to mention the name of the project, Vision for Baltimore, and neglected to say that the project operates under the authority of the Baltimore City Health Department, which has been a strong supporter. I apologize for the omission.