Getting Below the Surface to Understand Disappointing Outcomes

Long ago, I toured West Germany, visiting some family friends near Hanover. They suggested I go see Duderstadt, a picturesque town nearby (see picture of it below).

My wife, Nancy, and I drove into Duderstadt and walked around. It was indeed gorgeous, but very strange. Not a person was in sight. Every shop was closed. In the center of the town was a beautiful church. We reasoned that churches are always open. We walked to the church door, and I stretched out my hand to open it, but when my hand was inches away, the door burst open. An entire wedding party streamed out into the street. The church was packed to the rafters with happy people, now following the bride and groom out of the church. Mystery solved.

If social scientists had come to Duderstadt when we did but failed to see the wedding, they might have drawn all sorts of false conclusions. An economist might see the empty shops and conclude that the economy of rural Germany is doomed, due to low productivity. A demographer might agree and blame this on the obviously declining workforce. But looking just the thickness of a church door beneath the surface, all could immediately understand what was happening.

My point here is a simple one. I am a quant. I believe in numbers and rigorous research designs. But at the same time, I also want to understand what is really going on, and the main numbers rarely tell the whole story.

I was thinking about this when I read the rather remarkable study by Carolyn Heinrich and her colleagues (2010), cited in my two previous blogs. Like many other researchers, she and her colleagues found near-zero impacts for Supplemental Educational Services. At the time this study took place, this was a surprise. How could all that additional instructional time after school not make a meaningful difference?

But instead of just presenting the overall (bad) findings, she poked around town, so to speak, to find out what was going on.

What she found was appalling, but also perfectly logical. Most eligible middle and high school students in Milwaukee who were offered after-school programs failed to sign up; of those who did sign up, many never attended even a single day, and those who did attend came only irregularly thereafter. And why did they not sign up or attend? Most programs offered attractive incentives, such as iPods (very popular at the time), so at least about half of the eligible students did sign up. But after the first day, when they got their incentives, students faced drudgery. Heinrich et al. cite evidence that most instruction consisted either of teachers lecturing to passive students or of students doing unsupervised worksheets. Heinrich et al.’s technical report included a sentence (dropped from the published version), which I quoted previously but will quote again here: “One might also speculate that parents and students are, in fact, choosing rationally in not registering for or attending SES.”

A study of summer school by Borman & Dowling (2006) made a similar observation. K-1 students in Baltimore were randomly assigned to have an opportunity to attend three years of summer school. The summer school sessions included seven weeks of six-hour-a-day activities, including 2½ hours of reading and writing instruction, plus sports, art, and other enrichment activities. Most eligible students (79%) signed up and attended in the first summer, but fewer did so in the second summer (69%) and even fewer in the third summer (42%). The analyses focused on the students who were eligible for the first and second summers and found no impact on reading achievement, although there was a positive effect for the students who actually attended for two summers.

Many studies of summer school, after-school, and SES programs (which included both) have just reported the disappointing outcomes without exploring why they occurred. Such reports are important, if well done, but they offer little understanding of why. Could after-school or summer school programs work better if we took into account the evidence on why they usually fail? Perhaps. For example, in my previous blog, I suggested that extended-time programs might do better if they provided one-to-one or small-group tutoring. However, there is only suggestive evidence that this might be true, and there are good reasons that it might not be, because the same attendance and motivation problems may doom any program, no matter how good, when struggling students go to school during times when their friends are outside playing.

Econometric production function models predicting that more instruction leads to more learning are useless unless we take into account what students are actually being provided in extended-time programs and what their motivational state is likely to be. We have to look a bit below the surface to explain why disappointing outcomes are so often achieved, so that we can repeat successes rather than making the same mistakes over and over again.

Correction

My recent blog, “Avoiding the Errors of Supplemental Educational Services,” started with a summary of the progress of the Learning Recovery Act.  It was brought to my attention that my summary was not correct.  In fact, the Learning Recovery Act has been introduced in Congress, but is not part of the current reconciliation proposal moving through Congress and has not become law. The Congressional action cited in my last blog was referring to a non-binding budget resolution, the recent passage of which facilitated the creation of the $1.9 trillion reconciliation bill that is currently moving through Congress. Finally, while there is expected to be some amount of funding within that current reconciliation bill to address the issues discussed within my blog, reconciliation rules will prevent the Learning Recovery Act from being included in the current legislation as introduced. I apologize for this error.

References

Borman, G. D., & Dowling, N. M. (2006). Longitudinal achievement effects of multiyear summer school: Evidence from the Teach Baltimore randomized field trial. Educational Evaluation and Policy Analysis, 28(1), 25–48. https://doi.org/10.3102/01623737028001025

Heinrich, C. J., Meyer, R. H., & Whitten, G. W. (2010). Supplemental Education Services under No Child Left Behind: Who signs up and what do they gain? Educational Evaluation and Policy Analysis, 32, 273-298.

Photo credit: Amrhingar, CC BY-SA 3.0, via Wikimedia Commons

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

Avoiding the Errors of Supplemental Educational Services (SES)

“The definition of insanity is doing the same thing over and over again, and expecting different results.” –Albert Einstein

Last Friday, the U.S. Senate and House of Representatives passed a $1.9 trillion recovery bill. Within it is the Learning Recovery Act (LRA). Both the overall bill and the Learning Recovery Act are timely and wonderful. In particular, the LRA emphasizes the importance of using research-based tutoring to help students who are struggling in reading or math. The linking of evidence to large-scale federal education funding began with the 2015 ESSA definition of proven educational programs, and the LRA would greatly increase the importance of evidence-based practices.

But if you sensed a “however” coming, you were right. The “however” is that the LRA requires investments of substantial funding in “school extension programs,” such as “summer school, extended day, or extended school year programs” for vulnerable students.

This is where the Einstein quote comes in. “School extension programs” sound a lot like Supplemental Educational Services (SES), part of No Child Left Behind that offered parents and children an array of services that had to be provided after school or in summer school.

The problem is, SES was a disaster. A meta-analysis of 28 studies of SES by Chappell et al. (2011) found a mean effect size of +0.04 for math and +0.02 for reading. A sophisticated study by Deke et al. (2014) found an effect size of +0.05 for math and -0.03 for reading. These effect sizes are just different flavors of zero. Zero was the outcome whichever way you looked at the evidence, with one awful exception: the lowest achievers and special education students actually performed significantly less well in the Deke et al. (2014) study if they were in SES than if they qualified but did not sign up. The effect sizes for these students were around -0.20 for both reading and math. Heinrich et al. (2009) also reported that the lowest achievers were least likely to sign up for SES, and least likely to attend regularly if they did. All three major studies found that outcomes did not vary much depending on which type of provider or program students received. Considering that the per-pupil cost was estimated at $1,725 in 2021 dollars, these outcomes are distressing. More important, despite the federal government’s willingness to spend quite a lot, millions of struggling students in desperate need of effective assistance did not benefit.

Why did SES fail? I have two major explanations. Heinrich et al. (2009), who added questionnaires and observations to find out what was going on, discovered that at least in Milwaukee, attendance in SES after-school programs was appalling (as I reported in my previous blog). In the final year studied, only 16% of eligible students were attending (less than half signed up at all, and of those, average attendance in the remedial program was only 34%). Worse, the students in greatest need were least likely to attend.

From their data and other studies they cite, Heinrich et al. (2010) paint a picture of students doing boring, repetitive worksheets unrelated to what they were doing in their school-day classes. Students were enticed to sign up for SES with incentives such as iPods, gift cards, or movie passes. Students often attended just enough to get their incentives, but then stopped coming. In 2006-2007, a new policy limited incentives to educationally related items, such as books and museum trips, and attendance dropped further. Restricting SES to after school and summertime, when attendance is not mandated and far from universal, meant that students who did attend were in school while their friends were out playing. This is hardly a way to engage students’ motivation to attend or to exert effort. Low-achieving students see after school and summertime as their free time, which they are unlikely to give up willingly.

Beyond the problems of attendance and motivation in extended time, there was another key problem with SES: none of the hundreds of programs offered to students had been proven effective beforehand (or ever) in rigorous evaluations, and there was no mechanism to find out which of them were working well until very late in the program’s history. As a result, neither schools nor parents had any particular basis for selecting programs according to their likely impact. Program providers probably did their best, but there was no pressure on them to make certain that students benefited from SES services.

As I noted in my previous blog, evaluations of SES do not provide the only evidence that after-school and summer school programs rarely work for struggling students. Reviews of summer school programs by Xie et al. (in press) and of after-school programs (Dynarski et al., 2003; Kidron & Lindsay, 2014) have found similar outcomes, always for the same reasons: poor attendance and poor motivation among students asked to be in school when they would otherwise have free time.

Designing an Effective System of Services for Struggling Students

Two policies are needed to provide a system of services capable of substantially improving student achievement. The first is to provide services during the ordinary school day and year, not after school or in summer school. The second is to strongly emphasize the use of programs proven to be highly effective in rigorous research.

Educational services provided during the school day are far more likely to be effective than those provided after school or in the summer. During the day, everyone expects students to be in school, including the students themselves. There are attendance problems during the regular school day, of course, especially in secondary schools, but these problems are much smaller than those in non-school time, and perhaps if students are receiving effective, personalized services in school and therefore succeeding, they might attend more regularly. Further, services during the school day are far easier to integrate with other educational services. Principals, for example, are far more likely to observe tutoring or other services if they take place during the day, and to take ownership for ensuring their effectiveness. School day services also entail far fewer non-educational costs, as they do not require changing bus schedules, cleaning and securing schools more hours each day, and so on.

The problem with in-school services is that they can disrupt the basic schedule. However, this need not be a problem. Schools could designate service periods for each grade level, spread over the school day, so that tutors or other service providers can be continuously busy all day. Students should not be taken out of reading or math classes, but there is a strong argument that a student who is far below grade level in reading or math needs a reading or math tutor using a proven tutoring model far more than he or she needs other classes, at least for a semester (the usual length of a tutoring sequence).

If schools are deeply reluctant to interrupt any of the ordinary curriculum, then they might extend their day to offer art, music, or other subjects during the after-school session. These popular subjects might attract students without incentives, especially if students have a choice of which to attend. This could create space for tutoring or other services during the school day. A schedule like this is virtually universal in Germany, which provides all sports, art, music, theater, and other activities after school, so all in-school time is available for academic instruction.

Use of proven programs makes sense throughout the school day. Tutoring should be the main focus of the Learning Recovery Act, because in this time of emergency need to help students recover from Covid school closures, nothing less will do. But in the longer term, adoption of proven classroom programs in reading, math, science, writing, and other subjects should provide a means of helping students succeed in all parts of the curriculum (see www.evidenceforessa.org).

In summer 2021, there may be a particularly strong rationale for summer school, assuming schools are otherwise able to open. The evidence is clear that doing ordinary instruction during the summer will not make much of a difference, but summer could be helpful if it is used as an opportunity to provide as many struggling students as possible with in-person, one-to-one or small-group tutoring in reading or math. In the summer, students might receive tutoring more than once a day, every day, for as long as six weeks. This could make a particularly big difference for students who basically missed in-person kindergarten, first, or second grade, a crucial time for learning to read. Tutoring is especially effective in those grades in reading, because phonics is relatively easy for tutors to teach. Also, there is a large number of effective tutoring programs for grades K-2. Early reading failure is very important to prevent, and can be prevented with tutoring, so the summer months may be just the right time to help these students get a leg up on reading.

The Learning Recovery Act can make life-changing differences for millions of children in serious difficulty. If the LRA changes its emphasis to the implementation of proven tutoring programs during ordinary school times, it is likely to accomplish its mission.

SES served a useful purpose in showing us what not to do. Let’s take advantage of these expensive lessons and avoid repeating the same errors. Einstein would be so proud if we heeded his advice.

References

Chappell, S., Nunnery, J., Pribesh, S., & Hager, J. (2011). A meta-analysis of Supplemental Education Services (SES) provider effects on student achievement. Journal of Education for Students Placed at Risk, 16 (1), 1-23.

Deke, J., Gill, B., Dragoset, L., & Bogen, K. (2014). Effectiveness of supplemental educational services. Journal of Research on Educational Effectiveness, 7, 137-165.

Dynarski, M. et al. (2003). When schools stay open late: The national evaluation of the 21st Century Community Learning Centers Programs (First year findings). Washington, DC: U.S. Department of Education.

Heinrich, C. J., Meyer, R. H., & Whitten, G. W. (2010). Supplemental Education Services under No Child Left Behind: Who signs up and what do they gain? Educational Evaluation and Policy Analysis, 32, 273-298.

Kidron, Y., & Lindsay, J. (2014). The effects of increased learning time on student academic and nonacademic outcomes: Findings from a meta‑analytic review (REL 2014-015). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Appalachia.

Xie, C., Neitzel, A., Cheung, A., & Slavin, R. E. (2020). The effects of summer programs on K-12 students’ reading and mathematics achievement: A meta-analysis. Manuscript submitted for publication.

This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

“Am I Even Real Anymore?” The Truth About Virtual Learning

“My 8-year-old was sobbing last night because she misses playing with her friends at recess, she misses her teacher, and she is worried that everyone has forgotten her. At one point, she asked me if she was even real anymore.”

This appeared in a letter that ran in the July 25th Baltimore Sun. It was written by Jenny Elliott, a Catonsville mother of elementary students. In it, Ms. Elliott tells how she and her husband have been unable to get their kids to complete the virtual learning assignments their school has assigned.

“(Virtual learning) just doesn’t work for them. I can’t physically force them to stare at their devices and absorb information. We’ve yelled, we’ve begged, we’ve made a game of it…we’ve tried everything we can think of. We failed.”

One of the most poignant parts of Ms. Elliott’s letter is her feeling that everyone knows that virtual learning isn’t working for most kids, but no one wants to say so.

“I am begging someone to speak honestly about virtual learning. I have only the perspective of an elementary school parent, but I have to imagine this negatively impacts children at all levels. The communication coming (from authorities) all over the country…it feels delusional.”

Ms. Elliott expresses enormous guilt. “As a parent, I’ve had to see this every day for the last five months, and every day I feel crushing guilt that I can’t make any of it better.” She expresses gratitude for the efforts of her kids’ teachers, and feels sympathy for them. (“I love you, teachers. I am so sorry this is your reality too.”)

Ms. Elliott notes that if anyone should be able to make virtual learning work, it should be her family: “…a secure living situation, two parents, food security, access to high-speed internet, access to an internet-powered device, etc.” I might add that Catonsville is in suburban Baltimore County, which has a national reputation for its substantial investments in technology over many years.

Since I have written many blogs about schools’ responses to the Covid school closures, I’ve been expecting a letter like this. Informally, my colleagues and I have been chatting with teachers and parents we know who have kids in school. Almost every single one tells a story like Ms. Elliott’s. Highly educated parents, plenty of technology, tech-savvy kids, capable and hard-working teachers, all different ages: it does not seem to matter. Teachers and parents alike refer to motivated and successful students who log on and then pay no attention. The kids are communicating on a different device with their friends, playing games, reading, whatever. There are kids who are engaged with virtual learning, but we have heard about very few of them.

Much of the reporting about virtual learning has emphasized the lack of access to the Internet, and districts are spending billions to provide devices and improve access. There is a lot of talk about how school closures are increasing learning gaps because disadvantaged students lack access to the Internet, as though school closures are only a problem for disadvantaged students. But if Ms. Elliott is representative of many parents, and I’m sure she is, the problem is far larger than that of students who lack access to technology.

Everyone involved with schools seems to know this, but they do not want to talk about it. There seems to be a giant, unspoken pall of guilt that keeps the reality of what is happening in virtual learning from being discussed openly. Parents feel guilty because they feel deficient if they are not able to get their kids to respond to virtual learning. Teachers feel guilty because they don’t want to admit that they are not able to get more of their students to pay attention. School administrators want to be perceived to be doing something, anything, to combat the educational effects of extended school closures, so while they do talk about the need to obtain more devices and offer teachers more professional development, they do not like to talk about the kids who do have devices, but don’t do much with them. They promise that things will soon be better, with more devices, more professional development, and better lessons turning the tide. Ms. Elliott is sympathetic, but doubtful. “I appreciate those efforts and wholeheartedly believe the educational system is doing the absolute best they can…but I can’t pretend that the virtual school plans will work for our kids.”

Ms. Elliott states at the beginning of her letter that she has no solution to suggest, but she just wants the truth to be known. I have no sure-fire solutions myself. But I do know one thing. Any workable solutions there may be will have to begin with a firm understanding of what is really happening in schools using virtual learning.

In most of the U.S., opening schools in August or September should be out of the question. The rates of new Covid cases remain far too high, and no amount of social distancing inside schools can be safe for students or staff when there are so many carriers of the disease outside of schools. The only true solution, until cures or vaccines are widely available, is to return to the one thing that has worked throughout the world: mandating universal use of masks, shutting down businesses that put people close to each other, and so on. This is the only thing that saved China and South Korea and Italy and Spain and New York City, and it is the only solution now. The faster we return to what works, the sooner we can fully open schools, and then start the long process of healing the terrible damage being done to our children’s learning.

I am not suggesting giving up on virtual learning. If schools will be closed for a long time, it is all we have. But I am pessimistic about trying to fix the current approach to virtual learning. I think we could use all those computers and educators online to much greater effect by providing online tutoring to individuals and small groups, for example, rather than trying to create a classroom community out of children working from home. Perhaps there are ways other than tutoring to use online instruction effectively, but I do not know them. In any case, we need immediate investment in development and evaluation to find the most effective and cost-effective solutions possible, while we wait for a safe time to open schools. We’ll all get through this, one way or the other, but in order to minimize the negative impact on student learning, let’s start with the truth, and then build and use the evidence of what works.

 This blog was developed with support from Arnold Ventures. The views expressed here do not necessarily reflect those of Arnold Ventures.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

Cooperative Learning and Achievement

Once upon a time, two teachers went together to an evening workshop on effective teaching strategies. The speaker was dynamic, her ideas were interesting, and everyone in the large audience enjoyed the speech. Afterwards, the two teachers drove back to the town where they lived. The driver talked excitedly with her friend about all the wonderful ideas they’d heard, raised questions about how to put them into practice, and related them to things she’d read, heard, and experienced before.

After an hour’s drive, however, the driver realized that her friend had been asleep for the whole return trip.

Now here’s my question: who learned the most from the speech? Both the driver and her friend were equally excited by the speech and paid equal attention to it. Yet no one would doubt that the driver learned much more, because after the lecture, she talked all about it, thinking her friend was awake.

Every teacher knows how much they learn about any topic by teaching it, or discussing it with others. Imagine how much more the driver and her friend would have learned from the lecture if they had both been participating fully, sharing ideas, perceptions, agreements, disagreements, and new ideas.

So far, this is all obvious, right? Everyone knows that people learn when they are engaged, when they have opportunities to discuss with others, explain to others, ask questions of others, and receive explanations.

Yet in traditionally organized classes, learning does not often happen like this. Teachers teach, students listen, and if genuine discussion takes place at all, it is between the teacher and a small minority of students who always raise their hands and ask good questions. Even in the most exciting and interactive of classes, many students, often a majority, say little or nothing. They may give an answer if called upon, but “giving an answer” is not at all the same as engagement. Even in classes that are organized in groups and encourage group interaction, some students do most of the participating, while others just watch, at best. Evidence from research, especially studies by Noreen Webb (2008), finds that the students who learn the most in group settings are those who give full explanations to others. These are the drivers, returning to my opening story. Those who receive a lot of explanations also learn. Who learns least? Those who neither explain nor receive explanations.

For achievement outcomes, it is not enough to put students into groups and let them talk. Research finds that cooperative learning works best when there are group goals and individual accountability. That is, groups can earn recognition or small privileges (e.g., lining up first for recess) if the average of their members’ individual scores meets a high standard. The purpose of group goals and individual accountability is to incentivize team members to help and encourage each other to excel, and to avoid having, for example, one student do all the work while the others watch (Chapman, 2001). Students can be silent in groups, as they can be in class, but this is less likely if they are working with others toward a common goal that they can achieve only if all team members succeed.
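To make the group-goal and individual-accountability idea concrete, here is a minimal sketch, in Python, of how such a recognition rule could be applied. The team names, quiz scores, and the 80-point standard are hypothetical, invented only for illustration; they do not come from any particular program.

```python
# Illustrative only: a team earns recognition when the average of its
# members' individual scores meets a preset standard. Because every
# member's score counts, teammates have a stake in helping one another.

TEAM_SCORES = {  # hypothetical weekly quiz scores (0-100), by team
    "Astros":  [92, 85, 78, 88],
    "Comets":  [70, 95, 78, 81],
    "Meteors": [55, 58, 63, 49],
}

RECOGNITION_STANDARD = 80  # hypothetical "high standard" for the team average

def teams_earning_recognition(team_scores, standard):
    """Return the teams whose average member score meets the standard."""
    return [
        team for team, scores in team_scores.items()
        if sum(scores) / len(scores) >= standard
    ]

for team in teams_earning_recognition(TEAM_SCORES, RECOGNITION_STANDARD):
    print(f"{team} earned recognition this week")
```

Run as written, this sketch would report the Astros and the Comets but not the Meteors; the point of the rule is that the Meteors’ teammates are motivated to help every member improve, rather than having one student carry the team.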

The effectiveness of cooperative learning for enhancing achievement has been known for a long time (see Rohrbeck et al., 2003; Roseth et al., 2008; Slavin, 1995, 2014). Forms of cooperative learning are frequently seen in elementary and secondary schools, but they are far from standard practice. Forms of cooperative learning that use group goals and individual accountability are even more rare.

There are many examples of programs that incorporate cooperative learning and meet the ESSA Strong or Moderate standards in reading, math, SEL, and attendance. You can see descriptions of the programs by visiting www.evidenceforessa.org and clicking on the cooperative learning filter. As you can see, it is remarkable how many of the programs identified as effective for improving student achievement by the What Works Clearinghouse or Evidence for ESSA make use of well-structured cooperative learning, usually with students working in teams or groups of 4-5 students, mixed in past performance. In fact, in reading and mathematics, only one-to-one or small-group tutoring are more effective than approaches that make extensive use of cooperative learning.

There are many successful approaches to cooperative learning adapted for different subjects, specific objectives, and age levels (see Slavin, 1995). There is no magic to cooperative learning; outcomes depend on use of proven strategies and high-quality implementation. The successful forms of cooperative learning provide at least a good start for educators seeking ways to make school engaging, exciting, social, and effective for learning. Students not only learn from cooperation in small groups, but they love to do so. They are typically eager to work with their classmates. Why shouldn’t we routinely give them this opportunity?

References

Chapman, E. (2001, April). More on moderations in cooperative learning outcomes. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.

Rohrbeck, C. A., Ginsburg-Block, M. D., Fantuzzo, J. W., & Miller, T. R. (2003). Peer-assisted learning interventions with elementary school students: A meta-analytic review. Journal of Educational Psychology, 94(2), 240–257.

Roseth, C., Johnson, D., & Johnson, R. (2008). Promoting early adolescents’ achievement and peer relationships: The effects of cooperative, competitive, and individualistic goal structures. Psychological Bulletin, 134(2), 223–246.

Slavin, R. E. (1995). Cooperative learning: Theory, research, and practice (2nd ed.). Boston, MA: Allyn & Bacon.

Slavin, R. E. (2014). Make cooperative learning powerful: Five essential strategies to make cooperative learning effective. Educational Leadership, 72 (2), 22-26.

Webb, N. M. (2008). Learning in small groups. In T. L. Good (Ed.), 21st century learning (Vol. 1, pp. 203–211). Thousand Oaks, CA: Sage.

Photo courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Note: If you would like to subscribe to Robert Slavin’s weekly blogs, just send your email address to thebee@bestevidence.org

 

Why Can’t Education Progress Like Medicine Does?

I recently saw an end-of-year article in The Washington Post called “19 Good Things That Happened in 2019.” Four of them were medical or public health breakthroughs. Scientists announced a new therapy for cystic fibrosis likely to benefit 90% of people with this terrible disease, incurable for most patients before now. The World Health Organization announced a new vaccine to prevent Ebola. The Bill and Melinda Gates Foundation announced that deaths of children before their fifth birthday have now dropped from 82 per thousand births in 1990 to 37 in 2019. The Centers for Disease Control reported a decline of 5.1 percent in deaths from drug overdoses in just one year, from 2017 to 2018.

Needless to say, breakthroughs in education did not make the list. In fact, I’ll bet there has never been an education breakthrough mentioned on such lists.

I get a lot of criticism from all sides for comparing education to medicine and public health. Most commonly, I’m told that it’s ever so much easier to give someone a pill than to change complex systems of education. That’s true enough, but not one of the 2019 medical or public health breakthroughs was anything like “taking a pill.” The cystic fibrosis cure involves a series of three treatments personalized to the genetic background of patients. It took decades to find and test this treatment. A vaccine for Ebola may be simple in concept, but it also took decades to develop. Also, Ebola occurs in very poor countries, where ensuring universal coverage with a vaccine is very complex. Reducing deaths of infants and toddlers took massive coordinated efforts of national governments, international organizations, and ongoing research and development. There is still much to do, of course, but the progress made so far is astonishing. Similarly, the drop in deaths due to overdoses required, and still requires, huge investments, cooperation between government agencies of all sorts, and constant research, development, and dissemination. In fact, I would argue that reducing infant deaths and overdose deaths strongly resemble what education would have to do to, for example, eliminate reading failure or enable all students to succeed at middle school mathematics. No one distinct intervention, no one miracle pill has by itself improved infant mortality or overdose mortality, and solutions for reading and math failure will similarly involve many elements and coordinated efforts among many government agencies, private foundations, and educators, as well as researchers and developers.

The difference between evidence-based reform in medicine/public health and education is, I believe, a difference in societal commitment to solving the problems. The general public, especially political leaders, tend to be rather complacent about educational failures. One of our past presidents said he wanted to help, but said, “We have more will than wallet” to solve educational problems. Another focused his education plans on recruiting volunteers to help with reading. These policies hardly communicate seriousness. In contrast, if medicine or public health can significantly reduce death or disease, it’s hard to be complacent.

Perhaps part of the motivational difference is due to the situations of powerful people. Anyone can get a disease, so powerful individuals are likely to have children or other relatives or friends who suffer from a given disease. In contrast, they may assume that children failing in school have inadequate parents or parents who need improved job opportunities or economic security or decent housing, which will take decades, and massive investments to solve. As a result, governments allocate little money for research, development, or dissemination of proven programs.

There is no doubt in my mind that we could, for example, eliminate early reading failure, using the same techniques used to eliminate diseases: research, development, practical experiments, and planful, rapid scale-up. It’s all a question of resources, political leadership, collaboration among many critical agencies and individuals, and a total commitment to getting the job done. The year reading failure drops to near zero nationwide, perhaps education will make the Washington Post list of “50 Good Things That Happened in 2050.”

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Do Different Textbooks Have Different Effects on Student Achievement?

The British comedy group Monty Python used to refer to “privileged glimpses into the perfectly obvious.”

And just last week, there they were. In a front-page article, the March 13 edition of Education Week reported that a six-state study of the achievement outcomes of different textbooks found . . . wait for it . . . near-zero relative effects on achievement measures (Sawchuk, 2019).

Really!

The study was led by Harvard’s Thomas Kane, a major proponent of the Common Core, who was particularly upset to find that textbooks produced before and after the Common Core began to influence textbook content had few if any differential effects on achievement.

I doubt that I am the only person who is profoundly unsurprised by these findings. For the past 12 years, I’ve been doing reviews of research on programs’ effects on achievement in rigorous research. Textbooks (or curricula) are usually one of the categories in my reviews. You can see the reviews at www.bestevidence.org. Here is a summary of the average effect sizes for textbooks or curricula:

| Review | No. of Studies | Mean Effect Size |
|---|---|---|
| Elementary Reading (Inns et al., 2019) | 9 | +0.03 |
| Elementary Math (Pellegrini et al., 2018) | 16 | +0.06 |
| Secondary Math (Slavin et al., 2009) | 40 | +0.03 |
| Secondary Science (Cheung et al., 2016) | 8 | +0.10 |
| Weighted Average | 73 | +0.04 |

None of these outcomes suggest that textbooks make much difference, and the study-weighted average of +0.04 is downright depressing.
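For readers who want to verify the arithmetic, here is a small sketch of how a study-weighted average like this one is computed: each review’s mean effect size is weighted by its number of studies. The figures are the ones in the table above.

```python
# Study-weighted average effect size across the four reviews in the table:
# each review's mean effect size is weighted by its number of studies.

reviews = [
    ("Elementary Reading (Inns et al., 2019)",     9, 0.03),
    ("Elementary Math (Pellegrini et al., 2018)", 16, 0.06),
    ("Secondary Math (Slavin et al., 2009)",      40, 0.03),
    ("Secondary Science (Cheung et al., 2016)",    8, 0.10),
]

total_studies = sum(n for _, n, _ in reviews)
weighted_mean = sum(n * es for _, n, es in reviews) / total_studies

print(total_studies)             # 73
print(round(weighted_mean, 2))   # 0.04
```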

Beyond the data, it is easy to see why evaluations of the achievement outcomes of textbooks rarely find significant positive outcomes. Such studies compare one textbook to another textbook that is usually rather similar. The reason is that textbook publishers respond to the demands of the market, not to evidence of effectiveness. New and existing textbooks were shaped by similar market forces. When standards change, as in the case of the Common Core State Standards in recent years, all textbook companies generally are forced to make changes in the same direction. There may be a brief window of time when new textbooks designed to meet new standards have a temporary advantage, but large publishers are extremely sensitive to such changes, and if they are not up to date in terms of standards today, they soon will be. Still, as the Kane et al. study found, changes in standards do not in themselves improve achievement on a substantial scale. Changes in standards do change market demand, which changes the content of textbooks, but fundamentally, the changes are not enough to make a measurable difference in learning.

Kane was quoted by Education Week as drawing the lesson from the study that perhaps it isn’t the textbooks that matter, but rather how the textbooks are used:

“What levels of coaching or more-intensive professional development are required to help teachers use rigorous materials at higher levels of fidelity, and does that produce larger benefits?” (Sawchuk, 2019, p. 17).

This sounds logical, but recent research in elementary mathematics calls this approach into question. Pellegrini et al. (2018) examined a category of programs that provide teachers with extensive professional development focused on math content and pedagogy. The average effect size across 12 studies was only +0.04, or essentially zero. In contrast, what did work very well were one-to-one and one-to-small group tutoring (mean effect size = +0.29) and professional development focused on classroom management and motivation (mean effect size = +0.25). In other words, programs focusing on helping teachers use standards-based materials added little if anything to the learning impact of textbooks. What mattered, beyond tutoring, were approaches that change classroom routines and relationships, such as cooperative learning or classroom management methods.

Changing textbooks matters little, and adding extensive professional development focused on standards adds even less. Instead, strategies that engage, excite, and accommodate individual needs of students are what we find to matter a great deal, across many subjects and grade levels.

This should be a privileged glimpse into the perfectly obvious. Everyone knows that textbooks make little difference. Walk through classrooms in any school, teaching any subject at any grade level. Some classes are exciting, noisy, fully engaged places in which students are eager to learn. Others are, well, teaching the textbook. In which type of class did you learn best? In which type do you hope your own children will spend their time in school, or wish they had?

What is obvious from the experience of every teacher and everyone who has ever been a student is that changing textbooks and focusing on standards do not in themselves lead to classrooms that kindle the love of learning. Imagine that you, as an accomplished adult educator, took a class in tennis, or Italian, or underwater basket weaving. Would a teacher using better textbooks and more advanced standards make you love this activity and learn from it? Or would a teacher who expresses enthusiasm for the subject and for the students, who uses methods that engage students in active social activities in every lesson, obtain better outcomes of every kind? I hope this question answers itself.

I once saw a science teacher in Baltimore teaching anatomy by having students take apart steamed crabs (a major delicacy in Baltimore). The kids were working in groups, laughing at this absurd idea, but they were learning like crazy, and learning to love science. I would submit that this experience, these connections among students, this laughter are the standards our schools need to attain. It’s not about textbooks, nor professional development on textbooks.

Another Baltimore teacher I knew taught a terrific unit on ancient Egypt. The students made their own sarcophagi, taking into the afterlife the things most important to them. Then the class went on a field trip to a local museum with a mummy exhibit, and finally, students made sarcophagi representing what Egyptians would value in the afterlife.  That’s what effective teaching is about.

The great 18th century Swedish botanist Carl Linnaeus took his students on walks into the forests, fields, and lakes around Uppsala University. Whatever they found, they brought back held high, singing and playing conch shell trumpets in triumph. That’s what effective teaching is about.

In England, I saw a teacher teaching graph coordinates. She gave each student’s desk a coordinate, from 1, 1 to 5, 5, and put up signs labeled North, South, East, and West on the walls. She then made herself into a robot, and the students gave her directions to get from one coordinate to another. The students were laughing, but learning. That’s what effective teaching is about.

No textbook can compete with these examples of inspired teaching. Try to remember your favorite textbook, or your least favorite. I can’t think of a single one. They were all the same. I love to read and love to learn, and I’m sure anyone reading this blog is the same. But textbooks? Did a textbook ever inspire you to want to learn more or give you enthusiasm for any subject?

This is a privileged glimpse into the perfectly obvious to which we should devote our efforts in innovation and professional development. A textbook or standard never ignited a student’s passion or curiosity. Textbooks and standards may be necessary, but they will not transform our schools. Let’s use what we already know about how learning really happens, and then make certain that every teacher knows how to do the things that make learning engage students’ hearts and emotions, not just their minds.

References

Cheung, A., Slavin, R. E., Kim, E., & Lake, C. (2016). Effective secondary science programs: A best-evidence synthesis. Journal of Research in Science Teaching, 54(1), 58-81. doi: 10.1002/tea.21338

Inns, A., Lake, C., Byun, S., Shi, C., & Slavin, R. E. (2019). Effective Tier 1 reading instruction for elementary schools: A systematic review. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, D.C.

Pellegrini, M., Inns, A., Lake, C., & Slavin, R. E. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Manuscript submitted for publication.

Sawchuk, S. (2019, March 13). New texts failed to lift test scores in six-state study. Education Week, 38(25), 1, 17.

Slavin, R.E., Lake, C., & Groff, C. (2009). Effective programs in middle and high school mathematics: A best-evidence synthesis. Review of Educational Research, 79 (2), 839-911.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

What Works in Teaching Writing?

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel. The idea is to write it so that people hear it and it slides through the brain and goes straight to the heart.”   -Maya Angelou

It’s not hard to make an argument that creative writing is the noblest of all school subjects. To test this, try replacing the word “write” in this beautiful quotation from Maya Angelou with “read” or “compute.” Students must be proficient in reading and mathematics and other subjects, of course, but in what other subject must learners study how to reach the emotions of their readers?

Good writing is the mark of an educated person. Perhaps especially in the age of electronic communications, we know most of the people we know largely through their writing. Job applications depend on the ability of applicants to make themselves interesting to someone they’ve never seen. Every subject (science, history, reading, and many more) requires its own exacting types of writing.

Given the obvious importance of writing in people’s lives, you’d naturally expect that writing would occupy a central place in instruction. But you’d be wrong. Before secondary school, writing plays third fiddle to the other two of the 3Rs, reading and ‘rithmetic, and in secondary school, writing is just one among many components of English. College professors, employers, and ordinary people complain incessantly about the poor writing skills of today’s youth. The fact is that writing is not attended to as much as it should be, and the results are apparent to all.

Not surprisingly, the inadequate focus on writing in U.S. schools extends to an inadequate focus on research on this topic as well. My colleagues and I recently carried out a review of research on secondary reading programs. We found 69 studies that met rigorous inclusion criteria (Baye, Lake, Inns, & Slavin, in press). Recently, our group completed a review of secondary writing using similar inclusion standards, under funding from the Education Endowment Foundation in England (Slavin, Lake, Inns, Baye, Dachet, & Haslam, 2019). Yet we found only 14 qualifying studies, of which 11 were in secondary schools (we searched down to third grade).

To be fair, our inclusion standards were pretty tough. We required that studies compare experimental groups to randomized or matched control groups on measures independent of the experimental treatment. Tests could not have been made up by teachers or researchers, and they could not be scored by the teachers who taught the classes. Experimental and control groups had to be well-matched at pretest and have nearly equal attrition (loss of subjects over time). Studies had to have a duration of at least 12 weeks. Studies could include students with IEPs, but they could not be in self-contained, special education settings.

We divided the studies into three categories. One was studies of writing process models, in which students worked together to plan, draft, revise, and edit compositions in many genres. A very similar category was cooperative learning models, most of which also used a plan-draft-revise-edit cycle, but placed a strong emphasis on use of cooperative learning teams. A third category was programs that balanced writing with reading instruction.

Remarkably, the average effect sizes of each of the three categories were virtually identical, with a mean effect size of +0.18. There was significant variation within categories, however. In the writing process category, the interesting story concerned a widely used U.S. program, Self-Regulated Strategy Development (SRSD), evaluated in two qualifying studies in England. In one, the program was implemented in rural West Yorkshire and had huge impacts on struggling writers, the students for whom SRSD was designed. The effect size was +0.74. However, in a much larger study in urban Leeds and Lancashire, outcomes were not so positive (ES = +0.01), although effects were largest for struggling writers. There were many studies of SRSD in the U.S., but none of them qualified, due to lack of control groups, brief durations, measures made up by researchers, or settings in self-contained special education classrooms.

Three programs that emphasize cooperative learning had notably positive impacts. These were Writing Wings (ES = +0.13), Student Team Writing (ES = +0.38), and Expert 21 (ES = +0.58).

Among programs emphasizing reading and writing, two had a strong focus on English learners: Pathway (ES = +0.32) and ALIAS (ES = +0.18). Another two approaches had an explicit focus on preparing students for freshman English: College Ready Writers Program (ES = +0.18) and Expository Reading and Writing Course (ES = +0.13).

Looking across all categories, there were several factors common to successful programs that stood out:

  • Cooperative Learning. Cooperative learning usually aids learning in all subjects, but it makes particular sense in writing, as a writing team gives students opportunities to give and receive feedback on their compositions, facilitating their efforts to gain insight into how their peers think about writing, and giving them a sympathetic and ready audience for their writing.
  • Writing Process. Teaching students step-by-step procedures to work with others to plan, draft, revise, and edit compositions in various genres appears to be very beneficial. The first steps focus on helping students get their ideas down on paper without worrying about mechanics, while the later stages help students progressively improve the structure, organization, grammar, and punctuation of their compositions. These steps help students reluctant to write at all to take risks at the outset, confident that they will have help from peers and teachers to progressively improve their writing.
  • Motivation and Joy in Self-Expression. In the above quote, Maya Angelou talks about the importance in writing of “sliding through the brain to get to the heart.” But to the writer, this process must work the other way, too. Good writing starts in the heart, with an urge to say something of importance. The brain shapes writing to make it readable, but writing must start with a message that the writer cares about. This principle is demonstrated most obviously in writing process and cooperative learning models, where every effort is made to motivate students to find exciting and interesting topics to share with their peers. In programs balancing reading and writing, reading is used to help students have something important to write.
  • Extensive Professional Development. Learning to teach writing well is not easy. Teachers need opportunities to learn new strategies and to apply them in their own writing. All of the successful writing programs we identified in our review provided extensive, motivating, and cooperative professional development, often designed as much to help teachers catch the spirit of writing as to follow a set of procedures.

Our review of writing research found that there is considerable consensus in how to teach writing. There were more commonalities than differences across the categories. Effects were generally positive, however, because control teachers were not using these consensus strategies, or were not doing so with the skills imparted by the professional development characteristic of all of the successful approaches.

We cannot expect writing instruction to routinely produce Maya Angelous or Mark Twains. Great writers add genius to technique. However, we can create legions of good writers, and our students will surely benefit.

References

Baye, A., Lake, C., Inns, A., & Slavin, R. (in press). Effective reading programs for secondary students. Reading Research Quarterly.

Slavin, R. E., Lake, C. Inns, A., Baye, A., Dachet, D., & Haslam, J. (2019). A quantitative synthesis of research on writing approaches in Key Stage 2 and secondary schools. London: Education Endowment Foundation.

Photo credit: Kyle Tsui from Washington, DC, USA [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

A Mathematical Mystery

My colleagues and I wrote a review of research on elementary mathematics (Pellegrini, Lake, Inns, & Slavin, 2018). I’ve written about it before, but I wanted to home in on one extraordinary set of findings.

In the review, there were 12 studies that evaluated programs focused on providing elementary teachers with professional development on mathematics content and mathematics-specific pedagogy. I was sure that this category would show positive effects on student achievement, but it did not. The most remarkable (and depressing) finding involved the huge year-long Intel study, in which 80 teachers received 90 hours of very high-quality in-service training during the summer, followed by an additional 13 hours of group discussions of videos of the participants’ class lessons. Teachers using this program were compared to 85 control teachers. After all this, students in the Intel classes scored slightly worse than controls on standardized measures (Garet et al., 2016).

If the Intel study were the only disappointment, one might look for flaws in their approach or their evaluation design or other things specific to that study. But as I noted earlier, all 12 of the studies of this kind failed to find positive effects, and the mean effect size was only +0.04 (n.s.).

Lest anyone jump to the conclusion that nothing works in elementary mathematics, I would point out that this is not the case. The most impactful category was tutoring programs, but that is a special case. The second most impactful category, however, had many features in common with professional development focused on mathematics content and pedagogy, yet had an average effect size of +0.25. This category consisted of programs focused on classroom management and motivation: cooperative learning, classroom management strategies using group contingencies, and programs focusing on social-emotional learning.

So there are successful strategies in elementary mathematics, and they all provided a lot of professional development. Yet programs for mathematics content and pedagogy, all of which also provided a lot of professional development, did not show positive effects in high-quality evaluations.

I have some ideas about what may be going on here, but I advance them cautiously, as I am not certain about them.

The theory of action behind professional development focused on mathematics content and pedagogy assumes that elementary teachers have gaps in their understanding of mathematics content and mathematics-specific pedagogy. But perhaps whatever gaps they have are not so important. Here is one example. Leading mathematics educators today take a very strong view that fractions should never be taught using pizza slices, but only using number lines. The idea is that pizza slices are limited to certain fractional concepts, while number lines are more inclusive of all uses of fractions. I can understand and, in concept, support this distinction. But how much difference does it make? Students who are learning fractions can probably be divided into three pizza slices. One slice represents students who understand fractions very well, however they are presented, and another slice consists of students who have no earthly idea about fractions. The third slice consists of students who could have learned fractions if it were taught with number lines but not pizzas. The relative sizes of these slices vary, but I’d guess the third slice is the smallest. Whatever it is, the number of students whose success depends on fractions vs. number lines is unlikely to be large enough to shift the whole group mean very much, and that is what is reported in evaluations of mathematics approaches. For example, if the “already got it” slice is one third of all students, and the “probably won’t get it” slice is also one third, the slice consisting of students who might get the concept one way but not the other is also one third. If the effect size for the middle slice were as high as an improbable +0.20, the average for all students would be less than +0.07, averaging across the whole pizza.
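The dilution argument above is easy to check. Here is a minimal sketch of the arithmetic, using the hypothetical one-third slices and the (improbably high) +0.20 effect for the middle slice described in the paragraph above.

```python
# Hypothetical illustration of how a real benefit for one subgroup is
# diluted in the whole-group mean when the other subgroups are unaffected.

slices = [
    ("already understand fractions",        1/3, 0.00),
    ("helped by number lines, not pizzas",  1/3, 0.20),
    ("unlikely to get it either way",       1/3, 0.00),
]

overall_effect = sum(share * effect for _, share, effect in slices)
print(round(overall_effect, 3))  # about 0.067, i.e., less than +0.07
```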

[Image: students divided into pizza slices]

A related possibility concerns teachers’ knowledge. Assume that one slice of teachers already knows a lot of the content before the training. Another slice is not going to learn or use it. The third slice, teachers who did not know the content before but will use it effectively after training, is the only one likely to show a benefit, and that benefit will be swamped by the zero effects for the teachers who already knew the content and for those who will not learn or use it.

If teachers are standing at the front of the class explaining mathematical concepts, such as proportions, a certain proportion of students are learning the content very well and a certain proportion are bored, terrified, or just not getting it. It’s hard to imagine that the successful students are gaining much from a change of content or pedagogy, and only a small proportion of the unsuccessful students will all of a sudden understand what they did not understand before, just because it is explained better. But imagine that instead of only changing content, the teacher adopts cooperative learning. Now the students are having a lot of fun working with peers. Struggling students have an opportunity to ask for explanations and help in a less threatening environment, and they get a chance to see and ultimately absorb how their more capable teammates approach and solve difficult problems. The already high-achieving students may become even higher achieving, because as every teacher knows, explanation helps the explainer as much as the student receiving the explanation.

The point I am making is that the findings of our mathematics review may reinforce a general lesson we take away from all of our reviews: Subtle treatments produce subtle (i.e., small) impacts. Students quickly establish themselves as high, average, or low achievers, after which it is difficult to fundamentally change their motivations and approaches to learning. Making modest changes in content or pedagogy may not be enough to make much difference for most students. Instead, dramatically changing motivation, providing peer assistance, and making mathematics more fun and rewarding seem more likely to produce a significant change in learning than subtle changes in content or pedagogy. That is certainly what we have found in systematic reviews of elementary mathematics and of elementary and secondary reading.

Whatever the student outcomes are relative to controls, there may be good reasons to improve mathematics content and pedagogy. But if we are trying to improve achievement for all students, the whole pizza, we need methods that make a more profound impact on all of them. And that is true any way you slice it.

References

Garet, M. S., Heppen, J. B., Walters, K., Parkinson, J., Smith, T. M., Song, M., & Borman, G. D. (2016). Focusing on mathematical knowledge: The impact of content-intensive teacher professional development (NCEE 2016-4010). Washington, DC: U.S. Department of Education.

Pellegrini, M., Inns, A., Lake, C., & Slavin, R. E. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Paper presented at the Society for Research on Educational Effectiveness, Washington, DC.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

First There Must be Love. Then There Must be Technique.

I recently went to Barcelona. This was my third time in this wonderful city, and for the third time I visited La Sagrada Familia, Antoni Gaudi’s breathtaking church. It was begun in the 1880s, and Gaudi worked on it from the time he was 31 until he died in 1926 at 74. It is due to be completed in 2026.

Every time I go, La Sagrada Familia has grown even more astonishing. In the nave, massive columns branching into tree shapes hold up the spectacular roof. The architecture is extremely creative, and wonders lie around every corner.

[Image: La Sagrada Familia, Barcelona]

I visited a new museum under the church. At the entrance, it had a Gaudi quote:

First there must be love.

Then there must be technique.

This quote sums up La Sagrada Familia. Gaudi used complex mathematics to plan his constructions. He was a master of technique. But he knew that it all meant nothing without love.

In writing about educational research, I try to remind my readers of this from time to time. There is much technique to master in creating educational programs, evaluating them, and fairly summarizing their effects. There is even more technique in implementing proven programs in schools and classrooms, and in creating policies to support the use of proven programs. But what Gaudi reminds us of is just as essential in our field as it was in his. We must care about technique because we care about children. Caring about technique just for its own sake is of little value. Too many children in our schools are failing to learn adequately. We cannot say, “That’s not my problem, I’m a statistician,” or “That’s not my problem, I’m a policymaker,” or “That’s not my problem, I’m an economist.” If we love children and we know that our research can help them, then it is a problem for all of us. All of us go into education to solve real problems in real classrooms. That is the structure we are all building together over many years. Building this structure takes technique, and the skilled efforts of many researchers, developers, statisticians, superintendents, principals, and teachers.

Each of us brings his or her own skills and efforts to this task. None of us will live to see our structure completed, because education keeps growing in techniques and capability. But as Gaudi reminds us, it’s useful to stop from time to time and remember why we do what we do, and for whom.

Photo credit: By Txllxt TxllxT [CC BY-SA 4.0  (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

On Motivation

Once upon a time there was a man standing on a city street selling pencils from a tin cup. An old friend came by and recognized him.

“Hank!” said his friend. “What happened to you? Didn’t you have a big job at the Acme Dog Food Company?”

Hank hung his head. “I did,” he said mournfully. “I was its chief scientist. But it closed down, and it was all my fault!”

“What happened?” asked his friend.

“We decided to make the best dog food ever. We got together the top experts in dog nutrition in the whole world to find out what dogs really need. We put in the very best ingredients, no matter what they cost.”

“That sounds wonderful!” exclaimed the friend.

“It sounded great,” sighed Hank, “but the darned dogs wouldn’t eat it!”

In educational development, research, and dissemination, I think we often make the same mistake the mythical Acme Dog Food Company made. We create instructional materials and software completely in accord with everything the experts recommend. Today, for example, someone might create a program aligned with the Common Core or other college- and career-readiness standards, one that uses personalization and authentic problem solving, and so on. Not that there is anything wrong with these concepts, but are they enough?

The key factor, I’d argue, is motivation. No matter how nutritious our instruction is, it has to appeal to the kids. In a recent review of secondary reading programs that my colleagues and I wrote (www.bestevidence.org), we found that most of the programs evaluated were 100% in accord with what the experts suggest. In particular, most emphasized the teaching of metacognitive skills, which has long been the touchstone for secondary reading, and many also provided an extra instructional period every day, in accord with the popular emphasis on extra-time strategies.

However, the approaches that made the biggest differences in reading outcomes were not those that provided extra time. They included small-group or individual tutoring, cooperative learning, BARR (a program focused on building relationships between teachers and students), and a few technology approaches. The successful approaches usually included metacognitive skills, but so did many programs that did not show positive outcomes.

What unites the successful strategies is that they all get to the head through the heart.

Tutoring allows total personalization of instruction, but it also lets tutors and students build personal, close relationships. BARR (Building Assets, Reducing Risks) is all about building personal relationships. Cooperative learning focuses on building relationships among students, and adding an element of fun and engagement to daily lessons. Some technology programs are also good at making lessons fun and engaging.

I can’t say for sure that these were the factors that made the difference in learning outcomes, but it seems likely. I’d never say that instructional content and strategies don’t matter. They do. But the very best teaching methods with the very best content are unlikely to enhance learning very much unless they make the kids eager to learn.