Reading by Third Grade: Thinking Beyond Retention

Everyone agrees that children who read well by third grade are far more likely than those who do not to succeed in later education and in life. For example, a recent report from the Annie E. Casey Foundation found that poor readers in third grade were four times more likely than adequate readers to eventually drop out of school. Based on this kind of information, many states have passed or are considering legislation requiring students who are not reading at an established level by the end of third grade to repeat the grade. Usually, the legislation also includes some funding for schools to help struggling students meet the standard.

These state initiatives raise important questions. Are they good for the students who are retained? And are they good for the overall school system?

Retention has been studied for many years, and the research is reasonably clear. Children who repeat a grade do very poorly in reading and, ultimately, in graduation rates compared to similarly low-achieving age mates who were promoted. In comparison to their new grade mates, they show a short-term gain because they are older when they take the test, but this advantage wears off within a few years.

Reading-by-third-grade policies could end up being beneficial, however, because they provide substantial incentives to use effective strategies to prevent reading failure. Holding a child back is very expensive, incurring an extra year of per-pupil cost (roughly $10,000). In states or districts where as many as 50 percent of third graders might not meet the standard, an elementary school of 500 students (roughly 70 children per grade) could be holding back about 35 children, at a cost of about $350,000 per year.
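To see where that figure comes from, here is a minimal back-of-the-envelope sketch. The assumptions are illustrative only: a K-6 school with enrollment spread evenly across seven grades and the rough $10,000 per-pupil cost cited above.

```python
# Back-of-the-envelope arithmetic behind the ~$350,000 figure.
# Assumptions (illustrative only): a K-6 school with enrollment spread
# evenly across seven grade levels, and ~$10,000 in annual per-pupil cost.
enrollment = 500
grade_levels = 7
per_pupil_cost = 10_000
retention_rate = 0.50  # share of third graders not meeting the standard

third_graders = enrollment / grade_levels   # about 71 students
retained = retention_rate * third_graders   # about 36 students
extra_cost = retained * per_pupil_cost      # one extra year of schooling each

print(f"~{retained:.0f} students retained, ~${extra_cost:,.0f} per year")
# ~36 students retained, ~$357,143 per year -- roughly the $350,000 cited above
```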

School leaders now have whatever funds their state provides to reduce retentions, and they have a strong incentive to spend whatever it takes to bring retention rates down; $350,000 per year is a lot of money to put toward a well-defined problem concentrated in pre-K through third grade.

There is no aspect of school improvement more extensively studied than preventing reading failure in the early grades. There are many proven programs, especially one-to-one and small-group tutoring, cooperative learning, and whole-school reform models.

But many schools in reading-by-third-grade states are simply hiring reading specialists to tutor children one-to-one. While tutoring can be very effective, in most schools one reading specialist cannot possibly tutor enough students to put them all over the line.

The research would suggest that different types of intervention are needed for different children depending on their age and how far behind they are, as in the made-up curve below. In this curve, students in the green section are likely to be reading at grade level, though high-quality teaching is still needed to keep them on track. Those in the yellow section are not on track for success, but are not too far from the criterion. Those in the orange and red zones need intensive help to reach grade level.

[Figure: a hypothetical distribution of students by reading level, divided into green, yellow, orange, and red zones relative to the third-grade standard]

A school strategy focused only on one-to-one tutoring for the orange and red zones would not be very effective, as those students have such a long way to go. Using tutors only for students in the yellow zone is ethically questionable and is still unlikely to reach enough children.

Instead, schools need a thoughtful, integrated strategy to get the maximum number of students to the standard. This could involve using proven but relatively inexpensive strategies for all students, high-quality small-group tutoring for students in the yellow and orange zones, and high-quality one-to-one tutoring for children in the red zone, as illustrated below.

[Figure: the same distribution, with the interventions described above mapped onto each zone]
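As one way to picture how such a tiered plan might be operationalized, here is a minimal sketch that sorts students into the zones above by their reading scores. The cut points and the function name are hypothetical, chosen purely for illustration, and are not drawn from any particular program.

```python
# Hypothetical tiering of students by reading score, mirroring the zones above.
# The cut points (fractions of the grade-level standard) are invented for this sketch.
def assign_tier(score: float, standard: float) -> str:
    """Map a reading score to an intervention tier relative to a grade-level standard."""
    ratio = score / standard
    if ratio >= 1.00:
        return "green: proven whole-class strategies only"
    elif ratio >= 0.85:
        return "yellow: add small-group tutoring"
    elif ratio >= 0.70:
        return "orange: add intensive small-group tutoring"
    else:
        return "red: add one-to-one tutoring"

# Example: a student scoring 52 against a standard of 80 falls in the red zone.
print(assign_tier(52, 80))  # red: add one-to-one tutoring
```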

If schools use the reading-by-third-grade movement as an opportunity to use proven practices throughout the primary grades, they can reap substantial savings by avoiding unnecessary retentions, and most importantly, they can make a life-changing difference for all of their students.

Invest in What Works

Amidst all the hue and cry about the fiscal cliff and the debt limit, a voice of reason made a plea so reasonable and nonpartisan that it was of course ignored.


Senator Mary Landrieu (D-LA), in the December 20th Congressional Record, wrote a plea for investing taxpayer dollars in what works: “…we have a responsibility to our taxpayers to improve outcomes for young people and their families by driving Federal funds more efficiently toward evidence-based, results-oriented solutions.” On a particularly encouraging note, Senator Landrieu noted that a bipartisan consensus is growing in support of programs such as Investing in Innovation (i3) and the Social Innovation Fund (SIF), which support scaling up of proven programs in education and social services, respectively.

There is never a good time to waste taxpayers’ money, but in a time of fiscal belt-tightening, the need to focus resources on approaches proven to work is particularly compelling. What taxpayers want is better schools and more effective social services, not just more spending.

Today, there is a growing set of programs that have been proven to work in large experiments under realistic conditions. Programs such as i3 and SIF help build up the “shelf” of proven programs and help them begin to scale up, and this is unprecedented. But it is time to move to the next step: Encouraging schools and social services agencies to make use of proven programs.

For example, i3 is a $150 million a year program designed to scale up proven models, but Title I, the largest federal education program at $15 billion a year, is untouched by the evidence-based reform movement. The role of evidence is growing in federal funding, but as Senator Landrieu argues, we need to “target investments in interventions with the strongest evidence of effectiveness” across the board, not just in demonstrations.

Wise use of federal resources is not a Democratic or Republican issue. It is a moral imperative, both in the bond of trust between taxpayers and government and in ensuring effective services to vulnerable young people.

Taking the Guesswork Out of Policy

In a wonderful article in the Washington Post on December 7, Dylan Matthews makes the altogether rational argument that Congress should routinely authorize evaluations of programs it is thinking of signing into law:

“In a perfect world, (reporting on policy) would be a kind of science reporting. Just as my colleagues at the health desk often explain which medicines are effective and which are a bust, I’d ideally be able to describe what sociologists, economists, and political scientists have discovered about which policies work.”

Followers of Sputnik will not be surprised to hear that I strongly support this position, and am delighted to see it expressed in the Post, especially after a recent Op Ed by Jon Baron in the New York Times made similar arguments. But I did want to build a bit on Matthews’ argument. In fact, there is often evidence on whether a given program is likely to work, but Congress routinely ignores it. It may be that the evidence is mixed or difficult to understand. In other cases, the evidence is clear but for political or ideological reasons it does not matter.

Further, the vast majority of government funding goes into programs that already exist, and are so politically popular that they are sure to exist for a long time. In education, the classic example is Title I, a $15 billion program that has provided funding to disadvantaged schools since the Johnson administration.

So here is my friendly amendment to Matthews’ article: In addition to testing out initiatives that Congress is considering, let’s also proactively test out innovative ways of using the money that government already spends. For example, let’s fund the creation, evaluation, and dissemination of programs to be used under funding from Title I, or IDEA, or other long-established programs certain to continue in some form. This is a goal of the federal Investing in Innovation (i3) program, of course, which is funding development, evaluation, and scale-up of proven programs, but at $150 million annually, i3 is only 1% of Title I funding and a trivial proportion of all education funding.

Building up a strong, well-validated set of solutions to the enduring problems of education is the best investment Congress could make in education, and the current Congress and administration have gone further in this direction than any before them. Yet there remains a lot of work to do to ensure that the best programs and practices become available to all children, especially those in need.


What Makes an Effective School Principal? Reality-Based Principal Assessments

Note: This is a guest post by Steven M. Ross, Professor in the Center for Research and Reform in Education at Johns Hopkins University. The post originally appeared on the Bush Center Blog.

Evaluating the effectiveness of school principals is in everyone’s best interests–students, teachers, parents, and arguably principals most of all. But what are fair and valid measures of success? Unfortunately, past practices in defining and evaluating principals’ effectiveness raise concerns about consistency and fairness. A few months ago, the National Association of Elementary School Principals and the National Association of Secondary School Principals released a report, “Rethinking Principal Evaluation: A New Paradigm Informed by Research and Practice,” which I co-authored with Matthew Clifford. We hope that this report stimulates thinking at the federal, state, and district levels about using principal evaluations to support two essential functions: (a) judging principals’ effectiveness and (b) helping principals to increase their effectiveness. Below are some of the main points we raised:

  • For schools to be effective in raising student achievement, it is imperative that prospective principals with strong leadership potential be identified, recruited, trained, and supported. This goal needs to be a key focus of every district and state.
  • Principals must be held accountable for ensuring high student achievement. Yet expectations for success need to be tempered by realities. First, principals affect student learning only indirectly through the school environments they create, educational programs they bring in, and the teachers they recruit and develop. Second, improving conditions for instruction and developing faculty take time. Therefore, evaluations of progress and feedback for improvement are needed regularly and frequently. Schools and school districts also differ considerably in the types of students and communities they serve. Principals who are placed in more challenging contexts may need more time and resources to raise achievement to the high levels desired. Evaluation criteria that put too many eggs in one assessment basket, expect positive change to happen immediately, and ignore contextual variables seem likely to misclassify many principals, boosting some who are ineffective and downgrading others who are doing good things that simply need more time to work.

To increase the validity, utility, and fairness of evaluations, multiple indicators should be used.

  • Student Growth and Achievement: The degree to which the principal succeeds in fostering school-wide gains and high performance in student achievement, behavior, and personal growth and development.
  • Professional Growth and Learning: The degree to which the principal has followed through on professional development or learning plans to improve personal skills and practices.
  • School Planning and Progress: The principal’s success at managing the school planning process to achieve school improvement goals and increase student learning.
  • School Culture: The principal’s development of a positive school culture that promotes safety, collaboration, high expectations for teachers and students, and connectedness with the community.
  • Professional Qualities and Instructional Leadership: The principal’s leadership knowledge, skills, and competencies.
  • Stakeholder Support and Engagement: The principal’s ability to build strong support and involvement by teachers, parents, and the community.

None of these domains is difficult or expensive to include in a district or state evaluation. The risks of basing judgments of principals, and feedback to them, on insufficient or invalid information would likely be far greater than the modest cost of including them.


Effect Size Matters in Educational Research

Readers are often – and understandably – frustrated when it comes to reports of educational experiments and effect sizes. Let’s say a given program had an effect size of +0.30 (or 30% of a standard deviation). Is that large? Small? Is the program worth doing or worth forgetting? There is no simple answer, because it depends on the quality of the study.
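For readers who want to see the statistic itself, here is a minimal sketch of how an effect size of this kind is computed, using made-up scores and assuming a pooled standard deviation as the denominator (actual studies vary in exactly which standard deviation they use).

```python
import statistics

def effect_size(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference: (treatment mean - control mean) / pooled SD."""
    mean_t, mean_c = statistics.mean(treatment), statistics.mean(control)
    var_t, var_c = statistics.variance(treatment), statistics.variance(control)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Made-up test scores: the treatment group scores two points higher on average.
control = [74, 80, 85, 70, 83, 76, 88, 73]
treatment = [score + 2 for score in control]
print(round(effect_size(treatment, control), 2))  # 0.31 -- close to the +0.30 example above
```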

The Institute of Education Sciences recently issued a report by Mark Lipsey and colleagues focusing on how to interpret effect sizes. It is very useful for researchers, as intended, but still not so useful for readers of research. I wanted to point out some key conclusions of this report and a few additional thoughts.

First, the Lipsey et al. report dismisses the old Cohen characterization of effect sizes of +0.20 as “small,” +0.50 as “moderate,” and +0.80 as “large.” Those numbers applied to small, highly controlled lab studies. In real-life educational experiments with broad measures of achievement and random assignment to treatments, effect sizes as large as +0.50, much less +0.80, are hardly ever seen, except on occasion in studies of one-to-one tutoring.

The larger issue is that studies vary in quality, and many features of studies give hugely inflated estimates of effect sizes. Among the factors to watch for are outcome measures made by the researchers rather than broad, independent measures of achievement, very small samples, and very brief study durations.

Studies that incorporate any of these elements can easily produce effect sizes of +1.00 or more. Such studies should be disregarded by readers serious about knowing what works in real classrooms and what does not.

The Lipsey et al. review notes that in randomized studies using “broad measures” (such as state test scores or standardized measures), effect sizes across elementary and middle school studies averaged only +0.08. Across all types of measures, average effect sizes were +0.40 for one-to-one tutoring, +0.26 for small-group interventions, +0.18 for whole-classroom treatments, and +0.10 for whole-school treatments. Perhaps we should call effect sizes from high-quality studies equivalent to those of one-to-one tutoring “high,” and then work backward from there.

The real point of the Lipsey et al. report is that the quality and nature of studies has to be taken into account in interpreting effect sizes. I wish it were simpler, but it is impossible to be simple without being misleading. A good start would be to stop paying attention to outcomes on researcher-made measures and very small or brief studies, so that effect sizes can at least be understood as representing outcomes educators care about.


Schools That Beat the Odds…On Purpose

Recently, Public Agenda released a report on nine Ohio elementary and secondary schools that were “beating the odds,” which means demonstrating outstanding student achievement in high-poverty schools. Of course, many other organizations have also identified such schools in the past, notably the Education Trust.

What was different in the Public Agenda report is that, at least in the case of the elementary schools, two of the three schools highlighted used a comprehensive school reform model–Success for All*. In other words, those schools were using a program that others could directly adopt, instead of trying on their own to figure out what all successful schools have to figure out: how to cultivate effective leadership, set high expectations, implement strong professional development, effectively use data, and so on.

The difference between a set of principles and a replicable program is night and day. A replicable program implements similar principles, but does so on purpose, and knows how to do it again and again. In one long-ago baseball game, Babe Ruth famously pointed at the right field wall and then hit the next pitch over it. Lots of baseball players hit home runs, but what made Babe’s home run memorable is that he said he was going to do it and then he did it, on purpose.

There are lots of schools that do find a way to “beat the odds” on their own, and I am not arguing that the only path to success is any particular form of comprehensive school reform. But using a proven approach removes the trial and error and false starts; the investment in innovation has already been made by someone else. Implemented with fidelity, a proven program removes much of the risk that typically lies on the path to success. We need a lot more programs capable of helping struggling schools beat the odds–on purpose.

*Note: Robert Slavin is cofounder and chairman of the board for the Success for All Foundation.


Technology without Supports: Like Cotton Candy for Breakfast

Note: This is a guest post by Monica Beglau, Ed.D., Executive Director, and Lorie Kaplan, Ph.D., eMINTS Program Director for the eMINTS National Center at the University of Missouri.

Does this sound familiar? “Our school just purchased the latest mobile technology tablets for all of the students in our elementary school. Does anyone know where we could get some training about how to use them and what apps we should buy?” We’ve heard variations on this theme across our state and nationally for several years. Too often, as others have noted, the allure of the device outweighs practical planning for the implementation. Appropriate, high-quality professional development and ongoing support for teachers are essential to success. Just as sweet, fluffy cotton candy hardly fits the bill for a nutritious breakfast, short-term “summer boot camps” or a few hours of professional development after school leave educators hungry for more and without the necessary “nutrients” for effective instructional practices.

When we help schools and districts successfully implement technology initiatives, we turn to the evidence that has guided our work since 1999:

Leadership – leadership at all levels is essential and the principal is one of the most important variables in large-scale technology implementations.

  • A clear vision and goals connect the technology implementation to identified instructional priorities agreed upon by all stakeholders.
  • Ongoing professional development support provides principals with the knowledge and skills needed to achieve teacher buy-in and to understand best practices that support technology-transformed learning.

Technology support and infrastructure – beyond the computing devices themselves, it takes a high level of teamwork to ensure that classrooms are supported so that any barriers to using the devices are minimized.

  • A plan is in place to provide technology staff with the resources needed to support the devices, the network, and the maintenance issues that impact implementations.

Professional development – teachers and administrators have access to professional learning opportunities that incorporate evidence-based elements:

  • Active learning – participants must be engaged in interactive learning, not just listening to a lecture or presentation.
  • Coherence – participants must see an explicit connection between the professional development and their classroom practice or leadership.
  • Duration and intensity – if professional development contact time is less than 49 hours, it will produce little effect on student achievement.
  • Personalization – professional development must take into account the varied learning styles and preferences of educators.
  • Coaching – in-classroom or on-site coaching and mentoring is required to help educators “translate” what they learn in professional development sessions to their own classrooms or schools.

Creating programs that effectively address all of these aspects is very challenging. Few programs are able to encompass all of the evidence-based variables in meaningful ways. Project RED findings clearly articulate that the transformations in learning made possible by technology are highly dependent on a set of Key Implementation Factors (KIFs). In our experience, it is not possible for schools or districts to implement the KIFs without professional development that is built on the evidence-based practices outlined above. The precious time needed to provide our educators with the “nutrition” they need to help our students’ minds grow shouldn’t be wasted on empty calories that lack substance and depth.

The eMINTS National Center is a non-profit organization providing evidence-based professional development programs that have taught educators how to use technology effectively since 1999. The eMINTS instructional model has demonstrated positive effects on student achievement in more than 3,500 classrooms across the United States and in Australia. The Center is currently completing a study of the impact of professional development and technology in rural middle schools funded by the US Department of Education’s Investing in Innovation (i3) program.