With a Great Principal, Any Program Works. Right?

Whenever I speak about proven programs in education, someone always brings up what they consider a damning point. “Sure, there are programs proven to work. But it all comes down to the principal. A great principal can get any program to work. A weak principal can’t get any program to work. So if it’s all about the quality of principals, what do proven programs add?”

To counter this idea, consider Danica Patrick, one of the best-known NASCAR drivers of recent years. If you gave Danica and a less talented driver identical cars on an identical track, Danica would be sure to win. But instead of the finely tuned stock car she raced, what if you gave Danica a Ford Fiesta? Obviously, she wouldn’t have a chance. It takes a great car and a great driver to win NASCAR races.

Back to school principals: the same principle applies. Of course it is true that great principals get great results. But they get far better results if they are implementing effective programs.

In a high-quality evaluation, you might have 50 schools assigned at random either to use an experimental program or to serve in a control group that continues doing what it has always done, usually 25 schools in each group. Because of random assignment, each group is likely to contain about the same number of great principals, average principals, and below-average principals. Differences in principal skill therefore cannot be the reason for any differences in student outcomes. All other factors, such as schools’ initial achievement levels, socioeconomic factors, and the talents of teachers, are also balanced out by random assignment. They cannot cause one group (experimental) to do better than the other (control), because they are essentially equal across the two sets of schools.
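
For readers who like to see this concretely, here is a minimal simulation sketch (hypothetical numbers, not data from any real study) of how random assignment balances principal quality across the two groups:

```python
import random
import statistics

# Illustrative only: 50 hypothetical schools, each with a "principal quality"
# score drawn from a normal distribution (mean 0, SD 1).
random.seed(42)
schools = [random.gauss(0, 1) for _ in range(50)]

diffs = []
for _ in range(10_000):  # many hypothetical random assignments
    shuffled = random.sample(schools, k=50)
    experimental, control = shuffled[:25], shuffled[25:]
    diffs.append(statistics.mean(experimental) - statistics.mean(control))

# Across assignments, the experimental-control difference in principal quality
# averages about zero, so principal quality cannot systematically favor either group.
print(round(statistics.mean(diffs), 3))   # close to 0.0
print(round(statistics.stdev(diffs), 3))  # the size of chance variation around 0
```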

When a developer or publisher shows off the extraordinary success of a school or two, the exceptional outcomes may well be due to a combination of a great program and a great principal. Danica Patrick in a superior car would dominate a less skilled driver in a less powerful car. The same is true of programs in schools. Great programs led by great principals (with great staffs) can produce extraordinary outcomes, probably beyond what the great principals could have done on their own.

If you doubt this, consider Danica Patrick in her Ford Fiesta!

Photo credits: Left: By Sarah Stierch [CC BY 4.0  (https://creativecommons.org/licenses/by/4.0)], from Wikimedia Commons; Right: By Morio [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


First There Must be Love. Then There Must be Technique.

I recently went to Barcelona. This was my third time in this wonderful city, and for the third time I visited La Sagrada Familia, Antoni Gaudi’s breathtaking church. Construction began in the 1880s, and Gaudi worked on it from the time he was 31 until he died in 1926, at 73. It is due to be completed in 2026.

Every time I go, La Sagrada Familia has grown even more astonishing. In the nave, massive columns branching into tree shapes hold up the spectacular roof. The architecture is extremely creative, and wonders lie around every corner.


I visited a new museum under the church. At the entrance, it had a Gaudi quote:

First there must be love.

Then there must be technique.

This quote sums up La Sagrada Familia. Gaudi used complex mathematics to plan his constructions. He was a master of technique. But he knew that it all meant nothing without love.

In writing about educational research, I try to remind my readers of this from time to time. There is much technique to master in creating educational programs, evaluating them, and fairly summarizing their effects. There is even more technique in implementing proven programs in schools and classrooms, and in creating policies to support use of proven programs. But what Gaudi reminds us of is just as essential in our field as it was in his. We must care about technique because we care about children. Caring about technique just for its own sake is of little value. Too many children in our schools are failing to learn adequately. We cannot say, “That’s not my problem, I’m a statistician,” or “That’s not my problem, I’m a policymaker,” or “That’s not my problem, I’m an economist.” If we love children and we know that our research can help them, then it is a problem for all of us. All of us go into education to solve real problems in real classrooms. That is the structure we are all building together over many years. Building this structure takes technique, and the skilled efforts of many researchers, developers, statisticians, superintendents, principals, and teachers.

Each of us brings his or her own skills and efforts to this task. None of us will live to see our structure completed, because education keeps growing in techniques and capability. But as Gaudi reminds us, it’s useful to stop from time to time and remember why we do what we do, and for whom.

Photo credit: By Txllxt TxllxT [CC BY-SA 4.0  (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Ensuring That Proven Programs Stay Effective in Practice

On a recent trip to Scotland, I visited a ruined abbey. There, in what remained of its ancient cloister, was a sign containing a rule from the 1459 Statute of the Strasbourg Stonecutters’ Guild:

If a master mason has agreed to build a work and has made a drawing of the work as it is to be executed, he must not change this original. But he must carry out the work according to the plan that he has presented to the lords, towns, or villages, in such a way that the work will not be diminished or lessened in value.

Although the Stonecutters’ Guild was writing more than five centuries ago, it touched on an issue we face right now in evidence-based reform in education. Providers of educational programs may have excellent evidence that meets ESSA standards and demonstrates positive effects on educational outcomes. That’s terrific, of course. But the problem is that after a program has gone into dissemination, its developers may find that schools are not willing or able to pay for all of the professional development or software or materials used in the experiments that validated the program. So they may provide less, sometimes much less, to make the program cheaper or easier to adopt. This is the problem that concerned the Stonecutters of Strasbourg: Grand plans followed by inadequate construction.


In our work on Evidence for ESSA, we see this problem all the time. A study or studies show positive effects for a program. In writing up information on costs, personnel, and other factors, we usually look at the program’s website. All too often, we find that the program on the website provides much less than the program that was evaluated.  The studies might have provided weekly coaching, but the website promises two visits a year. A study of a tutoring program might have involved one-to-two tutoring, but the website sells or licenses the materials in sets of 20 for use with groups of that size. A study of a technology program may have provided laptops to every child and a full-time technology coordinator, while the website recommends one device for every four students and never mentions a technology coordinator.

Whenever we see this, we take on the role of the Stonecutters’ Guild, and we have to be as solid as a rock. We tell developers that we are planning to describe their program as it was implemented in their successful studies. This sometimes causes a ruckus, with vendors arguing that providing what they did in the study would make the program too expensive. “So would you like us to list your program (as it is on your website) as unevaluated?” we say. We are not unreasonable, but we are tough, because we see ourselves as helping schools make wise and informed choices, not helping vendors sell programs that may have little resemblance to the programs that were evaluated.

This is hard work, and I’m sure we do not get it right 100% of the time. A developer may also agree to an honest description but then quietly offer discounts and provide less than what our description says. All we can do is state the truth on our website, as best we can, about what was provided in the successful studies, and schools then have to insist that they receive the program as described.

The Stonecutters’ Guild, and many other medieval guilds, represented the craftsmen, not the customers. Yet part of their function was to uphold high standards of quality. It was in the collective interest of all members of the guild to create and maintain a “brand,” indicating that any product of the guild’s members met the very highest standards. Someday, we hope publishers, software developers, professional development providers, and others who work with schools will themselves insist on an evidence base for their products, and then demand that providers ensure that their programs continue to be implemented in ways that maximize the probability that they will produce positive outcomes for children.

Stonecutters only build buildings. Educators affect the lives of children, which in turn affect families, communities, and societies. Long after a stonecutter’s work has fallen into ruin, well-educated people and their descendants and communities will still be making a difference. As researchers, developers, and educators, we have to take this responsibility at least as seriously as did the stone masons of long ago.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Nevada Places Its Bets on Evidence

In Nevada, known as the land of big bets, taking risks is a way of life. The Nevada State Department of Education (NDE) is showing this in its approach to the ESSA evidence standards. Of course, many states are planning policies to encourage use of programs that meet the ESSA evidence standards, but to my knowledge, no state department of education has taken as proactive a stance in this direction as Nevada.


Under the leadership of their state superintendent, Steve Canavero, Deputy Superintendent Brett Barley, and Director of the Office of Student and School Supports Seng-Dao Keo, Nevada has taken a strong stand: Evidence is essential for our schools, they maintain, because our kids deserve the best programs we can give them.

All states are asked by ESSA to require strong, moderate, or promising programs (defined in the law) for low-achieving schools seeking school improvement funding. Nevada has made it clear to its local districts that it will enforce the federal definitions rigorously, and only approve school improvement funding for schools proposing to implement proven programs appropriate to their needs. The federal ESSA law also provides bonus points on various other applications for federal funding, and Nevada will support these provisions as well.

However, Nevada will go beyond these policies, reasoning that if evidence from rigorous evaluations is good for federal funding, why shouldn’t it be good for state funding too? For example, Nevada will require ESSA-type evidence for its own funding program for very high-poverty schools, and for schools serving many English learners. The state has a reading-by-third-grade initiative that will also require use of programs proven to be effective under the ESSA regulations. For all of the discretionary programs offered by the state, NDE will create lists of ESSA-proven supplementary programs in each area in which evidence exists.

Nevada has even taken on the holy grail: Textbook adoption. It is not politically possible for the state to require that textbooks have rigorous evidence of effectiveness to be considered state approved. As in the past, texts will be state adopted if they align with state standards. However, on the state list of aligned programs, two key pieces of information will be added: the ESSA evidence level and the average effect size. Districts will not be required to take this information into account, but by listing it on the state adoption lists the state leaders hope to alert district leaders to pay attention to the evidence in making their selections of textbooks.

The Nevada focus on evidence takes courage. NDE has been deluged with concerns from districts, from vendors, and from providers of professional development services. To each, NDE has made the same response: we need to move our state toward use of programs known to work. The difficult shift to new partnerships and new materials is worthwhile if it gives Nevada’s children better programs, which will translate into better achievement and a chance at a better life. Seng-Dao Keo describes the evidence movement in Nevada as a moral imperative: delivering proven programs to Nevada’s children and then working to see that they are well implemented and actually produce the outcomes Nevada expects.

Perhaps other states are making similar plans. I certainly hope so, but it is heartening to see one state, at least, willing to use the ESSA standards as they were intended to be used, as a rationale for state and local educators not just to meet federal mandates, but to move toward use of proven programs. If other states also do this, it could drive publishers, software producers, and providers of professional development to invest in innovation and rigorous evaluation of promising approaches, as it increases use of approaches known to be effective now.

NDE is not just rolling the dice and hoping for the best. It is actively educating its district and school leaders on the benefits of evidence-based reform, and helping them make wise choices. With a proper focus on needs assessment, access to information, and support for high-quality implementation, promoting the use of proven programs should be more like Nevada’s Hoover Dam: a sure thing.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Photo by: Michael Karavanov [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

Evidence-Based Does Not Equal Evidence-Proven


As I speak to educational leaders about using evidence to help them improve outcomes for students, there are two words I hear all the time that give me the fantods (as Mark Twain would say):

Evidence-based

I like the first word, “evidence,” just fine, but the second word, “based,” sort of negates the first one. The ESSA evidence standards require programs that are evidence-proven, not just evidence-based, for various purposes.

“Evidence-proven” means that a given program, practice, or policy has been put to the test. Ideally, students, teachers, or schools have been assigned at random to use the experimental program or to remain in a control group. The program is provided to the experimental group for a significant period of time, at least a semester, and then final performance on tests that are fair to both groups is compared, using appropriate statistics.

If your doctor gives you medicine, it is evidence-proven. It isn’t just the same color or flavor as something proven, and it isn’t just generally in line with what research suggests might be a good idea. Instead, it has been found to be effective, compared to current standards of care, in rigorous studies.

“Evidence-based,” on the other hand, is one of those wiggle words that educators love to use to indicate that they are up-to-date and know what’s expected, but don’t actually intend to do anything different from what they are doing now.

Evidence-based is today’s equivalent of “based on scientifically-based research” in No Child Left Behind. It sure sounded good, but what educational program or practice can’t be said to be “based on” some scientific principle?

In a recent Brookings article, Mark Dynarski wrote about state ESSA plans and about conversations he has heard among educators. He says that the plans are loaded with the term “evidence-based,” but give little indication of which specific proven programs states plan to implement, or how they plan to identify, disseminate, implement, and evaluate them.

I hope the ESSA evidence standards give leaders in even a few states the knowledge and the courage to insist on evidence-proven programs, especially in very low-achieving “school improvement” schools that desperately need the very best approaches. I remain optimistic that ESSA can be used to expand evidence-proven practices. But will it in fact have this impact? That remains to be proven.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

“Substantively Important” Isn’t Substantive. It Also Isn’t Important

Since it began in 2002, the What Works Clearinghouse has played an important role in finding, rating, and publicizing findings of evaluations of educational programs. It performs a crucial function for evidence-based reform. For this very reason, it needs to be right. But in several important ways, it uses procedures that are indefensible and have a big impact on its conclusions.

One of these relates to a study rating called “substantively important-positive.” This refers to study outcomes with an effect size of at least +0.25, but that are not statistically significant. I’ve written about this before, but the WWC has recently released a database of information on its studies that makes it easy to analyze WWC data on a large scale, and we have learned a lot more about this topic.
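
To see how easily an outcome can clear the +0.25 bar without coming anywhere near statistical significance, here is a small illustrative calculation (hypothetical numbers, not any particular WWC study). With roughly 20 students per group, even an effect size above +0.25 is typically far from significant:

```python
from math import sqrt
from scipy.stats import t  # Student's t distribution, used for the p-value

# Hypothetical example: 20 students per group and an observed effect size
# (Cohen's d) of +0.30, above the WWC's +0.25 "substantively important" bar.
n1 = n2 = 20
d = 0.30

# Convert the effect size into a two-sample t statistic and a two-tailed p-value.
t_stat = d * sqrt(n1 * n2 / (n1 + n2))
df = n1 + n2 - 2
p_value = 2 * t.sf(abs(t_stat), df)

print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # roughly t = 0.95, p = 0.35
# Nowhere near p < .05, yet under WWC rules an outcome like this could still
# help a program earn a "potentially positive" rating.
```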

Study outcomes rated as “substantively important-positive” can qualify a program as “potentially positive,” the second-highest WWC rating. “Substantively important-negative” findings (non-significant effect sizes below -0.25) can cause a study to be rated “potentially negative,” which can keep a program from ever receiving a positive rating: under current rules, a single “potentially negative” rating ensures that a program can never be rated better than “mixed,” even if other studies found hundreds of significant positive effects.

People who follow the WWC and know about “substantively important” may assume that, strange as the rule is, it is relatively rare in practice. But that is not true.

My graduate student, Amanda Inns, has just done an analysis of WWC data from their own database, and if you are a big fan of the WWC, this is going to be a shock. Amanda has looked at all WWC-accepted reading and math studies. Among these, she found a total of 339 individual outcomes rated “positive” or “potentially positive.” Of these, 155 (46%) reached the “potentially positive” level only because they had effect sizes over +0.25, but were not statistically significant.

Another 36 outcomes were rated “negative” or “potentially negative.” Twenty-six of these (72%) were categorized as “potentially negative” only because they had effect sizes below -0.25 and were not significant. I’m sure the patterns would be similar for subjects other than reading and math.

Put another way, almost half (48%) of outcomes rated positive/potentially positive or negative/potentially negative by the WWC were not statistically significant. As one example of what I’m talking about, consider a program called The Expert Mathematician. It had just one study with only 70 students in 4 classrooms (2 experimental and 2 control). The WWC re-analyzed the data to account for clustering, and the outcomes were nowhere near statistically significant, though they were greater than +0.25. This tiny study, and this study alone, caused The Expert Mathematician to receive the WWC “potentially positive” rating and to be ranked seventh among all middle school math programs. Similarly, Waterford Early Learning received a “potentially positive” rating based on a single tiny study with only 70 kindergarteners in 6 schools. The outcomes ranged from -0.71 to +1.11, and though the mean was more than +0.25, the outcome was far from significant. Yet this study alone put Waterford on the WWC list of proven kindergarten programs.
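
Why does accounting for clustering matter so much? Here is a rough sketch of the standard design-effect adjustment for a study like the one just described; the intraclass correlation of 0.20 is an assumed, typical value for achievement data, not a figure taken from the WWC report:

```python
# Rough design-effect sketch: when whole classrooms are assigned rather than
# individual students, students within a classroom resemble one another, so the
# "effective" sample size is much smaller than the raw student count.

n_students = 70   # total students in the study described above
n_clusters = 4    # classrooms (2 experimental, 2 control)
icc = 0.20        # assumed intraclass correlation, typical for achievement data

cluster_size = n_students / n_clusters
design_effect = 1 + (cluster_size - 1) * icc  # Kish design effect
effective_n = n_students / design_effect

print(f"design effect = {design_effect:.1f}, effective sample size = {effective_n:.0f}")
# Roughly 4.3 and 16: about 16 "effective" students, far too few to detect
# anything but an enormous effect, which is why the re-analysis was nowhere
# near statistical significance.
```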

I’m not taking any position on whether these particular programs are in fact effective. All I am saying is that these very small studies with non-significant outcomes say absolutely nothing of value about that question.

I’m sure that some of you nerdier readers who have followed me this far are saying to yourselves, “well, sure, these substantively important studies may not be statistically significant, but they are probably unbiased estimates of the true effect.”

More bad news. They are not. Not even close.

The problem, also revealed in Amanda Inns’ data, is that studies with large effect sizes but without statistical significance tend to have very small sample sizes (otherwise, they would have been significant). Across WWC reading and math studies that used individual-level assignment, median sample sizes were 48, 74, and 86 students for substantively important, significant, and indeterminate (non-significant with ES < +0.25) outcomes, respectively. For cluster studies, the medians were 10, 17, and 33 clusters, respectively. In other words, “substantively important” outcomes came from far smaller studies than other outcomes.

And small-sample studies greatly overstate effect sizes. Among all factors that bias effect sizes, small sample size is the most important (only use of researcher/developer-made measures comes close). So a non-significant positive finding in a small study is not an unbiased point estimate that just needs a larger sample to show its significance. It is probably biased, in a consistent, positive direction. Studies with sample sizes less than 100 have about three times the mean effect sizes of studies with sample sizes over 1000, for example.
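
One way to see how this bias arises is a small selection-effect simulation (a sketch under assumed values, not WWC data): suppose a program’s true effect is only +0.10, and only studies whose observed effect size clears +0.25 get noticed.

```python
import random
from statistics import mean

# Sketch of a selection effect: if only studies whose observed effect clears the
# +0.25 bar get attention, small studies that clear it greatly overstate the
# true effect. Assumes a true effect size of +0.10 (in SD units, with SD = 1).
random.seed(7)
TRUE_EFFECT = 0.10

def observed_effect(n_per_group: int) -> float:
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(n_per_group)]
    return mean(treated) - mean(control)  # observed effect size in SD units

for n in (25, 500):
    effects = [observed_effect(n) for _ in range(5_000)]
    selected = [e for e in effects if e >= 0.25]  # the studies that "qualify"
    print(n, round(mean(selected), 2), f"({len(selected)} of 5,000 qualify)")

# Typical result: with 25 students per group, the studies that clear the bar
# average an observed effect of roughly +0.4, about four times the true +0.10;
# with 500 per group, fewer than 1% of studies clear the bar at all.
```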

But “substantively important” ratings can throw a monkey wrench into current policy. The ESSA evidence standards require statistically significant effects for all of its top three levels (strong, moderate, and promising). Yet many educational leaders are using the What Works Clearinghouse as a guide to which programs will meet ESSA evidence standards. They may logically assume that if the WWC says a program is effective, then the federal government stands behind it, regardless of what the ESSA evidence standards actually say. Yet in fact, based on the data analyzed by Amanda Inns for reading and math, 46% of the outcomes rated as positive/potentially positive by WWC (taken to correspond to “strong” or “moderate,” respectively, under ESSA evidence standards) are non-significant, and therefore do not qualify under ESSA.

The WWC needs to remove “substantively important” from its ratings as soon as possible, to avoid a collision with ESSA evidence standards, and to avoid misleading educators any further. Doing so would help make the WWC’s impact on ESSA substantive. And important.


This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Title I: A 20% Solution

Here’s an idea that would cost nothing and profoundly shift education funding and the interest of educators and policy makers toward evidence-proven programs. Simply put, the idea is to require that schools receiving Title I funds use 20% of the total on programs that meet at least a moderate standard of evidence. Two thin dimes on the dollar could make a huge difference in all of education.

In terms of federal education policy, Title I is the big kahuna. At $15 billion per year, it is the largest federal investment in elementary and secondary education, and it has been very politically popular on both sides of the aisle since the Johnson administration in 1965, when the Elementary and Secondary Education Act (ESEA) was first passed. Title I has been so popular because it goes to every congressional district, and provides much-needed funding by formula to help schools serving children living in poverty. Since the reauthorization of ESEA as the Every Student Succeeds Act in 2015, Title I remains the largest expenditure.

In ESSA and other federal legislation, there are two kinds of funding. One is formula funding, like Title I, where money usually goes to states and is then distributed to districts and schools. The formula may adjust for levels of poverty and other factors, but every eligible school gets its share. The other kind of funding is called competitive, or discretionary funding. Schools, districts, and other entities have to apply for competitive funding, and no one is guaranteed a share. In many cases, federal funds are first given to states, and then schools or districts apply to their state departments of education to get a portion of it, but the state has to follow federal rules in awarding the funds.

Getting proven programs into widespread use can be relatively easy in competitive grants. Competitive grants are usually evaluated on a 100-point scale, with all sorts of “competitive preference points” for certain categories of applicants, such as for rural locations, inclusion of English language learners or children of military families, and so on. These preferences add perhaps one to five points to a proposal’s score, giving such applicants a leg up but not a sure thing. In the same way, I and others have proposed adding competitive preference points in competitive proposals for applicants who propose to adopt programs that meet established standards of evidence. For example, Title II SEED grants for professional development now require that applicants propose to use programs found to be effective in at least one rigorous study, and give five points if the programs have been proven effective in at least two rigorous studies. Schools qualifying for school improvement funding under ESSA are now required to select programs that meet ESSA evidence standards.

Adding competitive preference points for using proven programs in competitive grants is entirely sensible and pain-free. It costs nothing, and does not require applicants to use any particular program. In fact, applicants can forgo the preference points entirely, and hope to win without them. Preference points for proven programs are an excellent way to nudge the field toward evidence-based reform without top-down mandates or micromanagement. The federal government states a preference for proven programs, which will at least raise their profile among grant writers, but no school or district has to do anything different.

The much more difficult problem is how to get proven programs into formula funding (such as Title I). The great majority of federal funds are awarded by formula, so restricting evidence-based reform to competitive grants is only nibbling at the edges of practice. One solution to this would be to allocate incentive grants to districts if they agree to use formula funds to adopt and implement proven programs.

However, incentives cost money. Imagine instead that districts and schools continue to receive their Title I formula funds, as they have since 1965, but that Congress requires districts to use at least 20% of their Title I, Part A funding to adopt and implement programs that meet a modest standard of evidence, similar to the “moderate” level in ESSA (which requires at least one quasi-experimental study with positive effects). The adopted program could be anything that meets other Title I requirements—reading, math, tutoring, technology—except that the program has to have evidence of effectiveness. The funds could pay for necessary staffing, professional development, materials, software, hardware, and so on. Obviously, schools could devote more than 20% if they chose to do so.

There are several key advantages to this 20% solution. First, of course, children would immediately benefit from receiving programs with at least moderate evidence of effectiveness. Second, the process would instantly make leaders of the roughly 55,000 Title I schools intensely interested in evidence. Third, the process could gradually shift discussion about Title I away from its historical focus on “how much?” to an additional focus on “for what purpose?” Publishers, software developers, academics, philanthropy, and government itself would perceive the importance of evidence, and would commission or carry out far more high-quality studies to meet the new standards. Over time, the standards of evidence might increase.
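
For a rough sense of scale, here is a back-of-the-envelope calculation using the figures above ($15 billion per year in Title I funding and roughly 55,000 Title I schools), assuming the 20% were spread evenly:

```python
# Back-of-the-envelope scale of the 20% proposal, using the figures cited above.
title_i_total = 15_000_000_000  # roughly $15 billion per year
share = 0.20                    # the proposed evidence set-aside
title_i_schools = 55_000        # roughly 55,000 Title I schools

evidence_funds = title_i_total * share
print(f"${evidence_funds / 1e9:.0f} billion per year for proven programs")
print(f"about ${evidence_funds / title_i_schools:,.0f} per Title I school")
# Roughly $3 billion per year, or on the order of $55,000 per Title I school,
# if the set-aside were spread evenly (actual allocations follow the poverty formula).
```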

All of this would happen at no additional cost, and with a minimum of micromanagement. There are now many programs that would meet the “moderate” standards of evidence in reading, math, tutoring, whole-school reform, and other approaches, so schools would have a wide choice. No Child Left Behind required that low-performing schools devote 20% of their Title I funding to after-school tutoring programs and student transfer policies that research later showed to make little or no difference in outcomes. Why not spend the same on programs that are proven to work in advance, instead of once again rolling the dice with the educational futures of at-risk children?

Twenty percent of Title I is a lot of money, but if it can make 100% of Title I more impactful, it is more than worthwhile.