Nevada Places Its Bets on Evidence

In Nevada, known as the land of big bets, taking risks is what they do. The Nevada Department of Education (NDE) is showing this in its approach to the ESSA evidence standards. Of course, many states are planning policies to encourage use of programs that meet the ESSA evidence standards, but to my knowledge, no state department of education has taken as proactive a stance in this direction as Nevada.

 

Under the leadership of their state superintendent, Steve Canavero, Deputy Superintendent Brett Barley, and Director of the Office of Student and School Supports Seng-Dao Keo, Nevada has taken a strong stand: Evidence is essential for our schools, they maintain, because our kids deserve the best programs we can give them.

ESSA asks all states to require strong, moderate, or promising programs (as defined in the law) for low-achieving schools seeking school improvement funding. Nevada has made it clear to its local districts that it will enforce the federal definitions rigorously, approving school improvement funding only for schools proposing to implement proven programs appropriate to their needs. ESSA also provides bonus points on various other applications for federal funding, and Nevada will support these provisions as well.

However, Nevada will go beyond these policies, reasoning that if evidence from rigorous evaluations is good for federal funding, why shouldn’t it be good for state funding too? For example, Nevada will require ESSA-type evidence for its own funding program for very high-poverty schools, and for schools serving many English learners. The state has a reading-by-third-grade initiative that will also require use of programs proven to be effective under the ESSA regulations. For all of the discretionary programs offered by the state, NDE will create lists of ESSA-proven supplementary programs in each area in which evidence exists.

Nevada has even taken on the holy grail: textbook adoption. It is not politically possible for the state to require that textbooks have rigorous evidence of effectiveness in order to be state-approved. As in the past, texts will be state-adopted if they align with state standards. However, two key pieces of information will be added to the state list of aligned programs: the ESSA evidence level and the average effect size. Districts will not be required to take this information into account, but by listing it on the state adoption lists, state leaders hope to alert district leaders to pay attention to the evidence in selecting textbooks.

The Nevada focus on evidence takes courage. NDE has been deluged with concern from districts, from vendors, and from providers of professional development services. To each, NDE has made the same response: we need to move our state toward use of programs known to work. The difficult transition to new partnerships and new materials is worth it if it gives Nevada’s children better programs, which will translate into better achievement and a chance at a better life. Seng-Dao Keo describes the evidence movement in Nevada as a moral imperative: delivering proven programs to Nevada’s children and then working to see that they are well implemented and actually produce the outcomes Nevada expects.

Perhaps other states are making similar plans. I certainly hope so, but it is heartening to see one state, at least, willing to use the ESSA standards as they were intended to be used, as a rationale for state and local educators not just to meet federal mandates, but to move toward use of proven programs. If other states also do this, it could drive publishers, software producers, and providers of professional development to invest in innovation and rigorous evaluation of promising approaches, as it increases use of approaches known to be effective now.

NDE is not just rolling the dice and hoping for the best. It is actively educating its district and school leaders on the benefits of evidence-based reform, and helping them make wise choices. With a proper focus on needs assessment, access to information, and support for high-quality implementation, promoting the use of proven programs should be more like Nevada’s Hoover Dam: a sure thing.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Photo by: Michael Karavanov [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

“We Don’t Do Lists”


Watching the slow, uneven, uncertain rollout of the ESSA evidence standards gives me a mixture of hope and despair. The hope stems from the fact that from coast to coast, educational leaders are actually talking about proven programs and practices at all. That was certainly rare before ESSA. The despair comes from hearing many educational leaders trying to find the absolute least their states and districts can do to just barely comply with the law. The ESSA evidence standards apply in particular to schools seeking school improvement funding, which are those in the lowest 5% of their states in academic performance. A previous program with a similar name but more capital letters, School Improvement, was used under NCLB, before ESSA. A large-scale evaluation by MDRC found that the earlier School Improvement made no difference in student achievement, despite billions of dollars in investment. So you’d imagine that this time around, educators responsible for school improvement would be eager to use the new law to introduce proven programs into their lowest-achieving schools. In fact, there are individual leaders, districts, and states who have exactly this intention, and who may ultimately provide good examples to the rest. But they face substantial obstacles.

One of the obstacles I hear about often is an opposition among state departments of education to disseminating lists of proven programs. I very much understand and sympathize with their reluctance, as schools have been over-regulated for a long time. However, I do not see how the ESSA evidence standards can make much of a difference if everyone makes their own list of programs. Determining which studies meet ESSA evidence standards is difficult, and requires a great deal of knowledge about research (I know this, of course, because we do such reviews ourselves; see www.evidenceforessa.org).

Some say that they want programs that have been evaluated in their own states. But after taking into account demographics (e.g., urban/rural, ELL/not ELL, etc.), are state-to-state differences so great as to require different research in each? We used to work with a school on the Ohio-Indiana border; the state line ran right through the building. Were there really programs that were effective on one side of the building but not on the other?

Further, state department leaders frequently complain that they have too few staff to adequately manage school improvement across their states. Should that capacity be concentrated on reviewing research to determine which programs meet ESSA evidence standards and which do not?

The irony of opposing lists for ESSA evidence standards is that most states are chock full of lists that restrict the textbooks, software, and professional development schools can select using state funds. These lists may focus on paper weight, binding, and other minimum quality issues, but they almost never have anything to do with evidence of effectiveness. One state asked us to review its textbook adoption lists for reading and math, grades K-12. Collectively, there were hundreds of books, but just a handful had even a shred of evidence of effectiveness.

Educational leaders are constantly buffeted by opposing interest groups, from politicians to school board members to union leaders, from PTA presidents to university presidents, to for-profit companies promoting their own materials and programs. Educational leaders need a consistent way to ensure that the decisions they make are in the best interests of children, not the often self-serving interests of adults. The ESSA evidence standards, if used wisely, give education leaders an opportunity to say to the whole cacophony of cries for special consideration, “I’d love to help you all, but we can only approve programs for our lowest-achieving schools that are known from rigorous research to benefit our children. We say this because it is the law, but also because we believe our children, and especially our lowest achievers, deserve the most effective programs, no matter what the law says.”

To back up such a radical statement, educational leaders need clarity about what their standards are and which specific programs meet those standards. Otherwise, they either have an “anything goes” strategy that in effect means that evidence does not matter, or they have competing vendors claiming an evidence base for their favored programs. Lists of proven programs can disappoint those whose programs aren’t on the list, but they are at least clear and unambiguous, and they communicate to those who want to add to the list exactly what kind of evidence they will need.

States or large districts can create lists of proven programs by starting with existing national lists (such as the What Works Clearinghouse or Evidence for ESSA) and then modifying them, perhaps by adding additional programs that meet the same standards and/or eliminating programs not available in a given location. Over time, existing or new programs can be added as new evidence appears. We, at Evidence for ESSA, are willing to review programs being considered by state or local educators for addition to their own lists, and we will do it for free and in about two weeks. Then we’ll add them to our national list if they qualify.

It is important to say that while lists are necessary, they are not sufficient. Thoughtful needs assessments, information on proven programs (such as effective methods fairs and visits to local users of proven programs), and planning for high-quality implementation of proven programs are also necessary. However, students in struggling schools cannot wait for every school, district, and state to reinvent the wheel. They need the best we can give them right now, while the field is working on even better solutions for the future.

Whether a state or district uses a national list, or starts with such a list and modifies it for its own purposes, a list of proven programs provides an excellent starting point for struggling schools. It plants a flag for all to see, one that says “Because this (state/district/school) is committed to the success of every child, we select and carefully implement programs known to work. Please join us in this enterprise.”

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Why the What Works Clearinghouse Matters

In 1962, the most important breakthrough in modern medicine took place. It was not a drug, not a device, not a procedure. It did not immediately save a single life, or cure a single person of disease. But it profoundly changed medicine worldwide, and led to the rapid progress in all of medicine that we have come to take for granted.

This medical miracle was a law, passed in the U.S. Congress, called the Kefauver-Harris Drug Act. It required that drugs sold in the U.S. be proven safe and effective, in high-quality randomized experiments. This law was introduced by Senator Estes Kefauver of Tennessee, largely in response to the thalidomide disaster, when a widely used drug was found to produce disastrous birth defects.

From the moment the Act was passed, medical research changed utterly. The number of randomized experiments shot up. There are still errors and debates and misplaced enthusiasm, but the progress that has been made in every area of medicine is undeniable. Today, it is unthinkable in medicine that any drug would be widely sold if it has not been proven to work. Even though Kefauver-Harris itself applies only to the U.S., all advanced countries now have similar laws requiring rigorous evidence of the safety and effectiveness of medicines.

One of the ways the Kefauver-Harris Act made its impact was through reviews and publications of research on the evidence supporting the safety and efficacy of medicines. It’s no good having a law requiring strong evidence if only experts know what the evidence is. Many federal programs have sprung up over the years to review the evidence of what works and communicate it to front-line practitioners.

In education, we are belatedly going through our own evidence revolution. Since 2002, the function of communicating the findings of rigorous research in education has mostly been fulfilled by the What Works Clearinghouse (WWC), a website maintained by the U.S. Department of Education’s Institute of Education Sciences (IES). The existence of the WWC has been enormously beneficial. In addition to reviewing the evidence base for educational programs, the WWC’s standards set norms for research. No funder and no researcher wants to invest resources in a study they know the WWC will not accept.

In 2015, education finally had what may be its own Kefauver-Harris moment. This was the passage by the U.S. Congress of the Every Student Succeeds Act (ESSA), which contains specific definitions of strong, moderate, and promising levels of evidence. For certain purposes, especially school improvement funding for very low-achieving schools, schools must use programs that meet ESSA evidence standards. For other purposes, schools or districts can receive bonus points on grant applications if they use proven programs.

ESSA raises the stakes for evidence in education, and therefore should have raised the stakes for the WWC. If the government itself now requires or incentivizes the use of proven programs, then shouldn’t the government provide information on what individual programs meet those standards?

Yet several months after ESSA was passed, IES announced that the WWC would not be revised to align itself with ESSA evidence standards. This puts educators, and the government itself, in a bind. What if ESSA and WWC conflict? The ESSA standards are in law, so they must prevail over the WWC. Yet the WWC has a website, and ESSA does not. If WWC standards and ESSA standards were identical, or nearly so, this would not be a problem. But in fact they are very far apart.

Anticipating this situation, my colleagues and I at Johns Hopkins University created a new website, www.evidenceforessa.org. It launched in February 2017, covering elementary and secondary reading and math. We are now adding other subjects and grade levels.

In creating our website, we draw from the WWC every day, and in particular use a new Individual Study Database (ISD) that contains information on all of the evaluations the WWC has ever accepted.

The ISD is a useful tool for us, but it has also made it relatively easy to ask and answer questions about the WWC itself, and the answers are troubling. We’ve found that almost half of the WWC outcomes rated “positive” or “potentially positive” are not even statistically significant. We have found that measures made by researchers or developers produce effect sizes more than three times as large as those from independent measures, yet they are fully accepted by the WWC.

As reported in a recent blog, we’ve discovered that the WWC is very, very slow to add new studies to its main “Find What Works” site. The WWC science topic is not seeking or accepting new studies (“This area is currently inactive and not conducting reviews”). Character education, dropout prevention, and English Language Learners are also inactive. How does this make any sense?

Over the next couple of months, starting in January, I will be releasing a series of blogs sharing what we have been finding out about the WWC. My hope in this is that we can help create a dialogue that will lead the WWC to reconsider many of its core policies and practices. I’m doing this not to compete or conflict with the WWC, but to improve it. If evidence is to have a major role in education policy, government has to help educators and policy makers make good choices. That is what the WWC should be doing, and I still believe it is possible.

The WWC matters, or should matter, because it expresses government’s commitment to evidence, and evidence-based reform. But it can only be a force for good if it is right, timely, accessible, comprehensible, and aligned with other government initiatives. I hope my upcoming blogs will be read in the spirit in which they were written, with hopes of helping the WWC do a better job of communicating evidence to educators eager to help young people succeed in our schools.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Title I: A 20% Solution

Here’s an idea that would cost nothing and profoundly shift education funding and the interest of educators and policy makers toward evidence-proven programs. Simply put, the idea is to require that schools receiving Title I funds use 20% of the total on programs that meet at least a moderate standard of evidence. Two thin dimes on the dollar could make a huge difference in all of education.

In terms of federal education policy, Title I is the big kahuna. At $15 billion per year, it is the largest federal investment in elementary and secondary education, and it has been very politically popular on both sides of the aisle since the Johnson administration in 1965, when the Elementary and Secondary Education Act (ESEA) was first passed. Title I has been so popular because it goes to every congressional district, and provides much-needed funding by formula to help schools serving children living in poverty. Since the reauthorization of ESEA as the Every Student Succeeds Act in 2015, Title I remains the largest expenditure.

In ESSA and other federal legislation, there are two kinds of funding. One is formula funding, like Title I, where money usually goes to states and is then distributed to districts and schools. The formula may adjust for levels of poverty and other factors, but every eligible school gets its share. The other kind of funding is called competitive, or discretionary funding. Schools, districts, and other entities have to apply for competitive funding, and no one is guaranteed a share. In many cases, federal funds are first given to states, and then schools or districts apply to their state departments of education to get a portion of it, but the state has to follow federal rules in awarding the funds.

Getting proven programs into widespread use can be relatively easy in competitive grants. Competitive grants are usually evaluated on a 100-point scale, with all sorts of “competitive preference points” for certain categories of applicants, such as for rural locations, inclusion of English language learners or children of military families, and so on. These preferences add perhaps one to five points to a proposal’s score, giving such applicants a leg up but not a sure thing. In the same way, I and others have proposed adding competitive preference points in competitive proposals for applicants who propose to adopt programs that meet established standards of evidence. For example, Title II SEED grants for professional development now require that applicants propose to use programs found to be effective in at least one rigorous study, and give five points if the programs have been proven effective in at least two rigorous studies. Schools qualifying for school improvement funding under ESSA are now required to select programs that meet ESSA evidence standards.

Adding competitive preference points for using proven programs in competitive grants is entirely sensible and pain-free. It costs nothing, and does not require applicants to use any particular program. In fact, applicants can forego the preference points entirely, and hope to win without them. Preference points for proven programs are an excellent way to nudge the field toward evidence-based reform without top-down mandates or micromanagement. The federal government states a preference for proven programs, which will at least raise their profile among grant writers, but no school or district has to do anything different.

The much more difficult problem is how to get proven programs into formula funding (such as Title I). The great majority of federal funds are awarded by formula, so restricting evidence-based reform to competitive grants is only nibbling at the edges of practice. One solution to this would be to allocate incentive grants to districts if they agree to use formula funds to adopt and implement proven programs.

However, incentives cost money. Instead, imagine that districts and schools get their Title I formula funds, as they have since 1965. However, Congress might require that districts use at least 20% of their Title I, Part A funding to adopt and implement programs that meet a modest standard of evidence, similar to the “moderate” level in ESSA (which requires one quasi-experimental study with positive effects). The adopted program could be anything that meets other Title I requirements—reading, math, tutoring, technology—except that the program has to have evidence of effectiveness. The funds could pay for necessary staffing, professional development, materials, software, hardware, and so on. Obviously, schools could devote more than 20% if they choose to do so.

There are several key advantages to this 20% solution. First, of course, children would immediately benefit from receiving programs with at least moderate evidence of effectiveness. Second, the process would instantly make leaders of the roughly 55,000 Title I schools intensely interested in evidence. Third, the process could gradually shift discussion about Title I away from its historical focus on “how much?” to an additional focus on “for what purpose?” Publishers, software developers, academics, philanthropy, and government itself would perceive the importance of evidence, and would commission or carry out far more high-quality studies to meet the new standards. Over time, the standards of evidence might increase.

All of this would happen at no additional cost, and with a minimum of micromanagement. There are now many programs that would meet the “moderate” standards of evidence in reading, math, tutoring, whole-school reform, and other approaches, so schools would have a wide choice. No Child Left Behind required that low-performing schools devote 20% of their Title I funding to after-school tutoring programs and student transfer policies that research later showed to make little or no difference in outcomes. Why not spend the same on programs that are proven to work in advance, instead of once again rolling the dice with the educational futures of at-risk children?

Twenty percent of Title I is a lot of money, but if it can make 100% of Title I more impactful, it is more than worthwhile.

Evidence and Freedom

In 1776, a small group of American patriots had a vision of a government of, by, and for the people, and they risked their lives to make it so. Their commitment to liberty was not just ideological, it was also pragmatic. They knew that people who were empowered to make their own decisions were more likely to be committed to the implementation of those decisions. The same should apply to education today.

One of the most important aspects of the Every Student Succeeds Act (ESSA) is how it balances evidence with freedom. The Act defines proven programs and mentions evidence 60 times. It encourages use of proven programs throughout. It provides for additional preference points for proposals in seven areas that meet evidence requirements. Yet only in the area of school improvement for the lowest 5% of schools does it require use of proven programs. This is probably a good thing.

Americans, even more than other people, don’t like to be told what to do. If the evidence movement turns into a set of mandates, telling educators which programs they can or cannot implement, it will probably be doomed. Even when evidence for or against given programs is solid and widely replicated, many political forces opposing evidence-based reform would surely come into play if educators felt compelled to use certain programs and avoid others.

Years ago, I had an experience that reinforced my view that teachers respond better to proven practices if they are free to choose them. I was doing a cooperative learning workshop in a large urban district. A surly-looking teacher raised her hand. “Do we have to do this?” she asked. “Of course not,” I answered. “These are ideas for you to use or not, as you wish.”

“In this district,” said the teacher, “if we’re not required to use something, we’re not allowed to do it.”

How can we avoid compulsion? The answer is easy. Federal, state, and local policies need to provide incentives for schools to use certain programs with strong evidence of effectiveness from rigorous experiments, but not mandates to do so. That’s what ESSA will do in several areas. Incentives may mean providing a few points on competitive grant proposals, or modest financial incentives, for schools that adopt proven programs. These incentives should be enough to get educators’ attention, but not enough to force them to pick a given program.

Incentives should cause educators to eagerly volunteer to use proven programs, to raise their hands, not their hackles. They could lead educators to learn more about the proven programs available to them and about the research process itself. This in turn could encourage political leaders to support education R & D, as educators and the public at large begin to clamor for more programs and better research.

Government cannot and should not try to get 3 million teachers in 100,000 schools in 14,000 districts to use any particular set of programs, no matter what their evidence of effectiveness. What it can and should do is set in motion policies that gradually expand the availability, adoption, and spread of proven programs, eventually pushing less effective approaches to improve or disappear. Development and evaluation of promising programs continues in ESSA, in the new Education Innovation Research (EIR), which along with R & D funded by other agencies will continuously add to the set of proven programs ready for adoption. As the number and quality of proven programs grow, educators will become more and more comfortable about using them.

From our nation’s founding, freedom to make informed choices has been an essential foundation stone of our system of governance. So it should be in education policy.

Evidence can inform key decisions for children, and government can encourage and incentivize the adoption of proven programs. However, educators need the freedom to do what is right for their children, guided but not steered by valid and useful research.

Half a Worm: Why Education Policy Needs High Evidence Standards

There is a very old joke that goes like this:

What’s the second-worst thing to find in your apple?  A worm.

What’s the worst?  Half a worm.

The ESSA evidence standards provide clearer definitions of “strong,” “moderate,” and “promising” levels of evidence than have ever existed in law or regulation. Yet they still leave room for interpretation. The problem is that if you define “evidence-based” too narrowly, too few programs will qualify. But if you define it too broadly, it loses its meaning.

We’ve already experienced what happens with a too-permissive definition of evidence.  In No Child Left Behind, “scientifically-based research” was famously mentioned 110 times.  The impact of this, however, was minimal, as everyone soon realized that the term “scientifically-based” could be applied to just about anything.

Today, we are in a much better position than we were in 2002 to insist on relatively strict evidence of effectiveness, both because we have better agreement about what constitutes evidence of effectiveness and because we have a far greater number of programs that would meet a high standard.  The ESSA definitions are a good consensus example.  Essentially, they define programs with “strong evidence of effectiveness” as those with at least one randomized study showing positive impacts using rigorous methods, and “moderate evidence of effectiveness” as those with at least one quasi-experimental study.  “Promising” is less well-defined, but requires at least one correlational study with a positive outcome.

Where the half-a-worm concept comes in, however, is that we should not use a broader definition of “evidence-based.” For example, ESSA has a definition of “strong theory.” To me, that is going too far, and begins to water down the concept. What program in all of education cannot justify a “strong theory of action”?

Further, even in the top categories, there are important questions about what qualifies. In school-level studies, should we insist on school-level analyses (i.e., hierarchical linear modeling, or HLM)? Every methodologist would say yes, as I do, but this is not specified. Should we accept researcher-made measures? I say no, based on a great deal of evidence indicating that such measures inflate effects.

Fortunately, due to investments made by IES, i3, and other funders, the number of programs that meet strict standards has grown rapidly. Our Evidence for ESSA website (www.evidenceforessa.org) has so far identified 101 PK-12 reading and math programs, using strict standards consistent with ESSA definitions. Among these, more than 60% meet the “strong” standard. There are enough proven programs in every subject and grade level to give educators choices among proven programs. And we add more each week.

This large number of programs meeting strict evidence standards means that insisting on rigorous evaluations, within reason, does not mean that we end up with too few programs to choose among. We can have our apple pie and eat it, too.

I’d love to see federal programs of all kinds encouraging use of programs with rigorous evidence of effectiveness.  But I’d rather see a few programs that meet a strict definition of “proven” than to see a lot of programs that only meet a loose definition.  20 good apples are much better than applesauce of dubious origins!

This blog is sponsored by the Laura and John Arnold Foundation

Getting Past the Dudalakas (And the Yeahbuts)

Phyllis Hunter, a gifted educator, writer, and speaker on the teaching of reading, often speaks about the biggest impediments to education improvement, which she calls the dudalakas. These are excuses for why change is impossible.  Examples are:

Dudalaka better students

Dudalaka money

Dudalaka policy support

Dudalaka parent support

Dudalaka union support

Dudalaka time

Dudalaka is just shorthand for “Due to the lack of.” It’s a close cousin of “yeahbut,” another reflexive response to ideas for improving education practices or policy.

Of course, there are real constraints that teachers and education leaders face that genuinely restrict what they can do. The problem with dudalakas and yeahbuts is not that the objections are wrong, but that they are so often thrown up as a reason not to even think about solutions.

I often participate in dudalaka conversations. Here is a composite. I’m speaking with the principal of an elementary school, who is expressing concern about the large number of students in his school who are struggling in reading. Many of these students are headed for special education. “Could you provide them with tutors?” I ask. “Yes, they get tutors, but we use a small group method that emphasizes oral reading” (not the phonics skills the students actually lack; i.e., a yeahbut).

“Could you change the tutoring to focus on the skills you know students need?”

“Yeahbut, our education leadership requires we use this system (dudalaka policy support). Besides, we have so many failing students (dudalaka better students) that we have to work with small groups of students (dudalaka tutors).”

“Could you hire and train paraprofessionals or recruit qualified volunteers to provide personalized tutoring?”

“Yeahbut we’d love to, but we can’t afford them (dudalaka money). Besides, we don’t have time for tutoring (dudalaka time).”

“But you have plenty of time in your afternoon schedule.”

“Yeahbut in the afternoon, children are tired. (Dudalaka better students).”

This conversation is, of course, not a rational discussion of strategies for solving a serious problem. It is instead an attempt by the principal to find excuses to justify his school’s continuing to do what it is doing now. Dudalakas and yeahbuts are merely ways of passing blame to other people (school leaders, teachers, children, parents, unions, and so on) and to shortages of money, time, and other resources that hold back change. Again, these excuses may or may not be valid in a particular situation, but there is a difference between rejecting potential solutions out of hand (using dudalakas and yeahbuts) and identifying and then carefully and creatively considering them. Not every solution will be possible or workable, but if the problem is important, some solution must be found. No matter what.

An average American elementary school with 500 students has an annual budget of approximately $6,000,000 ($12,000 per student). Principals and teachers, superintendents, and state superintendents think their hands are tied by limited resources (dudalaka money). But creativity and commitment to core goals can overcome funding limitations if school and district leaders are willing to use resources differently or activate underutilized resources, or ideally, find a way to obtain more funding.

The people who start off with the very human self-protective dudalakas and yeahbuts may, with time, experience, and encouragement, become huge advocates for change. It’s only natural to start with dudalakas and yeahbuts. What is important is that we don’t end with them.

We know that our children are capable of succeeding at much higher rates than they do today. Yet too many are failing, dudalaka quality implementation of proven programs. Let’s clear away the other dudalakas and yeahbuts, and get down to this one.

This blog is sponsored by the Laura and John Arnold Foundation