The Farmer and the Moon Rocks: What Did the Moon Landing Do For Him?

Many, many years ago, during the summer after my freshman year in college, I hitchhiked from London to Iran.  This was the summer of 1969, so Apollo 11 was also traveling.   I saw television footage of the moon landing in Heraklion, Crete, where a television store switched on all of its sets and turned them toward the sidewalk.  A large crowd watched the whole thing.  This was one of the few times I recall when it was really cool to be an American abroad.

After leaving Greece, I went on to Turkey, and then Iran.  In Teheran, I got hold of an English-language newspaper.  It told an interesting story.  In rural Iran, many people believed that the moon was a goddess.  Obviously, a spaceship cannot land on a goddess, so many people concluded that the moon landing must be a hoax.

A reporter from the newspaper interviewed a number of people about the moon landing.  Some were adamant that the landing could not have happened.  However, one farmer was more pragmatic.  He asked the reporter, “I hear the astronauts brought back moon rocks.  Is that right?”

“That’s what they say!” replied the reporter.

“I am fixing my roof, and I could sure use a few of those moon rocks.  Do you think they might give me some?”


The moon rock story illustrates a daunting problem in the dissemination of educational research. Researchers do high-quality research on topics of great importance to the practice of education. They publish this research in top journals, and get promotions and awards for it, but in most cases, their research does not arouse even the slightest bit of interest among the educators for whom it was intended.

The problem relates to the farmer repairing his roof.  He had a real problem to solve, and he needed help with it.  A reporter comes and tells him about the moon landing. The farmer does not think, “How wonderful!  What a great day for science and discovery and the future of mankind!”  Instead, he thinks, “What does this have to do with me?”  Thinking back on the event, I sometimes wonder if he really expected any moon rocks, or if he was just sarcastically saying, “I don’t care.”

Educators care deeply about their students, and they will do anything they can to help them succeed.  But if they hear about research that does not relate to their children, or at least to children like theirs, they are unlikely to care very much.  Even if the research is directly applicable to their students, they are likely to reason, perhaps from long experience, that they will never get access to this research, because it costs money or takes time or upsets established routines or is opposed by powerful groups or whatever.  The result is status quo as far as the eye can see, or implementation of small changes that are currently popular but unsupported by evidence of effectiveness.  Ultimately, the result is cynicism about all research.

Part of the problem is that education is effectively a government monopoly, so entrepreneurship and responsible innovation are difficult to start or maintain.  However, the fact that education is a government monopoly can also be made into a positive, if government leaders are willing to encourage and support evidence-based reform.

Imagine that government decided to provide incentive funding to schools to help them adopt programs that meet a high standard of evidence.  This has actually happened under the ESSA law, but only in a very narrow slice of schools, those very low achieving schools that qualify for school improvement.  Imagine that the government provided a lot more support to schools to help them learn about, adopt, and effectively implement proven programs, and then gradually expanded the categories of schools that could qualify for this funding.

Going back to the farmer and the moon rocks, such a policy would forge a link between exciting research on promising innovations and the real world of practice.  It could cause educators to pay much closer attention to research on practical programs of relevance to them, and to learn how to tell the difference between valid and biased research.  It could help educators become sophisticated and knowledgeable consumers of evidence and of programs themselves.

One of the best examples of the transformation such policies could bring about is agriculture.  Research has a long history in agriculture, and from colonial times, government has encouraged and incentivized farmers to pay attention to evidence about new practices, new seeds, new breeds of animals, and so on.  By the late 19th century, the U.S. Department of Agriculture was sponsoring research, distributing information designed to help farmers be more productive, and much more.  Today, research in agriculture is a huge enterprise, constantly making important discoveries that improve productivity and reduce costs.  As a result, world agriculture, especially American agriculture, is able to support far larger populations at far lower costs than anyone ever thought possible.  The Iranian farmer talking about the moon rocks could not see how advances in science could possibly benefit him personally.  Today, however, in every developed economy, farmers have a clear understanding of the connection between advances in science and their own success.  Everyone knows that agriculture can have bad as well as good effects, as when new practices lead to pollution, but when governments decide to solve those problems, they turn to science. Science is not inherently good or bad, but if it is powerful, then democracies can direct it to do what is best for people.

Agriculture has made dramatic advances over the past hundred years, and continues to make rapid progress by linking science to practice.  In education, we are just starting to make the link between evidence and practice.  Isn’t it time to learn from the experiences of medicine, technology, and agriculture, among many other evidence-based fields, to achieve more rapid progress in educational practice and outcomes?

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.


On Progress

My grandfather (pictured below with my son Ben around 1985) was born in 1900, and grew up in Argentina. The world he lived in as a child had no cars, no airplanes, few cures for common diseases, and inefficient agriculture that bound the great majority of the world to farming. By the time he died, in 1996, think of all the astonishing progress he’d seen in technology, medicine, agriculture, and much else.

Pictured are Bob Slavin’s grandfather and son, both of whom became American citizens: one born before the invention of airplanes, the other born before the exploration of Mars.

I was born in 1950. The progress in technology, medicine, agriculture, and many other fields continues to be extraordinary.

In most of our society and economy, we confidently expect progress. When my father needed a heart valve, his doctor suggested that he wait as long as possible because new, much better heart valves were coming out soon. He could, and did, bet his life on progress, and it paid off.

But now consider education. My grandfather attended school in Argentina, where he was taught in rows by teachers who did most of the talking. My father went to school in New York City, where he was taught in rows by teachers who did most of the talking. I went to school in Washington, DC, where I was taught in rows by teachers who did most of the talking. My children went to school in Baltimore, where they mostly sat at tables, and did use some technology, but still, the teachers did most of the talking.

 

My grandchildren are now headed toward school (the oldest is four). They will use a lot of technology, and will sit at tables more than my own children did. But the basic structure of the classroom is not so different from Argentina, 1906. All who eagerly await the technology revolution are certainly seeing many devices in classroom use. But are these devices improving outcomes in, for example, reading and math? Our reviews of research on all types of approaches used in elementary and secondary schools are not finding strong benefits of technology. Across subjects and grade levels, average effect sizes are similar, ranging from +0.07 (elementary math) to +0.09 (elementary reading). If you like “additional months of learning,” these effects equate to roughly one month in a year. OK, better than zero, but not the revolution we’ve been waiting for.
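
For readers who want to see the arithmetic behind the “months of learning” translation, here is a minimal sketch. The assumed annual gain of 0.80 standard deviations and the nine-month school year are illustrative assumptions only, not figures from our reviews; published conversion factors vary by grade and subject.

```python
# Minimal sketch: convert an effect size (in standard deviation units) into
# approximate "additional months of learning."
# ASSUMPTIONS (illustrative only): students gain about 0.80 SD per school year,
# and a school year provides about 9 months of instruction.

def effect_size_to_months(effect_size, annual_gain_sd=0.80, school_year_months=9):
    """Approximate extra months of learning implied by an effect size."""
    return (effect_size / annual_gain_sd) * school_year_months

for es in (0.07, 0.09):
    print(f"Effect size +{es:.2f} is about {effect_size_to_months(es):.1f} extra months")
# Under these assumptions, +0.07 to +0.09 works out to roughly one month in a year.
```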

There are other approaches much more effective than technology, such as tutoring, forms of cooperative learning, and classroom management strategies. At www.evidenceforessa.org, you can see descriptions and outcomes of more than 100 proven programs. But these are not widely used. Your children or grandchildren, or other children you care about, may go 13 years from kindergarten to 12th grade without ever experiencing a proven program. In our field, progress is slow, and dissemination of proven programs is slower.

Education is the linchpin for our economy and society. Everything else depends on it. In all of the developed world, education is richly funded, yet very, very little of this largesse is invested in innovation, evaluations of innovative methods, or dissemination of proven programs. Other fields have shown how innovation, evaluation, and dissemination of proven strategies can become the engine of progress. There is absolutely nothing inevitable about the slow pace of progress in education. That slow pace is a choice we have made, and keep making, year after year, generation after generation. I hope we will make a different choice in time to benefit my grandchildren, and the children of every family in the world. It could happen, and there are many improvements in educational research and development to celebrate. But how long must it take before the best of educational innovation becomes standard practice?

 This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Evidence, Standards, and Chicken Feathers

In 1509, John Damian, an alchemist in the court of James IV of Scotland, proclaimed that he had developed a way for humans to fly.  He made himself some wings from chicken feathers and jumped from the battlements of Stirling Castle, the Scottish royal residence at the time.  His flight was brief but not fatal.  He landed in a pile of manure, and only broke his thigh.  Afterward, he explained that the problem was that he used the wrong kind of feathers.  If only he had used eagle feathers, he could have flown, he asserted.  Fortunately for him, he never tried flying again, with any kind of feathers.


The story of John Damian’s downfall is humorous, and in fact the only record of it is a contemporary poem making fun of it. Yet this incident in Scottish history offers important analogies to educational policy today. These are as follows:

  1. Damian proclaimed the success of his plan for human flight before he or anyone else had tried it and found it effective.
  2. After his flight ended in the manure pile, he proclaimed (again without evidence) that if only he’d used eagle feathers, he would have succeeded. This makes sense, of course, because eagles are much better flyers than chickens.
  3. He was careful never to actually try flying with eagle feathers.

All of this is more or less what we do all the time in educational policy, with one big exception.  In education, based on Damian’s experience, we might have put forward policies stating that from now on human-powered flight must only be done with eagle feathers, not chicken feathers.

What I am referring to in education is our obsession with standards as a basis for selecting textbooks, software, and professional development, and the relative lack of interest in evidence. Whole states and districts spend a lot of time devising standards and then reviewing materials and services to be sure that they align with these standards. In contrast, the idea of checking to see that texts, software, and PD have actually been evaluated and found to be effective in real classrooms with real teachers and students has been a hard slog.

Shouldn’t textbooks and programs that meet modern standards also produce higher student performance on tests closely aligned with those standards? This cannot be assumed. Not long ago, my colleagues and I examined every reading and math program rated “meets expectations” (the highest level) on EdReports, a website that rates programs in terms of their alignment with college- and career-ready standards.  A not-so-grand total of two programs had any evidence of effectiveness on any measure not made by the publishers. Most programs rated “meets expectations” had no evidence at all, and a smaller number had been evaluated and found to make no difference.

I am not in any way criticizing EdReports.  They perform a very valuable service in helping schools and districts know which programs meet current standards. It makes no sense for every state and district to do this for themselves, especially in the cases where there are very few or no proven programs. It is useful to at least know about programs aligned with standards.

There is a reason that so few products favorably reviewed on EdReports have any positive outcomes in rigorous research. Most are textbooks, and very few textbooks have evidence of effectiveness. Why? The fact is that standards or no standards, EdReports or no EdReports, textbooks do not differ very much from each other in aspects that matter for student learning. Textbooks differ (somewhat) in content, but if there is anything we have learned from our many reviews of research on what works in education, what matters is pedagogy, not content. Yet since decisions about textbooks and software depend on standards and content, decision makers almost invariably select textbooks and software that have never been successfully evaluated.

Even crazy John Damian did better than we do. Yes, he claimed success in flying before actually trying it, but at least he did try it. He concluded that his flying plan would have worked if he’d used eagle feathers, but he never imposed this untested standard on anyone.

Untested textbooks and software probably don’t hurt anyone, but millions of students desperately need higher achievement, and focusing resources on untested or ineffective textbooks, software, and PD does not move them forward. The goal of education is to help all students succeed, not to see that they use aligned materials. If a program has been proven to improve learning, isn’t that a lot more important than proving that it aligns with standards? Ideally, we’d want schools and districts to use programs that are both proven effective and aligned with standards, but if no programs meet both criteria, shouldn’t those that are proven effective be preferred? Without evidence, aren’t we just giving students and teachers eagle feathers and asking them to take a leap of faith?

Photo credit: Humorous portrayal of a man who flies with wings attached to his tunic, Unknown author [Public domain], via Wikimedia Commons/Library of Congress

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

 

Miss Evers’ Boys (And Girls)

Most people who have ever been involved with human subjects’ rights know about the Tuskegee Syphilis Study. This was a study of untreated syphilis, in which 622 poor, African American sharecroppers, some with syphilis and some without, were evaluated over 40 years.

The study, funded and overseen by the U.S. Public Health Service, started in 1932. In 1940, researchers elsewhere discovered that penicillin cured syphilis. By 1947, penicillin was “standard of care” for syphilis, meaning that patients with syphilis received penicillin as a matter of course, anywhere in the U.S.

But not in Tuskegee. Not in 1940. Not in 1947. Not until 1972, when a whistle-blower made the press aware of what was happening. In the meantime, many of the men died of syphilis, 40 of their wives contracted the disease, and 19 of their children were born with congenital syphilis. The men had never even been told the nature of the study, they were not informed in 1940 or 1947 that there was now a cure, and they were not offered that cure. Leaders of the U.S. Public Health Service were well aware that there was a cure for syphilis, but for various reasons, they did not stop the study. Not in 1940, not in 1947, not even when whistle-blowers told them what was going on. They stopped it only when the press found out.


In 1997 a movie on the Tuskegee Syphilis Study was released. It was called Miss Evers’ Boys. Miss Evers (actually, Eunice Rivers) was the African-American public health nurse who was the main point of contact for the men over the whole 40 years. She deeply believed that she, and the study, were doing good for the men and their community, and she formed close relationships with them. She believed in the USPHS leadership, and thought they would never harm her “boys.”

The Tuskegee study was such a crime and scandal that it utterly changed procedures for medical research in the U.S. and most of the world. Today, participants in research with any level of risk, or their parents if they are children, must give informed consent for participation in research, and even if they are in a control group, they must receive at least “standard of care”: currently accepted, evidence-based practices.

If you’ve read my blogs, you’ll know where I’m going with this. Failure to use proven educational treatments, unlike medical ones, is rarely fatal, at least not in the short term. But otherwise, our profession carries out Tuskegee crimes all the time. It condemns failing students to ineffective programs and practices when effective ones are known. It fails to even inform parents or children, much less teachers and principals, that proven programs exist: Proven, practical, replicable solutions for the problems they face every day.

Like Miss Rivers, front-line educators care deeply about their charges. Most work very hard and give their absolute best to help all of their children to succeed. Teaching is too much hard work for too little money for anyone to do it for any reason but the love of children.

But somewhere up the line, where the big decisions are made, where the people are who know or who should know which programs and practices are proven to work and which are not, this information just does not matter. There are exceptions, real heroes, but in general, educational leaders who believe that schools should use proven programs have to fight hard for this position. The problem is that the vast majority of educational expenditures—textbooks, software, professional development, and so on—lack even a shred of evidence. Not a scintilla. Some have evidence that they do not work. Yet advocates for those expenditures (such as sales reps and educators who like the programs) argue strenuously for programs with no evidence, and it’s just easier to go along. Whole states frequently adopt or require textbooks, software, and services of no known value in terms of improving student achievement. The ESSA evidence standards were intended to focus educators on evidence and incentivize use of proven programs, at least for the lowest-achieving 5% of schools in each state, but so far it’s been slow going.

Yet there are proven alternatives. Evidence for ESSA (www.evidenceforessa.org) lists more than 100 PK-12 reading and math programs that meet the top three ESSA evidence standards. The majority meet the top level, “Strong.” And most of the programs were researched with struggling students. Yet I am not perceiving a rush to find out about proven programs. I am hearing a lot of new interest in evidence, but my suspicion, growing every day, is that many educational leaders do not really care about the evidence, but are instead just trying to find a way to keep using the programs and providers they already have and already like, and are looking for evidence to justify keeping things as they are.

Every school has some number of struggling students. If these children are provided with the same approaches that have not worked with them or with millions like them, it is highly likely that most will fail, with all the consequences that flow from school failure: Retention. Assignment to special education. Frustration. Low expectations. Dropout. Limited futures. Poverty. Unemployment. There are 50 million children in grades PK to 12 in the U.S. This is the grinding reality for perhaps 10 to 20 million of them. Solutions are readily available, but not known or used by caring and skilled front-line educators.

In what way is this situation unlike Tuskegee in 1940?

Photo credit: National Archives Atlanta, GA (U.S. government), originally from the National Archives [Public domain], via Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

First There Must be Love. Then There Must be Technique.

I recently went to Barcelona. This was my third time in this wonderful city, and for the third time I visited La Sagrada Familia, Antoni Gaudi’s breathtaking church. It was begun in the 1880s, and Gaudi worked on it from the time he was 31 until he died in 1926 at 74. It is due to be completed in 2026.

Every time I go, La Sagrada Familia has grown even more astonishing. In the nave, massive columns branching into tree shapes hold up the spectacular roof. The architecture is extremely creative, and wonders lie around every corner.


I visited a new museum under the church. At the entrance, it had a Gaudi quote:

First there must be love.

Then there must be technique.

This quote sums up La Sagrada Familia. Gaudi used complex mathematics to plan his constructions. He was a master of technique. But he knew that it all meant nothing without love.

In writing about educational research, I try to remind my readers of this from time to time. There is much technique to master in creating educational programs, evaluating them, and fairly summarizing their effects. There is even more technique in implementing proven programs in schools and classrooms, and in creating policies to support use of proven programs. But what Gaudi reminds us of is just as essential in our field as it was in his. We must care about technique because we care about children. Caring about technique just for its own sake is of little value. Too many children in our schools are failing to learn adequately. We cannot say, “That’s not my problem, I’m a statistician,” or “That’s not my problem, I’m a policymaker,” or “That’s not my problem, I’m an economist.”  If we love children and we know that our research can help them, then it’s all of our problems. All of us go into education to solve real problems in real classrooms. That’s the structure we are all building together over many years. Building this structure takes technique, and the skilled efforts of many researchers, developers, statisticians, superintendents, principals, and teachers.

Each of us brings his or her own skills and efforts to this task. None of us will live to see our structure completed, because education keeps growing in techniques and capability. But as Gaudi reminds us, it’s useful to stop from time to time and remember why we do what we do, and for whom.

Photo credit: By Txllxt TxllxT [CC BY-SA 4.0  (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Lessons from China

Recently I gave a series of speeches in China, organized by the Chinese University of Hong Kong and Nanjing Normal University. I had many wonderful and informative experiences, but one evening stood out.

I was in Nanjing, the ancient capital, and it was celebrating the weeks after the Chinese New Year. The center of the celebration was the Temple of Confucius. In and around it were lighted displays exhorting Chinese youth to excel on their exams. Children stood in front of these displays to have their pictures taken next to characters saying “first in class,” never second. A woman with a microphone recited blessings and hopes that students would do well on exams. After each one, students hit a huge drum with a long stick, as an indication of accepting the blessing. Inside the temple were thousands of small silk messages, bright red, expressing the wishes of parents and students that students will do well on their exams. Chinese friends explained what was going on, and told me how pervasive this spirit was. Children all know a saying to the effect that the path to riches and a beautiful wife was through books. I heard that perhaps 70% of urban Chinese students go to after-school cram schools to ensure their performance on exams.

The reason Chinese parents and students take test scores so seriously is obvious in every aspect of Chinese culture. On an earlier trip to China I toured a beautiful house, from hundreds of years ago, in a big city. The only purpose of the house was to provide a place for young men of a large clan to stay while they prepared for their exams, which determined their place in the Confucian hierarchy.

As everyone knows, Chinese students do, in fact, do very well on their exams.  I would note that these data come in particular from urban Eastern China, such as Shanghai.  I’d heard about but did not fully understand policies that contribute to these outcomes.  In China’s big cities, which have the best schools in the country, students may only attend neighborhood schools if they were born there or their families own apartments there.  In a country where a small apartment in a big city can easily cost a half million dollars (U.S.), this is no small selection factor.  If parents work in the city but do not own an apartment, their children may have to remain in the village or small city they came from, living with grandparents and attending non-elite schools.  Chinese cities are growing so fast that the majority of their inhabitants come from the rest of China.  This matters because admirers of Chinese education often cite the amazing statistics from the rich and growing Eastern Chinese cities, not the whole country.  It’s as though the U.S. only reported test scores on international comparisons from suburbs in the Northeastern states from Maryland to New England, the wealthiest and highest-achieving part of our country.

I do not want to detract in any way from the educational achievements of the Chinese, but simply to put them in context.  First, the Chinese themselves have doubts about test scores as the only important indicators, and admire Western education for its broader focus.  But just sticking to test scores, China and other Confucian cultures such as Japan, South Korea, and Singapore have been creating a culture valuing test scores since Confucius, about 2500 years ago.  It would be a central focus of Chinese culture even if PISA and TIMSS did not exist to show it off to the world.

My only point is that when American or European observers hold up East Asian achievements as a goal to aspire to, these achievements do not exist in a cultural vacuum. Other countries can potentially achieve what China has achieved, in terms of test scores and other indicators, but they cannot achieve it in the same way. Western culture is just not going to spend the next 2500 years raising its children the way the Chinese do. What we can do, however, is to use our own strengths, in research, development, and dissemination, to progressively enhance educational outcomes. The Chinese can and will do this, too; that’s what I was doing traveling around China speaking about evidence-based reform. We need not be in competition with any nation or society, as expanding educational opportunity and success throughout the world is in the interests of everyone on Earth. But engaging in fantasies about how we can move ahead by emulating parts of Chinese culture that they have been refining since Confucius is not sensible.

Precisely because of their deep respect for scholarship and learning and their eagerness to continue to improve their educational achievements, the Chinese are ideal collaborators in the worldwide movement toward evidence-based reform in education. Colleagues at the Chinese University of Hong Kong and the Nanjing Normal University are launching Chinese-language and Asian-focused versions of our newsletter on evidence in education, Best Evidence in Brief (BEiB). We and our U.K. colleagues have been distributing BEiB for several years. We welcome the opportunity to share ideas and resources with our Chinese colleagues to enrich the evidence base for education for children everywhere.

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.

Why the What Works Clearinghouse Matters

In 1962, the most important breakthrough in modern medicine took place. It was not a drug, not a device, not a procedure. It did not immediately save a single life, or cure a single person of disease. But it profoundly changed medicine worldwide, and led to the rapid progress in all of medicine that we have come to take for granted.

This medical miracle was a law, passed in the U.S. Congress, called the Kefauver-Harris Drug Act. It required that drugs sold in the U.S. be proven safe and effective, in high-quality randomized experiments. This law was introduced by Senator Estes Kefauver of Tennessee, largely in response to the thalidomide disaster, when a widely used drug was found to produce disastrous birth defects.

From the moment the Act was passed, medical research changed utterly. The number of randomized experiments shot up. There are still errors and debates and misplaced enthusiasm, but the progress that has been made in every area of medicine is undeniable. Today, it is unthinkable in medicine that any drug would be widely sold if it has not been proven to work. Even though Kefauver-Harris itself only applies to the U.S., all advanced countries now have similar laws requiring rigorous evidence of safety and effectiveness of medicines.

One of the ways the Kefauver-Harris Act made its impact was through reviews and publications of research on the evidence supporting the safety and efficacy of medicines. It’s no good having a law requiring strong evidence if only experts know what the evidence is. Many federal programs have sprung up over the years to review the evidence of what works and communicate it to front-line practitioners.

In education, we are belatedly going through our own evidence revolution. Since 2002, the function of communicating the findings of rigorous research in education has mostly been fulfilled by the What Works Clearinghouse (WWC), a website maintained by the U.S. Department of Education’s Institute of Education Sciences (IES). The existence of the WWC has been enormously beneficial. In addition to reviewing the evidence base for educational programs, the WWC’s standards set norms for research. No funder and no researcher wants to invest resources in a study they know the WWC will not accept.

In 2015, education finally had what may be its own Kefauver-Harris moment. This was the passage by the U.S. Congress of the Every Student Succeeds Act (ESSA), which contains specific definitions of strong, moderate, and promising levels of evidence. For certain purposes, especially for school improvement funding for very low-achieving schools, schools must use programs that meet ESSA evidence standards. For others, schools or districts can receive bonus points on grant applications if they use proven programs.

ESSA raises the stakes for evidence in education, and therefore should have raised the stakes for the WWC. If the government itself now requires or incentivizes the use of proven programs, then shouldn’t the government provide information on what individual programs meet those standards?

Yet several months after ESSA was passed, IES announced that the WWC would not be revised to align itself with ESSA evidence standards. This puts educators, and the government itself, in a bind. What if ESSA and WWC conflict? The ESSA standards are in law, so they must prevail over the WWC. Yet the WWC has a website, and ESSA does not. If WWC standards and ESSA standards were identical, or nearly so, this would not be a problem. But in fact they are very far apart.

Anticipating this situation, my colleagues and I at Johns Hopkins University created a new website, www.evidenceforessa.org. It launched in February 2017, covering elementary and secondary reading and math. We are now adding other subjects and grade levels.

In creating our website, we draw from the WWC every day, and in particular use a new Individual Study Database (ISD) that contains information on all of the evaluations the WWC has ever accepted.

The ISD is a useful tool for us, and it has made it relatively easy to ask and answer questions about the WWC itself. The answers are troubling. We’ve found that almost half of the WWC outcomes rated “positive” or “potentially positive” are not even statistically significant. We have found that measures made by researchers or developers produce effect sizes more than three times as large as those from independent measures, yet they are fully accepted by the WWC.

As reported in a recent blog, we’ve discovered that the WWC is very, very slow to add new studies to its main “Find What Works” site. The WWC science topic is not seeking or accepting new studies (“This area is currently inactive and not conducting reviews”). Character education, dropout prevention, and English Language Learners are also inactive. How does this make any sense?

Over the next couple of months, starting in January, I will be releasing a series of blogs sharing what we have been finding out about the WWC. My hope in this is that we can help create a dialogue that will lead the WWC to reconsider many of its core policies and practices. I’m doing this not to compete or conflict with the WWC, but to improve it. If evidence is to have a major role in education policy, government has to help educators and policy makers make good choices. That is what the WWC should be doing, and I still believe it is possible.

The WWC matters, or should matter, because it expresses government’s commitment to evidence, and evidence-based reform. But it can only be a force for good if it is right, timely, accessible, comprehensible, and aligned with other government initiatives. I hope my upcoming blogs will be read in the spirit in which they were written, with hopes of helping the WWC do a better job of communicating evidence to educators eager to help young people succeed in our schools.

 

This blog was developed with support from the Laura and John Arnold Foundation. The views expressed here do not necessarily reflect those of the Foundation.