Education Research isn’t a “Field of Dreams”
In 1989, Kevin Costner’s character in the movie Field of Dreams heard a voice that said, “If you build it, he will come,” and became convinced that if he built a baseball field on his Iowa farm, Shoeless Joe Jackson and other banished members of the 1919 Chicago White Sox would come and play. Of course, after he built a fully functioning baseball field, complete with backstop, bleachers, and lights, the players actually showed up and played with great enthusiasm.
Akin to Costner’s character, the education research community has been working hard over the past decade to build an evidence base that identifies policies, programs, and interventions that actually work better than current practice in enhancing student learning. This has led to a dramatic increase in the number of program evaluations involving randomized experiments and various other research designs that support causal inference (i.e., that differences in outcomes were actually caused by the program). The hope is that if the research community provides evidence about “what works,” then education practitioners will use that evidence to inform their decisions about policy and practice. Unfortunately, we don’t know how much this new playing field is actually being used, and what we do know suggests that the game that’s being played is not really what researchers and federal policymakers envisioned.
A major product of researchers’ construction effort is the What Works Clearinghouse, which has grown into a sizeable database that currently includes reports on 548 interventions based on reviews of more than 11,000 research studies. The Clearinghouse webpage now states, “For over a decade, the WWC has been a central and trusted source of scientific evidence for what works in education to improve student outcomes.”
Unfortunately, recent research conducted by the National Center for Research in Policy and Practice suggests that the Clearinghouse and other federal resources are not frequently used by school and district leaders as a source of evidence to inform their decisions. Instead, its findings suggest that school and district leaders tend to rely more on professional associations, conferences, and state and local education agencies as sources of evidence. Furthermore, when asked what forms of evidence they used, the majority of school and district leaders cited books.
Since much of my career has been spent conducting field experiments and publishing research reports and academic journal articles, I can’t help but feel as though I’ve just been practicing in the bullpen this entire game. While other researchers might be reading and citing my publications, and some are included in the What Works Clearinghouse, I don’t have any data to show that my work has been used to inform school or district decisions.
Or maybe it has, and I just don’t know it.
The connections between the research and practice communities are not always direct. A key concept for the Center for Research Use in Education is therefore “brokerage.” The idea is that research evidence can follow a complicated path from an original publication (e.g., a journal article) into the minds of school and district decision makers. Brokerage is the process through which people or organizations help connect practitioners with research evidence. And while practitioners may never read the original research publication, key findings from that publication can find their way into school and district decision processes. Identifying those involved in brokerage and describing that process is a key goal for our Center.
We also acknowledge that schools’ and districts’ decision processes are far more complicated than federal legislation seems to imply. Not all decisions are focused on selecting an effective program or intervention. Coburn and colleagues have argued that the use of research in decision-making can be more like a learning process. This is why our work will include not only surveys, but qualitative case studies of schools that are identified as deep users of research and are well-connected to the research community.
At this point, I expect that strong connections between the research and practice communities will be rare. The structures and incentives for most researchers focus on production of research products, not building connections to practitioners.
All of this suggests that although the research community has built a pretty remarkable playing field, the teams are actually playing a different game on a different field. It’s almost as if the research community has been expecting to see a baseball game, when the practitioner community is instead playing cricket. Both are excellent games, but they have remarkably different rules, different positions, and playing fields organized in completely different ways. It’s highly unlikely that federal policies or the research community will convince the practitioner community to start playing baseball. To better position rigorous research evidence to inform practitioners’ decisions, we need to resurface and reorganize our new playing field so it can accommodate a nice game of cricket.