This is the last in the series, Bracing for Impact, by Dr. Elizabeth N. Farley-Ripple. Follow this link to the first in the series, Exploring the Concept of Research Impact, and this link to the second, Is research impact different than research use? The third, A layer cake of motivations, discussed why research impact matters. The fourth asks whether research impact is observable. Making Impact Happen, the fifth blog in the series, describes the conditions needed to support research impact.
My journey has explored the idea of research impact in the context of U.S. education, and I appreciate you accompanying me along the way. To wrap up this series, I will share my own reflections on the bigger picture, thinking across themes, and what the path ahead might look like.
Emphasis on ecosystem.
Evident throughout the conversations I had is the complexity of the ecosystems in which research and practice are situated. From the production of research to the context of implementation, a multitude of actors and institutions shape relationships between those communities and, subsequently, the potential for research to have impact. In prior posts, we saw that conceptualizations of impact, motivations for seeking impact, and approaches to measurement vary by role in that ecosystem. And, of course, supporting conditions are needed across that ecosystem, not merely within research, policy, or practice communities. If, as suggested by the leaders I spoke with, research impact is valued across the system, then it is critical to understand how all of our work is intertwined, to reflect on our own roles, to develop shared understandings and common goals across communities, and to focus on aligning systems to achieve those goals.
Comparison to other contexts.
The conversation about research impact in the U.S. is clearly still emergent, and I find that our understanding of research impact is much closer to “use” than to U.K. definitions. Although those I spoke with conceptualized research impact and research use as distinct, albeit in different ways, in many of my conversations the two terms could have been used interchangeably (with some important exceptions). This may mean that, in spite of new language, there has been less of a shift in our thinking about the relationship between research and policy or practice than imagined. It may also mean that there is a need for greater dialogue among stakeholders about the language we use and what it means for our collective and individual work.
Accountability for impact.
Relatedly, there is little Accountability in our system (big A), but some degree of accountability (little a). I refer here to high-stakes consequences – e.g., institutional funding, employment – as Accountability with a capital A, and lower-stakes ones – e.g., self-worth – as accountability with a lowercase a. Research impact in the UK, and increasingly elsewhere, is part of a larger Accountability framework for the research enterprise, yet there was very little mention of high-stakes decisions based on impact in my conversations. Most mentions concerned lower-stakes decisions – funding of particular projects, views about one’s own contribution to the field, and a larger moral obligation to future generations. The distinction is notable. As we’ve duly noted here in the U.S., education Accountability has profound effects on how organizations operate. An Accountability framework featuring research impact would demand notably different metrics and supports than those described in my conversations, which were acknowledged as problematic and inadequate. I offer a caution to the field: the shift in language from research use to research impact feels like the beginning of a shift from the little a to the big A. Absent a much broader dialogue and significant alignment throughout the ecosystem, this is concerning.
Measurement and observability are equally problematic here in the U.S. and in contexts where impact is part of larger Accountability policies. As many scholars have noted (see here, here, here, and here for examples), and as the citations in my first post suggest, the idea of measuring and observing research impact is complex and varies across disciplines. Often the most feasible and simple metrics are least adequate to the task, and capturing true impact is likely to demand significant time and expense. In this sense, our contexts are similar. In light of the caution about Accountability above, the inadequacy of measures may ultimately be consequential, as “what gets measured gets done.” A natural implication could be a narrowing of the kind of research that is funded, that “counts” for promotion and tenure, and that ultimately is available. Under the guise of impact and relevance, a narrowing of the field would directly contradict the complexity of problems facing the educational system and limit innovation in both the research and practice spaces.
Imbalances in directionality persist in both contexts. The idea of research impact necessarily entails policymakers or practitioners using research to make decisions, but it also rests on assumptions about accessibility and relevance that are problematic and empirically questionable. The practice-to-research pipeline of ideas and needs, whose importance I noted above, was rarely raised in my conversations, and when it was, it often demanded researcher time and effort with little attention to how we can improve two-way interactions. This pipeline is often absent from our own dialogue, and it is also absent from the dominant research impact frameworks in the UK. Though research impact itself implies a directionality, the supporting mechanisms demand two-way channels.
The emergence of research impact language in the U.S. is certainly worthy of further exploration, and I hope that this blog series and its embedded dialogue on social media promotes conversation across the research, policy, and practice ecosystem. These themes are potential starting points, but I’ll offer one additional opportunity to advance work in this area. As co-editor of the International Journal of Education Policy and Leadership, which has a standing section on Research Use, I am posting a call for papers for publication in 2019.
About the call
The lessons described above suggest that impact is one way of thinking about use, whether subsumed within use or treated as an alternative form of use, and that impact might be defined as changing the outcome of a decision. However, even anecdotally, we find it hard to document research impact, to show evidence that it happens or how it happens. I’d like to change that. The International Journal of Education Policy and Leadership call for papers seeks to encourage this work and to make it more visible. The editorial team, myself included, therefore invites education scholars to submit to IJEPL-Research Use empirical manuscripts that present cases of research impact (or cases of intended impact that did not achieve that goal) in early education, K-12, and post-secondary policy or practice. We encourage those who study research use to contribute, as well as those who have mobilized their own or others’ research to effect change in education settings. Building from the ideas featured in this blog series, we seek papers that clearly articulate:
- A conceptualization of what it means for research to have impact
- The research that is intended to have impact
- How research impact is operationalized and observed
- What conditions contributed to impact (or lack thereof)
Expressions of interest are due April 15, 2019, and authors invited to submit completed articles must do so by July 1. For more information, please see the call.
Whether you contribute to the special issue or not, please stay tuned to this issue in the literature, in policy, and in practice. We look forward to continued engagement as we navigate this terrain together.
*Author’s Note: I’d like to acknowledge the support of the William T. Grant Foundation for creating the opportunities that resulted in this line of work, and Vivian Tseng and Mark Rickinson for their generosity in letting me bounce ideas off of them. I’d especially like to acknowledge the six thought leaders who volunteered their valuable time to contribute to this project.
About the Author:
Dr. Elizabeth N. Farley-Ripple is an Associate Professor in the School of Education at the University of Delaware. Her research focuses on policy analysis and evidence-based decision-making in schools. She can be reached at email@example.com.