Making Impact Happen

This is the last post in the series, Bracing for Impact, by Dr. Elizabeth N. Farley-Ripple. Follow this link to the first in the series, Exploring the Concept of Research Impact, and this link to the second, Is research impact different than research use? The third in the series, A layer cake of motivations, discussed how research impact matters. The fourth post in the series asked, Is research impact observable?

The last theme of this blog series on research impact is, not surprisingly, the longest.  My comments here focus on the conditions needed to support research impact.  I’ll be up front in noting that this series has not revealed consensus on what constitutes impact, how important impact is, or how to observe it.  Yet at the same time, a number of strategies and conditions were woven through my conversations, many of which prompted me to think in different and more nuanced ways, beyond the barriers and facilitators long documented in the research use literature.  The discussion below starts from the premise that if we believe research impact is important (and everyone did, to varying degrees and for the various reasons described in earlier posts), then there must be efforts in place to support it.  This turns out to be true in some places and less so in others.  For example, the head of one federal funding agency articulated a three-pronged approach to helping the research it funds have impact, while another admitted that, as a higher education institution, they haven’t quite figured it out in the context of a traditional academic rewards system (not for lack of trying, to be fair). Conversations highlighted five broad lines of thinking: relevance, objectivity, rewards, capacity building, and accessibility.

Relevance. The relevance of research to the problems decision-makers face emerged in a multitude of ways: through political salience, content focus, the source or producer of the research, and directionality.  One respondent recalled a framework suggesting that the likelihood of research impact is contingent on a combination of political salience and uncertainty; this observation that relevance has a political element reminded me of Kingdon’s policy windows. The head of one SEA’s research and evaluation unit explained that research has to “scratch the current itch” to be helpful, denoting content relevance, and exerts significant effort curating research that could inform current department initiatives.  Or, on the other hand, they might produce research evidence directly.  The idea of producing research to inform a specific decision, often in response to a request for it, is a third way of ensuring relevance.  This happens in SEAs and LEAs, but also in programs that embed researchers and data scientists in agencies and in research-practice partnerships.  As one respondent explained, it almost guarantees impact. Relatedly, the fourth version of relevance raises the issue of directionality in connections between research and practice.  Questions driven from knowledge about policy and practice (P to R) increase the likelihood of relevance, and as one person put it, practitioners and policymakers define the problems for themselves, often in ways that are not attractive to researchers (“unsexy,” to be exact). This framing of relevance raises the question of whether dominant conceptualizations of research informing practice are misguided, or at least capture only half of the equation: should research impact be guided by a P-R-P framework?

Objectivity. Related to relevance, the characterization of research as “objective” was common.  This includes scholars seeking to be viewed as objective, neutral sources of information for policymakers, as well as funders seeking to ensure they are not perceived as promoting an agenda.  The assumption of research or researcher neutrality was strong in some conversations, while others implied that ideology can be a driver of research, underscoring the need for users to be critical of research and creating opportunities for research to be used to justify choices.

Rewards. Incentivizing research impact also arose in multiple conversations, most often in the form of recognizing that traditional academic rewards systems do not explicitly value research impact beyond the academy.  In promotion and tenure processes, impact is often gauged by metrics such as citations and related indices, which, as noted above, may be at least partially useful in understanding impact more broadly. But in the absence of more explicit recognition of impact, higher education incentives drive article production and related behaviors. An alternative incentive is funding, which is often necessary to sustain research agendas and careers but is also frequently recognized as important for career advancement (including promotion and tenure decisions in higher education).  Funding, therefore, could be a lever for increasing impact.  One of the explicit strategies identified by the leader of a federal funding agency included a) setting a funding agenda on issues of practical and policy relevance, and b) ensuring that the plan included dissemination efforts likely to promote impact.

Capacity building.  The discussion of rewards, however, was generally focused on incentives for researchers; there was no mention of rewarding policymakers or practitioners for making decisions based on research.  In contrast, the idea of building capacity for both researchers and decision-makers emerged at multiple points.  The respondent from an SEA, for example, takes a direct approach, training district and school leaders as well as SEA staff on “critically consuming” research, which in turn increases the opportunity for research impact.  Another respondent, a policy researcher, acknowledges that through his teaching and mentorship he may be shaping the next generation of policymakers and practitioners, and that his availability and accessibility can help create conditions for greater research impact in the future.  Other efforts build capacity among researchers to engage with practitioners and policymakers in the hopes of increasing research relevance and building the skills to span research/practice boundaries in their future roles.

Visibility and accessibility. Visibility and accessibility emerged as related themes in my conversations.  Visibility relates both to the message about the value of research impact and to the research (or research need) itself.  The dean I spoke with highlighted the bully pulpit as a tool to reinforce his institution’s commitment to research impact, signaling the importance of leadership in advancing a culture of research impact.  Other efforts mentioned above, such as funders’ inclusion of particular dissemination requirements and programs that embed researchers in policy and practice environments, also send strong signals about what these institutions value and may ultimately elevate the visibility of research impact in the education ecosystem.

An alternative perspective on visibility is more closely tied to accessibility.  From this perspective, the concern is the extent to which research and researchers are visible or accessible to decision-makers, and whether the problem and the decision-maker are visible or accessible to the researcher.  As noted above, among the precursors for research impact is research accessibility: if research never reaches the decision-maker’s desk, it can’t be part of the decision.  This means that researchers themselves, and not just their research, must be visible and accessible, since we have strong evidence to suggest that this is about relationships.  One researcher seeks out opportunities to contribute to programs such as NPR, not simply to make people aware of the research, but to establish himself and his work as trustworthy and accessible.  On the other hand, there was no mention of strategies to make problems or decision-makers more visible or accessible to researchers; all examples involve researchers reaching out or embedding themselves in policy or practice.  This reinforces the need to address directionality in the ecosystem: what supports or opportunities can we create, at scale, to make the needs of policymakers and practitioners more visible?

I continue to wrestle with these questions: What conditions support research impact? In what ways are these different from the conditions that support research use?  This distinction reflects an issue I acknowledged in my second post: research impact can be defined distinctly from research use, but in conversations, and likely in our work as well, those lines are blurred.  Some of these supports make sense if use and impact follow a logical chain (e.g., access to research permits use, which may lead to impact).  Others reflect the adage that what gets measured gets done: if our best metrics of impact remain citation rates, then we need more support, such as visible leadership, to move toward more meaningful impact.  And still others reflect broad views of impact, via students, for example, that extend to the researcher more than the research, opening up a new set of issues and questions about impact.

And, of course, context is everything: what is needed for one researcher or organization may not be needed for another.  I invite your thoughts on this matter: what would help you or your research be more impactful? We are standing by on Twitter @rsrch4schls #researchimpact.

*Author’s Note: I’d like to acknowledge the support of the William T. Grant Foundation for creating the opportunities that resulted in this line of work, and Vivian Tseng and Mark Rickinson for their generosity in letting me bounce ideas off of them. I’d especially like to acknowledge the six thought leaders who volunteered their valuable time to contribute to this project.

About the Author:
Dr. Elizabeth N. Farley-Ripple is an Associate Professor in the School of Education at the University of Delaware. Her research focuses on policy analysis and evidence-based decision-making in schools. She can be reached at enfr@udel.edu.
