Most practitioners, whether in education, health care, or any other field, would agree that research should play a part in guiding how they do their work. But research papers are written for other researchers, who care mainly about whether the analysis was done correctly rather than whether it is relevant to practice. So it’s no wonder that reading research papers is often a frustrating experience for practitioners: little of the content is germane to their interests. But I have some good news: As a practitioner, you can skip over most of the paper and still find the information you want. You just need to know where to look.
Often, the research that practitioners read attempts to measure the impact of a program or intervention on some outcome. Papers that evaluate program impact can help practitioners predict whether a program that has been successful elsewhere might work for them as well. These papers are also often among the most technically sophisticated, which makes them difficult for practitioners to parse. To unearth the most important pieces of an impact analysis, I suggest focusing on what I call R3I: relevance, inference, impact, and importance.
Begin by considering relevance. If the study isn’t relevant to a problem you are trying to solve, there’s no point in reading further. Is this intervention aimed at an outcome you want to change? If so, is the intervention something you might actually try? Here you’ll need to consider how well your local context matches the setting where the intervention was tested, to gauge whether you’d be likely to obtain the same result. Journal articles will usually include information about the size of the district or school, the demographics of the students it serves, and sometimes its city, state, or region.
In gauging relevance, you likely also want to know about how and why the intervention caused the outcome: the specific program design, the resources required for implementation, and the local social, historical, cultural, and legal conditions that allowed it to be successful. These details, unfortunately, are unlikely to be reported in academic journals, so you may need to follow up elsewhere if the paper looks promising otherwise.
If the paper is relevant, then consider how well the study’s methodology supports the inference it is trying to make. For papers that aim to show that an intervention caused an outcome (rather than merely being correlated with it), the bar is high. The study must have a strong strategy for accounting for the fact that people who choose to participate in an intervention are likely to be different from those who choose not to participate, and the differences that drive that choice likely also affect outcomes directly. Without accounting for this problem, researchers are measuring only correlation, not causation.
This is the point where you may start to get bogged down. How can you evaluate the validity of the inference the researcher is claiming without being a research expert? One strategy is to look for keywords that indicate the researcher has been attentive to this concern. Random assignment to the intervention versus a comparison group is the best way to handle this problem, so words like experiment, quasi-experiment, randomized controlled trial, or random assignment are encouraging. You might also encounter words that describe specific methodologies. Regression discontinuity, comparative interrupted time series, difference in differences, and propensity score matching are some of the most common.
Another tactic is to pay attention to the verbs. If the researchers describe their findings with words like caused, led to, resulted in, increased, or decreased, that suggests they feel confident their methodology has handled this problem well. If they instead use verbs such as associated with, correlated with, or related to, that suggests they don’t think they have a strong causal claim. This, of course, leaves the judgment in the researcher’s hands, not yours. Ideally, with practice, you would get better at judging methodology for yourself. But in the meantime, we can trust researchers and the peer review process to suss out unfounded claims, rather than demanding that practitioners develop expertise that it took researchers themselves years to acquire.
Impact & Importance
The final two Is in R3I are impact and importance. Impact has to do with direction and magnitude: Is the direction of the effect positive or negative, and how big is it? Importance focuses on statistical and practical significance. Statistical significance is a technical criterion that measures whether the estimated impact is likely to be “real” or just due to chance. Practical significance is a judgment call: Is this impact big enough to matter educationally, and is it worth the resources it would require to implement?
Now that you know what to look for, where will you find it in an academic paper? This table points the way.
| Section | Should you bother? | What will you find here? |
|---|---|---|
| Abstract | Yes; start here | Most or all of R3I |
| Background / literature review | Only if needed | Relevance |
| Intervention / context / setting | Yes | Relevance |
| Data and sample | Only if needed | Relevance and inference |
| Results / findings | Only if needed | Impact and importance |
| Discussion | Yes | All of R3I |
While most academic articles are organized similarly, journals vary in the specific labels they use for the section titles. I’ve listed some of the more common variations, but you may need to quickly skim to figure out what section you are in.
Start with the abstract, the short summary of the paper that appears on the first page. It will typically include at least a little information about most of the R3I elements. Next, proceed to the section that describes the intervention and context, to glean more information about whether the intervention is relevant to your needs. If it is, then read the discussion, where the researchers will discuss the main findings and put them in a broader context. If you still haven’t found all of R3I, then skim through the other sections as needed. Often just reading the topic sentence of each paragraph is enough to uncover the rest.
By focusing your reading where the information you want is most likely to be, you can efficiently pull out the information most important for your practice and skip all the parts that are only there to demonstrate to other researchers that the analysis was done correctly. I hereby grant you permission to skip over all the Greek, guilt-free. If something about the study design is critical for correctly interpreting the findings or limits how you can generalize from the results, it’s the researcher’s obligation to point that out in the abstract or discussion section.
Now that you know where to look when you’re reading a research paper, you can spend all the time you’ve saved on your main job: solving problems to improve outcomes for kids.
About the author:
Carrie Conaway is a senior lecturer at the Harvard Graduate School of Education and has nearly 20 years of experience in integrating research and practice to improve public policy. Carrie can be reached at Carrie_Conaway@gse.harvard.edu.