Teachers might teach an evidence-based reading, mathematics, or other curricular program. They might learn scientifically verified instructional strategies in professional development settings. But many observers believe that scientific evidence plays only a limited role in teachers' day-to-day planning and instruction.
Does that mean teachers use no evidence in their day-to-day work? Certainly not. Teachers rely on evidence collected in their daily practice, some more than others. For example, formative evaluation is a well-researched process: observations, work products, and assessments collected for individual students constitute potentially powerful evidence on which teachers can base instructional decisions. Students achieve more when they learn from teachers who make greater use of formative evaluation. This evidence-based process identifies what students find hard to grasp and helps teachers tailor follow-up instruction.
Another example of evidence-based practice (EBP) was documented in Japanese classrooms (Ermeling & Graff-Ermeling, 2016). Japanese mathematics teachers assign novel problems for which small groups of students must struggle to develop a solution method. The problem chosen is novel, "just beyond" what the students have already learned to solve. As the students work, the teacher circulates "between the desks," observing and taking notes. This information (evidence) reveals to the teacher areas of confusion and inaccuracy to address in later instruction. It also enables the teacher to assess and expand the range of solution approaches presented to all students.
Another kind of EBP is observed in some U.S. professional learning communities (PLCs). Collaborating teachers identify a learning problem their students share, jointly develop and try out corrective instruction, collect, share, and discuss assessment results, and continue refining instruction until they see tangible learning improvement.
Although common, teachers' use of EBP is hardly comparable to the standards employed in, for example, hospital emergency rooms. A patient suffering severe chest pains can reasonably expect an ER physician trained to a national standard of examination and response. No such expectation is reasonable if you visit a randomly chosen school hoping to see evidence-based practices such as systematic formative evaluation or collaborative, repeated efforts to resolve persistent learning problems.
Even where EBP is found, consistency of use is a continuing problem. Though the widespread interest in PLCs is a hopeful trend, in too many instances PLCs do not focus on the continuous improvement of instruction and specific student learning challenges (Ermeling & Gallimore, 2013).
There is another, related problem. Even in PLCs and other EBP approaches in the U.S., researcher-generated evidence is accessed only haphazardly to improve instruction. For example, in our research on teacher learning teams we observed only a few, limited instances in which teachers sought ideas and new strategies in the scientific research literature (Saunders, Goldenberg, & Gallimore, 2009). Some sought guidance from local experts in educational research. But all such instances were the exception, not the rule, even in schools recognized for doing a superb job with PLCs and using EBPs such as formative assessment.
So, happily, we have at least two evidence-generating systems. Scientists and researchers construct robust, dependable information that is (usually) published and disseminated in various ways. And many (but not all) teachers generate powerful evidence-based knowledge, most of which is only haphazardly accumulated, stored, and shared.
Carried away on a momentary flight of fantasy, I imagine a day when knowledge produced by the research community is put to the ultimate scientific test of multiple replication by an increasingly large and engaged evidence-based practice community.
No scientific study can control every variable that might potentially impact student learning in public classrooms in a culture as diverse as ours.
If we could capture the knowledge generated by thousands of EBP practitioners every year, we could, through multiple replications, learn something that eludes and bedevils even the most careful and conscientious scientist: do these results generalize widely or not? If they do not, the failures to replicate might generate new and productive hypotheses to test.
Fantasy over, I am back to the reality that neither the research nor the practice community has historically had any incentive to join forces to generate, test, store, and share knowledge for teaching (Hiebert, Gallimore, & Stigler, 2002). Each labors under its own constraints and contingencies.
Even when a sterling example of researcher-practitioner collaboration appears and its story is told and appreciated, it doesn't seem to last beyond a Brigadoon moment. Brigadoon is the mythical, magical village in Scotland that appears in full, colorful glory only one day every 100 years.
Ermeling, B. A., & Gallimore, R. (2013). Learning to be a community: Schools need adaptable models to create successful programs. Journal of Staff Development, 34(2), 43-45.

Ermeling, B. A., & Graff-Ermeling, G. (2016). Teaching between the desks. Educational Leadership, 72(2), 55-60.

Hiebert, J., Gallimore, R., & Stigler, J. (2002). A knowledge base for the teaching profession: What would it look like, and how can we get one? Educational Researcher, 31(5), 3-15.

Saunders, W., Goldenberg, C., & Gallimore, R. (2009). Increasing achievement by focusing grade-level teams on improving classroom learning: A prospective, quasi-experimental study of Title 1 schools. American Educational Research Journal, 46(4), 1006-1033.