Improving the Odds for Students

There are two reasons to jump on the “evidence-based” bandwagon. The first is that we can and must do a better job educating our young people, and we will likely need to do so with less funding. Over the last 40 years, average student achievement levels in the U.S. have been stagnant[1] even as per-pupil funding has increased steadily[2], and achievement gaps between children from different socio-economic and socio-cultural backgrounds have continued to widen[3]. These trends foreshadow widening wage gaps between lower- and higher-skilled workers[4] and across racial/ethnic groups.

The second reason is that sizeable shares of federal and private funds for K-12 education are now reserved for policies and practices that are “evidence-based.” For example, the Every Student Succeeds Act (ESSA) requires recipients of federal funds to use the monies for evidence-based programs, policies, or practices[5][6]. Similarly, other federal, state, and private funders have incorporated evidence criteria into their award decision rubrics (see Table 1 below). With evidence-based approaches prioritized, state and local education agencies must weigh the nature and degree of available evidence in their decision-making.

Embedding evidence standards in funding processes signals to prospective education organizations that (1) they must use funds in ways that maximize the expected “bang for the buck,” given the current state of knowledge, and (2) they will be held accountable for generating evidence that corroborates, or prompts adjustment of, that expected return on investment. Policies such as ESSA were foreshadowed by the 2002 creation of the Institute of Education Sciences (IES), which has built a robust infrastructure to support evidence-based decision-making. This foundation includes:

  • guidelines for education research and development projects;
  • funding for more than 1,500 research projects adhering to these guidelines;
  • the What Works Clearinghouse, which provides easy, reliable, and free access to what we do and do not know about the effectiveness of various education programs, policies, and practices;
  • and a revamped Education Resources Information Center (ERIC), which improves public access to education research and instructional resources, including a large share of the IES-funded studies.

Effective funding agencies embed targeted, evidence-based strategies aimed at improving organizational outcomes. For example, IES’s infrastructure gives State Education Agencies (SEAs), Local Education Agencies (LEAs), and myriad Community-Based Organizations (CBOs) a well-lit path to the tools they need to find and use existing evidence, as well as to generate and apply new evidence. In addition, the U.S. Department of Education’s Non-Regulatory Guidance for Strengthening the Effectiveness of ESSA Investments adds two very useful steps to the popular Plan-Do-Study-Act model for continuous improvement. One is the use of evidence to inform the prioritization of needs and the selection of strategies for meeting those needs. The other is routine evidence-gathering for promising but unproven strategies.

Tools like the What Works Clearinghouse, and the similar clearinghouses that have sprung up in various public agencies, can be powerful aids in identifying promising strategies for specific areas of improvement. In the context of serious efforts to improve the effectiveness and efficiency of educational policy and practice, consider three scenarios: (1) the need is decided, but the solution is not (e.g., improving reading among early-grade English language learners (ELLs)); (2) there are competing needs and a preference for maximizing the chances of success (e.g., improving kindergarten readiness or raising high school graduation rates); and (3) the education organization is being pitched a “sure solution” for an acknowledged need (e.g., READ). To make wise decisions in any of these situations, it is important to have ready access to the state of evidence on the likely impacts and costs of the trade-offs under consideration. Quick access to trusted sources of what we know is a first step toward better decision-making. If we are to accelerate progress, it is equally important to institute continuous improvement practices in education-serving institutions and to ensure that credible evidence on the impacts and costs of various improvement strategies makes it into the evidence review platforms.

For more information on how to find evidence-based programs, policies, and practices, along with education research studies, check out our practitioner’s resources page.

[1] See reading and mathematics score trends: https://nces.ed.gov/programs/coe/pdf/coe_cnj.pdf

[2] See report on Revenues and Expenditures for Public Elementary and Secondary Education: https://nces.ed.gov/programs/digest/d15/figures/fig_11.asp?referrer+figures.pdf

[3] See report on School Segregation and Racial Academic Achievement Gaps: http://blogs.edweek.org/edweek/inside-school-research/reardon segregation and achievement gaps apr2016.pdf

[4] See report on Job Market Polarization and U.S. Worker Skills: https://www.brookings.edu/wp-content/uploads/2016/06/polarization_jobs_policy_holzer.pdf

[5] See information on evidence-based programming (Results for America): http://results4america.org/wp-content/uploads/2016/11/ESSA-evidence-provisions-explainer-7.22.16-Update.pdf

[6] See information on evidence-based programming (Chiefs for Change): http://chiefsforchange.org/wp-content/uploads/2016/07/ESSA-and-Evidence-Why-It-Matters.pdf
