Determining whether a product has the right kind of research base to show its effectiveness is often a confusing process. The Every Student Succeeds Act (ESSA) now requires that the intervention programs school districts purchase have sufficient evidence to demonstrate that they improve student outcomes. Choosing a program with research-based effectiveness increases the likelihood that it will improve student achievement. Understanding ESSA’s tiers of evidence and knowing what to look for will help districts make informed choices about intervention programs.

ESSA’s Tiers of Evidence

Tier 1. Strong Evidence: from a well-designed and well-implemented experimental study (one with random assignment) demonstrating statistically significant positive effects on student outcomes.

Tier 2. Moderate Evidence: from a quasi-experimental study that meets ESSA standards. A quasi-experimental study is still an empirical study but without the random assignment that makes Tier 1 studies the gold standard. This level of research also shows significant positive effects.

Tier 3. Promising Evidence: from a well-designed and implemented correlational study, statistically controlled for selection bias.

Tier 4. Demonstrates a Rationale: there is a well-defined logic model based on rigorous research. An effort to study the effects of the intervention is planned or currently underway.

Districts should be able to expect that even brand-new products have been created with solid, peer-reviewed research in either the topic area or pedagogy or both.

How to Make the Best Intervention Selection

It is important that schools and districts understand ESSA’s tiers of evidence so they can quickly determine whether a program’s research claim is fact or fiction. Use the following steps to choose the best solution for your district’s unique situation.

1. Identify Local Needs

First, engage all district stakeholders to determine specific student needs. Then compare current student outcomes to the district’s performance goals. Creating an inventory of current practices and interventions will inform the types of interventions the district requires. Finally, identify learning gaps and determine which of them to prioritize.

2. Select Evidence-Based Interventions

Several clearinghouses help educators find instructional programs that are research-based. Evidence for ESSA, for instance, provides the most up-to-date and user-friendly review of research based on the ESSA tiers. The What Works Clearinghouse has begun to align its evaluation process with the ESSA tiers, and the National Center for Intensive Intervention provides evaluations of other program components such as academic screeners.

Programs with rigorous evidence of effectiveness are more likely to produce successful results in your district. Programs backed by studies that meet the “strong” or “moderate” evidence tiers are preferred. Because gold-standard studies may not be feasible with all subpopulations, programs rated “promising” can also be useful when researching solutions for subpopulations—for example, English language learners or students with disabilities.

3. Review District Capacities

Funding, technology infrastructure, staffing needs and skills, administrative leadership support, and even scheduling requirements are all success factors. It is critical to determine whether the program is a good match to support district goals. Review the district’s ability to implement a given program with fidelity, and confirm that sufficient resources have been allocated.

Also, note whether the product’s instructional model was originally developed for the purpose for which it is being purchased. For example, some supplemental ELA/reading programs were not originally developed as intervention programs. Such a program may not provide effective adaptation and differentiation for students who are significantly below grade level and need to accelerate their reading growth.

It makes the most sense to choose a program designed for the purpose you will be using it for.

Reading the Fine Print

Be careful to read the fine print in marketing materials and when negotiating your contracts. You will sometimes see wording that promises “up to 2X or 3X expected growth.” The red flag here is “up to.” This phrase often implies greater results than the product may achieve for most students. If the product’s marketers could say, “On average, students with certain characteristics achieve X% of growth,” they would.

Another red flag is reliance on self-referential data. To be a compelling comparison, proprietary company results and performance data should be correlated with a national metric, such as SBAC or NWEA MAP.

Technology platforms that feature multiple subject areas, such as reading and math, may provide administrative and pricing convenience, but these platforms may not provide equally good instructional support or results for all subjects. Academic experts advise assessing each subject area’s instruction individually against your district’s needs to ensure an “apples to apples” comparison with other programs being considered. The goal is to select the most effective instructional tools to meet your district’s goals.

Learn more about Reading Plus

Based on decades of reading research, Reading Plus has been shown to produce statistically significant increases in reading proficiency, comprehension, and motivation. Reading Plus has been found to meet ESSA’s “strong evidence” standard.

Request a demo with your local representative for a closer look at our adaptive literacy program. A demo is an easy way to see how Reading Plus fits into your district’s literacy plans.
