Accelerating Student Growth

 

Getting Even More from Your Existing Interventions

You're using effective interventions, and your team is well-trained and working diligently. Yet, your intervention results are falling short of expectations.

In many cases, small adjustments to your existing process can significantly boost student outcomes. Here, we'll address a straightforward strategy that can potentially improve results for at least an additional 30% of your students. This approach doesn't require acquiring new interventions or simply increasing intervention effort. Rather, it's a "big picture" change that focuses on the initial steps of intervention design: before implementing any intervention, accurately determining the root cause of a student's struggles.

A student scoring markedly below level on a reading screener requires a tailored approach based on the underlying issue. I'll outline a simple, sequential process to identify or rule out whether the student's problem is a systemic Tier 1 issue, a "can't do" skill deficit, or a "won't do" motivational challenge. Misidentifying the problem leads to misdirected interventions, wasting valuable student and teacher time and resources.

Crucially, the effectiveness of any intervention, regardless of its design, depends on the accuracy of the initial problem identification.

Research supports a three-step method for refining problem identification. These steps should be used sequentially:

 

Step 1: Check for a Tier 1 problem. If a referred student is performing low, and so are many classmates, your approach should differ (see the classwide graph of scores below; the referred student is marked). Deno and Mirkin's (1977) research indicated a classwide, or Tier 1, problem when more than 50% of the students are below level. In such cases, classwide intervention in the specific skill area is generally recommended. This process, while seemingly complex, can be implemented systematically and simply. For further information, consult the work of Amanda VanDerHeyden and her colleagues (VanDerHeyden & Codding, 2015), which provides a strong foundation for implementing effective classwide interventions. In short, Step 1 is to determine whether more than 50% of students are below level.
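The Step 1 decision rule can be sketched in a few lines of code. The 50% threshold follows Deno and Mirkin; the benchmark score here is a placeholder, as actual cut scores come from your screening measure.

```python
def is_classwide_problem(scores, benchmark, threshold=0.50):
    """Flag a Tier 1 (classwide) problem when more than `threshold`
    of students score below the benchmark (Deno & Mirkin, 1977)."""
    below = sum(1 for s in scores if s < benchmark)
    return below / len(scores) > threshold

# Example: 7 of 10 students fall below a hypothetical ORF benchmark
# of 90 words correct per minute, so a classwide problem is indicated.
scores = [55, 60, 72, 80, 85, 88, 89, 95, 102, 110]
print(is_classwide_problem(scores, benchmark=90))  # → True
```

If the rule returns True, the recommendation above applies: intervene classwide in that skill area before considering individual intervention.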

 

 

Step 2: Can't Do vs. Won't Do (CDWD). If Step 1 rules out a classwide problem, proceed to Step 2, a Can't Do/Won't Do assessment, as discussed in a previous article. This straightforward process involves assessing a key skill, such as Oral Reading Fluency (ORF). First, administer the assessment under standard conditions. Then, repeat the assessment, offering a reward (e.g., extra computer time) for exceeding the initial score. If the score improves by more than 20%, the student likely has a performance deficit (a "won't do" problem) rather than a skill deficit. Approximately 20% of students experience "won't do" problems. Implementing instructional interventions in these cases is often ineffective, as student engagement is likely to be low. This assessment is particularly important for lengthy computer-based tests, where student engagement is a common concern. Intervention Central provides a user-friendly process for this assessment, supported by research studies and reviews (Duhon et al., 2004; VanDerHeyden & Witt, 2008).
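The Step 2 decision rule reduces to comparing the two scores. A minimal sketch, using the 20% gain criterion described above (the score values are hypothetical):

```python
def cdwd_classification(baseline_score, incentive_score, gain_threshold=0.20):
    """Apply the Can't Do / Won't Do rule: a gain of more than 20%
    under incentive conditions suggests a performance ("won't do")
    deficit rather than a skill ("can't do") deficit."""
    gain = (incentive_score - baseline_score) / baseline_score
    if gain > gain_threshold:
        return "won't do (performance deficit)"
    return "can't do (skill deficit)"

# A student reads 50 WCPM under standard conditions and 65 WCPM
# when a reward is offered: a 30% gain, so motivation is implicated.
print(cdwd_classification(50, 65))  # → won't do (performance deficit)
```

A "won't do" result redirects the team toward motivational strategies; a "can't do" result sends the student on to Step 3.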

Step 3: Skill Deficit Diagnostics. If Steps 1 and 2 rule out classwide and "won't do" problems, proceed to Step 3. This step addresses skill deficits, where the student lacks proficiency in a key skill such as ORF. Employ diagnostic assessments that break skills down into sub-skills, such as phonological awareness, blending, and reading fluency. This detailed examination of sub-skills is supported by the National Mathematics Advisory Panel (NMAP) Report (2008) and the National Reading Panel (NRP), which emphasize the importance of foundational skills as building blocks for advanced learning.
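Step 3 can be thought of as comparing a student's sub-skill profile against benchmarks and flagging the weak spots. A sketch under assumed values; the sub-skill names follow the examples above, but the cut scores are placeholders for whatever your diagnostic measure specifies:

```python
# Hypothetical sub-skill cut scores; real values come from the
# diagnostic assessment being used.
BENCHMARKS = {
    "phonological_awareness": 40,
    "blending": 35,
    "reading_fluency": 90,
}

def deficit_subskills(student_scores):
    """Return the sub-skills where the student scored below benchmark,
    so intervention can target the weakest foundational skills first."""
    return [skill for skill, cut in BENCHMARKS.items()
            if student_scores.get(skill, 0) < cut]

profile = {"phonological_awareness": 45, "blending": 28, "reading_fluency": 70}
print(deficit_subskills(profile))  # → ['blending', 'reading_fluency']
```

The flagged sub-skills, not the composite score, become the intervention targets.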

These three steps, when used sequentially, are highly effective.

The decision rules presented here were evaluated by VanDerHeyden, Witt, and Gilbertson (2007) as part of a multi-year evaluation of MTSS efficacy.

 

References:

Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Council for Exceptional Children.

Duhon, G. J., Noell, G. H., Witt, J. C., Freeland, J. T., Dufrene, B. A., & Gilbertson, D. N. (2004). Identifying academic skill and performance deficits: The experimental analysis of brief assessments of academic skills. School Psychology Review, 33(3), 429-443.  

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a Response to Intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225-256.  

VanDerHeyden, A. M., & Codding, R. (2015). Classwide interventions: A meta-analysis of single-case research. School Psychology Review, 44(2), 169-190.