Teach By Design
Aug 14, 2018

Implementation Fidelity: It's More Than a Score

The higher your fidelity score, the better your odds of improving student outcomes. How well you can match the intervention to your context matters almost as much.


Editor's Note: Often when we talk about positive behavioral interventions and supports (PBIS), we focus on student behavior — what it looks like, how we can improve it, reward it, discipline it, and report it. What about our role? This is the first in our series taking a closer look at the roles adults play in implementation.

fi·del·i·ty
noun
     1: faithfulness to a person, cause, or belief, demonstrated by continuing loyalty and support.
     2: the degree of exactness with which something is copied or reproduced.
      synonyms: loyalty, allegiance, accuracy, precision

We hear the word ‘fidelity’ a lot when we’re talking about implementation. Working in an office full of researchers, the word ‘fidelity’ ranks high on the list of ‘Things I Heard in Yesterday’s Meeting’ – right up there with ‘scaling up’ and ‘sustainability’. A school’s fidelity score is its touchstone for the kinds of outcomes it can expect from its implementation. The more closely a school implements a program the way research says it should be done, the better the outcomes. When I started thinking about the beginning of the year and the series we wanted to write for this blog, I figured fidelity of implementation would be an easy, breezy place to start. I assumed there would be endless research to cite and so many strategies that I’d have to pare down the list. The topic appeared to be black and white. I soon found out the truth is in the gray area.

The more I read on the topic, the more light it shed on a reality you know all too well: Schools are complicated places. Every moving part contributes something to your system’s success or failure. Don’t get me wrong: fidelity of implementation absolutely matters. But maybe when class sizes grow by 50%, there’s a new math curriculum this term, and you have all new leadership in your building, this year was always going to be tough… no matter how closely you followed that implementer’s handbook. With all these variables, it’s difficult to say with absolute certainty that your fidelity score always leads to specific outcomes.

Here’s what I mean.

Researchers from Loyola University looked through 30 years’ worth of implementation studies, focusing on interventions associated with child and adolescent physical health and development, academic performance, drug use, and social and mental health issues.[1] Any guesses on how many interventions they looked at? It turns out, between 1978 and 2008 there were 542 interventions that fit their study. As they read the results of five meta-analyses and 59 individual studies, they were most concerned with how each intervention played out over the course of its implementation. Specifically, they wanted to know:

  1. Does implementation affect program outcomes?
  2. What factors affect implementation?

Their findings are clear: When an organization implements a program carefully, monitoring the implementation along the way, kids experience positive outcomes two to three times greater than when an organization implements a program haphazardly. The researchers also found that schools experienced improved outcomes when they adapted an intervention to fit their context – even though adapting almost always resulted in lower fidelity. How can that be?

GIF of a toddler looking confused

Beth Harn, Danielle Parisi, and Mike Stoolmiller explored the nuances of fidelity in their literature review.[2] They found that an intervention’s ability to be adapted to fit a certain context really makes a difference. An example they cite is a small-group literacy intervention where teachers were given specific wording, activities, correction procedures, and recommendations for time management.[3] Most teachers scored high on their fidelity of implementation, but there were significant gaps in student improvement between the small groups. So, researchers looked a little closer at two small groups in particular: one with high-fidelity instruction but low achievement scores, and one with lower-fidelity instruction and high achievement scores.

The difference between the two groups in this study was their teacher’s ability to adapt. The first group’s teacher, the one with high fidelity scores, appeared to be so focused on delivering the intervention that she lost sight of her students’ engagement in the lesson. The second group’s teacher delivered the core elements of the intervention while paying close attention to her students’ responsiveness. When her students got a concept, she’d skip the review questions and move on to the next lesson. Her flexibility lowered her fidelity score, but it also kept her students engaged and actively learning throughout.

When it comes to checking in on your implementation, measuring your fidelity is important. The closer you are to meeting the benchmark for the fidelity measure, the better your odds of improving student outcomes. But what we learn from applying research to reality is this: implementation is more than your fidelity score. It also depends on how well you fit the intervention to your context.

Here are a few dos and don'ts to consider as your school works to find that sweet spot between high fidelity and contextual fit.


1. Don’t: Set It and Forget It

GIF of a football player, with no opposing player around, about to score a touchdown only to drop the football just before scoring.

You can’t put in all that time in the gym, all those run-throughs at practice, all that poring over playbooks, only to show up at game time and drop the football inches from the goal line. You have to see the play through to the end. [On a scale of one to the worst, how much do you hate a sports analogy? Me too. I apologize.] Remember the 30 years’ worth of implementation studies? Well, in their analysis, Durlak and DuPre called the assessment of implementation “an absolute necessity in program evaluations.” They even went so far as to call determining a program’s effectiveness without collecting this information “flawed and incomplete.” If you aren’t regularly asking yourself whether you’re still implementing what you said you would, you’ve dropped the ball.

Do: Assess Fidelity of Implementation

With any intervention, it’s important to check in on how well the adults involved are implementing it the way you expected. There are lots of ways to check fidelity. Go to where the intervention happens and observe it. Ask the teacher to fill out a self-rating on how well they think they’re doing and where they could use additional support. Schools implementing PBIS have several surveys they can take to measure how well they’ve done at implementing systems to improve student behavior. The best survey to take as a team is the Tiered Fidelity Inventory (TFI).

The TFI is an efficient, reliable survey assessing how closely a school implements the critical elements of PBIS. Because you can assess all three tiers, taking the TFI may even reduce the number of additional surveys your school takes throughout the year. It doesn’t matter whether your school is in the first year of implementing PBIS or has been doing this stuff for more than a decade: taking the TFI annually is something every PBIS school should do.
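If it helps to see the arithmetic behind a fidelity score, here’s a minimal sketch of TFI-style scoring. It assumes the published scoring convention – each item scored 0, 1, or 2, with a tier’s score reported as the percentage of points earned out of points possible, and 70% as the commonly used benchmark – and the item scores themselves are made up for illustration.

```python
# A minimal sketch of TFI-style scoring, assuming the published convention:
# each item is scored 0, 1, or 2; a tier's score is the percentage of points
# earned out of points possible; 70% is the benchmark commonly used to say
# a tier is implemented with fidelity.

def tier_score(item_scores: list[int]) -> float:
    """Return a tier's fidelity score as a percentage of possible points."""
    if any(score not in (0, 1, 2) for score in item_scores):
        raise ValueError("Each TFI item is scored 0, 1, or 2.")
    earned = sum(item_scores)
    possible = 2 * len(item_scores)
    return 100 * earned / possible

# Hypothetical Tier 1 administration (the Tier 1 scale has 15 items;
# these particular scores are invented for the example).
tier_1 = [2, 2, 1, 2, 0, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2]
score = tier_score(tier_1)
status = "meets" if score >= 70 else "is below"
print(f"Tier 1 score: {score:.0f}% ({status} the 70% benchmark)")
```

Notice the score is just a ratio: a team earning 24 of 30 possible points lands at 80%, comfortably over the benchmark even though a few items weren’t fully in place.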

2. Don’t: Go Wild With Adaptations

Image of eating bacon instead of exercising.

Making an intervention fit your building is important. It should feel like a natural part of the culture you’ve already built for your students. However, change an intervention too much and you’re eating bacon instead of going for a run.

Do: Make It Work With a Practice Profile

There is a reason benchmarks on fidelity measures aren’t set at 100%. You need to implement enough parts of a program for it to be effective and still be able to say you’re doing what you said you would do. The trick is to focus on the core elements of the practice: What are the gold standards for each element? What could be considered acceptable variations? Answering these questions creates a Practice Profile everyone can refer to when they start to consider changing some part of their implementation.

To start building a Practice Profile for something you’re doing in your school (there’s a worked sketch after this list):

  1. Identify the critical components of the practice. If your practice is backed by research, chances are, there are some core features outlined for it. Write those down.
  2. Name the gold standards for each feature. That same research probably identifies exactly what you should be doing to create positive changes in student outcomes. What would students do in the best case scenario? What would teachers do? How do you collect data? What do teams do?
  3. Name the harmful variation of each feature. If you weren’t implementing this practice with high fidelity, what would happen? What would you see students doing? Teachers? Administrators?
  4. Review your gold standards and harmful variations and come up with both acceptable and unacceptable variations. You’ll know a variation is acceptable because, if you did it, you would still see improved outcomes. For example, an acceptable variation of a No Screen Time rule in your house might be to allow using an iPad to do math games. An unacceptable variation might be to allow uninterrupted Netflix binges on the weekends.
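
If it helps to see all four steps in one place, here’s a minimal sketch of a Practice Profile laid out as a simple data structure. The field names are ours for illustration – not an official PBIS format – and the entry reuses the hypothetical No Screen Time rule from step 4.

```python
# A minimal sketch of a Practice Profile as a data structure. Field names
# are illustrative, not an official PBIS format.
from dataclasses import dataclass, field

@dataclass
class PracticeComponent:
    component: str      # critical feature of the practice (step 1)
    gold_standard: str  # what "done well" looks like (step 2)
    harmful: str        # low-fidelity variation to avoid (step 3)
    acceptable: list[str] = field(default_factory=list)    # keeps outcomes (step 4)
    unacceptable: list[str] = field(default_factory=list)  # erodes outcomes (step 4)

# The hypothetical No Screen Time rule from step 4, written out as one entry.
no_screen_time = PracticeComponent(
    component="No Screen Time",
    gold_standard="No recreational screens on school nights",
    harmful="Screens routinely replace homework and sleep",
    acceptable=["Using an iPad to do math games"],
    unacceptable=["Uninterrupted Netflix binges on the weekends"],
)

# A full Practice Profile is just one entry per critical component.
profile = [no_screen_time]
for entry in profile:
    print(f"{entry.component}: gold standard = {entry.gold_standard}")
```

However you record it – in a table, a shared document, or a structure like the one above – the point is the same: every critical component gets a named gold standard and named variations before anyone starts adapting.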

If any part of this process seems difficult, that’s because it takes practice and experience to develop something like this. That’s why you shouldn’t do it alone.

3. Don’t: Go It Alone

GIF of Milhouse from The Simpsons playing Frisbee by himself.

In the first year or two of PBIS implementation, you invite your coach to every meeting and consult them on your end-of-year action plans. Your coach is your right hand, and your implementation is ultimately better for it. By the time you’re into your fifth, tenth, or seventeenth year of doing PBIS in your building, you can name the TFI subscales in your sleep. School-wide expectations are how you do business in your building. You feel like the Michael Jordan of PBIS implementation. What could a coach possibly tell you about how to improve? Even Jordan had Phil Jackson. [I swear, that was the last sports analogy.]

Do: Enlist a Coach

Your PBIS coach brings experience to the table. They know the intervention. They know your school. They are your best option for working through how these two things can work together seamlessly. Use your coach’s knowledge to develop a Practice Profile, or to work with teachers on how they can make the intervention work better in their classrooms. Your coach is someone who helps during initial implementation to iron out the wrinkles. Your coach can also be the person who takes your practices to the next level when you feel like you’ve reached a plateau. There’s always something you can do to improve. So, ask your coach what they see as an area where you can do better.

Think back to the moment you decided to use an intervention to solve a problem in your building. In the first few days of that implementation, you were careful to follow the steps: you collected every point card, taught every expectation three different ways, and followed the implementation handbook to the letter. The farther away from that first day you get, the easier it is to drop some of those important practices. Your check-ins get a little lazier. You refer to your classroom expectations a little less often. It’s important to regularly check in and ask yourself, “Am I doing what I said I would do?” Instead of getting lax in your implementation, use the experience you’ve gained with the intervention to find ways to make it work naturally in your classroom. You know your students better than most people. Find adaptations to fit their personalities and their learning styles. If you get stuck, don’t be afraid to reach out to your coach and ask for help. Their creativity just might give you inspiration for what to do next.

1.  Durlak, J. and DuPre, E. (2008). Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation. American Journal of Community Psychology, 41(3-4), pp.327-350.
2.  Harn, B., Parisi, D. and Stoolmiller, M. (2013). Balancing Fidelity with Flexibility and Fit: What Do We Really Know about Fidelity of Implementation in Schools? Exceptional Children, 79(2), pp.181-193.
3.  Simmons, D., Kame'enui, E., Harn, B., Coyne, M., Stoolmiller, M., Edwards Santoro, L., Smith, S., Beck, C. and Kaufman, N. (2007). Attributes of Effective and Efficient Kindergarten Reading Intervention. Journal of Learning Disabilities, 40(4), pp.331-347.


About Megan Cave

Megan Cave is a member of the PBISApps Marketing and Communication team. She is the writer behind the user manuals, scripted video tutorials, and news articles for PBISApps. She also writes a monthly article for Teach by Design and contributes to its accompanying Expert Instruction podcast episode. Megan has completed four half marathons – three of which happened unintentionally – and in all likelihood, will run another in the future.
