Using Evidence-Based Practices in Intervention Pilots

It’s a familiar story to those involved in education or social service program delivery: a new intervention has been piloted to improve outcomes, data have been collected with varying degrees of success, and soon the funder or sponsor will expect a final report. The team is hopeful that the data will show the intervention’s impact, but demonstrating impact within a short window can be difficult. Hiccups may have hindered the collection of outcome data, and the logic model for the intervention may not even have anticipated demonstrable impacts until after the final report is due.

CHS&A has encountered this scenario many times in working with clients. Most recently, we worked with a state agency that had disbursed funding to 20 grantees to pursue innovative strategies to help students with disabilities transition more successfully from school to independent living and to a job or higher education. Our task was to measure the impact of those pilot interventions, and our evaluation team faced both of the challenges described above. First, we had no input into the individual grantees’ data collection and reporting plans at the beginning of project implementation, and we were left hoping that their data collection protocols would be sufficient to answer our research questions. Second, many of the intended impacts of these interventions were expected to occur after the reporting period. Despite our keen awareness of these complications, our concerns were alleviated early in the process by a key project design criterion: the client had required that grantees base their interventions on evidence-based practices and predictors (EBPs).

The movement to adopt EBPs has been on the rise in the education and health sectors in recent decades, and for good reason. Many well-meaning service providers have promoted intervention ideas based on their past experiences, intuition, or the opinions of their peers and trainers, but these ideas don’t always prove effective. EBPs are the result of putting such ideas to a rigorous test to determine which are reliably effective. This means that service providers who use EBPs aren’t enacting an intervention solely because they believe it will work, but because there is empirical evidence of results in past applications.

CHS&A was pleased to find that our task was to evaluate interventions based on EBPs. It meant that even though we had only limited access to output and short-term outcome data, and though the medium- and long-term outcomes were still too far off to measure, there was a line in the research literature connecting the interventions in question to specific longer-term outcomes. For example, we wouldn’t know the graduation outcomes of freshmen who received the intervention for at least three more years, but we did know there was evidence that the EBP led to increased graduation rates among the target population. Essentially, EBPs allow some degree of extrapolation into the future when it comes to outcomes. To draw this link effectively, research resources should be used to ensure that the EBP-based intervention is implemented with fidelity to the original EBP description.

If you are preparing a proposal for a pilot intervention in education or health, consider basing that intervention on EBPs, especially if the reporting window is only a year or two. Your proposal preparation will be more straightforward, since the theoretical rationales for EBPs tend to be robust. At the end of the project, you will also be able to speak more confidently about potential longer-term outcomes, even if you aren’t yet able to demonstrate them.

Information on EBPs is widely available. To increase the use of EBPs in practitioner communities, a number of research and professional organizations maintain clearinghouse websites with searchable EBP databases. See the listing below for some example resources that might fit your projects. The first listing is the one CHS&A drew on for our evaluation of post-secondary transitions.

Education

Health/Mental Health

About Andrew Menger-Ogle, PhD

Dr. Andrew Menger-Ogle is a Research and Evaluation Associate at C H Smith & Associates, LLC. He currently provides methodological and analytical expertise for program evaluations.