This Program is Evidence-Based. Then Why Doesn't It Work?

This program works.  I guarantee it.  Maybe.  It depends, of course.

A recent post from Lisbeth Schorr urges a change to our understanding of what counts as “evidence”.  Schorr writes that, for too long, what has counted as “evidence-based” has been programs tested in randomized controlled trials and shown to have a positive impact.  Schorr argues that, while this type of research is valuable, it also leads to a sort of "silver bullet syndrome" where we spend our time trying to find the perfect program.  Interventions are likely to get differing results under differing conditions, and when we devote too much attention to whether a program is "proven" in another context, we lose sight of the need to monitor whether it is working in our local context.  Schorr concludes that a focus on programs that “actually work” (or are "evidence-based") is keeping us from getting better results. 

Instead of focusing on finding the "silver bullet" solution, more energy should be devoted to approaches that can expand our understanding of how interventions behave in the local context and how we can improve results.  A change to our concept of “evidence” is not a matter of lowering expectations for “proof”, but rather of recognizing that our job is to understand local complexities and achieve positive results for students.  A process that develops improved “practice-based evidence” is what we need in education. 

Schorr argues that shifting our focus to how interventions behave in the local context encourages greater innovation by acknowledging that local conditions shape the results we achieve.  Instead of assuming that interventions that worked elsewhere will work locally, we should run rapid tests of innovations to determine which variables affect success.  One way to do this is by using improvement science and implementing networked improvement communities (NICs).  NICs, which were initially conceived in the 1960s, have begun to spring up in education in recent years, and they are most closely associated with the work of the Carnegie Foundation for the Advancement of Teaching in Stanford, CA.  Carnegie's six core principles of improvement guide the work of NICs:  

  1. Make the work problem-specific and user-centered.
  2. Variation in performance is the core problem to address.
  3. See the system that produces the current outcomes. 
  4. We cannot improve at scale what we cannot measure. 
  5. Anchor practice improvement in disciplined inquiry.
  6. Accelerate improvements through NICs.

As we change our concept of "evidence" and acknowledge that local conditions matter, we must also adopt a disciplined approach to using "evidence".  Implementing Carnegie's core principles (or similar approaches from analogous organizations like the Institute for Healthcare Improvement) is one path to improving the performance of our systems and the outcomes we achieve for kids.