Does training in evidence based practice make clinicians better decision makers?

We elves love our evidence and evidence-based practice, where treatment is guided by evidence of treatment effectiveness, integrated with clinical expertise and the values and preferences of the patient.

While conducting an entirely unrelated systematic review (I have to question the precision of my literature search), I stumbled upon this paper by Baker-Ericzen and her colleagues. It isn’t a systematic review, randomised controlled trial or even a cohort study, so it’s not our usual elven fodder but interesting nonetheless.

Clinical decision making skill is integral to evidence based practice. In fact, it appears in most definitions of evidence-based practice because even when evidence is available, you need clinical skill to apply treatments appropriately to individual patients and conditions.


Trying to help a depressed person with a prescription for a broken bone treatment… is the same as trying to eat your dinner with an axe and a pair of pliers.


The study compared the decision making skills of community based mental health clinicians who were trained in evidence based treatment (EBT) for disruptive behaviour disorders (DBDs) with those of equally experienced clinicians who had not been trained in EBT.

The idea was that those actively trained in evidence based treatments may have developed better ‘meta-cognitive’ skills, or as Professor Carol McGuinness puts it “skilful thinking”.


We are vulnerable to all kinds of biases in our thinking because we are prone to taking shortcuts (aka using heuristics). Having better metacognitive skill may go some way to counteracting those biases and ultimately make us better decision makers.


Participants were 48 paediatric clinicians (e.g. psychologists, marriage and family counsellors, social workers) practising in community mental health outpatient settings in California who self-identified as:

  • Trained in providing therapy to children with disruptive behaviour disorders;
  • Experienced in working with these children.

Interestingly, 38 non-EBT trained clinicians and only 6 EBT trained clinicians initially volunteered to take part. A recruitment drive resulted in a final sample of 34 non-EBT and 14 EBT trained clinicians.

“EBT trained” clinicians had to have been trained in a child or family therapy approach cited in a published review of effective interventions for DBDs (such as multisystemic therapy or parent-child interaction therapy), and their training included case management, ongoing clinical supervision, and monitoring of treatment fidelity until a required standard was reached. The “non-EBT trained” clinicians had read about or attended workshops on relevant EBTs but had not undergone formal training.

The clinicians were given complex case vignettes about a child with disruptive behaviour problems and asked to “think aloud” about their conception of the case and treatment planning. Between 1 and 4 vignettes were presented over one hour, depending on how long the participants took over each case.

The authors then coded the transcripts of the “think aloud” sessions according to “naturalistic decision making” (NDM) theory, which is essentially a framework for understanding how people make decisions in complex real world situations. For example, expert decision makers tend to focus on solutions, organise information more efficiently, and ignore distracting information (Elliot 2005; Glaser & Chai 1988).

This framework was used to classify each clinician’s decision making process as “expert” or “novice” using five categories:

  1. Type of reasoning
  2. Organization of information and deriving hypotheses
  3. Attention to information and level of abstraction
  4. Finding solutions
  5. Incorporating actuarial information and flexibility in application


The results demonstrated that EBT trained clinicians used ‘forward reasoning’, drawing on relevant clinical information and asking fewer irrelevant questions. They assigned diagnoses in line with how the case vignette was constructed (i.e. diagnosing the child with a DBD).

Non-EBT trained clinicians tended to be distracted by parent and family information and assigned a wide range of diagnoses (from bipolar disorder to substance abuse) and were more likely to assign multiple diagnoses.

EBT trained clinicians gave more specific detail and rationale for their treatment plans and, on average, 64% of the treatments they proposed were EBTs, compared to 16% for non-EBT trained clinicians.

Trained clinicians produced on average 64% evidence based treatment plans compared to only 16% in control group

The authors’ conclusion

EBT trained clinicians displayed cognitive processes more closely aligned with “expert” decision makers… Non-EBT trained clinicians assigned significantly more diagnoses, provided less detailed treatment plans and discussed fewer EBTs.

Strengths and limitations

While this study isn’t up to the normal evidence standards we elves look out for, I still found it an interesting and well conducted example of cross-sectional research.

The choice of DBDs as the focus of the case vignettes was sensible as EBTs for DBDs are well researched (current NICE guidelines) and clinicians have a range of options for evidence based treatment.

The authors report that they had experts review each vignette to make sure they were consistent and each child was described as exhibiting symptoms consistent with oppositional defiant disorder (ODD) but with varying contextual characteristics.

The coding scheme (to my novice eyes) looks to be well constructed, manualized and implemented.

The participants were blind to the research question.

The two authors who coded the transcripts were blind to clinician training group and reported good interrater reliability (although they don’t report how many transcripts were double coded or how interrater reliability was calculated).


On first reading this it occurred to me that trained clinicians might just be better at articulating what they would do as they have practiced this skill through clinical supervision. But the detailed analysis of the quality of decision making and the fact that non-EBT trained clinicians made such different diagnoses convinced me that the difference wasn’t just about being articulate.

So, EBT trained clinicians seemed to be making better quality decisions and treatment planning. Of course this doesn’t tell us anything about differences in outcomes for their patients or, indeed, what is really driving the differences in decision quality.

To my mind, clinical supervision is the important difference between the “trained” and “non-trained” groups. So, perhaps this paper provides a good argument for providing adequate time for clinical supervision during initial training and beyond, to give clinicians sufficient feedback and practice to become ‘experts’.

Now I’m wondering if I can apply naturalistic decision making to speed up my systematic review screening process… only 6,483 records left to screen…

Perhaps, supervision during training and beyond can give clinicians sufficient feedback and practice to become ‘experts’?


Primary link