CBT: Exploring the Evidence Base

cyCBT Theory & Practice

Emily Taylor

on 8 November 2011


CBT: The Evidence Base
Fundamentals of CBT with Children and Young People

This presentation heavily references Fonagy, P., Target, M., Cottrell, D., Phillips, J. & Kurtz, Z. (2005). What Works for Whom? A Critical Review of Treatments for Children and Adolescents. New York, Guilford. (Chapter 1)
A developmental approach to CBT
Is research done with adults relevant to children?
Age-stage awareness
Top-down development
Therapy for adults modified for children
The nature of modifications
Target audience
Age-stage awareness: research in children and adolescents frequently refers to a single intervention being delivered to children across a broad span of ages. How clearly does the research explain how adjustments are made for age and stage, and how aspects of cognitive development are taken into account? CBT rests on the assumption that the individual can generalise from a principle to specific situations, but we know that children struggle to do this because it requires abstraction, the ability to hypothesise, and the ability to predict consequences.
Top-down development
Therapy for adults modified for children: There has been a move away from adapting adult models of intervention for children which ties in with the growing acceptance of developmental psychopathology as a framework for understanding childhood disorder. Nonetheless, research continues to be published which describes CBT that has not taken developmental, contextual or systemic factors into account in its conceptualisation or delivery.
The nature of modifications
Conceptualisation: little c, big b, systemic orientation, social factors (e.g. groups)?
Application: age-specific, part of combined treatment, language and materials
Target audience: diagnostics/symptoms/contextual factors
Economic and political climate
A dominant therapy because it is good or because it fits in with the ‘gold standard’ of RCTs?
Delivering clearly labelled therapies for clearly labelled disorders?
Contextual Issues
The demand by purchasers to know that they are obtaining the best treatment for the price
The emergence of quasi-markets in UK
The wish to develop policies that limit the damaging consequences of the fragmentation of children’s health services
The wish to evaluate new treatments, care settings, or new ways of organising delivery
The wish to justify and promote the use of new pharmaceutical treatment products

Substantial Evidence = Substantial Value, but ensure a critical reading of the evidence base
7964 returns on peer-reviewed articles with cognitive-behavioural in the title, 884 returns for play therapy, 707 returns for interpersonal psychotherapy

In psychology, the positivist movement was influential in the development of behaviourism and operationalism. Positivism and postpositivism both adhere to the scientific method: to be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning.
Efficacy Research
Lab-based findings (i.e. highly controlled)
Effectiveness Research
Naturalistic setting
Doesn’t interfere with therapist behaviour
Large-scale possible
Effectiveness v Efficacy
Effectiveness Research: might include surveying a clinical population. The findings are highly representative, but also difficult to apply in a specific situation because the population is so heterogeneous (mixed). It also captures only patients who have stayed with the service, so risks 'cloning' – continuing to deliver the same service to the same people – because the drop-outs aren't captured. No control groups.
Cost-effective but at cost of quality? Larger sample size but higher attrition.
Doesn't interfere with therapist style/behaviour, but this also means that model fidelity and individual factors aren't controlled and may influence outcomes
Efficacy research criticised because frequently based on highly selected samples that do not reflect real clinic populations
Therapist/researcher investment in outcome – a problem for both types of research.
Some combination of the two is probably best
Problems in principle
Practical problems
Ethical problems
Validity and generalisability
The least bad option?
RCTs are problematic:
a condition or expected outcome is so rare that there is little chance of recruiting sufficient numbers
randomisation cuts across preferences of clinician and/or patient
clinician refusal to participate
just because something is effective doesn’t make it right, e.g. aversive conditioning for challenging behaviour
clinician as researcher – what takes priority or is right for the patient?
scope of the task: evidencing all the conditions and all the treatments available would require over 100,000 RCTs
Validity and generalisability
Clinician enthusiasm and experience may be difficult to replicate (manuals can to some extent address this)
Participants are often single disorder, first presentation, short duration – not representative of actual presentations to clinics (although the IMPACT trial by Fuggle and Verduyn may show otherwise – no outcomes yet). Also, unlike many RCT participants, children presenting to clinics tend to have contextual risk factors such as family psychiatric morbidity, stress and impairment. However, before throwing RCTs out because of this, we need to bear in mind that there is no strong established link between severity and efficacy.
Treatment might be different (superior, better supervised, more likely to be sustained over the duration, greater model fidelity). Treatment manuals have reduced this problem to a great extent, but are not always popular with clinicians.

Mental health interventions have a larger evidence base than any other medical intervention. Why is this?
Developmental psychopathology discourages a singular focus on the observable symptoms, steering more towards a complex interplay of biopsychosocial factors. Is measuring symptoms within this framework redundant?
Makes use of ‘effect size’
Problems include:
Comparability between studies
Categorisation of studies (e.g. Shirk & Russell, 1992)
Publication bias (Lipsey & Wilson, 1993)
Inclusion criteria
Weighting of sample size
Meta-analytic Reviews
Effect size is essentially the magnitude of the difference in outcome between the control and experimental groups. Meta-analyses calculate these for individual studies and then pool them. An effect size of .2 is small, .5 medium and .8 large. Effect sizes can be established for all kinds of different outcomes, therapeutic factors and characteristics, or moderators of outcome. The average ES for psychotherapy in general is .7 – quite large!
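As a rough numerical sketch of how a standardised effect size is computed and read against those benchmarks (all numbers here are invented for illustration, and `cohens_d` is a hypothetical helper, not drawn from any study cited above):

```python
import math

def cohens_d(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Standardised mean difference between treatment and control groups."""
    # Pool the two groups' standard deviations, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Invented outcome scores: the treatment group improves 3 points more on average
d = cohens_d(mean_tx=12.0, mean_ctrl=9.0, sd_tx=4.0, sd_ctrl=4.0,
             n_tx=30, n_ctrl=30)
print(round(d, 2))  # 0.75 – between 'medium' (.5) and 'large' (.8)
```

A meta-analysis would compute a value like this for each included study before pooling them.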
Comparability between studies: meta-analyses may not control for length of treatment, severity of disorders, dependent variables, treatment standardisation, etc. This is out of necessity, as every study is unique.
The methods by which the researcher categorises studies needs scrutiny – may be over-inclusive to enhance effects.
Publication bias towards positive findings tends to create impression of better outcomes than actually the case. Lipsey & Wilson in a meta-analysis of 92 meta-analyses found that the average ES of published studies was .53 but .39 for unpublished studies. 95% of published studies yield significant findings, which is clearly disproportionate. Sohn, 1996, has suggested that this bias is significant enough to account for the positive finding in meta-analyses.
Meta-analyses exclude within-subject and single-case studies. The criteria for inclusion are not quality- or relevance-driven.
They don’t weight for sample size.

In sum, are the results over-generalised and therefore lacking in applicability?
Levels of Outcome Measurement
Symptomatic or Diagnostic Level
Level of Adaptation
Mechanism Level
Transactional Level
Service Utilisation/Satisfaction with Services
Fonagy et al (2005)
Symptomatic or Diagnostic Level: low levels of agreement between respondents, and moderate levels of agreement between checklists and interviewer-rated measures. This highlights the context-specificity of children's observed behaviour/symptoms.
Adaptation level: extent to which child’s adaptive functioning changes over the course of treatment. This is classically measured using tools such as CGAS, which has variable inter-rater reliability. Other measures tend to be interviewer-based and are in-depth and multi-domain.
Mechanism level: the measurement of emotional and cognitive processes which underpin symptomatology and adaptation. Push to develop research that explores why something works, and describes therapeutic action. Important for the credibility of specific techniques. Sometimes not a strong relationship between symptom changes and changes in process. In addition, performance v competence an issue (especially where children have been coached in cognitive techniques as part of therapy). Collecting information at the level of mechanism provides an important bridge between research on the causes of disorder and studies of treatment efficacy.
Transactional Level: transactional interactions between mental states, behavioural predisposition of the child, and the reactions of the environment to the child across time (principles of developmental psychopathology). Contextual or transactional measures of outcome are almost limitless in scope. Global measures of family functioning are available but lack specificity in terms of impact on the individual child and the relationship between specific family factors and outcomes. Poor distinction between genetic and environmental influences. Impact of a treatment almost inevitably extends beyond the immediate symptoms and some impact might be perceived as negative (for instance, the child treated with CBT for anxiety who then relapses because the parents cannot tolerate their new-found independence). This is typically considered in CAMHS clinical work, but less so in research.
Service Utilisation: this focuses on consequent drops in service utilisation. It goes beyond CAMHS, as many children requiring mental health assessment and intervention are users of multiple services. This is not yet common practice in Britain, despite the push towards showing the efficacy of service delivery. Finally, satisfaction-with-service measures are fairly commonly used and have utility, in that the acceptability of services is an important factor in therapeutic outcome. They should be used in conjunction with service utilisation measures, because one outcome for a satisfied service user might be an increase in use of services (e.g. a non-engaging teenager who is making little use of services but causing concern might start making use of VTSS, a day programme or a paediatric service once they are successfully engaged in a specific treatment).
‘Clinically Significant Change’
Psychometric measures that have a clinical and normal range
Units of standard deviation
Diagnostic criteria
Independent evaluation
External criteria
Outcomes-based medical research – how appropriate is this when the nature and outcomes of the problem are so varied and subtle?
A quantitative measure is not the same as the disorder it purports to measure.
E.g. quality of life (see the Tory party’s recent claim that they will be measuring quality of life, also known as psychological and physical wellbeing, happiness, mood, and subjective experience)
Psychometric measures that have a clinical and normal range: need to know this in advance and for it to have been validated on an adequately sized population
Units of standard deviation: clinical significance is decided by how many standard deviations from the norm a score is. This is statistically arbitrary (IQ tests being the best example of this).
Diagnostic criteria: these also tend to be arbitrary, but for the opposite reason. Diagnostic criteria can be very non-specific. For example, the DSM-IV criteria for ADHD refer to 'often' as the frequency of occurrence of symptoms and, in terms of onset and domains of symptom presence, ask for 'some' symptoms. Furthermore, diagnosis is not always relevant, e.g. symptoms of anxiety in somebody with ASD.
Independent evaluation: asking someone else who is involved with the child. Subjectivity is a problem, and asking another clinician who does not know the child can also be problematic, but getting information from parents, although subjective, might be more helpful than straight symptom scales (e.g. for anxiety interventions).
External criteria: criteria that are not directly connected with the problem or intervention, e.g. rates of calls to NHS-24 following a CBT intervention for panic disorder. These have social and personal significance. There are practical difficulties in getting reliable data from external bodies – such criteria are sensitive to changes in the external system, and multiple factors need to be controlled for.
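The 'units of standard deviation' criterion discussed above can be sketched as code. The 2-SD cutoff and the IQ-style norms below are illustrative conventions only, not taken from any particular instrument:

```python
def is_clinically_significant(score, norm_mean, norm_sd, cutoff_sds=2.0):
    """Flag a score as clinically significant if it lies at least `cutoff_sds`
    standard deviations from the normative mean – a statistically arbitrary
    convention, as the text notes."""
    z = (score - norm_mean) / norm_sd
    return abs(z) >= cutoff_sds

# IQ-style scale normed at mean 100, SD 15
print(is_clinically_significant(68, norm_mean=100, norm_sd=15))  # True  (z is about -2.13)
print(is_clinically_significant(85, norm_mean=100, norm_sd=15))  # False (z = -1.0)
```

The point made above is visible in the code: nothing about the 2.0 threshold is clinically derived; moving it to 1.5 or 2.5 reclassifies borderline scores with no change in the child.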
CBT v other approaches
CBT v controls (W/L)
Non-specific therapeutic benefits
CBT v ‘supportive counselling’
CBT v other therapies
Psychodynamic approaches
Questions to Ask
What type of study is it?
RCT, meta-analysis, cohort study, single-case study, case control study
What is being measured?
Is this what should have been studied?
How have they controlled the unmeasured variables?
How is it being measured?
Questionnaire, interview, observation, clinician-rating
There is no such thing as the perfect piece of research when a science of observed phenomena is used to measure internal and indirect effects

Judge each piece of research based on what it contributes to our understanding, and what its limitations are
And finally…