The need for outcome measures in medical education
     Complex educational interventions demand complex and appropriate evaluations

    How can we ever be sure that educational approaches such as problem based learning are better than traditional ones? Change merely for the sake of change is futile. Changes in medical education should lead to better outcomes, but what is the best way to show cause and effect?

    For simple research questions, straightforward methods suffice, but more complex questions require more complicated study designs. A question such as "Is drug A more effective than a placebo?" is highly relevant, and the methods needed to answer it may be relatively straightforward. However, the question "Why does drug A lead to a better outcome than a placebo?" is more complicated, and "Does using drug A lead to better health for the population?" even more so. Answering more complicated questions often requires a programme of research rather than a single study.
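    To make the contrast concrete, the simple question maps onto a correspondingly simple analysis. The sketch below, in Python, shows the kind of two-arm comparison that can answer "Is drug A more effective than a placebo?"; all counts are invented for illustration, not taken from any real trial.

        # A straightforward two-group comparison: did more patients improve
        # on drug A than on placebo? All counts below are hypothetical.
        from scipy.stats import chi2_contingency

        improved_a, total_a = 130, 200        # hypothetical drug A arm
        improved_pbo, total_pbo = 100, 200    # hypothetical placebo arm

        table = [
            [improved_a, total_a - improved_a],
            [improved_pbo, total_pbo - improved_pbo],
        ]
        chi2, p_value, dof, expected = chi2_contingency(table)
        print(f"drug A: {improved_a / total_a:.0%} improved, "
              f"placebo: {improved_pbo / total_pbo:.0%} improved, "
              f"p = {p_value:.3f}")

    A single test of this kind can settle the simple question; answering why the drug works, or for whom, cannot be read off the same table, which is precisely the point about more complex questions.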

    Some authors would say that a randomised controlled trial is the best way to answer a question such as "Is problem based learning more likely than traditional education to produce good doctors?"1 Some would even say that anything less is unethical.2 Others argue that randomised controlled trials of large scale educational interventions are doomed to failure and should not be tried.3

    In this week's BMJ, Tamblyn and colleagues report how they have taken up the gauntlet. They did not do a trial, however: using a range of outcome measures, they compared the quality of the doctors who graduated before and after the introduction of a problem based learning curriculum.4 They found interesting differences between the groups, thus providing important material for debate and further research. The design of the study also provides food for thought.

    Deciding whether problem based learning produces better doctors requires, at least, clear consensus on what constitutes a better doctor. In trials of therapeutic interventions the outcome of each patient's management is a product of the interaction between multiple variables. These include the patient's personal characteristics such as age, sex, social status, type of disease, and concordance with treatment, as well as healthcare issues such as travelling distance to hospital and availability of diagnostic facilities and support staff. Furthermore, societal factors such as litigation and rationing may limit doctors' options. Comparing two cohorts while controlling for all these confounding variables is a tall order.
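    To illustrate why such control is so difficult, the Python sketch below simulates a before-and-after cohort comparison and adjusts for a handful of measured confounders with logistic regression. Every variable name, coefficient, and data point is hypothetical; the deeper problem, that unmeasured confounders cannot be adjusted away, remains untouched.

        # Simulated historical cohort comparison: graduates of an old
        # curriculum (cohort = 0) versus a new one (cohort = 1), with
        # confounders that also drive the outcome. All data are synthetic.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 1000
        df = pd.DataFrame({
            "cohort": rng.integers(0, 2, n),        # which curriculum
            "patient_age": rng.normal(55, 15, n),   # patient characteristic
            "distance_km": rng.exponential(20, n),  # access to care
        })
        # Outcome depends on the curriculum and on the confounders.
        logit = (-1.0 + 0.3 * df["cohort"]
                 + 0.01 * (60 - df["patient_age"])
                 - 0.01 * df["distance_km"])
        df["good_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        # Adjust for the confounders we happened to measure; anything
        # unmeasured (case mix, litigation climate, support staff) still
        # biases the estimated cohort effect.
        model = smf.logit("good_outcome ~ cohort + patient_age + distance_km",
                          data=df).fit()
        print(model.params["cohort"])  # adjusted log odds ratio for new cohort

    The regression can only remove bias from variables that were recorded; the long list of patient, healthcare, and societal factors above makes clear how much lies outside any realistic dataset.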

    In addition, there are many factors in doctors' lives other than the formal educational system that may influence their performance. These encompass not only personal preferences but also the time lag between education and starting practice and the influence of further specialist training.

    Lastly, the authors' selection of outcome measures may prove controversial. For example, a doctor's rate of carrying out breast cancer screening, even if it is an indicator of other preventive work, may not necessarily be a good indicator of overall medical competence and performance.

    Does this mean that changes in competence and performance are not measurable and that evaluation is pointless? We think not. It is essential to collect such data, not only to seek evidence for the notion that some broad changes in education are for the better, but also to gain more insight into exactly which elements of education work best. A single large scale study is unlikely to achieve all of this.5 Nor will research that looks at only one dimension, uses oversimplified outcome measures,6 or describes no more than convictions or beliefs. Evaluating a complex educational intervention such as a new curriculum demands a complete programme of research.6 7 Studies such as that by Tamblyn and colleagues add pieces to the puzzle rather than provide definitive answers.

    Lambert Schuwirth, associate professor

    Department of Educational Development and Research, Maastricht University, Netherlands (l.schuwirth@educ.unimaas.nl)

    Peter Cantillon, senior lecturer

    Department of General Practice, National University of Ireland, Galway, Republic of Ireland

    Learning in practice p 1002

    Competing interests: None declared.

    References

    1. Colliver JA. Effectiveness of problem-based learning curricula: research and theory. Acad Med 2000;75:259-66.

    2. Torgerson CJ. Educational research and randomised trials. Med Educ 2002;36:1002-3.

    3. Norman GR, Schmidt HG. Effectiveness of problem-based learning curricula: theory, practice and paper darts. Med Educ 2000;34:721-8.

    4. Tamblyn R, Abrahamowicz M, Dauphinee D, Girard N, Bartlett G, Grand'Maison P, et al. Effect of a community oriented problem based learning curriculum on quality of primary care delivered by graduates: historical cohort comparison study. BMJ 2005;331:1002-5.

    5. Norman G. RCT = results confounded and trivial: the perils of grand educational experiments. Med Educ 2003;37:582-4.

    6. Regehr G. Trends in medical education research. Acad Med 2004;79:939-47.

    7. Colliver JA. Full-curriculum interventions and small-scale studies of transfer: implications for psychology-type theory. Med Educ 2004;38:1212-4.