Making decisions about benefits and harms of medicines
     1 Department of Primary Care and Population Sciences, University College London, London N19 5LW, 2 Department of Primary Care and General Practice, University of Birmingham, Birmingham, 3 Department of Psychology, University College London, London

    Correspondence to: T Greenhalgh p.greenhalgh@pcps.ucl.ac.uk

    Even when good scientific data are available, people's interpretation of risks and benefits will differ

    Introduction

    Drug regulatory authorities, such as the Medicines and Healthcare Products Regulatory Agency in the United Kingdom and the Food and Drug Administration in the United States, award product licences by assessing the balance between benefit and harm. The decision to revoke a licence generally hangs on evidence of lack of efficacy or risk of serious adverse effects, taking account of the seriousness of the condition and the range of other treatments available.

    The authorities work at the level of the whole population. But individual patients may believe (rightly in some cases) that a particular regulatory decision is not in their own best interests, and vociferous campaigns sometimes result (box 1). Involvement of patients can be a powerful driver for improving services.5 But both lay people and professionals are susceptible to several biases when making health related decisions (box 2). What can be done to ensure that the care of individual patients is not compromised by regulatory decisions intended to protect the population as a whole, and to encourage objective and dispassionate decision making in the face of cognitive biases?

    Sources and selection criteria

    Suppose that, based on population estimates, a person's chance of benefiting from a drug is 75% and their chance of a fatal adverse effect is 1 in 1000. Assuming the condition itself is not life threatening, revoking the drug's licence would, for every 1000 users, prevent one death, spare 249 people a drug that would have had no effect, and deny 750 people a drug they would have benefited from. How can we help the 750 without risking one life? The regulatory body should consider which of three categories the drug falls into.
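
    The arithmetic behind these figures can be set out explicitly. The short Python sketch below is not part of the original article; the function name is invented, and it follows the text's simplifying assumption that the single fatal reaction occurs in someone who would not have benefited.

        def licence_revocation_impact(users=1000, p_benefit=0.75, p_fatal=0.001):
            # Sketch only: reproduces the per-1000 breakdown given in the text.
            deaths_prevented = round(users * p_fatal)             # 1
            denied_a_benefit = round(users * p_benefit)           # 750
            spared_ineffective_drug = users - denied_a_benefit - deaths_prevented  # 249
            return deaths_prevented, denied_a_benefit, spared_ineffective_drug

        print(licence_revocation_impact())  # (1, 750, 249)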

    Known susceptibility to adverse effect

    For some drugs it is possible to identify in advance which people are going to be susceptible to the adverse effect. A licence might be granted on condition that the drug is absolutely contraindicated in certain high risk groups (such as the under 16s in the case of aspirin, or women of childbearing potential in the case of retinoids for acne). In practice, however, enforcing such restrictions may be impossible, especially in developing countries (box 1, thalidomide). More speculatively, given the emergence of pharmacogenomics,6 future licences for such drugs might be granted on condition that individual patients are tested for susceptibility before a prescription is issued.

    Detection of adverse effect by surveillance

    The adverse effect of some drugs can be detected at a reversible stage by surveillance. In this situation, the patient can be offered the option of taking the drug and having regular check ups, or not taking it at all—for example, the combined oral contraceptive (blood pressure every six months), penicillamine (monthly urine analysis), and warfarin (regular blood tests). Examples of surveillance programmes being written into drug licensing decisions include clozapine and alosetron (box 1).

    Box 1: Drug regulatory decisions: rational assessment of benefit and harm?

    Clozapine

    This "new" antipsychotic for schizophrenia was withdrawn in the 1970s after reports of fatal neutropenia but reinstated after manufacturers proposed a monitoring scheme. The scheme proved cumbersome but as clozapine was the only drug of its class, it was considered worthwhile. Drugs with comparable efficacy but better safety profiles later appeared on the market.1

    Alosetron

    This selective 5-HT3 receptor antagonist was licensed in the United States in February 2000 for irritable bowel syndrome (IBS). After reviewing 70 adverse drug reaction reports and receiving two substantial petitions from consumer organisations, the US Food and Drug Administration (FDA) withdrew the drug's licence in November 2000 (the 1 in 700 risk of ischaemic colitis with alosetron was deemed unacceptable—IBS is not life threatening). After further public protest, chiefly from women who were prepared to monitor their own response to the drug using strict surveillance protocols, the FDA reversed its decision in 2002.2

    Kava

    This herbal anxiolytic, widely used for centuries by aboriginal populations worldwide, has recently been shown to be hepatotoxic. A ban on its sale in health food shops in North America was never fully implemented because of the tenacious minority view that the product is "natural," therefore safe. Attempts to restrict the lucrative kava trade have exposed regulatory authorities to accusations of meddling with the traditions of indigenous people. Robust data are lacking on the prevalence of the hepatotoxicity; the traditional way of using kava is as an aqueous extract but it is marketed in the West as a lipid extract (the latter may be more toxic).3

    Thalidomide

    This was originally marketed as a hypnotic for use in pregnancy. It was approved by the German regulatory authorities in 1957 and sold widely in Europe but not in the United States, where the newly formed FDA refused approval. But in 1998 the FDA approved thalidomide for treating the debilitating and disfiguring lesions associated with erythema nodosum leprosum. It invoked unprecedented regulatory authority to tightly control the marketing of thalidomide in the United States, including a programme that limits authorised prescribers and pharmacies, provides extensive patient education, and requires all users to be entered on a register. But leprosy is predominantly a disease of the developing world, and internet sales of the drug have almost certainly contributed to the re-emergence of thalidomide related embryopathy in recent years.4

    Surveillance of large numbers of individuals for extremely rare side effects is a poor use of clinicians' time. In practice, we balance risks and benefits on a case by case basis and prescribe certain drugs only in patients who are more likely than average to benefit—or less likely than average to develop adverse effects. Increasingly, such complex clinical decisions are effectively written into regulatory decisions, as when a drug licence is granted "only for prescription by a specialist"—for example, acitretin in psoriasis, thioridazine in schizophrenia, and corticosteroid eye drops in anterior segment inflammation.

    Unique benefit ("named patient")

    For some drugs, the adverse effect is not identifiable in advance but some patients with some conditions are likely to benefit uniquely. The drug may then be given a licence on a "named patient" basis—a bureaucratic hurdle that effectively restricts its prescription to tiny numbers of patients. Examples include tiabendazole for strongyloidiasis, ivermectin for scabies, and quinolone ear drops for chronic otitis media.

    Cognitive and social influences on risk decisions

    Even when the risks and benefits of a particular drug are not disputed, different people will make different decisions on whether it should be granted a licence or prescribed in a particular case. Why is this?

    We often assume that, when faced with any decision involving a range of possible outcomes, we should subjectively estimate how nice or nasty each outcome will be, weight these by the probability that each outcome will occur, and intuitively choose the option with the highest weighted score. This line of reasoning (known as subjective expected utility theory) implicitly underpins much research into health related decision making.
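
    As a rough illustration (ours, not the authors'; the utility values are invented, and only the 75% benefit and 1 in 1000 harm figures echo the earlier worked example), the weighted-score calculation might look like this in Python:

        def subjective_expected_utility(outcomes):
            # Weight each outcome's subjective value by its probability and sum.
            return sum(value * probability for value, probability in outcomes)

        options = {
            "take the drug": [(10, 0.75),       # benefit (utility of 10 is invented)
                              (0, 0.249),       # no effect
                              (-1000, 0.001)],  # fatal adverse effect (utility invented)
            "decline the drug": [(0, 1.0)],     # condition persists, no drug risk
        }
        for name, outcomes in options.items():
            print(name, subjective_expected_utility(outcomes))
        # The theory says choose the option with the highest score: here "take the drug" (6.5 v 0.0).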

    But in reality, neither patients nor the members of regulatory bodies make choices in this fundamentally rational way. Limits to our capacity to process information, for example, prevent us from considering all options, outcomes, and likelihoods at once. Those that we focus on will inevitably influence us more. The anxiety that accompanies decisions made under uncertainty, decisions that may harm us as patients or, as professionals, expose us to litigation, both narrows the focus of our attention further and draws it towards the more threatening potential outcomes. Even when not anxious, we tend to use simplification strategies in our perception of probabilities and potential outcomes. We see things as either safe or risky (and are generally "risk averse"), use rules of thumb ("heuristics") to judge likelihood, and weigh losses more heavily than gains ("loss aversion"). When trying to imagine how we may feel in the future, we are influenced mainly by our current health state and fail to consider the multiple aspects of future health states or the adaptation to those states that comes with time.7

    The well established cognitive biases listed in box 2 help to explain several non-rational influences on drug regulatory decisions and campaigns to overturn them. How information is framed (a treatment that "saves eight lives out of 10" seems better than one that "fails to save two in every 10") is one reason why even objective evidence can be interpreted differently in different contexts.8 The conflation of "natural" with "risk free" is a widely used framing tactic in the herbal medicines industry (box 1, kava; see also figure). The widely reported (but scientifically unproved) link between the MMR (measles, mumps, and rubella) vaccine and autism9 is partly explained by a combination of "availability bias" (in this case, the emotional impact of a severely brain damaged child) and "illusory correlation" (box 2).

    Attempts to ban the sale of kava because of hepatotoxicity have angered indigenous populations, who regard it as a safe anxiolytic

    Credit: UNIVERSITY OF HAWAII BOTANICAL INSTITUTE

    Box 2: Cognitive biases in perception of benefit and harm

    Acceptable risk—Some risks (such as lung cancer from smoking) are subjectively viewed as more acceptable than others (such as vaccine damage), even when the probabilities of occurrence are in the other direction. Hazards generally deemed acceptable are familiar, perceived as under the individual's control, have immediate rather than delayed consequences, and are linked to perceived benefits.w1

    Anchoring—In the absence of objective probabilities, people judge risk according to a reference point.w2 This may be arbitrary—for example, the status quo or some perception of what is "normal."

    Availability bias—Events that are easier to recall are judged as more likely to happen.w2 Recall is influenced by recency, strong emotions, and anything that increases memorability (such as press coverage and personal experience).

    Categorical safety and danger—People may perceive things as either "good" or "bad," irrespective of exposure or context.w3 This may make them unreceptive to explanations that introduce complexity into the decision (such as balance of benefit and harm).

    Appeal of zero risk—The elimination of risk is more attractive than reduction, even if the relative reduction is of the same order of magnitude as the elimination.w4

    Framing of information—A glass can be described as "half empty" or "half full"—the problem is the same but it is framed differently. This can have a direct and powerful impact on decisions of lay people and professionals.w2 w5 And losses can loom larger than gains.w6

    Illusory correlation—Prior beliefs and expectations about what correlates with what lead people to perceive correlations that are not in the data.w7

    Distinguishing between small probabilities—We cannot meaningfully compare very small risks (for example, of different adverse effects), such as 1 in 20 000 and 1 in 200 000. Expressing harm as relative rather than absolute risk dramatically shifts the subjective benefit-harm balance because the risk of harm seems greater.w8

    Personal v impersonal risk—Health professionals and patients may have different preferences,w9 w10 perhaps due to different knowledge about outcomes and inherent differences in making decisions about oneself or others. Those making judgments about others tend to be less risk averse than those making judgments about themselves.w11

    Preference for status quo—Most people are reluctant to change current behaviours, such as taking a particular drug, even when the objective evidence of benefit changes.w11 This reluctance may be due to the persistence of illusory correlation.

    Probability v frequency—Poor decision making is exacerbated by the use of absolute and relative probabilities. Judgment biases are less common when information is presented as frequencies.w12
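
    The last two points can be made concrete with a small sketch (again ours, using a made-up baseline risk): the same change in risk sounds alarming as a relative increase, negligible as an absolute increase, and is easiest to weigh when expressed as a natural frequency.

        def one_in(probability):
            # Express a probability as a "1 in N" natural frequency.
            return f"1 in {round(1 / probability):,}"

        baseline_risk = 1 / 200_000  # hypothetical risk without the drug
        risk_on_drug = 2 / 200_000   # hypothetical risk with the drug

        print("relative framing:", risk_on_drug / baseline_risk, "times the baseline risk")    # 2.0
        print("absolute framing:", one_in(risk_on_drug - baseline_risk), "extra users harmed")  # 1 in 200,000
        print("frequency format:", one_in(baseline_risk), "rising to", one_in(risk_on_drug))    # 1 in 200,000 -> 1 in 100,000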

    Preference for the status quo and illusory correlation explain why both patients and doctors resist change when a regulatory decision requires adjustment in someone's treatment. Doust and del Mar recently reviewed a host of historical examples, from blood letting to giving insulin for schizophrenia, which showed that doctors too are remarkably resistant to discontinuing treatment when evidence emerges of lack of efficacy or even potential harm.10

    On the other hand, the way we make decisions might be well adapted to the complex environment in which we operate—a concept known as bounded rationality.11 12 Gigerenzer and colleagues offer some compelling examples of decisions made on the basis of "fast and frugal" rules of thumb that equal or outperform those of more complex analytical procedures.13

    Patients' decision making about risk and benefit is also influenced by beliefs, attitudes, and perceived control (box A, bmj.com) and may also have psychoanalytical explanations—in terms of repression, denial, and transference (box B, bmj.com). These decisions may be distorted by a host of past experience and social influences.14 Regulatory bodies and campaign groups have their own unwritten codes of behaviour (perhaps respectively summed up as "protect the public—if necessary by erring on the side of caution" and "defend the individual's right to autonomy"), which probably set unconscious parameters for individual behaviour. The influence of accountability and social and institutional contexts on decision making should not be underestimated.15

    Summary points

    Like all decisions made by humans, drug regulatory decisions are influenced by cognitive biases

    These biases include anchoring against what is seen to be "normal," inability to distinguish between small probabilities, and undue influence from events that are easy to recall

    Stories (about the harmful effects of medicines) have a particularly powerful impact, especially when presented in the media as unfolding social dramas

    Narrative influences on decision making

    The balance between benefit and harm in medicine is neither simple nor static. Conclusions derived from clinical trials (however rigorously conducted) may not apply to individual patients for a host of genetic, physiological, psychological, and sociocultural reasons. It will therefore never be possible to legislate for every eventuality at the level of national drug licensing bodies.

    When drug licensing decisions are overturned (box 1), it is generally not because new scientific evidence emerges but because existing evidence is reinterpreted—especially in the light of context and personal values. In other words, the evidence base for drug regulatory decisions is to some extent socially constructed through active and ongoing negotiation between patients, practitioners, and policy makers.20 Consumer groups, scientists, and the media all have an important role to play in this process, but all parties should recognise that non-rational factors are likely to have a major influence on their perceptions. Greater awareness of affective factors as well as our cognitive biases should help us understand why different stakeholders interpret the benefit-harm balance of medicines differently, and this awareness could provide the basis for strategies to counter such influences.

    Two more boxes and further references (w1-w17) are available on bmj.com

    We thank Jeff Aronson for detailed advice and information about drug regulatory decisions.

    Contributors: All authors reviewed published literature and produced drafts of sections for this article. TG is the guarantor.

    Funding: None.

    Competing interests: None declared.

    References

    Bastani B, Alphs LD, Meltzer HY. Development of the clozaril patient management system. Psychopharmacology (Berl) 1989;99(suppl): S122-5.

    Mayer EA, Bradesi S. Alosetron and irritable bowel syndrome. Expert Opin Pharmacother 2003;4: 2089-98.

    Ernst E. Patient choice and complementary medicine. J R Soc Med 2004;97: 41.

    Ances BM. New concerns about thalidomide. Obstet Gynecol 2002;99: 125-8.

    Berwick DM. The total customer relationship in health care: broadening the bandwidth. Joint Commission Journal on Quality Improvement 1997;23: 245-50.

    Tucker G. The promise of pharmacogenetics. BMJ 2004;329: 4-6.

    Lloyd AJ. The extent of patients' understanding of the risk of treatments. Qual Health Care 2001;10: 114-8.

    Edwards A, Elwyn G, Covey J, Matthews E, Pill R. Presenting risk information—a review of the effects of "framing" and other manipulations on patient outcomes. J Health Commun 2001;6: 61-82.

    Jefferson T. Informed choice and balance are victims of the MMR-autism saga. Lancet Infect Dis 2004;4: 135-6.

    Doust J, del Mar C. Why do doctors use treatments that do not work? BMJ 2004;328: 474-5.

    Gigerenzer G, Selten R. Bounded rationality: the adaptive toolbox. Cambridge, MA: MIT Press, 2001.

    Simon HA. Models of bounded rationality. Cambridge, MA: MIT Press, 1982.

    Gigerenzer G, Todd PM, ABC Research Group. Simple heuristics that make us smart. New York: Oxford University Press, 1999.

    Freud S, Strachey J, Strachey A, Gay P. Inhibitions, symptoms and anxiety. London: Norton, 1977.

    Tetlock PE. An alternative metaphor in the study of judgment and choice: people as politicians. Theory and Psychology 1991;1: 451-75.

    Denning S. The springboard: how storytelling ignites action in knowledge-era organisations. New York: Butterworth-Heinemann, 2001.

    Hall C. MP backs doctor in row over single dose MMR. Telegraph 2001;31 Dec. http://portal.telegraph.co.uk/news/main.jhtml?xml=/news/2001/08/05/nmmr05.xml

    Newman T. The power of stories over statistics. BMJ 2003;327: 1444-7.

    Bruner J. Acts of meaning. Cambridge: Harvard University Press, 1990.

    Lomas J. Improving research dissemination and uptake in the health sector: beyond the sound of one hand clapping. Hamilton, Ontario: McMaster University, Centre for Health Economics and Policy Analysis, 1997. (Policy commentary C97-1.)