Accidental Deaths, Saved Lives, and Improved Quality
The New England Journal of Medicine, 2005, No. 13
     More than five years ago, the Institute of Medicine (IOM) issued its pathbreaking report, To Err Is Human, and fundamentally changed the debate about health care quality in the United States.1 The publication reconfigured how we think about the quality of care, attracted greater interest among payers and employers in the improvement of care and patient safety, focused attention on the need to inform patients who have been victims of iatrogenic injury, and produced a substantial increase in research support. The report has recently been characterized as the most influential health care publication of the past two decades.2

    Yet at this point, there is a sense of disappointment about the results of the patient-safety movement. There have been few breakthrough interventions. There is little evidence that the health care system is safer today than it was five years ago and certainly nothing to suggest that deaths from error have been cut in half, as the IOM called for in its report. Leading advocates have not found evidence of the transformation of the health care system, or even of individual hospitals, that was thought necessary to make the health care system as safe as other industries.3,4,5

    Hence, the answer to the question being asked five years later — how many deaths have been prevented? — is disappointing. But so is the question. The problem lies in an overreliance on the notion of the individual accidental death. This notion oversimplifies the causal realities of iatrogenic injuries, overpromises on achievable gains, and threatens to skew priorities in quality-improvement initiatives. Moving away from a focus on saving lives solely by preventing errors and instead emphasizing the implementation of evidence-based practices to improve the quality of care more generally will yield better long-term results. Fortunately, there are signs within the safety movement that this shift is already under way — a change that promises a more productive next five years.

    The Premise and Promise of Safety

    Traditionally, research into quality of care has focused on three areas: variation, patient-centered care, and compliance with guidelines.6,7,8,9 The IOM report added the domain of safety to quality research. Researchers linked insights into causation from cognitive psychology, human-factors engineering, and systems science with existing data on the incidence of iatrogenic injury10 — data drawn from studies in the 1990s that were oriented toward understanding medical malpractice, rather than improving health care.11 The product was the concept of "preventable injury," whose burden became the cornerstone of the IOM report.

    Patient safety sparked a level of public interest that the rest of the quality-improvement field in health care had failed to excite. This was due, at least in part, to the ability of the patient-safety movement to harness a public fascination with the accidental death — an individual patient sustaining preventable harm from an error of either omission or commission. This notion prompted the popular analogy to an airline crash. When a plane goes down, investigators count the victims and then try to figure out what caused the crash and how it could have been avoided. The IOM estimated there were 44,000 to 98,000 preventable hospital deaths per year, and the public could easily make the leap from a press report about a death, such as the chemotherapy overdose of medical reporter Betsy Lehman at the Dana–Farber Cancer Institute in Boston, to this alarming statistic.

    By contrast, the other dimensions of quality improvement track changes in populations over time. Increments and decrements are measured in "statistical lives" — better outcomes across populations resulting from the consistent and appropriate provision of effective interventions, such as the administration of beta-blockers and statins and the performance of such procedures as aortic aneurysm repair. For instance, if a large group of providers sedulously monitor glycosylated hemoglobin levels in their patients, over time they will prevent some of the complications of diabetes. But precisely who benefits from the practice is unclear; retinopathy or nephropathy will still develop in some patients with diabetes. In statistical terms, the number of complications that are prevented will be small and depersonalized12 and therefore difficult for the public to appreciate.
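
    The arithmetic behind a "statistical life" can be made concrete with a short sketch. The figures below (population size, baseline risk, relative risk reduction) are invented for illustration and are not drawn from the diabetes literature; the point is only that the benefit appears as an expected count, never as identifiable patients.

```python
# Illustration of "statistical lives": a population-level benefit computed
# from an assumed absolute risk reduction. All numbers are hypothetical.

def events_prevented(population: int, baseline_risk: float, rrr: float) -> float:
    """Expected complications prevented = population x absolute risk reduction."""
    return population * baseline_risk * rrr

patients = 10_000      # patients with diabetes under consistent monitoring
baseline_risk = 0.08   # assumed long-term risk of retinopathy without monitoring
rrr = 0.25             # assumed relative risk reduction from monitoring

arr = baseline_risk * rrr  # absolute risk reduction = 0.02
print(f"Expected complications prevented: {events_prevented(patients, baseline_risk, rrr):.0f}")
print(f"Number needed to treat: {1 / arr:.0f}")
# Output: 200 complications prevented across 10,000 patients (NNT 50), but
# no one can say which 200 patients were spared.
```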

    In a later report,13 the IOM quality committee defined safety and distinguished it from what it termed effectiveness. The distinction is critical. Safety is freedom from accidental injury. Effectiveness, on the other hand, "refers to care that is based on systematically acquired evidence to determine whether an intervention, such as a preventive service, diagnostic test, or therapy, produces better outcomes than alternatives." Measures of effectiveness are evidence-based and broad and go well beyond an analysis of accidental injury14; they entail much of what we referred to earlier as compliance with evidence-based guidelines.

    Shortcomings of the Accidental-Death Construct

    Unfortunately, the accident construct does not correspond exactly with medical reality. The identification of the causes and outcomes of medical injuries can be far murkier than an investigation of accidents in other settings. With hospitalized patients in particular — and the overwhelming majority of existing data on patient safety are from the hospital setting — the levels of sickness and fragility among patients make it difficult both to identify errors and to disentangle their effects from the progression of patients' underlying diseases. Legal scholars have long noted such differences between malpractice litigation and other types of accident law.15

    This problem was first manifested in the safety movement as a prolonged debate about the estimates of mortality resulting from medical error. Researchers questioned the real effect on mortality of inattention to safety, given the absence of control groups to test the counterfactual situation.16 This critique correctly noted that in both of the original epidemiologic studies that the IOM relied on for its mortality estimates,17,18 the initial physician reviewers did not assess the number of days the patient might have lived had an iatrogenic injury not occurred. For example, if a patient on a ventilator who had terminal lung cancer died as a result of a pneumothorax induced by placement of a central venous line, the death counted as a preventable one even though the patient might have lived only another few days.

    But the debate over the number of deaths went to another, more fundamental issue — that of the difficulty of measuring progress in safety improvement. Attention was called to intrinsic vagaries of judgment regarding errors in chart review, manifested in poor reliability among reviewers about what constituted adverse events and preventability.19 It has long been recognized that the implicit judgments used in these studies have weak reliability — in other words, what one reviewer might call a preventable adverse event, another might see as not preventable or even not an adverse event.20 This reality contrasts sharply with the explicit judgments made in effectiveness studies, in which the reliability tends to be quite high.
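
    As a minimal sketch of how such reliability is quantified, the following code computes Cohen's kappa, the usual chance-corrected agreement statistic, for two reviewers judging the same charts. The 2x2 counts are hypothetical; they are chosen so that raw agreement looks high while kappa remains modest, the pattern typical of implicit chart review.

```python
# Cohen's kappa for two reviewers' judgments on the same charts.
# The counts are hypothetical, not taken from the studies cited above.

def cohens_kappa(both_yes: int, a_only: int, b_only: int, both_no: int) -> float:
    n = both_yes + a_only + b_only + both_no
    observed = (both_yes + both_no) / n            # raw agreement
    p_a = (both_yes + a_only) / n                  # reviewer A's "event" rate
    p_b = (both_yes + b_only) / n                  # reviewer B's "event" rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # agreement expected by chance
    return (observed - expected) / (1 - expected)

# 1,000 charts: both reviewers call an adverse event on 30, they disagree
# on 70, and both call "no event" on 900.
print(f"raw agreement = 93.0%, kappa = {cohens_kappa(30, 40, 30, 900):.2f}")  # ~0.42
```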

    To examine what the reliability problem means for estimates of injuries, we looked at multiple independent reviews of the same charts that were done for the Utah–Colorado Medical Practice Study. We found that the decisional thresholds chosen for confirming adverse events heavily influenced the number of injuries identified. When all three reviewers had to agree that an error had occurred, the error rate was less than one tenth of the rate obtained when the vote of only one of the three sufficed.21
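
    The threshold effect is easy to reproduce in a simple simulation. The reviewer characteristics below (sensitivity and false-positive rate) are assumptions chosen for illustration, not parameters estimated from the Utah–Colorado charts, but with plausible values the one-of-three rule yields roughly ten times the rate of the three-of-three rule.

```python
# Simulating how the reviewer-agreement threshold drives the measured
# adverse-event rate. All parameters are assumed for illustration.
import random

random.seed(1)

TRUE_RATE = 0.03    # assumed true adverse-event rate per chart
SENSITIVITY = 0.70  # assumed chance a reviewer flags a true event
FALSE_POS = 0.04    # assumed chance a reviewer flags a non-event

def one_review(has_event: bool) -> bool:
    """One reviewer's noisy judgment on one chart."""
    return random.random() < (SENSITIVITY if has_event else FALSE_POS)

charts = 100_000
votes = []
for _ in range(charts):
    has_event = random.random() < TRUE_RATE
    votes.append(sum(one_review(has_event) for _ in range(3)))

rate_any = sum(v >= 1 for v in votes) / charts  # event if ANY reviewer says so
rate_all = sum(v == 3 for v in votes) / charts  # event only if ALL three agree
print(f"1-of-3 threshold: {rate_any:.2%}")   # ~14%
print(f"3-of-3 threshold: {rate_all:.2%}")   # ~1%: an order of magnitude lower
```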

    The problem worsens when consideration is given to the various measurement techniques and definitions that researchers use.22 Two techniques have dominated research on medical injuries: chart review and direct observation. Two studies of the medical records of randomly selected hospitalized patients in New York17 and in Utah and Colorado18 provided the foundational epidemiologic data that made possible extrapolations to numbers of patients with iatrogenic injuries and the IOM's further extrapolation to the number of those injuries that were preventable. Nearly a decade ago, Andrews and colleagues' direct-observation analysis of intensive care demonstrated a level of injury several times that reported in the chart-review studies.23 A more recent direct-observation study of errors also revealed much higher levels of preventable medical injuries.24,25 Because all the studies used different methods, focused on different inpatient populations, and defined events slightly differently, it is difficult to compare them.
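
    For readers who want to see the mechanics, the extrapolation at issue is essentially a chain of multiplications, which is why it is so sensitive to the chart-review rates discussed above. The sketch below uses rounded inputs: the 3.7 percent adverse-event rate and the 13.6 percent fatality fraction echo the New York study, the preventability fraction is an assumption, and the admissions figure approximates the one the IOM used; together they land near the upper IOM bound.

```python
# The general form of the extrapolation from chart-review samples to a
# national estimate. Inputs are rounded and partly assumed; they are meant
# to reproduce the shape of the calculation, not the studies' exact values.

def preventable_deaths(admissions: float, ae_rate: float,
                       fatal_frac: float, preventable_frac: float) -> float:
    """Deaths = admissions x AE rate x P(death | AE) x P(preventable)."""
    return admissions * ae_rate * fatal_frac * preventable_frac

annual_admissions = 33_600_000  # approximate U.S. hospital admissions, 1997
ae_rate = 0.037                 # adverse events per admission (New York study)
fatal_frac = 0.136              # fraction of adverse events associated with death
preventable_frac = 0.58         # assumed fraction judged preventable

estimate = preventable_deaths(annual_admissions, ae_rate, fatal_frac, preventable_frac)
print(f"Estimated preventable deaths per year: {estimate:,.0f}")  # ~98,000
```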

    None of this means that we do not have a safety problem in U.S. health care. But iatrogenic injury will remain prohibitively difficult to measure in a reliable way. Indeed, the IOM's call for a 50 percent reduction in such injuries by 2005 was doomed from the start. How could we know that such improvement had occurred? The public will continue to be disappointed by the failure of the medical profession to move from the causal clarity of particular medical accidents to any rapid and demonstrable increase in lives saved across populations.

    Thus, as the safety movement and, more important, the larger quality movement mature, we might expect some backing away from the notion of preventing accidental injury and more of a tilt toward effectiveness. Gains in effectiveness are more readily measured and compared. However, such a change will require that we loosen our attachment to the constructs of accidents and accidental death.

    A Return to Effectiveness

    In 2001, the Agency for Healthcare Research and Quality (AHRQ) funded an effort to review the literature on safety and quality and identify those initiatives that were evidence-based. It was a mammoth undertaking, entailing the review of many hundreds of potential interventions, and it has been widely praised for its scope and accuracy.26 But perhaps not surprisingly, among the interventions recommended as having the greatest strength of evidence, very few were drawn from patient-safety research. Most of the recommendations came from what the IOM would call the domain of effectiveness. The reason is that few if any safety interventions have demonstrated evidence of decreasing preventable medical injuries. Even computerized physician order entry (CPOE) and work-hours limitations, which appear to reduce errors,26,27 have not been proved to reduce the key outcome of preventable injuries.

    The AHRQ initiative turned out to be controversial in the safety-and-quality movement. Leaders of the patient-safety movement questioned whether the AHRQ focus had been too narrow.28 They called attention to the many small, untested interventions that had made anesthesia and cardiac surgery safer, none of which had a formal evidence base at their inception. They also questioned a focus on issues such as central-line placement and ventilator-associated pneumonia, given their narrow and limited application. The organizers of the AHRQ efforts answered effectively, citing the historical problem of investments in commonsense but untested practices that eventually backfired.29 The exchange captures a pivotal dilemma: the problems of moving forward without evidence and the hazards of waiting for evaluation of commonsense approaches.

    The AHRQ initiatives have caused ferment in the field of safety, which now appears poised to divide into two paths. One school will continue to advocate for practices that promise to reduce accidents and stand ready to be "bolted on" in modular form by hospitals. The Leapfrog Initiative is the best example of this philosophy, with its advocacy of the use of high-volume hospitals for complex surgery, CPOE systems, and full-time specialists in intensive care units and its justification of these efforts by the number of lives saved through error reductions. Although there is some provocative research literature on each intervention, none had sufficient evidence to make the AHRQ list. Moreover, the Leapfrog research on these interventions may have overplayed the lives-saved card: for example, the 50,000 lives saved annually by the use of intensive care physicians is approximately 10 times the number of preventable deaths that the chart-review studies suggested occur in intensive care units. The Leapfrog intention is commendable, but two questions confront its future: Will it be efficacious? And are the expensive initiatives it advocates the ones that every hospital should invest in first?

    The alternative approach is to reemphasize effectiveness and evidence-based improvement. What the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) refers to as "core measure sets" exemplifies an attempt to do just that. The core measures are explicitly based on proven methods of improving outcomes for care of patients with acute myocardial infarction, congestive heart failure, pregnancy-related conditions, and community-acquired pneumonia. None of the measures will lend themselves to a demonstration of how many specific accidental deaths were averted, but all come from the effectiveness domain, entail compliance with guidelines, and should lead to demonstrable improvements in morbidity and mortality across populations.

    But perhaps the choice is not one of effectiveness or safety. Our view is that the safety movement has led to the importation of a new basic science into health care quality. The fields of human-factors engineering, cognitive and social psychology, and informatics have now been added to the quality discipline. We also have to allow time for this basic science to be translated into useful approaches. As with any science, the key is that interventions must be measurable, and in this regard the emphasis that advocates of effectiveness place on the need for quantifiable data is correct.

    What Can Be Done?

    Several specific recommendations emerge. First, we should follow the evidence. Quality-improvement efforts based on evidence of effectiveness are likely to be more readily embraced and may save more lives than will efforts to improve safety that cleave to the concept of accidental death and lack a solid evidence base. Promoting compliance with the effectiveness measures outlined in the AHRQ report and in the JCAHO core measure sets would be a sensible place to start. All the recommendations are evidence-based. They also make clear that patient safety is not an alternative or rival to quality improvement but, rather, a part of it.

    Second, the outcomes that mark changes in performance must be measurable. Only then can physicians and hospitals determine whether they are succeeding or failing. Third, hospitals should tailor their initiatives to the strengths of their staff. Among the AHRQ best practices and the JCAHO measures are probably more initiatives than even the most committed institution can undertake, and diligent champions of improvement are just as critical as are clear measures.

    These second and third points were recently made by one of us in two quite different articles — one on military medical care in Iraq and the other on the efforts of medical centers to improve care for patients with cystic fibrosis.30,31 In both cases, the critical element for improvement was an unceasing effort to improve one simple outcome measure. For the military, it was the percentage of battle injuries leading to death; for the cystic fibrosis programs, it was deterioration in lung function. Improvement in both cases was the result of numerous small interventions, all tested against the chosen outcome measure.
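
    In practice, the discipline those two articles describe reduces to recomputing one ratio after every change and comparing it with baseline. The sketch below tracks an invented case-fatality series of the kind used in the military example; the quarterly counts and the attributed intervention are hypothetical.

```python
# Tracking a single outcome measure over time, with every small intervention
# judged against the same ratio. All counts below are invented.

quarters = [
    # (label, deaths, battle injuries) -- hypothetical data
    ("Q1", 24, 210),
    ("Q2", 22, 205),
    ("Q3", 18, 230),  # after a hypothetical protocol change
    ("Q4", 15, 220),
]

baseline = quarters[0][1] / quarters[0][2]
for label, deaths, injuries in quarters:
    rate = deaths / injuries
    print(f"{label}: case fatality {rate:.1%} ({(rate - baseline) / baseline:+.0%} vs. Q1)")
```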

    The approach is not new. It was set forth in articles by Berwick32 and Laffel and Blumenthal33 more than 15 years ago. But moving from theory to practice is difficult, and both of the cases noted above required committed leaders and resources. Thus, hospitals must choose interventions on the basis of existing strengths in their medical and nursing staffs. And they must commit to measurement and action on the basis of those measures.

    The fourth point is that hospitals and other health care organizations should expect to expend resources in an effort to improve the quality of care. Theoretically, quality improvement could pay for itself through the savings it generates, but this outcome remains highly unlikely. It is difficult to avoid the reality that some important changes, such as the hiring of more nurses and pharmacists,34,35 will simply require greater expenditures. So, too, will expenditures be required to develop a safety infrastructure of decision support and clinical-management programs. This expansion will be difficult in an increasingly harsh health care financing environment. Although there is growing interest in realigning financial incentives to reward high-quality care and penalize low-quality care,36 the nascent "pay for quality" movement will not fund systemwide change in the near future.

    Fifth, incident reporting to public agencies, although laudable on many fronts, may not play such an important role in shaping the choice of quality-improvement interventions. Anecdotal evidence today points to two problems emerging from the heavy emphasis on reporting during the past five years: an ongoing inability to attract reports of serious events and an abundance of reports of minor events that appear to be outstripping the analytic capacity of researchers and administrators to make sense of them. Moreover, if a medical center shifts its attention with respect to improvement from specific accidents to effectiveness measures, reporting overall health outcomes becomes more essential than the open-ended reporting of errors or accidents. Reports of specific adverse events and errors (such as problems with drug safety) within the organization may still form an important part of the mix of measures used to evaluate interventions. But reporting of all errors to the public would not assume a natural priority over effectiveness-related interventions.

    Sixth, we must recognize that safety introduces new knowledge into quality by way of disciplines such as human-factors engineering, organizational psychology, sociology, and informatics. These disciplines can be considered the basic sciences of quality, just as we consider molecular biology, pharmacology, and genetics the basic sciences of medicine. Thinking of these safety-related disciplines as basic sciences has several implications. It underlines the fact that such fields will require more funding and time to yield meaningful research. Just as it has taken decades for basic science to transform the care of patients with cancer, it will take time for these safety sciences to transform quality. There will also be a need for translational research and, in some cases, even randomized trials to test the effectiveness of interventions that are costly or have unknown effects on health care systems.

    In summary, safety is a vital component of health care quality. But if we have limited resources to spend on the promotion of quality, we must spend them carefully. The domain of effectiveness, lending itself as it does to measurable outcomes, should increasingly be the vehicle for development of teams and quality champions. But once we get past the limits of the construct of accidental death, we should acknowledge — indeed celebrate — the inflow of ideas from other industries on safety and work to translate those ideas into hard measures that are amenable to being tracked for improvement. This approach is unlikely to draw the public's interest in the way that reports of accidental deaths do, but it is where the best scientific evidence leads us. We will not know exactly whose lives are saved, but there will be more of them.

    Source Information

    From Brigham and Women's Hospital, Harvard Medical School (T.A.B., A.G.), and the Harvard School of Public Health (T.A.B., A.G., D.S.) — all in Boston; and the University of Texas at Houston, Houston (E.T.).

    References

    1. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, D.C.: National Academies Press, 1999.

    2. Altman DE, Clancy C, Blendon RJ. Improving patient safety -- five years after the IOM report. N Engl J Med 2004;351:2041-2043.

    3. Wachter RM. The end of the beginning: patient safety five years after 'To Err Is Human.' Bethesda, Md.: Health Affairs, November 2004 (Web exclusive). (Accessed September 8, 2005, at http://content.healthaffairs.org/cgi/content/full/hlthaff.w4.534/DC1.)

    4. The Commonwealth Fund Quality Improvement Colloquium. Patient safety five years after To Err Is Human. (Accessed September 13, 2005, at http://www.cmwf.org/General/General_show.htm?doc_id=249059.)

    5. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA 2005;293:2384-2390.

    6. Brennan TA, Berwick DM. New rules: regulation, markets, and the quality of American health care. San Francisco: Jossey-Bass, 1996.

    7. Wennberg DE, Wennberg JE. Addressing variations: is there hope for the future? Bethesda, Md.: Health Affairs, December 2003 (Web exclusive). (Accessed September 8, 2005, at http://content.healthaffairs.org/cgi/content/full/hlthaff.w3.614v1/DC1.)

    8. Cleary PD, Edgman-Levitan S, Roberts M, et al. Patients evaluate their hospital care: a national survey. Health Aff (Millwood) 1991;10:254-267.

    9. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-2645.

    10. Leape LL. Error in medicine. JAMA 1994;272:1851-1857.

    11. Mello MM, Kelly CN, Brennan TA. Fostering rational regulation of patient safety. J Health Polit Policy Law 2005;30:375-426.

    12. Brennan TA. Untangling causation issues in law and medicine: hazardous substance litigation. Ann Intern Med 1987;107:741-747.

    13. Adams K, Corrigan JM, eds. Priority areas for national action. Washington, D.C.: National Academies Press, 2003.

    14. Woolf SH. Patient safety is not enough: targeting quality improvements to optimize the health of a population. Ann Intern Med 2004;140:33-36.

    15. Weiler PC. Medical malpractice on trial. Cambridge, Mass.: Harvard University Press, 1991.

    16. McDonald CJ, Weiner M, Hui SL. Deaths due to medical error are exaggerated in the IOM report. JAMA 2000;284:93-95.

    17. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-376.

    18. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261-271.

    19. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA 2001;286:415-420.

    20. Localio AR, Weaver SL, Landis JR, et al. Identifying adverse events caused by medical care: degree of physician agreement in a retrospective chart review. Ann Intern Med 1996;125:457-464.

    21. Thomas EJ, Lipsitz S, Studdert DM, Brennan TA. The reliability of medical record review for estimating adverse event rates. Ann Intern Med 2002;136:812-816.

    22. Thomas EJ, Petersen LA. Measuring errors and adverse events in health care. J Gen Intern Med 2003;18:61-67.

    23. Andrews LB, Stocking C, Krizek T, et al. An alternative strategy for studying adverse events in medical care. Lancet 1997;349:309-313.

    24. Lockley SW, Cronin JW, Evans EE, et al. Effect of reducing interns' weekly work hours on sleep and attentional failures. N Engl J Med 2004;351:1829-1837.

    25. Landrigan CP, Rothschild JM, Cronin JW, et al. Effect of reducing interns' work hours on serious medical errors in intensive care units. N Engl J Med 2004;351:1838-1848.

    26. Shojania KG, Duncan BW, McDonald KM, et al., eds. Making health care safer: a critical analysis of patient safety practices. Evidence report/technology assessment no. 43. Rockville, Md.: Agency for Healthcare Research and Quality, July 2001. (AHRQ publication no. 01-E058.) (Also available at http://www.ahrq.gov/clinic/ptsafety.)

    27. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280:1311-1316.

    28. Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA 2002;288:501-507.

    29. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Safe but sound: patient safety meets evidence based medicine. JAMA 2002;288:508-512.

    30. Gawande A. The bell curve. The New Yorker. December 6, 2004:82-91.

    31. Gawande A. Casualties of war -- military care for the wounded from Iraq and Afghanistan. N Engl J Med 2004;351:2471-2475.

    32. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989;320:53-56.

    33. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA 1989;262:2869-2873.

    34. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med 2002;346:1715-1722.

    35. Rollins G. National Quality Forum endorses four new patient safety practices. Rep Med Guidel Outcomes Res 2003;14:10, 12.

    36. Rosenthal MB, Fernandopulle R, Song HR, Landon B. Paying for quality: providers' incentives for quality improvement. Health Aff (Millwood) 2004;23:127-141.