Care in U.S. Hospitals — The Hospital Quality Alliance Program
     ABSTRACT

    Background The Hospital Quality Alliance (HQA) is the first initiative that routinely reports data on hospitals' performance nationally. Heretofore, such data have been unavailable.

    Methods We used data collected by the Centers for Medicare and Medicaid Services on 10 indicators of the quality of care for acute myocardial infarction, congestive heart failure, and pneumonia. The main outcome measures were hospitals' performance with respect to each indicator and summary scores for each clinical condition. Predictors of a high level of performance were determined with the use of multivariable linear regression.

    Results A total of 3558 hospitals reported data on at least one stable measure (defined as information obtained from discharge data from at least 25 patients) during the first half of 2004. Median performance scores (expressed as the percentage of patients who satisfied the criterion) were at least 90 percent for 5 of the 10 measures but lower for the other 5. Performance varied moderately among large hospital-referral regions, with the top-ranked regions scoring 12 percentage points (for acute myocardial infarction) to 23 percentage points (for pneumonia) higher than the bottom-ranked regions. A high quality of care for acute myocardial infarction predicted a high quality of care for congestive heart failure but was only marginally better than chance at predicting a high quality of care for pneumonia. Characteristics associated with small but significant increases in performance included being an academic hospital, being in the Northeast or Midwest, and being a not-for-profit hospital.

    Conclusions Analysis of data from the new HQA national reporting system shows that performance varies among hospitals and across indicators. Given this variation and small differences based on hospitals' characteristics, performance reporting will probably need to include numerous clinical conditions from a broad range of hospitals.

    Numerous studies have now shown that the quality of health care is variable and often inadequate.1,2,3 Initiatives to measure quality are an important focus for policymakers who believe that measurement can drive quality-improvement programs and guide the choice of provider by consumers and payers.4,5

    For more than a decade, the National Committee for Quality Assurance has published annual data on the quality of care provided by health plans as measured by quality indicators in the Health Plan Employer Data and Information Set.6 Until recently, however, we have lacked any national database that could provide analogous data on the quality of care provided by hospitals. Recently, a consortium of organizations, including the Centers for Medicare and Medicaid Services (CMS), the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the American Hospital Association, and consumer groups such as the American Association of Retired Persons, initiated an effort now called the Hospital Quality Alliance (HQA) to fill this gap. Under the HQA, hospitals nationwide report data to the CMS on indicators of the quality of care for acute myocardial infarction, congestive heart failure, and pneumonia. Both the CMS and the JCAHO have collected data based on these indicators, albeit in some instances with slightly disparate specifications. Different, limited versions of these data became available on the Internet in late 2003 in the case of the CMS (at www.cms.hhs.gov) and in July 2004 in the case of the JCAHO (at www.qualitycheck.org). The national data from the CMS first became publicly available for research in November 2004.

    Despite the intense effort that has gone into defining and collecting these measures of quality, little is known about how hospitals measure up. We used the HQA data to answer four important questions: How well do hospitals perform on the basis of these quality measures? How variable is performance across regions, and more specifically, are there certain local regions in which the level of performance is consistently high or low? What is the likelihood that a high level of performance in one condition (e.g., acute myocardial infarction) predicts a high level of performance in other conditions (e.g., congestive heart failure)? Finally, do certain characteristics of hospitals, including profit status, number of beds, presence or absence of academic involvement, and geographic region, predict a high level of performance?

    Methods

    Conditions and Measures of Quality

    To initiate the reporting effort, the CMS selected 10 measures of the quality of care that have been widely endorsed7,8,9,10 and that are considered valid and feasible for immediate public reporting. These 10 measures reflect the quality of care for three major clinical conditions: acute myocardial infarction, congestive heart failure, and pneumonia. There were five measures of the quality of care for acute myocardial infarction: the use or nonuse of aspirin within 24 hours before or after arrival at the hospital and at discharge, the use or nonuse of a beta-blocker within 24 hours after arrival and at discharge, and the use or nonuse of an angiotensin-converting–enzyme (ACE) inhibitor for left ventricular systolic dysfunction. Two measures were used for congestive heart failure: assessment of left ventricular function and the use or nonuse of an ACE inhibitor for left ventricular dysfunction. Three measures were used for pneumonia: the timing of initial antibiotic therapy, the presence or absence of pneumococcal vaccination, and assessment of oxygenation.

    The Medicare Modernization Act, passed in 2003, established financial incentives for hospitals to provide the CMS with data on these 10 indicators of quality. On April 1, 2005, data became available on the performance of hospitals with respect to seven additional measures — three for acute myocardial infarction, two for congestive heart failure, and two for pneumonia. Because these seven additional measures were based on admissions during only one quarter of 2004 and were available for relatively few hospitals (fewer than 10 percent of hospitals reported data on five of the seven measures that were based on an adequate sample size), we describe their performance in the Supplementary Appendix (available with the full text of this article at www.nejm.org) but chose not to include them in our primary analyses.

    All data collected by the HQA from every hospital are audited quarterly by the CMS Clinical Data Abstraction Centers, which abstract and reanalyze data from five charts per hospital per quarter. Specifications for the measures are provided in the Supplementary Appendix.

    Performance Data

    HQA data on 10 quality indicators first became publicly available on November 30, 2004, and were updated on April 1, 2005, to reflect hospital admissions during the first half of 2004. For each of the 10 measures, a hospital's score reflects the proportion of patients who satisfied the criterion. We defined any hospital performance measure that was based on discharge data from at least 25 patients as a stable measure, to be consistent with the convention of the CMS to refer to such a measure as reliable.
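    As a concrete illustration, the following sketch (in Python, using hypothetical column names and toy data rather than the actual CMS files) shows how a per-measure hospital score and the 25-patient stability threshold described above might be computed:

```python
import pandas as pd

# Hypothetical patient-level discharge records: one row per eligible patient,
# with a 0/1 flag indicating whether the care criterion was satisfied.
discharges = pd.DataFrame({
    "hospital_id": ["A", "A", "A", "B", "B"],
    "measure":     ["aspirin_on_arrival"] * 5,
    "met":         [1, 1, 0, 1, 1],
})

# Score = percentage of eligible patients who satisfied the criterion;
# a measure counts as "stable" only if it rests on at least 25 discharges.
scores = (discharges
          .groupby(["hospital_id", "measure"])["met"]
          .agg(n="size", score=lambda s: 100 * s.mean())
          .reset_index())
scores["stable"] = scores["n"] >= 25

print(scores)
```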

    Characterization of Hospitals and Performance on Individual Indicators

    We linked the HQA data set to the database of the American Hospital Association, which has information on hospitals' characteristics with respect to profit status, number of beds, region, type of setting (urban vs. rural), status of membership in the Council of Teaching Hospitals, percentage of patients covered by Medicare and Medicaid, ratio of nurses to patient-days (the number of nurses on staff per 1000 patient-days), and presence or absence of an intensive care unit.

    Statistical Analysis

    We used chi-square tests and analysis of variance to compare the characteristics of three groups of hospitals: those that reported no data to the HQA, those whose reported data for every measure were based on discharge data from fewer than 25 patients, and those that reported adequate data for at least one measure. In addition, t-tests with unequal variance were used to compare performance measures between hospitals with adequate sample sizes and hospitals with inadequate sample sizes. For each hospital, we calculated both performance scores that were weighted according to the number of patients and performance scores that were unweighted, and we found that the differences were similar in magnitude and direction. We therefore report unweighted mean performance scores.
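    A minimal sketch of these comparisons, using SciPy with simulated data in place of the actual hospital characteristics, might look as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated number of beds for the three reporting categories of hospitals
# (no data, inadequate samples only, at least one stable measure).
beds = [rng.normal(150, 40, 60), rng.normal(200, 50, 80), rng.normal(260, 60, 300)]

# Analysis of variance for a continuous characteristic across the groups.
f_stat, p_anova = stats.f_oneway(*beds)

# Chi-square test for a categorical characteristic (e.g., teaching status);
# rows are the three groups, columns are member vs. nonmember (made-up counts).
table = np.array([[10, 50], [20, 60], [120, 180]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Welch's t-test (unequal variances) comparing a performance measure between
# hospitals with adequate and inadequate sample sizes.
adequate = rng.normal(90, 5, 300)
inadequate = rng.normal(87, 8, 80)
t_stat, p_t = stats.ttest_ind(adequate, inadequate, equal_var=False)

print(f"ANOVA p={p_anova:.3f}, chi-square p={p_chi2:.3f}, Welch t-test p={p_t:.3f}")
```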

    Factor Analysis and Creation of Summary Scores

    To reduce the 10 performance measures to manageable summary scores, we performed factor analysis. The factor analysis combined the five measures of the quality of care for acute myocardial infarction into a weighted average with almost identical weightings. Therefore, for simplicity, we used an equally weighted average of the five items as our summary score for acute myocardial infarction. Similar results for the congestive heart failure and pneumonia measures led us to use equally weighted summary scores for these two conditions as well. Because Cronbach's alpha (the degree of association among the measures) showed a very strong correlation (0.82) among the five measures for acute myocardial infarction and because of the need to retain as representative a sample of hospitals as possible, we also calculated a summary score for acute myocardial infarction for hospitals that had stable measures for four of the five indicators related to acute myocardial infarction. This gave us a total of 1537 hospitals with summary scores for acute myocardial infarction. Given lower Cronbach's alpha values for the two measures related to congestive heart failure (0.60) and for the three pneumonia measures (0.43), we included only the 1915 hospitals that had stable measures for both indicators related to congestive heart failure and the 3076 hospitals that had stable measures for all three pneumonia items in our calculation of summary scores for these two conditions.
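    The following sketch shows, on simulated scores, how Cronbach's alpha and an equally weighted summary score of the kind described above can be computed (the data, dimensions, and correlation structure are hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_hospitals x k_measures) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Simulated per-hospital scores (percent) on the five AMI measures: a shared
# hospital-level component plus measure-specific noise, clipped to [0, 100].
hospital_effect = rng.normal(90, 6, (500, 1))
ami = np.clip(hospital_effect + rng.normal(0, 3, (500, 5)), 0, 100)

alpha = cronbach_alpha(ami)   # degree of association among the five measures
summary = ami.mean(axis=1)    # equally weighted summary score per hospital
print(f"alpha = {alpha:.2f}, mean summary score = {summary.mean():.1f}")
```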

    Performance of Hospital-Referral Regions

    We examined summary scores according to hospital-referral regions, which are based on regional markets for tertiary care and were previously described in the Dartmouth Atlas of Health Care.11 In this calculation, we combined all patients with any of the three conditions who were treated in hospitals for which we had hospital summary scores and chose the 40 hospital-referral regions with the largest total numbers of patients. We then calculated an average summary score for each of the three clinical conditions in each hospital-referral region by averaging the summary scores of individual hospitals within each region. We subsequently ranked all regions according to their performance on quality measures for each condition and calculated the difference (with 95 percent confidence intervals) in performance between the top-ranked and bottom-ranked regions. Finally, we calculated Spearman correlation coefficients to determine how performance in one condition was correlated with performance in another condition across referral regions.
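    A sketch of the regional aggregation, ranking, and cross-condition Spearman correlation, again with hypothetical data, follows:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical hospital-level summary scores tagged with a referral region.
hospitals = pd.DataFrame({
    "region": ["R1", "R1", "R2", "R2", "R3", "R3"],
    "ami":    [92, 88, 85, 83, 90, 91],
    "chf":    [84, 80, 75, 78, 83, 85],
})

# Average the summary scores of the individual hospitals within each region,
# then rank the regions by their performance on each condition.
regions = hospitals.groupby("region")[["ami", "chf"]].mean()
regions["ami_rank"] = regions["ami"].rank(ascending=False)
regions["chf_rank"] = regions["chf"].rank(ascending=False)

# Spearman correlation of regional performance across the two conditions.
rho, p_value = spearmanr(regions["ami"], regions["chf"])
print(regions)
print(f"Spearman rho = {rho:.2f} (P = {p_value:.2f})")
```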

    Predicting Quality across Conditions

    We determined, on a hospital-by-hospital basis, how performance in one condition related to performance in other conditions. We used information on the hospitals that had summary scores for both acute myocardial infarction and congestive heart failure and the hospitals that had summary scores for both acute myocardial infarction and pneumonia. For each comparison, we first categorized each hospital's performance according to the summary score for acute myocardial infarction. We then calculated the proportion of hospitals in the top decile, top quartile, bottom quartile, and bottom decile of performance measures for acute myocardial infarction that scored in the top quartile, top half, or bottom quartile of performance measures for each of the other two conditions (congestive heart failure and pneumonia).
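    The cross-tabulation of deciles and quartiles can be illustrated as follows; the simulated scores and the correlation between the two conditions are assumed purely for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1000
ami = rng.normal(89, 6, n)
chf = 81 + 0.8 * (ami - 89) + rng.normal(0, 6, n)  # partially tracks AMI scores
df = pd.DataFrame({"ami": ami, "chf": chf})

# Hospitals in the top decile of AMI performance...
top_decile = df["ami"] >= df["ami"].quantile(0.90)

# ...what proportion land in the top or bottom quartile of CHF performance?
chf_q75 = df["chf"].quantile(0.75)
chf_q25 = df["chf"].quantile(0.25)
share_top = (df.loc[top_decile, "chf"] >= chf_q75).mean()
share_bottom = (df.loc[top_decile, "chf"] <= chf_q25).mean()

print(f"top quartile of CHF: {share_top:.0%}, bottom quartile: {share_bottom:.0%}")
```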

    Hospital Characteristics and Performance

    We examined whether four characteristics of hospitals — profit status (for profit vs. not for profit), academic status (member of the Council of Teaching Hospitals vs. nonmember), number of beds, and region of the country — that have previously been found to be associated with the quality of hospital care12,13,14 were associated with performance with respect to each of the three conditions. We built separate multivariable linear regression models with the summary scores for acute myocardial infarction, congestive heart failure, and pneumonia as outcomes. The models were simultaneously adjusted for each of the four primary predictors as well as other available characteristics that might be associated with performance: the proportion of patients with Medicare insurance, the proportion of patients with Medicaid insurance, the ratio of nurses to patient-days, the presence or absence of an intensive care unit, and setting (urban vs. rural).
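    A sketch of such a model, built with the statsmodels library on simulated hospital characteristics (all variable names and values are hypothetical), follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1500
hospitals = pd.DataFrame({
    "ami_score":    rng.normal(89, 6, n),   # outcome: AMI summary score
    "teaching":     rng.integers(0, 2, n),  # Council of Teaching Hospitals member
    "for_profit":   rng.integers(0, 2, n),
    "beds":         rng.integers(25, 900, n),
    "region":       rng.choice(["Northeast", "Midwest", "South", "West"], n),
    "pct_medicare": rng.uniform(0.2, 0.7, n),
    "pct_medicaid": rng.uniform(0.05, 0.4, n),
    "nurse_ratio":  rng.uniform(2, 12, n),  # nurses per 1000 patient-days
    "icu":          rng.integers(0, 2, n),
    "urban":        rng.integers(0, 2, n),
})

# Ordinary least squares with the four primary predictors plus the additional
# adjusters, mirroring the simultaneous adjustment described in the text.
model = smf.ols(
    "ami_score ~ teaching + for_profit + beds + C(region)"
    " + pct_medicare + pct_medicaid + nurse_ratio + icu + urban",
    data=hospitals,
).fit()
print(model.summary().tables[1])
```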

    Results

    Of the 4203 hospitals in the HQA database, 4002 hospitals reported on at least one measure to the CMS. The 201 hospitals that reported no data were mostly specialty surgical centers and orthopedic hospitals. A total of 444 hospitals reported only information that was based on discharge data from fewer than 25 patients, and 3558 hospitals reported information on one or more measures that was based on discharge data from at least 25 patients and thus considered stable. The three categories of hospitals differed significantly in terms of size, geographic region, status of membership in the Council of Teaching Hospitals, presence of an intensive care unit, and setting (Table 1).

    Table 1. Characteristics of the Hospitals.

    The quality of care in the hospitals that reported at least 1 stable measure was higher on 9 of the 10 measures than in hospitals that reported no stable measures, although some of the differences were small and not significant (Table 2). Among the hospitals that reported at least 1 stable measure, the quality of care varied widely across the 10 measures, from a mean (±SD) of 98±5 percent for oxygenation assessment to 43±27 percent for pneumococcal vaccination (Table 2 and Figure 1). The median score was at least 90 percent on four of the five performance measures for acute myocardial infarction (all except ACE-inhibitor therapy) and on one of the three pneumonia measures (oxygenation assessment), but on neither of the two performance measures for congestive heart failure.

    Table 2. Mean Performance Scores.

    Figure 1. Distribution of Performance Scores for the 10 Core Measures of the Quality of Care for Acute Myocardial Infarction (Panels A, B, C, D, and E), Congestive Heart Failure (Panels F and G), and Pneumonia (Panels H, I, and J).

    AMI denotes acute myocardial infarction, ACE angiotensin-converting enzyme, LV left ventricular, and CHF congestive heart failure.

    Summary Scores and Performance across Hospital-Referral Regions

    Among the 1537 hospitals for which we could calculate a summary score for acute myocardial infarction, the mean score was 89±6 percent. Similarly, the mean summary score for congestive heart failure among the 1915 hospitals that were included in the analysis was 81±10 percent, and the mean pneumonia score among the 3076 hospitals included in the analysis was 71±11 percent.

    Among the 40 largest hospital-referral regions in the database, we found substantial gaps in mean performance for each of the three conditions. The difference in the pneumonia composite score between the top-ranked region in this respect (Oklahoma City) and the bottom-ranked region (San Bernardino, Calif.) was 23±4 percentage points; the gaps between the top- and bottom-ranked performers for acute myocardial infarction (12±4 percentage points) and for congestive heart failure (21±5 percentage points) were smaller (Table 3). There was a moderate correlation between the performance of a hospital-referral region with respect to acute myocardial infarction and its performance with respect to congestive heart failure (Spearman correlation coefficient, 0.72; P<0.001), a lower correlation between performance with respect to acute myocardial infarction and performance with respect to pneumonia (Spearman correlation coefficient, 0.45; P=0.004), and a lower correlation still between performance with respect to congestive heart failure and performance with respect to pneumonia (Spearman correlation coefficient, 0.15; P=0.35).

    Table 3. The Top-Ranked and Bottom-Ranked Performances in Measures of the Quality of Care for AMI, CHF, and Pneumonia among the 40 Largest Hospital-Referral Regions.

    Predicting Quality across Conditions within Hospitals

    Performance scores for acute myocardial infarction closely predicted performance scores for congestive heart failure but not for pneumonia. Seventy-three percent of hospitals that were in the top decile of performance scores for acute myocardial infarction were in the top quartile of performance scores for congestive heart failure, and 91 percent of such hospitals were in the top half of performance scores for congestive heart failure, whereas less than 1 percent were in the bottom quartile (Table 4). However, only 33 percent of hospitals in the top decile of performance scores for acute myocardial infarction were in the top quartile of performance scores for pneumonia, and 41 percent were in the bottom half.

    Table 4. Ability of Performance Scores for AMI to Predict High Performance Scores for CHF and Pneumonia.

    Characteristics and Performance of Hospitals

    We subsequently examined the relationship between performance scores and hospitals' academic status (as reflected by membership or nonmembership in the Council of Teaching Hospitals), profit status, geographic region, and number of beds (Table 5). We found that after adjustment for potential confounders (as well as the other variables of interest), academic hospitals had higher performance scores than nonacademic hospitals for acute myocardial infarction (91 percent vs. 89 percent, P<0.001) and for congestive heart failure (85 percent vs. 81 percent, P<0.001) but lower scores for pneumonia (69 percent vs. 71 percent, P=0.02). Not-for-profit hospitals had significantly higher scores for all three conditions than did for-profit hospitals, and there were significant regional differences in scores for each of the three conditions, with the Midwest and Northeast outperforming the West and South. The number of beds was significantly associated only with the pneumonia scores (P=0.001), with the smallest hospitals having the highest scores.

    Table 5. Adjusted Performance Scores for AMI, CHF, and Pneumonia, According to Select Characteristics of the Hospitals.

    Discussion

    We evaluated the national HQA data set launched by the CMS and found that the quality of care in American hospitals varied greatly according to the indicator of quality and the condition. For five of the quality indicators — especially those for acute myocardial infarction — half the hospitals scored above 90 percent. However, the level of performance with respect to other measures of quality was much lower. There was substantial variability in the quality of care provided by hospitals in different metropolitan areas. A high quality of care for acute myocardial infarction closely predicted a high quality of care for congestive heart failure but not for pneumonia. There were significant but small differences in performance between academic and nonacademic hospitals and for-profit and not-for-profit hospitals, as well as among hospitals in various geographic regions, but there was no consistent association between performance and the size of the hospital.

    The HQA is the first national public reporting system that provides detailed performance data for each hospital. All but 1 of the 10 quality indicators we evaluated (oxygenation assessment) were examined previously in a state-level analysis by Jencks and colleagues using data on Medicare beneficiaries from 2000 through 2001.1 We found that the level of performance was higher than that described by Jencks et al. for all but one measure (the timing of initial antibiotic therapy in patients with pneumonia), a finding consistent with the results of Williams et al.,15 whose report in this issue of the Journal demonstrates temporal improvements in performance on the basis of comparable data reported to the JCAHO. The variability in performance across quality indicators may reflect several important factors, including how long the process measure has been an accepted marker of high-quality care, the importance that clinicians place on the measure, and the difficulty of providing the specific aspect of appropriate care. Further study of these issues will be critical to future efforts to improve the quality of health care.

    Our findings indicate that quality measures had only moderate predictive ability across the three conditions. Although a high quality of care for acute myocardial infarction predicted a high quality of care for congestive heart failure, the former was only marginally better than chance at identifying a high quality of care for pneumonia. These data do not provide support for the notion that "good" hospitals are easy to identify or consistent in their performance across conditions. Our data suggest that evaluations of hospitals' performance will most likely need to be based on a large number of conditions.

    On the basis of the literature, one might predict that the quality of care would be higher in large, academically oriented,16,17 not-for-profit18,19 hospitals. However, we found only moderate associations between these characteristics and hospitals' performance. The quality of care in teaching hospitals has been an especially controversial topic, since care in such hospitals costs much more than care in nonteaching institutions. Our data, based on a limited number of measures but on a much larger sample of hospitals than in most previous studies, suggest that the extra money spent on teaching institutions does not necessarily buy a higher quality of some important components of care. Of course, the training function of teaching hospitals is important in itself. The moderate differences in performance associated with hospitals' characteristics suggest the need to target a broad range of hospitals for improvements in the quality of care.

    Our study has important limitations. First, we could evaluate only 10 measures of the quality of care for three clinical conditions, although these conditions account for 15 percent of Medicare admissions. The CMS plans to expand the HQA database to include additional conditions. Second, our data on hospitals' characteristics do not provide potentially important details, such as a hospital's management structure or quality-management programs that might be associated with a high level of performance. Finally, our analyses provide results on process measures and not on patient outcomes.

    In summary, we found that the quality of hospital care in the United States varies widely across different indicators of quality and that individual hospitals vary in their performance according to indicators and conditions. Although the public reporting of quality measures in the HQA database represents an important start, our results provide a hint of the hard work that lies ahead. The variability of hospitals' performance across conditions and hospitals indicates that we will need to expand our data-collection efforts to include many more conditions and that we will most likely need to focus quality-improvement efforts on a large set of hospitals.

    Supported by the Commonwealth Fund, New York.

    Source Information

    From the Department of Health Policy and Management, Harvard School of Public Health (A.K.J., Z.L., A.M.E.); the Division of General Medicine, Brigham and Women's Hospital (A.K.J., E.J.O., A.M.E.); and the Boston Veterans Affairs Healthcare System (A.K.J.) — all in Boston.

    Address reprint requests to Dr. Jha at the Department of Health Policy and Management, Harvard School of Public Health, 677 Huntington Ave., Boston, MA 02115, or at ajha@hsph.harvard.edu.

    References

    Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA 2003;289:305-312.

    McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-2645.

    Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, D.C.: National Academies Press, 2001.

    Galvin R, Milstein A. Large employers' new strategies in health care. N Engl J Med 2002;347:939-942.

    Milstein A, Galvin RS, Delbanco SF, Salber P, Buck CR Jr. Improving the safety of health care: the Leapfrog initiative. Eff Clin Pract 2000;3:313-316.

    HEDIS 2005. Vol. 2. HEDIS technical specifications. No. 10284-100-05. Washington, D.C.: National Committee for Quality Assurance, 2005.

    British Thoracic Society Standards of Care Committee. BTS guidelines for the management of community acquired pneumonia in adults. Thorax 2001;56:Suppl 4:IV-1.

    Niederman MS, Mandell LA, Anzueto A, et al. Guidelines for the management of adults with community-acquired pneumonia: diagnosis, assessment of severity, antimicrobial therapy, and prevention. Am J Respir Crit Care Med 2001;163:1730-1754.

    Mandell LA, Bartlett JG, Dowell SF, File TM Jr, Musher DM, Whitney C. Update of practice guidelines for the management of community-acquired pneumonia in immunocompetent adults. Clin Infect Dis 2003;37:1405-1433.

    Antman EM, Anbe DT, Armstrong PW, et al. ACC/AHA guidelines for the management of patients with ST-elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee to Revise the 1999 Guidelines for the Management of Patients with Acute Myocardial Infarction). Circulation 2004;110:e82-e292.

    Wennberg J, Cooper M, eds. The Dartmouth atlas of health care. Chicago: American Hospital Association Press, 1999.

    Meehan TP, Fine MJ, Krumholz HM, et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA 1997;278:2080-2084.

    Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA 1998;279:1351-1357.

    Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. A qualitative study of increasing beta-blocker use after myocardial infarction: why do some hospitals succeed? JAMA 2001;285:2604-2611.

    Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med 2005;353:255-264.

    Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA 2000;284:1256-1262.

    Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q 2002;80:569-593.

    Sloan FA, Trogdon JG, Curtis LH, Schulman KA. Does the ownership of the admitting hospital make a difference? Outcomes and process of care of Medicare beneficiaries admitted with acute myocardial infarction. Med Care 2003;41:1193-1205.

    Devereaux PJ, Schunemann HJ, Ravindran N, et al. Comparison of mortality between private for-profit and private not-for-profit hemodialysis centers: a systematic review and meta-analysis. JAMA 2002;288:2449-2457.