The Clinical-Skills Examination
To the Editor: The Perspective article by Papadakis in support of the new clinical-skills component of the U.S. Medical Licensing Examination (USMLE) (April 22 issue)1 fails to justify this expensive and burdensome test. Coupling a Canadian study with findings from the National Board of Medical Examiners, Papadakis anticipates a "reliable, valid, and feasible" examination. This expectation is encouraging, yet it tells us nothing about whether the exercise is truly necessary. On this point, Papadakis says surprisingly little. She cites a poll indicating that patients favor testing to sharpen physicians' clinical skills. This is merely an observation of consumerism, akin to a survey that finds support for prompt airline arrivals. Papadakis also cites a correlation between student misbehavior and subsequent professional disciplinary action — a sociological finding of marginal pertinence. In concluding, Papadakis simply invokes notions of "public trust" and "professional accountability" to dismiss widespread opposition.

    The motivation for this $975 test appears to be a nebulous spirit of progressivism. What problems will be remedied? How will medical practice change? Why supplant examinations that are adequately administered by medical schools? Lacking answers, critics will continue to view this as a needless initiative from a self-interested and paternalistic bureaucracy.

    David Diaz, M.A.

    University of Pennsylvania Medical School

    Philadelphia, PA 19104

    diazd@mail.med.upenn.edu

    References

    Papadakis MA. The Step 2 clinical-skills examination. N Engl J Med 2004;350:1703-1705.

To the Editor: That the clinical skills of a candidate for medical licensure must be assessed before the license is granted should, of course, be universally supported. But to do so by examining 12 actors, each simulating a patient, is outrageous, bordering on fraud. Admittedly, special conditions would be required to observe the examination of real patients — one-way viewing rooms or the presence of an unobtrusive examiner — and the arrangements and scheduling would be demanding and expensive. But it is the state medical boards, which have ceded their responsibility to the USMLE, that should provide the funding. Conducting the examinations at hospitals and clinics throughout the country would reduce candidates' travel costs. Yes, there would be variability among the patients examined, but it is not the precise diagnosis that is important. Physicians recruited by the USMLE should be the examiners — physicians who pay hefty annual fees to maintain their licenses. With 780,000 active physicians in the United States, the state boards should be able to support a real clinical-skills assessment. We need not and should not underwrite an impersonation.

    Morton D. Bogdonoff, M.D.

    Weill Medical College of Cornell University

    New York, NY 10021

To the Editor: Between July 1998 and April 2004, the Educational Commission for Foreign Medical Graduates administered its Clinical Skills Assessment test to thousands of international medical graduates at a single location in Philadelphia. The cost of the test was $1,300. In an analysis of the performance of the international medical graduates on this test, Whelan reports that 96.9 percent of the 8383 candidates who took the test between July 1, 1998, and January 31, 2000, passed it.1 Assuming a similar pass rate among U.S. medical students, it is difficult to justify the exorbitant cost of the examination.

    Simone Musco, M.D.

    Lankenau Hospital

    Wynnewood, PA 19096

    muscos@mlhs.org

    References

Whelan G. High-stakes medical performance testing: the Clinical Skills Assessment program. JAMA 2000;283:1748.

To the Editor: Papadakis and other proponents of a national clinical-skills examination point to Canada, where such an examination was instituted several years ago. However, the only large study of long-term outcomes showed that after five years, examination scores correlated with only two of the six outcomes measured. More important, scores on the clinical-skills examination were no better correlated with positive outcomes than were scores on a standard multiple-choice test.1 Although Papadakis notes that the constructs of the examination have been shown to be reliable and valid, there is no evidence that this test will improve long-term communication skills, increase patient satisfaction, or reduce malpractice rates. Riding the current national wave of educational policy to assess more and educate less, the National Board of Medical Examiners is conducting an expensive investigation into outcome validity. Funding this experiment with medical students' money creates a serious conflict of interest regarding the dissemination of any negative findings about future medical practice. Whereas Papadakis believes that no proof of validity is required, I and other medical students nationwide would prefer to see some proof before donating our dollars.

    Kurt A. Smith, B.A.

    Harvard Medical School

    Boston, MA 02115

    kurt_smith@student.hms.harvard.edu

    References

    Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA 2002;288:3019-3026.

    To the Editor: The case outlined by Papadakis for a national process of assessing the clinical skills of medical students in the United States is so cumbersome, costly, and inconvenient that, to an outside observer, it seems bound to fail. Why does the United States not do what we do in the United Kingdom? Every medical school conducts its own clinical examination, which includes an evaluation of clinical skills. External examiners in all disciplines are included on the panel to ensure that standards are maintained. It is as simple as that. If there needs to be a national leveling process, it need not be at the level of the individual student; it can be at the level of the local examination itself. It is entirely feasible to solicit structured reports from formally appointed external examiners to ensure that comparable standards are maintained from school to school. Culling a portfolio of patients with a wide variety of conditions (obviously of a chronic nature) would yield a group of "standardized" patients without too much difficulty.

    Leon G. Fine, F.R.C.P., F.Med.Sci.

    University College London

    London WC1E 6BT, United Kingdom

To the Editor: The new Step 2 clinical-skills examination will improve education and ultimately help to protect society from incompetent physicians, thus justifying the major organizational and financial challenges of administering this large-scale performance examination to thousands of students each year. Surprisingly, only a pass or fail score will be reported back to the examinees.1 This kind of summary feedback does not provide helpful information to the candidate in a performance examination, because it reveals nothing about the nature of his or her performance.2,3 Specific, formative feedback is crucial for all the skills assessed — namely, data gathering, communication, and language proficiency. A study could determine the most appropriate way to provide such feedback (e.g., by means of score bands or subscores).4 Since the examination is taken about one year before the end of medical school, there is enough time for both students and teachers to use the feedback for personal improvement.

    Martin R. Fischer, M.D.

    University Hospital of Munich

    80336 Munich, Germany

    fischer.martin@med.uni-muenchen.de

    References

    Papadakis MA. The Step 2 clinical-skills examination. N Engl J Med 2004;350:1703-1705.

    Wettach GR. A standardized patient enrolled in medical school considers the national clinical skills examination. Acad Med 2003;78:1240-1242.

    Duffield KE, Spencer JA. A survey of medical students' views about the purposes and fairness of assessment. Med Educ 2002;36:879-886.

Van der Vleuten CPM, Swanson DB. Assessment of clinical skills with standardized patients: state of the art. Teach Learn Med 1990;2:58-76.

    Dr. Papadakis replies: The correspondents' comments reveal the intensity of feeling engendered by the Step 2 clinical-skills examination. Several of the correspondents express concern about the cost of the examination, and I agree that cost is important. But the cost of medical education is a problem both for students and for public policymakers, even in the absence of the examination.

    Diaz and Smith argue that the examination should not be initiated until it has been shown to affect medical practice. I do not agree that this is a reasonable requirement, since such outcomes are essentially unknown for any type of existing U.S. licensing examination. Nonetheless, the combination of a clinical-skills examination and a clinical-knowledge examination was shown in a Canadian study to predict patient outcomes, making this type of examination better validated with regard to patient outcomes than virtually any other test of undergraduate or graduate medical trainees.1

    Diaz and Smith also voice some of the opposition to this examination on the part of the medical students. Despite Diaz's objections, I believe that public trust and professional accountability are important considerations in determining the value of clinical examinations. I take issue with Smith's minimization of the study by Tamblyn et al.1 This study showed that the Quebec licensing examinations predict clinical performance in terms of indicators of quality patient care both in the short term (the first one to two years of practice) and in the longer term (up to seven years of practice).

Bogdonoff is concerned that the use of actors portraying patients (i.e., standardized patients) borders on fraud and drives up the cost of the examination. I agree that cost is an issue, but far from being fraudulent, the use of standardized patients has been validated in many studies as a reliable method for measuring clinical skills. It is essential that the examination be reliable. That is my concern about both Bogdonoff's testing proposal and the system described by Fine. Such examinations have not proved reliable in the United States, even when examiners were sent from the National Board of Medical Examiners to directly observe candidates taking a medical history and performing a physical examination. In contrast, there is now good evidence that the use of standardized patients provides a reliable and valid measure of clinical skills.

    Fischer questions the adequacy of the feedback that the examinees will receive about their performance on the examination. Examinees who fail will receive feedback in graphic form about their performance in each of the major components of the examination. The original plan was to provide such profiles to all examinees. However, concern was expressed that performance profiles might be misused by directors of residency programs to rank passing students on the various components. The use of the score profiles will be reassessed after the first year of testing.

    Maxine A. Papadakis, M.D.

    University of California, San Francisco

    San Francisco, CA 94143-0454

    References

    Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA 2002;288:3019-3026.