Finding the Right Tools for Assessing Quality of Clinical Ethics Consultation


by David Magnus, Ph.D.

This issue of the American Journal of Bioethics contains two extremely important Target Articles in the history of clinical ethics consultation. The first presents the eagerly awaited results of the ASBH attestation pilot, while the second provides a detailed account of the development and application of the VA National Center's Ethics Consultation Quality Assessment Tool, which aims to evaluate the quality of an ethics consultation through analysis of its documentation.

Together, these articles mark an important step in the development of standards and tools for those of us engaged in clinical ethics consultation. As the authors of both articles and many of the commentators point out, ethics consultation can literally make the difference between life and death for patients, and it is imperative that we find ways of ensuring quality. At the same time, as is also pointed out in many of the commentaries, this is only a first step, and many questions remain. Is evaluation of documentation a valid way to assess consultation quality (or only one aspect of consultation)? Are individual consultants (as opposed to whole services) the right unit for assessment? Is there a way of establishing consilience across a range of assessment approaches, thus instilling confidence in the instruments and their application? Can we find high correlations between objective outcomes and assessment tools?

Several lessons can already be gleaned from both the pilot project and the extensive work of the VA National Center in developing the tool. First, it is unlikely that high agreement across raters can be achieved with a fine-grained evaluative instrument. As a result, the VA tool adopted a four-point scale in place of a six-point scale. But even with the four-point scale, inter-rater reliability was poor (43%), and reasonable reliability (74%) was achieved only for the binary pass/no-pass judgment. Second, the VA experience demonstrates that increased training of the evaluators (something that was done inconsistently for the ASBH pilot) can improve inter-rater reliability. Given these scores and the preliminary nature of the work, it was perhaps a mistake for the Task Force to have defined who passed and who did not (for the purpose of deciding who would be invited to participate in the next step of evaluation, an oral examination). In particular, it will likely be hard to recruit those who "failed" the evaluation to participate in the oral examination, yet their participation will be critical to evaluating the reliability of the tool as a measure of quality. One hopes that those who fared poorly in the portfolio assessment will likewise score at the bottom of the oral examination. But what if the opposite happens? It will be critical to gather these data both to evaluate the portfolio and its review process and to assess the oral examination itself.
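
To make concrete what these agreement figures measure, here is a minimal illustrative sketch in Python, using invented ratings rather than any data from the pilot or the VA, of how collapsing a four-point quality scale into a binary pass/no-pass judgment can raise raw inter-rater agreement:

```python
# Illustrative only: hypothetical ratings by two reviewers on a 1-4 quality
# scale (not actual ASBH or VA data). Scores of 3 or 4 count as "pass".
rater_a = [1, 2, 3, 4, 2, 3, 1, 4, 3, 2]
rater_b = [2, 2, 2, 4, 1, 3, 2, 3, 3, 1]

def percent_agreement(x, y):
    """Share of cases on which both raters give exactly the same rating."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def to_pass_fail(scores, threshold=3):
    """Collapse the four-point scale: scores at or above the threshold pass."""
    return ["pass" if s >= threshold else "fail" for s in scores]

print(f"Four-point agreement: {percent_agreement(rater_a, rater_b):.0%}")
print(f"Pass/fail agreement:  "
      f"{percent_agreement(to_pass_fail(rater_a), to_pass_fail(rater_b)):.0%}")
```

With these made-up numbers, exact agreement on the four-point scale is 40%, while agreement on the collapsed pass/fail judgment rises to 90%; the pattern, not the particular figures, is the point.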

While the field is making slow but steady progress toward certification and accreditation of training programs (the Association of Bioethics Program Directors has an early draft of an evaluative tool, but it has not yet been piloted by any of the handful of programs that offer clinical ethics fellowships), these tools and the portfolio help fill a very important meso-level niche. At my institution, we have a Fellowship in clinical ethics. In addition, we have a number of clinicians (young faculty) who have decided to pursue additional training in clinical ethics and to commit significant time to carrying out consultations. Our hospital has been generous in compensating them for at least some of this time commitment. Reflecting the growing need to evaluate and document the quality of performance across our medical staff, I have been asked to evaluate the performance of the Fellow (for Graduate Medical Education) and of the clinical staff (reporting to their clinical supervisor or division chief) who are being trained in, and actually carry out, consultations. I also report annually to Stanford's Quality and Patient Safety and Efficacy Committee. Since I sit on that committee, I am familiar with the efforts of other clinical programs to collect meaningful data and improve clinical performance.

In each of these three activities, it has been a struggle to identify the right data needed to carry out evaluation and quality improvement. Qualitative observation and reporting of performance is certainly an important component of these local evaluations, but the data from peer and mentor evaluation of physicians should give us pause. Stakeholder satisfaction, number of consults performed, and process evaluations can also play a part in evaluation and reporting. But the Ethics Consultation Quality Assessment Tool (ECQAT) is a very useful addition to our toolbox and can potentially be implemented by services across the country. Clinical ethicists will need access to evaluator training, and if that training can be developed in a scalable way, we could see the beginning of a revolution in the quality of clinical ethics consultation. What is needed now is more research on the relationship between the ECQAT and other outcomes and evaluative approaches, so that we know how well calibrated the tool is to the goal of improving the quality of ethics consultation.
