On Opportunities And Challenges For Applying Multi-Modal AI In Organ Transplantation, Part II

Author

Veljko Dubljević, Thomas Egan, Savitri Fedson, William Rand and Munindar P. Singh

Topic(s): Artificial Intelligence, Organ Transplant & Donation

This essay is part II of a two-part series. See part I, here: https://bioethicstoday.org/blog/on-opportunities-and-challenges-for-applying-multi-modal-ai-in-organ-transplantation-part-i/

In the last few years, artificial intelligence (AI) techniques have been used in organ transplantation (OTx). A PubMed search using the terms ((transplant) AND (artificial intelligence)) AND (ethics) returned 79 citations, most from the last two years. AI has been used to predict mortality on the liver transplant waiting list and to accurately predict one-year survival after heart transplant. In a recent editorial in the American Journal of Transplantation, Kang reviewed the use of machine learning in transplant-related studies and cautioned about ethical challenges. AI has also been used to predict immune responses in OTx. The concept of using AI to allocate livers for transplant was evaluated in a survey of 172 lay people in the UK: seventy percent of respondents felt that AI might be superior to transplant professionals at allocating livers because of its objectivity and reduced bias. Despite ample enthusiasm for the use of AI in OTx decisions, the evidence for its effectiveness and fairness is far from clear. AI ethics scholarship pertaining to OTx has typically involved clarifying relevant concepts and exploring whether potential organ allocation strategies might violate or align with widely held societal values. Crucial stakeholders are not well represented in this scholarship, nor are algorithms and models designed using data from those stakeholders. To incorporate the best training sets and minimize the risk of harm to patients, we posit that AI models for OTx need to be multi-modal.

A PubMed search using the terms ((organ transplant OR organ donation OR organ allocation) AND (“multimodal AI”)) returned 0 citations. Given the power of multi-modal AI (MAI), this is a significant gap. MAI holds significant promise in the context of OTx, offering unique advantages over other forms of AI by harnessing diverse data modalities to improve various aspects of the transplant process, from donor matching to post-transplant care. One of the key advantages of MAI in OTx is its ability to integrate and analyze a wide range of data sources, including medical images, clinical records, genetic information, and patient-reported outcomes. By combining these modalities, MAI can provide a comprehensive view of each patient’s health status, enabling more accurate assessment of organ compatibility, identification of potential complications, and personalized treatment planning. That there appears to be no MAI work in OTx in general, let alone on the ethical issues MAI raises in OTx, is noteworthy. However, before harnessing the power of MAI in the donor selection process or the post-transplant phase, an ethical and robust unified platform for sharing and analyzing multi-modal data in the context of OTx is needed.
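To make the idea of combining modalities concrete, the sketch below illustrates one common pattern, late fusion, in which features extracted from each modality are concatenated and fed to a single outcome model. The synthetic data, feature dimensions, and choice of a logistic regression classifier are illustrative assumptions only, not a validated transplant model.

```python
# A minimal late-fusion sketch (illustrative only): each modality is reduced to a
# feature vector, the vectors are concatenated, and a single classifier predicts a
# post-transplant outcome. All arrays below are synthetic placeholders, not OPTN data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical modality features: imaging embeddings, clinical/tabular variables,
# and genetic markers. In practice each would come from its own encoder or pipeline.
imaging = rng.normal(size=(n, 32))      # e.g., image embeddings from CT or biopsy slides
clinical = rng.normal(size=(n, 12))     # e.g., labs, vitals, urgency scores
genetic = rng.normal(size=(n, 20))      # e.g., encoded HLA mismatch information
outcome = rng.integers(0, 2, size=n)    # e.g., synthetic stand-in for 1-year graft survival

# Late fusion: concatenate modality features into one design matrix.
X = np.hstack([imaging, clinical, genetic])
X_train, X_test, y_train, y_test = train_test_split(X, outcome, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In a real system, each modality would typically pass through its own specialized encoder before fusion, and the fused model would be validated prospectively; the point here is only to show how heterogeneous inputs can feed a single prediction.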

Developing an MAI tool would benefit not only potential OTx recipients but would also be a useful way to address related ethical concerns in health care. The Organ Procurement and Transplantation Network (OPTN) serves as a pivotal infrastructure for managing organ donation and transplantation activities across the US. Leveraging MAI to analyze OPTN data could greatly enhance key aspects of OTx, such as streamlining organ allocation, which in turn might improve patient survival. MAI algorithms can analyze vast amounts of historical data on organ transplants, patient demographics, medical conditions, and outcomes to optimize organ allocation. By considering factors such as donor-recipient compatibility, the economic precarity of the patient’s family, geographical distribution, and medical urgency, MAI predictive analytics can assist healthcare providers in personalized treatment planning, preemptive interventions, and improving long-term patient survival rates. MAI can also help identify areas for quality improvement in organ procurement processes. By analyzing data on transplant center performance, surgical techniques, immunosuppressive regimens, and patient care protocols, MAI can pinpoint opportunities for enhancing efficiency, reducing complications, and standardizing best practices across healthcare facilities. Ethical dilemmas inherently arise in OTx, particularly concerning organ allocation prioritization, donor-recipient matching, and the allocation of medical resources. MAI tools can analyze OPTN data alongside ethical frameworks and guidelines to provide decision support to transplant teams and policymakers. By considering various ethical principles, societal values, and stakeholder preferences, MAI can assist in navigating complex ethical challenges and promote transparency and fairness in organ allocation. MAI analysis of OPTN data can also facilitate research and innovation: by identifying novel prognostic indicators and therapeutic targets, MAI could accelerate the discovery of new treatments and interventions aimed at improving transplant outcomes and extending graft survival. Furthermore, MAI could support data-sharing initiatives, collaboration among research institutions, and the development of predictive models for experimental therapies and clinical trials. While there is a lively debate on the explainability of AI models in general, not enough attention has been given to the issue of replicability, especially in MAI.
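As a purely illustrative sketch of what such decision support might look like, the toy ranking function below combines a model’s predicted survival benefit with medical urgency and transport logistics. The weights, field names, and candidates are assumptions made for illustration; they do not represent an actual or proposed allocation policy.

```python
# Toy allocation-support sketch (not a clinical policy): rank candidates for a given
# donor organ by a weighted combination of predicted benefit, urgency, and logistics.
from dataclasses import dataclass

@dataclass
class Candidate:
    patient_id: str
    predicted_survival_benefit: float  # e.g., model-estimated benefit, normalized 0-1
    medical_urgency: float             # e.g., normalized 0-1 urgency score
    travel_hours: float                # estimated organ transport time

def allocation_score(c: Candidate, w_benefit=0.6, w_urgency=0.3, w_logistics=0.1) -> float:
    # Penalize long transport times, since longer cold ischemia tends to worsen outcomes.
    logistics = max(0.0, 1.0 - c.travel_hours / 12.0)
    return (w_benefit * c.predicted_survival_benefit
            + w_urgency * c.medical_urgency
            + w_logistics * logistics)

candidates = [
    Candidate("A", predicted_survival_benefit=0.8, medical_urgency=0.4, travel_hours=2),
    Candidate("B", predicted_survival_benefit=0.5, medical_urgency=0.9, travel_hours=6),
]
for c in sorted(candidates, key=allocation_score, reverse=True):
    print(c.patient_id, round(allocation_score(c), 3))
```

Even in this toy form, the ethical questions are visible in the code: the choice of weights encodes value judgments that belong to stakeholders and policymakers, not to the model.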

The rapid advancement of MAI technologies has fueled a surge in research aimed at developing innovative models and algorithms to tackle complex problems across various domains. However, the replication of MAI research findings faces major challenges, hindering the validation and generalization of proposed methods. Model replication refers to the process of independently reproducing the results of a previously published AI model or algorithm using the same data, code, and experimental setup. Replication serves as a critical cornerstone of scientific inquiry, allowing researchers to verify the validity and robustness of proposed approaches and to build upon existing knowledge to advance the field. Despite its importance, model replication in MAI research faces several challenges, including a lack of standardization, limited data availability and accessibility, code complexity and poor documentation, a lack of computational resources, and publication bias. MAI research often lacks standardized protocols, benchmarks, and evaluation metrics, making it difficult for researchers to replicate experiments accurately and compare results across studies. Access to high-quality datasets is essential for replicating MAI models, yet many datasets are proprietary, restricted, or difficult to obtain, limiting the reproducibility of research findings. Furthermore, MAI models are often implemented using complex codebases with poorly documented procedures, dependencies, and hyperparameters, hindering replication efforts for researchers without intimate knowledge of the original implementation. Additionally, replicating state-of-the-art AI models may require significant computational resources, including specialized hardware, cloud computing infrastructure, and access to parallel processing capabilities, posing barriers for researchers with limited resources. Finally, positive results are more likely to be published and disseminated than negative or inconclusive findings, leading to a bias toward showcasing successful models and algorithms while overlooking unsuccessful replication attempts or negative results.
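One modest, concrete step toward addressing these challenges is to record the full experimental configuration, including seeds, hyperparameters, and data and software versions, alongside every published result. The sketch below shows a minimal version of this practice; the specific fields and the hypothetical model identifier are assumptions, and real MAI pipelines would capture far more (dataset hashes, hardware details, container images).

```python
# A minimal sketch of recording the experimental setup alongside results so that a
# replication attempt can reconstruct it. Field names here are illustrative assumptions.
import json
import platform
import random
import sys

import numpy as np

def set_seeds(seed: int) -> None:
    # Fix the sources of randomness used by this (hypothetical) pipeline.
    random.seed(seed)
    np.random.seed(seed)

config = {
    "seed": 42,
    "model": "late_fusion_logreg",            # hypothetical model identifier
    "hyperparameters": {"C": 1.0, "max_iter": 1000},
    "data_version": "synthetic-demo-v1",      # placeholder; use a dataset hash in practice
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "numpy": np.__version__,
}

set_seeds(config["seed"])
with open("run_config.json", "w") as f:
    json.dump(config, f, indent=2)
```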

The failure to replicate MAI models can have far-reaching consequences for the field, including erosion of trust, wasted resources, and an incomplete understanding of a model’s limitations. Furthermore, replication failures may deter researchers from exploring new avenues of inquiry or building upon existing work, stifling innovation and inhibiting progress in the field of MAI. Finally, without successful replication, the underlying mechanisms and factors contributing to the performance of AI models remain poorly understood, impeding efforts to develop robust and generalizable solutions to real-world problems.

Replication is critical, perhaps more so in computational models of organ transplantation than in physical experiments, because it increases confidence in the validity of the model, allows more to be learned about the model as seemingly unimportant aspects are brought to light, and, finally, yields more knowledge about the real world through the creation of a common language of modeling concepts. Since a replication must differ from the original model in some respects, it is important to first define those dimensions of difference, even if the same data were used to train the model.

There are six dimensions along which a replication can differ from the original: time – when the model was implemented, which naturally changes in every replication; hardware – the physical machine the model is run on; language – the programming language used to write the model; toolkits – preexisting software that makes coding easier; algorithms – the functions specific to each model; and authors – who created or implemented the model. There are also replication standards (RS) that define whether the replicated model was successful. There are three: numerical identity – the models produce numerically identical results; distributional equivalence – the models’ results are statistically indistinguishable; and relational alignment – the models show qualitatively similar input-to-output relationships. For instance, in a prior case study, Wilensky and Rand attempted to replicate Axelrod’s agent-based model but had to make five large revisions to their replicated version in order to achieve statistical equivalence to the original model. This leads to the conclusion that replication is not a straightforward process and that more information (specifically, the level of detail of the conceptual model, its authorship and implementation, and its availability) needs to be shared along with the original model to encourage replication. Replication is necessary to validate the results and implementation of original models and should become standard practice, especially in any application of (M)AI to healthcare, and particularly in OTx.
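The three replication standards can be turned into concrete checks on model outputs. The sketch below illustrates one possible operationalization using synthetic outputs from an “original” and a “replica” model; the particular statistical test and thresholds are assumptions for illustration, not the definitions used by Wilensky and Rand.

```python
# Illustrative checks for the three replication standards, applied to synthetic
# stand-ins for the outputs of an original model and its replication on the same inputs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
inputs = np.linspace(0, 1, 200)
original = 2.0 * inputs + rng.normal(scale=0.1, size=inputs.size)
replica = 2.0 * inputs + rng.normal(scale=0.1, size=inputs.size)

# 1. Numerical identity: outputs match exactly (rarely achievable across platforms).
numerical_identity = np.array_equal(original, replica)

# 2. Distributional equivalence: output distributions are statistically indistinguishable
#    (here assessed with a two-sample Kolmogorov-Smirnov test at the 0.05 level).
ks_result = stats.ks_2samp(original, replica)
distributional_equivalence = ks_result.pvalue > 0.05

# 3. Relational alignment: both models show qualitatively similar input-to-output
#    relationships (here, strongly correlated responses to the same inputs).
relational_alignment = np.corrcoef(original, replica)[0, 1] > 0.9

print(numerical_identity, distributional_equivalence, relational_alignment)
```

In this synthetic example the two runs will typically fail numerical identity yet satisfy the weaker standards, which is exactly the graded outcome the replication standards are meant to capture.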

Dr. Veljko Dubljevic is a Professor of Philosophy and STS (Science, Technology and Society) at North Carolina State University and also leads the NeuroComputational Ethics Research Group. 

Dr. Thomas Egan is a Professor of Surgery at UNC Chapel Hill and an Adjunct Professor in the joint UNC/NCSU Department of Biomedical Engineering.

Dr. Savitri Fedson is a Professor of Medicine and Clinical Ethics at the Michael E. DeBakey VA Medical Center and Baylor College of Medicine and a transplant cardiologist.

Dr. William Rand is a Professor of Marketing and Analytics and Executive Director of Business Analytics Initiative at North Carolina State University.

Dr. Munindar P. Singh is a Professor of Computer Science at NC State University.
