Author: Alyssa Burgart

ChatGPT is an artificial intelligence (AI) chatbot that processes and generates natural language text, offering human-like responses to a wide range of questions and prompts. Five days after its release, ChatGPT garnered one million users, and the program has been called world-changing, a tipping point for AI, and the beginning of a new technological revolution. From helping physicians form differential diagnoses to answering patient questions, ChatGPT may have transformative implications across medicine. Nevertheless, the full scope of its promise and pitfalls remains unknown. 

Given the attention experts are giving ChatGPT, we asked it (December 15 version) how it would impact medical research. We then asked how it would impact medicine more broadly. The responses appear in Figures 1 and 2 (below), respectively.
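For readers who want to reproduce or extend these prompts, the sketch below shows one way a similar question could be posed programmatically through OpenAI’s current Python client. This is an illustration under stated assumptions: the model name is a placeholder, and the responses in Figures 1 and 2 came from the December 15, 2022 web version of ChatGPT, not from this API.

# Illustrative sketch: query an OpenAI chat model with the same question.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute any available chat model
    messages=[
        {"role": "user", "content": "How will ChatGPT impact medical research?"},
    ],
)

print(response.choices[0].message.content)  # prints the model's free-text answer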

ChatGPT’s Uncritically Optimistic Attitude

What was particularly striking about the program’s response was its uncritically optimistic attitude. ChatGPT emphatically notes its own promise, from analyzing big data and automating menial tasks to improving the accuracy and democratization of research and speeding the clinical implementation of basic science, but gives no consideration to potential pitfalls. In this age of rapid technological advances, innovation can be mistaken for progress if novel tools are not deployed with care.

In a piece published by the National Academy of Engineering, Jasanoff discusses three key temptations of technocracy, or the dangers of relying on technology and science to solve sociopolitical problems. The first is the prevailing attitude that technology drives society while law and ethics hinder progress; innovation is seen as inherently good and virtuous while potential adverse consequences are dismissed. Jasanoff also critiques the temptation to do something simply because it can be done: creating the next paradigm-shifting technology becomes the sole objective instead of rooting out bias or ensuring that innovation meets the needs of broader communities. The final temptation Jasanoff describes is portraying technological failures and societal harm as unintended consequences or products of misuse, thereby absolving designers of their products’ harms. AI tools can undeniably help improve medical research and practice, and to a certain extent, they already have. But as ChatGPT’s response underscores, the deployment of these tools must be accompanied by caution, reflection, and responsibility.

The New Dr. Google

Physicians and other scientists have already expressed concerns over some of ChatGPT’s blind spots. The program offers almost instantaneous responses to complex questions, but its unequivocal confidence is potentially dangerous. These responses can be more dangerous than the existing bias of search engines like Google because users have less opportunity to evaluate the underlying sources. While users of search engines can weigh multiple links and sources of information, ChatGPT often provides a single answer to a complex question, with no alternatives. Given that 89% of people in the United States google their symptoms before seeing a physician, many patients may start consulting “Dr. ChatGPT” but be unable to distinguish useful medical information from potentially dangerous inaccuracies. Additionally, ChatGPT’s accuracy is known to deteriorate on more complex topics, and its knowledge can be outdated because the program is restricted to what it learned before 2021. For example, ChatGPT generated a convincing explanation of “how crushed porcelain added to breast milk can support the infant digestive system.” Medicine is a field with many rare disorders and complex pathophysiology, and using ChatGPT for patient education about these conditions could pose health risks.

Temptations of Technocracy

Additionally, like many other AI tools, ChatGPT can demonstrate prejudice and bias in its answers, despite guardrails against inappropriate requests and responses. For instance, when one user asked ChatGPT to write code determining whether someone would be a good scientist based on race and gender, the program defined a good scientist as white and male. Similarly, when the same user asked whether a child’s life should be saved based on race and gender, ChatGPT offered a function indicating that all lives should be saved, except that of a child who was African American and male. These biases are concerning, but not necessarily unexpected, given that AI tools can perpetuate the prejudice of the data on which they are trained. Historically, these biases have arisen because of small sample sizes and limited diversity in training data, but given that ChatGPT was trained on over 570 gigabytes of online textual data, the program’s biases may instead reflect the universality of bias across the Internet. We understand that ChatGPT was launched as a beta rather than a final technology, and that the program is learning and improving so that these instances become less frequent. OpenAI has been regularly updating ChatGPT to address these biases, but the post-hoc nature of these repairs epitomizes the temptations of technocracy, as well as the dangers of unquestioningly accepting the program’s output. We also understand the value of opening a product to the world to gather user perspectives and concerns, dialogue that is crucial to the development of such technology. However, the record popularity of ChatGPT means that an imperfect tool is being widely deployed without the necessary guardrails in place. It is crucial to move forward, but it is also essential to take a measured approach and ensure that things are done correctly the first time.

Embrace Promise, but Mitigate Harms

AI systems hold immense promise for medicine, and we should certainly celebrate their possibilities and accomplishments. Nevertheless, we need to confront the challenge of mitigating potential harms while amplifying benefits. In medicine, there are ample opportunities for harm and misinformation, so there is a need for caution. Once an AI system is released, we cannot control how its information is used, but we can emphasize to users that the technology remains in an early phase of development and that its answers should not be treated as advice from clinical experts. Moreover, further research is needed to understand how AI systems respond to medical questions. While these considerations are not unique to the use of AI in medicine, the potential for imminent harm to individuals makes it particularly important to carefully evaluate and manage the use of AI in this field. The release of a powerful tool such as ChatGPT will instill awe, but in medicine, it needs to elicit appropriate action to evaluate its capabilities, mitigate its harms, and facilitate its optimal use.

Figure 1. We asked ChatGPT how it would impact medical research. Here is its answer:

As medical research continues to advance, new technologies are emerging that have the potential to revolutionize the way we approach healthcare. One such technology is ChatGPT, a powerful language model trained by OpenAI that can understand and generate human-like text.

ChatGPT has the potential to transform medical research in several ways. First, it can help researchers quickly and accurately analyze large amounts of text-based data, such as medical literature, clinical trial results, and patient records. This can provide valuable insights and help identify trends and patterns that may be difficult for humans to detect on their own.

Second, ChatGPT can assist in the creation of new medical knowledge. By generating text that is similar to human-written text, ChatGPT can help researchers generate hypotheses, write research proposals, and even create entire research papers. This can save time and effort, and help researchers focus on more complex and creative tasks.

Third, ChatGPT can improve the efficiency and accuracy of medical research. By automating certain tasks, such as data analysis and paper writing, ChatGPT can help reduce the risk of human error and ensure that research is conducted in a consistent and reliable manner. This can help improve the quality of medical research and increase confidence in the results.

Overall, ChatGPT has the potential to be a game-changer in the world of medical research. By providing powerful tools for data analysis and knowledge creation, ChatGPT can help researchers gain valuable insights, save time and effort, and improve the quality of their work. As the capabilities of ChatGPT continue to evolve, it is likely to play an increasingly important role in the future of medical research.

– ChatGPT December 15, 2022

Figure 2. We asked ChatGPT how it would impact medicine more broadly. Here is its answer:

In addition to the ways in which ChatGPT can help researchers, it also has the potential to benefit patients and the broader healthcare system. By facilitating the creation of new medical knowledge, ChatGPT can help drive the development of more effective treatments and interventions. 

Another potential benefit of ChatGPT is its ability to help bridge the gap between research and practice. By providing a way for researchers to quickly and easily communicate their findings to clinicians, ChatGPT can help ensure that the latest research is translated into clinical practice more quickly and efficiently. 

Furthermore, ChatGPT has the potential to democratize medical research by making it more accessible and affordable. By automating certain tasks and reducing the need for specialized expertise, ChatGPT can help lower the barriers to entry for researchers from diverse backgrounds and disciplines. This can help foster more inclusive and collaborative medical research, and ultimately benefit patients and the broader healthcare system.

In conclusion, ChatGPT is a powerful technology with the potential to transform medicine more broadly. It can benefit patients and the broader healthcare system by facilitating the development of new treatments and interventions, and making medical research more accessible and inclusive. As ChatGPT continues to evolve, it is likely to become an increasingly important tool in the world of medicine.

– ChatGPT December 15, 2022

by Rushabh H. Doshi, MPH, MSc (@RushabhHDoshi); Simar S. Bajaj (@SimarSBajaj); Harlan M. Krumholz, MD, SM (@hmkyale)

Disclosure: In the past three years, Harlan Krumholz received expenses and/or personal fees from UnitedHealth, Element Science, Aetna, Reality Labs, Tesseract/4Catalyst, F-Prime, and the Martin/Baughman Law Firm. He is a co-founder of Refactor Health and HugoHealth, and is associated with contracts, through Yale New Haven Hospital, from the Centers for Medicare & Medicaid Services and through Yale University from Johnson & Johnson, Google, and Pfizer.

Rushabh Doshi and Simar Bajaj have no potential conflicts of interest to disclose.
