Can GPT-4 diagnose complex illnesses?

  • June 26, 2023
  • Steve Rogerson

GPT-4 is not very good at medical diagnosis, but it is getting there, according to researchers at the Beth Israel Deaconess Medical Center (BIDMC) in Massachusetts.

In a recent experiment published in JAMA, researchers tested GPT-4’s ability to make accurate diagnoses in challenging medical cases. The team found that the generative AI selected the correct diagnosis as its top diagnosis nearly 40 per cent of the time and included the correct diagnosis in its list of potential diagnoses in about two-thirds of difficult cases.

Generative AI refers to a type of artificial intelligence that uses patterns and information it has been trained on to create new content, rather than simply processing and analysing existing data. Some of the most well-known examples of generative AI are so-called chatbots, which use a branch of AI called natural language processing (NLP) that allows computers to understand, interpret and generate human-like language.

Generative AI chatbots are powerful tools poised to change creative industries, education, customer service and more. However, little is known about how they perform in clinical settings, particularly in complex diagnostic reasoning.

“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardised medical examinations,” said Adam Rodman, co-director at BIDMC and an instructor in medicine at Harvard Medical School. “We wanted to know if such a generative model could think like a doctor, so we asked one to solve standardised complex diagnostic cases used for educational purposes. It did really, really well.”

To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex patient cases including relevant clinical and laboratory data, imaging studies, and histopathological findings published in the New England Journal of Medicine for educational purposes.

Across the 70 CPC cases evaluated, the artificial intelligence exactly matched the final CPC diagnosis in 27 (39 per cent). In 64 per cent of the cases, the final CPC diagnosis was included in the AI’s differential – a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings, and laboratory or imaging results.

“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School. “It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking. We need more research on the optimal uses, benefits and limits of this technology, and a lot of privacy issues need sorting out, but these are exciting findings for the future of diagnosis and patient care.”

Co-author Byron Crowe, an internal medicine physician at BIDMC and an instructor in medicine at Harvard Medical School, added: “Our study adds to a growing body of literature demonstrating the promising capabilities of AI technology. Further investigation will help us better understand how these new AI models might transform health care delivery.”

BIDMC is a patient care, teaching and research affiliate of Harvard Medical School. It is the official hospital of the Boston Red Sox baseball team.