
Diagnostic Performances of GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro in “Diagnosis Please” Cases

Abstract

Background: Large language models (LLMs) are rapidly advancing and demonstrate high performance in understanding textual information, suggesting potential applications in interpreting patient histories and documented imaging findings. LLMs continue to improve, and further gains in their diagnostic ability are expected. Furthermore, there has been a lack of comprehensive comparisons between LLMs from different developers.

Purpose: We tested the diagnostic performance of the latest three major LLMs (GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro) using Radiology Diagnosis Please cases, a monthly diagnostic quiz series for radiology experts.

Materials and Methods: Clinical history and imaging findings, as provided textually by the case submitters, were extracted from 324 quiz questions from Radiology Diagnosis Please cases. GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro each generated the top three differential diagnoses. Diagnostic performance among the three LLMs was compared using Cochran's Q and post-hoc McNemar tests.

Results: The diagnostic accuracies for the primary diagnosis were 41.0%, 54.0%, and 33.9% for GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro, respectively. When any of the top three differential diagnoses was counted as correct, the rates improved to 49.4%, 62.0%, and 41.0%, respectively. Significant differences in diagnostic performance were observed among all pairs of models.

Conclusion: In a comparison of the latest LLMs, Claude 3 Opus outperformed GPT-4o and Gemini 1.5 Pro in solving radiology quiz cases. These models appear capable of assisting radiologists when supplied with accurate, well-worded descriptions of imaging findings by radiologists.

Summary Statement: Claude 3 Opus achieved the highest diagnostic accuracy, followed by GPT-4o and Gemini 1.5 Pro, in a comparison of their performance on 324 text-based Radiology Diagnosis Please cases.

Key Results: This study compared the diagnostic performances of the latest three major large language models, GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro, using clinical history and textualized imaging findings in Radiology Diagnosis Please cases. The top three differential diagnoses generated by GPT-4o, Claude 3 Opus, and Gemini 1.5 Pro achieved diagnostic accuracies of 49.4%, 62.0%, and 41.0%, respectively, with statistically significant differences between each model's performance.
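
The statistical comparison described in Materials and Methods (per-case correct/incorrect judgments for three models on the same 324 cases, compared with Cochran's Q and post-hoc McNemar tests) can be illustrated with a minimal sketch. This is not the authors' code: the 0/1 outcome vectors below are simulated placeholders for the real results, and the example assumes the statsmodels library is available.

```python
# Illustrative sketch only (not the authors' analysis code): the binary
# outcome vectors are simulated stand-ins for the real 324-case results.
import numpy as np
from itertools import combinations
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

rng = np.random.default_rng(0)
models = ["GPT-4o", "Claude 3 Opus", "Gemini 1.5 Pro"]

# One row per case, one column per model; 1 = correct primary diagnosis.
# Success probabilities roughly mirror the reported accuracies.
correct = np.column_stack(
    [rng.binomial(1, p, size=324) for p in (0.41, 0.54, 0.34)]
)

# Cochran's Q: do the three models differ in accuracy on the same cases?
q = cochrans_q(correct)
print(f"Cochran's Q = {q.statistic:.2f}, p = {q.pvalue:.4f}")

# Post-hoc pairwise McNemar tests built from each pair's 2x2 agreement table.
for i, j in combinations(range(len(models)), 2):
    table = np.zeros((2, 2), dtype=int)
    for a, b in zip(correct[:, i], correct[:, j]):
        table[a, b] += 1
    res = mcnemar(table, exact=True)
    print(f"{models[i]} vs {models[j]}: McNemar p = {res.pvalue:.4f}")
```

In practice, a multiple-comparison correction (for example, Bonferroni) is usually applied to the three pairwise p values; the abstract does not state which correction, if any, was used.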

Related Results

Exploring Large Language Models Integration in the Histopathologic Diagnosis of Skin Diseases: A Comparative Study
Abstract Introduction The exact manner in which large language models (LLMs) will be integrated into pathology is not yet fully comprehended. This study examines the accuracy, bene...
Assessment of Chat-GPT, Gemini, and Perplexity in Principle of Research Publication: A Comparative Study
Abstract Introduction Many researchers utilize artificial intelligence (AI) to aid their research endeavors. This study seeks to assess and contrast the performance of three sophis...
Hydatid Disease of The Brain Parenchyma: A Systematic Review
Abstract Introduction Isolated brain hydatid disease (BHD) is an extremely rare form of echinococcosis. A prompt and timely diagnosis is a crucial step in disease management. This ...
Primary Thyroid Non-Hodgkin B-Cell Lymphoma: A Case Series
Abstract Introduction Non-Hodgkin lymphoma (NHL) of the thyroid, a rare malignancy linked to autoimmune disorders, is poorly understood in terms of its pathogenesis and treatment o...
Diagnostic Performance of Claude 3 from Patient History and Key Images in Diagnosis Please Cases
Abstract Background Large language artificial intelligence models have shown diagnostic performance based solely on textual information from clinical history and imaging findin...
Microwave Ablation with or Without Chemotherapy in Management of Non-Small Cell Lung Cancer: A Systematic Review
Abstract Introduction  Microwave ablation (MWA) has emerged as a minimally invasive treatment for patients with inoperable non-small cell lung cancer (NSCLC). However, whether it i...
