Comprehensive Summary
In this study, Bárbara Lemos Pereira Simão and colleagues evaluated how ChatGPT compares with human primary care physicians in prescribing medical treatments. The research was carried out in three phases. In the first phase, 860 acute care consultations were reviewed by pairs of doctors, who provided a diagnosis and treatment plan for each case. In the second phase, the same cases were entered into ChatGPT, and its suggested diagnoses and treatment plans were recorded. In the final phase, the doctors’ and ChatGPT’s recommendations were compared and classified as “Agree,” “Partially agree,” or “Disagree.” Simão and colleagues found that ChatGPT produced as many correct treatments as the human doctors and a lower percentage of incorrect ones. When the doctors’ and ChatGPT’s treatment options differed, ChatGPT was correct slightly more often. This study highlights the promise of artificial intelligence in healthcare diagnosis and treatment.
Outcomes and Implications
Artificial intelligence is projected to revolutionize today’s healthcare system and can be a powerful tool in diagnosis and treatment. It can be applied in areas such as image analysis in dermatology and electrocardiogram interpretation for diagnosing heart failure, among many others. This study suggests that ChatGPT can at times outperform medical doctors in the acute care setting. However, AI has several limitations, including bias and a lack of human personalization, both of which are essential considerations in healthcare. Thus, AI should be viewed not as a replacement for doctors but as a tool to help advance the field.