Artificial intelligence is everywhere these days, but how does it cope with following guidelines for treating depression?
About the study
ChatGPT will see you now. The artificial intelligence tool may be better than a doctor at following recognized treatment standards for depression without the gender or social class biases sometimes seen in the physician-patient relationship, a study suggests.
The findings were published in Family Medicine and Community Health, an open-access journal owned by BMJ. The researchers said further work was needed to examine the risks and ethical issues arising from AI’s use.
Globally, an estimated 5% of adults have depression, according to the World Health Organization. Many turn first to their GP, and evidence-based clinical guidelines, matched to the severity of the depression, should largely guide the recommended treatment.
ChatGPT has the potential to offer fast, objective, data-based insights that can supplement traditional diagnostic methods as well as provide confidentiality and anonymity, according to researchers from Israel and the UK.
Details of the research study
The study team compared how the AI tool evaluated cases of mild and severe depression with the evaluations of 1,249 French primary care doctors, 73% of whom were women. The researchers used hypothetical case studies of patients with symptoms of sadness, sleep problems, and loss of appetite during the previous three weeks, together with a diagnosis of mild to moderate depression.
Eight versions of these vignettes were developed with different variations of patient characteristics, such as gender, social class, and depression severity. Each vignette was repeated ten times for ChatGPT versions 3.5 and 4.
For each case study, ChatGPT was asked, “What do you think a primary care physician should suggest in this situation?” The possible responses were –
- Watchful waiting;
- Referral for psychotherapy;
- Prescribed drugs (for depression/anxiety/sleep problems);
- Referral for psychotherapy plus prescribed drugs; or
- None of these.
What does the research say?
Only just over 4% of the family doctors exclusively recommended referral for psychotherapy for mild cases, in line with clinical guidance, compared with ChatGPT-3.5 and ChatGPT-4, which selected this option in 95% and 97.5% of cases, respectively, the BMJ said.
In severe cases, 44.5% of the doctors recommended psychotherapy plus prescribed drugs. ChatGPT proposed this option, which is in line with clinical guidelines, more frequently than the doctors did (72% for version 3.5; 100% for version 4).
Four in ten of the doctors proposed prescribed drugs exclusively, which neither ChatGPT version recommended.
“ChatGPT-4 demonstrated greater precision in adjusting treatment to comply with clinical guidelines. Furthermore, no discernible biases related to gender and [socioeconomic status] were detected in the ChatGPT systems,” the researchers said.
However, the researchers cautioned that there were ethical issues to consider, adding that AI should never be a substitute for human clinical judgment in diagnosing or treating depression. They also acknowledged several limitations of their study.
Nevertheless, they concluded: “The study suggests that ChatGPT has the potential to enhance decision-making in primary healthcare.”
What are your thoughts on this article? Share them in the comments.