Abstract
Purpose
To evaluate the utility of CataractBot, a Large Language Model (LLM)-powered chatbot that provides doctor-verified answers to patient questions about cataract surgery. We examine its use by both end-users (patients and attendants) and medical experts.
Methods
A 24-week study was conducted to evaluate CataractBot among patients, their attendants, doctors, and patient coordinators. The bot responded instantly to questions by querying a knowledge base curated by medical professionals. Each response was asynchronously verified by an ophthalmologist (for medical questions) or a patient coordinator (for logistical questions), and their edits contributed to updating the knowledge base, thereby minimizing future expert intervention. A mixed-methods analysis was conducted on interaction logs, including patient and attendant questions, chatbot answers, and expert verifications.
Results
A total of 318 patients and attendants sent 1,992 messages, and LLM-generated answers were verified by five doctors and two coordinators. Significantly more questions were asked pre-surgery than post-surgery.
Conclusion
CataractBot was predominantly used to address medical questions. It incorporated expert corrections to improve its answers and reduce the experts’ bot-related workload over time. This study highlights the potential of LLM-powered chatbots to support patient-provider communication in ophthalmology.
