Abstract
Background
Patients with fibromyalgia require clear and reliable medical information to manage a complex chronic condition. AI-based tools may offer valuable support for patient education.
Objective
To evaluate and compare the performance of three AI models (ChatGPT, Gemini, and DeepSeek) in providing patient-centered, accurate information about fibromyalgia syndrome (FMS). Specifically, the study focuses on medical accuracy, readability, and the use of patient-oriented language.
Methods
Ten frequently asked questions about FMS, selected based on global search trends and expert input, were posed to each AI model. The responses were evaluated by a team of physiotherapists using a 4-point Likert scale.
Results
Statistical analysis revealed significant differences in response quality for certain questions, with the models performing similarly for others. Overall, ChatGPT provided more accessible, accurate, and patient-friendly answers than Gemini and DeepSeek. Readability analysis using the Flesch-Kincaid Grade Level showed that ChatGPT's responses generally required lower reading grade levels, making them more accessible, whereas Gemini produced more complex responses requiring higher reading levels. DeepSeek's responses fell in the mid-range of readability.
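The Flesch-Kincaid Grade Level used in the readability analysis is a standard formula based on average sentence length and average syllables per word. A minimal sketch of how such a score can be computed is shown below; the vowel-group syllable counter is a rough heuristic for illustration, not necessarily the tool used in the study:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups (minimum one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Lower scores indicate text readable at a lower school grade level; short sentences of monosyllabic words score near or below grade 1, while long sentences of polysyllabic medical terminology score far higher.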
Conclusions
The findings suggest that AI tools can be a valuable resource for patient education, but caution is advised, particularly in areas such as diagnosis, treatment, and medication use. AI-generated responses should always be verified by healthcare professionals to ensure their accuracy and relevance. When used properly and under appropriate supervision, AI can enhance patient understanding and improve access to reliable information about FMS.
