Abstract
This short report evaluated the accuracy and quality of information provided by ChatGPT regarding the use of complementary and integrative medicine for cancer. Using the QUality Evaluation Scoring Tool, a panel of 12 reviewers assessed ChatGPT's responses to 8 questions. The study found that ChatGPT provided moderate-quality responses that were relatively unbiased and not misleading. However, the chatbot's inability to reference specific scientific studies was a significant limitation. Patients with cancer should not rely on ChatGPT for clinical advice until further systematic validation. Future studies should examine how patients perceive ChatGPT's information and its impact on communication with health care professionals.
