Abstract
ChatGPT is a generative artificial intelligence program; its recent introduction has created controversy in the academic world. This commentary discusses the utility of ChatGPT for exploring healthcare issues such as chronic pain and associated conditions. To illustrate potential applications, this commentary presents an example of ChatGPT applied to brain fog, a phenomenon that has been increasingly discussed in both academic and social media discourses. Further, the potential advantages and dangers of ChatGPT are described. Noted advantages include facilitating searches, drafting patient information, identifying opportunities for future research, and highlighting areas lacking consensus. Dangers of ChatGPT include the possibility of misinformation and the reproduction of social stereotypes and assumptions. Lastly, this commentary concludes with recommendations for healthcare professionals considering use of ChatGPT. As artificial intelligence-driven technologies like ChatGPT become increasingly available and trusted in our society, so grows the importance of our awareness of both the benefits and the potential for harm when artificial intelligence is used for non-critical information seeking and self-education.
Introduction
ChatGPT is a generative artificial intelligence (AI) program; its recent introduction has created controversy in the academic world. 1 Generative AI refers to electronic programs that utilize information from available databases to produce outputs resembling human actions and conversation. 1 Similar to other forms of generative AI, ChatGPT works by synthesizing vast amounts of information across electronic databases and the internet, formulating seemingly accurate and convincing texts in response to any question asked by the user. 1 Researchers across diverse fields have discussed the utility of ChatGPT in 1) aiding literature searches by identifying relevant sources, 2) quickly predicting trends in data to support research decision making, and 3) summarizing and communicating information in an accessible and timely manner. 2
This commentary discusses the utility of generative AI for exploring healthcare issues such as chronic pain and associated conditions. We present an example using ChatGPT 3.5, the free version, to illustrate potential applications. Further, the potential advantages and dangers of ChatGPT are described. The commentary concludes with recommendations for healthcare professionals considering use of ChatGPT.
Application
Given the posited practicality and usefulness of ChatGPT in research, we were curious to apply it to the context of our current research on brain fog in persons with chronic pain. 3 Brain fog lacks a formal definition but is often described as a state of “mental cloudiness” characterized by fluctuating issues with attention, memory, and executive function. 4 Given that brain fog is a relatively new concept, we hypothesized that an AI-generated description would be biased towards social and media discourses; however, the results were similar to both the academic and media discourses we have read on the topic. Below we present three examples of questions we asked ChatGPT 3.5 and the answers we received (Table 1):
Table 1. ChatGPT and brain fog application.
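Readers wishing to reproduce or extend such queries programmatically, rather than through the web interface we used, could do so via the OpenAI Python client. The sketch below is a minimal, illustrative example only: it assumes the openai package (version 1.0 or later) and an OPENAI_API_KEY environment variable, uses "gpt-3.5-turbo" on the assumption that it is the API counterpart of the free web version, and substitutes hypothetical questions for the exact wording used in Table 1.

# Minimal sketch for querying ChatGPT via the OpenAI API.
# Assumes openai>=1.0 and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative questions; the exact wording used in Table 1 may differ.
questions = [
    "What causes brain fog in persons with chronic pain?",
    "How can brain fog be managed?",
    "How is brain fog measured?",
]

for question in questions:
    # "gpt-3.5-turbo" is assumed here as the API model most comparable
    # to the free web version of ChatGPT 3.5.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)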
Discussion
The responses provided by ChatGPT are consistent with the existing literature, illustrating both the potential benefits and pitfalls of adopting the technology within healthcare and research. Some responses were accurate, suggesting utility for information seeking by healthcare professionals. For example, ChatGPT was able to synthesize potential causes of brain fog identified in the literature, such as inflammation, sleep, and pain medications, 5–7 and suggested plausible, if generic, management approaches. However, when asked how to measure brain fog, ChatGPT did not generate a single accurate response. There is no such thing as the “Brain Fog Questionnaire,” and we are unaware of neuroimaging or biomarker studies within the context of brain fog. 3 Also, a study investigating long-COVID-related brain fog found that the Montreal Cognitive Assessment (MoCA) was not sensitive enough to detect cognitive changes in persons with long COVID; 8 this is concordant with the mixed findings of earlier studies using such tools to investigate “fibrofog.” 3 ChatGPT’s responses may also be inconsistent: when given the more specific prompt “please provide the name of a questionnaire to measure brain fog,” ChatGPT acknowledged that there are no specific measures of brain fog but offered examples of existing measures of cognition. Interestingly, we noted that ChatGPT was most accurate when describing topics that have been increasingly investigated, such as potential causes, and least accurate when responding to inquiries about topics that remain under-investigated (e.g., measurement of brain fog). This illustrates the dependence of ChatGPT on the quality and quantity of information existing on a given topic, and its tendency to “confabulate” when information is lacking.
Recommendations
Given the potential strengths and limitations of ChatGPT, we have developed four recommendations for use by healthcare professionals and researchers:
1. ChatGPT should be used with caution as a starting source to facilitate information searches. ChatGPT can provide a brief and accessible synthesis of the literature that may help healthcare professionals form a foundational understanding of a well-established topic or phenomenon, akin to a reference textbook that is constantly updating itself. This may improve evidence-informed practice by easing time-consuming searches for relevant sources of synthesized information in busy and under-resourced healthcare settings. 9 However, given the potential for imprecise or fictitious results from generative AI, critical reading and triangulation of outputs through peer debriefing or additional sources are required.
2. ChatGPT may be helpful for drafting patient information. In our example, ChatGPT provided a concise (albeit generic) summary of potential treatments that could form the basis for plain-language educational materials, with careful editing for veracity and relevance. ChatGPT could therefore be used to draft health teaching tools. However, as with many other forms of AI used in healthcare settings (e.g., machine learning 10 ), ChatGPT is rooted in existing knowledge and may reproduce the stereotypes and assumptions of its sources. Healthcare professionals must remain aware of both human and AI biases during patient interactions.
3. ChatGPT can inform research, including opportunities for further investigation. ChatGPT can also highlight areas of confusion across the literature and public discourses, and its inaccurate responses may flag topics requiring scientific clarification. This is evidenced by the misinformation provided regarding brain fog measurement, reinforcing the need for future studies to identify reliable and valid measures of brain fog. 3 Further, while ChatGPT states that exercise can improve brain fog in persons with chronic pain, this evidence is currently lacking for pain, and the response fails to consider the challenges that persons with brain fog may face with exercise interventions. 11 The process of fact-checking may itself identify gaps and spark critical reflection on unmet needs.
4. ChatGPT can identify areas of confusion across the health literature and public resources. Similar to recommendation #3, by examining inaccurate ChatGPT responses, healthcare professionals can understand common misconceptions about a given phenomenon; in turn, this may assist them in understanding patient perspectives. As noted in recommendation #2, ChatGPT uses existing information that may be based on stereotypes. Healthcare professionals may utilize this information to reflect on potential biases they might unknowingly reproduce in their practice.
Overall, this commentary has illustrated considerations for the adoption of generative AI in clinical and research settings. Given the potential pitfalls, it is the responsibility of the user to be critical and reflexive about its outputs. If not used appropriately, generative AI such as ChatGPT may add to confusion. However, when triangulated with prior experience, reputable sources, and critical reasoning, ChatGPT can be engaged to facilitate clinical decision making, develop resources, and highlight areas of uncertainty.
Limitations
In this commentary, we used ChatGPT 3.5 because it is free and widely accessible. A commentary exploring the use of ChatGPT in a different clinical population reported that ChatGPT 4.0 provided more conservative responses than the earlier version. 12 However, that model currently requires a monthly subscription of 20 US dollars and may not be affordable for all researchers or healthcare professionals.
Implications
We have used ChatGPT 3.5 as one example, but our recommendations may be generalizable to other, similar applications.
Researchers may use AI-generated responses to consider what types of information need to be disseminated to the public. This can facilitate knowledge mobilization by identifying needed educational tools or guidelines for healthcare professionals and policy makers. Educators may also use ChatGPT outputs as a catalyst for literature search exercises for trainees, evoking critical thinking and the search for robust counter-arguments to misinformation.
Conclusion
To conclude, as AI-driven technologies like ChatGPT become increasingly available and trusted in our society, so grows the importance of our awareness of both the benefits and the potential for harm when AI is used for non-critical information seeking and self-education.
Footnotes
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
