Dear Editor,
We are writing to express our thoughts on ChatGPT (Chat-based Generative Pre-trained Transformer) and its impact on academic cardiology. With rapid advances in artificial intelligence (AI) and its potential applications in healthcare, ChatGPT has attracted significant attention since its launch in November 2022. 1
ChatGPT created a buzz with its ability to pass exams conducted in law and business schools, answering both multiple-choice and essay questions. 2 It also proved to be a valuable asset in the medical field, performing exceptionally well on the United States Medical Licensing Examination (USMLE). 3
In the AMSTELHEART-2 study ("AI-assisted decision support tool in medicine: a proof-of-concept study for interpreting symptoms and management of common cardiac conditions"), ChatGPT correctly answered 74% of the questions, with slight variation in accuracy among the various subgroups of heart disease. In the case vignettes, its responses showed 90% agreement with actual medical advice; however, in complex cases where practitioners needed assistance from cardiologists, its accuracy fell to 50%. 4
ChatGPT has the potential to revolutionize medical education and clinical practice in cardiology. It is also a useful tool for literature review, data analysis, and collaborative writing, saving time and effort, and it may shape the future of medical writing, with human involvement reserved for quality assurance. It can also aid telemedicine, patient education, and remote monitoring, thereby improving patient care and access to information.
However, as with any new technology, there are concerns regarding the ethical issues and challenges that might arise from its use in academic cardiology. One concern is the reliability and accuracy of the information ChatGPT provides, as it relies solely on the data it has been trained on. There is also the risk that overuse of ChatGPT could erode clinical judgment, human expertise, and analytical thinking. Notably, there are concerns about patient privacy, data security, and potential biases in the training data used for ChatGPT.
This development could have serious consequences for the scientific community, as it may lead to misinformation and misleading findings. 5 Researchers may unknowingly cite or base their work on inaccurate or unreliable information, which could negatively impact the validity and integrity of the scientific literature. 6
The lead author of "Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model," published in JAMA, commented that "In a field like medicine, even a small limitation can be critical," and "Even if the limitation seems small, on an individual patient level, one mistake could be life threatening". 7
As AI tools cannot take accountability for their work, there is a need to set appropriate guidelines and standards for the use of ChatGPT in academic cardiology, including validation of the information it provides against trusted sources and the preservation of human expertise and critical thinking in research and clinical practice. We believe that ChatGPT can be a boon to academic cardiology if used appropriately and ethically, but it also poses challenges that must be addressed proactively. We encourage further research and discussion on this topic within the academic cardiology community to ensure that we harness the potential of ChatGPT while mitigating its risks.
We urge the IJCC journal to implement measures to mitigate the potential risks associated with abstracts written by AI language models. These could include clear guidelines requiring authors to disclose the use of AI in their research, verification of the accuracy of abstracts through peer review, and promotion of critical evaluation of all content, including abstracts, by researchers and readers.
As we become more reliant on AI for scientific research, maintaining high standards of accuracy and integrity in the publication process becomes essential. We hope the journal will consider this issue thoroughly and take appropriate steps to address the challenges associated with abstracts written by ChatGPT or other AI language models.
Footnotes
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
