Abstract
This brief commentary explores the opportunities and challenges presented by the increasing prevalence of artificial intelligence in the field of psychology in South Africa. Artificial intelligence has the potential to revolutionise teaching and learning, research, content production, and professional services, but it also presents some challenges to academic and professional psychology in South Africa. While generative artificial intelligence tools can produce written work, such as assignments, literature reviews, and theses, they cannot, at this stage, replace the human reasoning and critical thinking abilities required to argue a particular point. Artificial intelligence chatbots can also act as teaching assistants and even provide complex psychological interventions such as cognitive-behavioural therapy. In research and publication, artificial intelligence can increase efficiency and provide new insights and perspectives by detecting patterns and relationships that may have been overlooked by human researchers. However, the use of artificial intelligence raises ethical concerns, particularly around ownership and authorship of artificial intelligence–generated content, potential biases, and errors. The commentary concludes that as artificial intelligence technology continues to evolve, and as the human–artificial intelligence partnership continues to unfold, it is important to recognise the risks associated with its use in academic writing and to ensure that psychology students develop appropriate research skills.
Introduction
Artificial intelligence (AI) is becoming increasingly pervasive in the discipline of psychology, impacting various aspects of education, research, publishing, and professional practice. AI is revolutionising academic research, content creation, teaching and learning, and even professional psychological services, at an unprecedented rate because of technological advancements. This brief commentary explores the possibilities and challenges that AI presents for South African academics, authors, researchers, and practitioners in the field of psychology (Fulmer et al., 2021; Luxton, 2014; Popenici & Kerr, 2017; S. V. Singh & Hiran, 2022; Sutton, 2023).
Defining AI and the fields in which it is used
AI refers to computer systems designed to carry out tasks requiring human-like intelligence (Copeland, 2023). Narrow AI performs specific tasks, while general AI can perform many of the cognitive tasks that humans can. Machine learning (ML) is a subset of AI that enables computer systems to improve their performance by learning from input data. The accuracy and effectiveness of ML depend on the nature of the data used to train it, which can be problematic if the data contain any errors or biases. Chatbots, which simulate human conversation and tasks, are becoming more common in education and professional settings (Adamopoulou & Moussiades, 2020). AI has the potential to disrupt many industries by automating tasks, conducting data analyses, and making judgements based on patterns and trends. This could lead to job displacement, necessitating the development of new skills. However, while AI disruption may boost productivity and reduce costs, it may also worsen economic inequality, which would require much ethical consideration (Ahmed et al., 2022; Campbell et al., 2022; Dwivedi et al., 2021, 2023; Fulmer et al., 2021; Russell & Norvig, 2021).
The hazards and benefits of AI platforms for teaching and learning
Despite the above-mentioned disruptions, generative AIs, such as ChatGPT-4, Bard (by Google), and Writesonic, can produce written work, including entire assignments, literature reviews, and even theses, with minimal input (Dwivedi et al., 2023; Rosenzweig-Ziff, 2023). However, the extent to which these tools can replace human intellect in academic writing is debatable. While they excel at summarising and setting up a framework, they cannot match the human reasoning and critical thinking abilities required to argue a particular point (Ruttkamp-Bloem, personal communication, May 3, 2023). Turnitin and other plagiarism checkers have been updated with detectors of AI-generated content, but these systems are not yet perfect and may falsely flag some human-written content as AI-generated (Clayton, 2023; Crawford et al., 2023; Dwivedi et al., 2023; Rosenzweig-Ziff, 2023; Sun & Hoelscher, 2023). However, other AI-based detection software, such as GPTZero and ZeroGPT, has performed well in identifying AI-generated text through markers such as burstiness (i.e., concentrated clusters of specific words or phrases), perplexity, and randomness.
Furthermore, AI websites can instantly generate bullet-point summaries of entire chapters of psychology literature. While this technology is beneficial, it may deprive students of the opportunity to engage in the process of writing their own study notes, which is an integral part of the traditional learning experience (Dwivedi et al., 2023); in the process, students may even lose their own voice in summarising what they have learned. However, one could also argue that, to create a more effective learning experience, psychology students could be encouraged to compare their study notes with those produced by generative AIs. In addition, in the digital age, note-taking has evolved, with students often relying on digital tools and concept mapping software, such as ‘Coggle’ (not to be confused with ‘Google’), to capture essential information.
Despite the risks associated with the use of AI in the classroom, such as an increased risk of cheating, plagiarism, and the failure to develop appropriate research skills, there are definite benefits for psychology students and academics. Students from non-English-speaking backgrounds can produce assignments in more polished academic English, and those with neurodevelopmental disorders can present work on par with their peers (Ruttkamp-Bloem, personal communication, May 3, 2023). Research even suggests that English second-language speakers write better after learning sentence structures and key phrases through the ongoing feedback they receive from the AI tool they are using (Su et al., 2019).
AI in education is also revolutionising assessment by introducing cutting-edge techniques that evaluate critical thinking and higher-order cognitive abilities. This is accomplished through personalised assessments, real-time feedback, and adaptive learning, which promote a shift from traditional rote memorisation to a more comprehensive, skills-focussed approach, ultimately equipping students for the difficult problems of the 21st century (Almusaed et al., 2023).
AI can be a valuable tool for academics to produce professional-looking teaching and learning content, such as presentation slides that masterfully summarise complex information. Some AI platforms can even generate test questions and grading sheets, freeing educators’ time for supervision and individual attention (Dwivedi et al., 2023; McFarland, 2023; Viljoen, 2023). An educator can even upload assignments submitted by students and have the AI technology mark them against the very same AI-created rubric, saving the psychology lecturer considerable time. One study also suggests that AI chatbots could act as teaching assistants for university students, helping them learn basic information in a dynamic way (Chen et al., 2023). Online learning and AI-driven activities could offer benefits beyond traditional lectures, enabling active student engagement and self-regulated learning in well-designed digital environments. Some psychology educators even believe that AI presents an opportunity to increase critical thinking in the classroom by promoting interactive, concept-based learning; encouraging learners to analyse, evaluate, and defend their own ideas; fostering discussions; and enhancing real-world readiness (Abramson, 2023).
While AI platforms offer benefits, they cannot replace human reasoning and critical thinking abilities required for well-reasoned arguments. As AI technology continues to evolve, it is crucial to recognise the risks associated with its use in academic writing and ensure that students develop appropriate research skills. University psychology departments could provide guidelines for using generative AI in writing tasks to help students understand its limitations. A further way to avoid last-minute dependency on AI for assignment completion is to encourage students to routinely work on their assignments throughout the semester.
Research and publication
Using AI in psychological research and publication can offer numerous benefits, such as language translation, summarisation, language and plagiarism checks, and data analysis. It can increase the efficiency of research and the amount of research produced. AI can also provide new insights and perspectives by detecting patterns and relationships that may have been overlooked by human researchers (Regorz Statistik, 2023; Russell & Norvig, 2021; S. Singh, 2023).
However, the use of AI also raises ethical concerns, particularly around ownership and authorship of AI-generated content, potential biases, errors, and fake information. AI tends to cite and reference hypothetical or inaccessible sources (Abdelaal et al., 2019; Dwivedi et al., 2023). Alkaissi and McFarlane (2023), for example, found that ChatGPT cited non-existent references in every instance (100%), using the names of published experts from the wrong field, when asked to write a scientific research paper.
There is also a growing concern about political biases influencing AI algorithms, as demonstrated by Peters (2022) and Rozado (2023). Many believe that most AI platforms carry the unbalanced, left-leaning socio-political bias of their creators and therefore seem to reflect the editorialisation and narratives of their developers (Meekins, 2023; Rozado, 2023). Rozado (2023) reports an experiment in which 15 different political orientation tests were administered to ChatGPT. The results consistently showed a left-leaning viewpoint, raising ethical questions about the validity of its answers. Whether one agrees with this concern or not, it is understood that algorithmic bias does arise from skewed training data (i.e., data that are unrepresentative of the broader population), leading to various biases, including socio-political, race, culture, language, and education biases – and not simply one predominant bias (Aldoseri et al., 2023).
Professional psychology
From a professional perspective in the field of psychology, AI is proving to be a useful tool in the promotion of mental well-being. For instance, Ly et al. (2017) demonstrated that AI conversational bot programmes developed for mobile phones can promote mental well-being among non-clinical populations (see also Abbas et al., 2018; Luxton, 2015; Rauws et al., 2019).
AI algorithms are also being used to assess physical and mental health, diagnose psychological disorders, and refer individuals for treatment by analysing speech, facial expressions, and physiological measures. These algorithms can identify patterns and connections in datasets, leading to accurate and unbiased diagnoses and recommendations for further testing (Al Hanai et al., 2018; Howard et al., 2023; Luxton, 2014, 2015; Nastasi et al., 2023; Sallam et al., 2023). AI is even demonstrating some potential in the area of sports psychology to aid in the prediction of sporting performance; and in terms of industrial psychology, AI might be a way to bolster human and team performance (Deloitte Insights, 2023; Noorbhai, 2022).
Furthermore, AI is being used to develop and deliver personalised treatment plans for individuals with mental health conditions. Chatbots and virtual agents can provide automated therapy and support, including cognitive-behavioural therapy (CBT), in a cost-effective and scalable manner that has been proven to be effective (Fulmer et al., 2018, 2021). However, some authors have raised concerns about the use of AI therapeutics without sufficient research into ethical safeguards (Al Hanai et al., 2018; Fitzpatrick et al., 2017; Fulmer et al., 2018, 2021; Luxton, 2015). While the integration of AI in mental health treatment, such as automated therapy through chatbots, may offer a cost-effective and scalable solution for developing countries, ethical concerns, including the need for refined regulations that define ‘psychological acts’ and set ethical standards in cyberspace (such as confidentiality, anonymity, and safe recordkeeping), must be addressed to ensure public safety and ethical safeguards in AI therapeutics.
There has already been a progressive move to using computer-based psychometric testing more than traditional projective assessments with pencil and paper (Joubert & Kriek, 2009; Piotrowski, 2015). Incorporating AI into industrial psychology might further enhance selection and development in organisations, offering efficiency and objectivity. However, potential issues again include data bias, reduced human interaction, privacy concerns, and skill requirements. Striking a balance between AI benefits and challenges is crucial for effective talent management (Tippins et al., 2021).
Final remarks
AI’s impact on psychology education, research, and practice is a topic of much discussion. AI chatbots have already demonstrated the ability to deliver complex interventions, such as CBT, suggesting a potential transformative shift in psychology. However, the notion that mental health professionals’ roles could be reduced to merely connecting patients with AI virtual therapists raises concerns. A possible solution for this uncertain future is to ensure that AI acts only as an adjunctive therapist or enabler that supplements human therapists rather than replacing them entirely, with refined regulations to ensure ethical and safe use. Despite AI’s growing efficiency in assessment and treatment, its limitations and the multifaceted nature of psychology cannot be overlooked. In addition, at least from a humanistic perspective, the therapeutic connection formed by a human professional cannot be replicated entirely by AI at this stage, which lacks true soul and spirit. At this point, too much alarm may not be warranted, and we would rather recommend that universities adapt their curricula to teach future psychology practitioners to be critical and more innovative thinkers, possibly by integrating AI into their hybrid practices.
In psychology classrooms, AI presents benefits such as increased efficiency and customised learning for individual needs. It helps students with writing disabilities to produce work of the same quality as their peers, more so than pre-AI spell and grammar checkers did. To minimise the risk of plagiarism, lecturers could encourage students to use only AI platforms that provide real-time grammar and spelling suggestions as the student writes, offering feedback and corrections on specific words or phrases rather than on full sentences or paragraphs, thereby helping students learn and develop their language abilities. By comparing text with a database of previously published materials, an AI system can also be programmed to flag suspected plagiarism, promoting originality and responsible authorship.
However, the potential risks of cheating, plagiarism, and the loss of memory-enhancing benefits from engaging with literature and creating personal study notes still persist. To maximise AI’s benefits while minimising risks, educators should use it judiciously and in a way that supports critical thinking and engagement, diversifies assessment pedagogies, and emphasises independent thinking, creativity, and visual or multimedia elements.
Psychology departments at South African universities (and universities as a whole, for that matter) may need to establish clear guidelines for AI use and clear consequences for academic and research ethical violations (such as fake information, plagiarism, abuse of data privacy, results manipulation, and stealing intellectual property), following examples being set internationally (Stanford University, 2023). Educators should become familiar with generative AI to recognise such content and use paywalled content to prevent students from relying solely on technology. Assigning open-ended, critical thinking tasks that require real-world observations, interviews, surveys, or experiments, as well as pen-and-paper writing, also encourages active engagement and independent thinking. In-class tasks, discussions, and tests that require students to demonstrate their reasoning step by step can help educators measure student performance that is unassisted by AI. Ultimately, educators need to familiarise themselves with what AI can do and seek out means of detecting AI-written assignments to prevent unethical practices from increasing in the classroom. Educators might also consider promoting academic integrity through open discussions on AI’s ethical use, encouraging responsible research, and fostering a sense of ethical responsibility among students, especially in professional psychology programmes.
AI is already making an impact, and some authors are even starting to cite particular generative AI systems as co-authors, which obviously raises ethical concerns (see King & chatGPT, 2023). However, free-to-use generative AI’s current propensity to generate false references and ‘hallucinations’ (creative content that is not based on real data) prevents users from generating complete or mostly complete journal articles without later sourcing real references and replacing the fake ones. Nevertheless, educators would be wise to take steps to ensure that AI is used to enhance, not diminish, the learning experience for both students and lecturers in the field of psychology.
To date, institutions have adopted varied guidelines and policies on the use of generative AI (or abstention from it) in both the higher education and clinical practice sectors. As AI continues to evolve, and in alignment with Pan-African and Global South contexts, it is important that such guidelines and policies remain fluid to allow for possible adjustments in the future. Furthermore, this might necessitate regulatory frameworks and clinical guidelines from both academic and healthcare bodies to ensure that current and future practices keep step with the changes and implications of AI’s rapid evolution.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
