Abstract
This article builds a heuristic that raises the artificial intelligence (AI) literacy of Latine students. Nefarious people are exploiting marginalized Latine communities by using AI in creative partnerships, similar to those described in technical communication research, to build social profiles of Latines. These people are rhetorically using AI in passive-income and voice-over scams that target Latines who are insecure about their financial and citizenship situations. The heuristic offered here guides instructors on how to increase Latine students’ AI literacy by making these students aware of the rhetorical relationships between nefarious individuals and AI.
The field of technical and professional communication (TPC) is in a unique position to increase students’ literacy on the nefarious use of artificial intelligence (AI) to harm marginalized communities. Much of the TPC research on generative AI aims to improve the creative partnerships between humans and generative AI (Duin & Pedersen, 2021). For example, recent special issues on composing with AI in the Journal of Business and Technical Communication (Carradini, 2024a, 2025) and Computers and Composition (Lim et al., 2025; Ranade & Eyman, 2024a) address the barriers to generating AI content ethically and to finding where humans fit in the production of AI materials. The scholarship has found that more discussions are needed on the ethics of using generative AI in the classroom and beyond (Aguilar, 2024; Carradini, 2024b; Hillen, 2024; Hocutt, 2024; Kong & Ding, 2024; Vetter et al., 2024). Concerns in the field about the unethical use of generative AI are that students will commit academic dishonesty, that instructors will remain unaware of the difference between AI-generated and non-AI-generated content, and that both instructors and students, from a user standpoint, will approach AI generation uncritically, allowing AI to create content that is outside the bounds of the user's intention. For the most part, these special issues show that AI generation should be more human-centered by calling on the companies that use AI to include humans in the moderation of AI generation and by asking instructors and students to collaborate ethically and critically with AI.
I suggest that there must also be conversations on how people are creating partnerships with AI to deceive and harm others. The nefarious use of AI, what I define to be the intentional use of AI by humans to exploit other humans, is built on the historic exploitation of marginalized communities. The rapid emergence of generative AI in daily life, from the workplace to leisure to the classroom, has resulted in gaps in the AI literacy of users. These literacy gaps are most prevalent in communities that have low access to technology literacy sources, such as education and public libraries, and are predominantly communities of color. By AI literacy, then, I am referring to the knowledge and understanding of not only how to use AI but also how people are using AI to target certain demographics. According to a study by AARP (2021), Black and Latine communities are among the demographics with the least access to AI literacy materials and education and are thus targeted “more than other groups” by nefarious human actors using AI (McCormack, 2021). De la Torre and Frazee (2024) likewise reported that students of color have fewer resources to engage in AI literacy than do White students and that students of color are especially anxious about the future of AI. Consequently, increasing AI literacy in university classrooms and beyond is crucial in order to mitigate the damage that nefarious AI use can cause in targeted communities of color.
Latines may be particularly vulnerable to the alarming rise of scams involving digital technologies and generative AI. Multiple studies find that both younger adult and elderly Latines in the United States have been at high risk of being scammed through digital technologies since the COVID-19 pandemic in 2020 (Arenas, 2023; Kahn & Reuters Institute, 2024; McCormack, 2021; Petrie & AARP, 2021; Truecaller, 2022). Younger adults (under 50 years) are at an increased risk of falling for work-from-home scams, passive-income scams, and mule scams (i.e., being persuaded to become a courier who unwittingly moves illegal money between locations), whereas elderly Latines are at an increased risk of falling for AI voice scams, utility scams, and government scams. In either case, Latines are targeted by digital technology scammers because the scammers perceive that Latines lack technological, AI, and financial literacy, as well as citizenship security (Federal Trade Commission, 2022, 2023b, 2023c; McCormack, 2021). Communities with citizenship insecurity are those with significant populations of undocumented immigrants who are concerned about being deported. In other words, scammers may exploit Latines for financial gain more than they do other races or ethnicities in the United States because Latines tend to be in a demographic of (perhaps undocumented) low-income laborers and may face language barriers and lack literacy in digital technology.
The TPC field has framed the collaborative nature of human–AI relationships as crucial for avoiding the damage that can be done by the uncritical use of AI in workplaces and classrooms. I suggest that with this framing of the collaborative nature of humans and AI, the field is in a unique position to increase the AI literacy of Latines and other marginalized communities. The research that I present here showcases how nefarious people are in creative partnerships with AI in much the same way that TPC has framed for our students and practitioners. The nefarious use of AI is built on decades of exploitative practices. For the Latine community, such use of AI is a continuation of other scams that were popular before the proliferation of user-friendly AI (e.g., scams involving a fake government official, lawyer, or family emergency; passive-income or multilevel-marketing scheme; or deportation scare). Collaborating with AI, scammers use the historic exploitation of Latines to rhetorically appeal to the Latine community's culture, sense of family, experiences with deportation, Spanish-language use, and more.
To help TPC instructors increase the AI literacy of their students—especially their students of color—I offer a working heuristic. This heuristic will mainly be for instructors who are teaching a course or sections on AI, but the working nature of the heuristic might also help scholars who are of color research how their communities are particularly targeted by the nefarious use of AI. Because of my own ethnicity as a Latino–Chicano scholar, the heuristic centers around increasing student literacy on how nefarious people scam Latines with generative AI. The heuristic guides instructors on teaching students the rhetorical nature of the nefarious use of AI through four parts. Part 1 presents an overview of the creative partnerships that the field finds most promising in human–AI research. Part 2 frames the nefarious use of AI as a creative partnership between humans and AI. Part 3 describes common scams that target the Latine community and demonstrates the rhetorical nature of the nefarious creative partnerships. And Part 4 calls students to inform their community about the nefarious use of AI.
Thinking Critically About AI Applications: Perspectives From Instructors and Students
The sudden rise of generative AI has pulled TPC instructors and students in many different directions. Instructors are worried that students could uncritically (mis)use AI in their assignments and the workplace. Many scholars agree that the AI skills and use habits that students learn in the classroom will shape how they approach AI outside of academia (Carradini, 2024b; Deets et al., 2024; Gallagher & Wagner, 2024; Mallette, 2024). Students risk producing academically dishonest writing that could get them in trouble at their university or erroneous materials in their future workplace—both are outcomes that could hold dire consequences. To help both instructors and students navigate the world of AI, research has framed humans and AI as being in collaboration, or creative partnerships, with one another. Duin and Pedersen (2021) argued that creative partnerships promote “collaborative intelligence; process and product transformation in support of expanding sets of users; and [the] cultivat[ion of] professional development” (p. 4). To those ends, creative partnerships between humans and AI may not seem too different from creative partnerships between humans or even between humans and non-AI technologies; however, the real question is to what extent, and in what kinds of partnerships, humans and AI can work together on problems.
The current AI technology has its problems. Or perhaps better put, the potential products of human–AI collaborations have their problems. Many researchers agree that the current AI technologies cannot compete with human-authored writing and materials for myriad reasons (Anderson, 2023; Duin & Pedersen, 2023; Hillen, 2024; Johnson-Eilola et al., 2024; Kong & Ding, 2024; Ranade & Eyman, 2024b). In 2018, a time just before conversations about generative AI, Hart-Davidson argued that although robots do indeed write, their automated writing lacks the element that makes writing human, so it is the duty of humans to inform and check robot writing in order to make the product rhetorically acceptable. He stated then that “we are only starting to learn about teaching robots rhetoric” (p. 249). But now, with the proliferation of generative AI, those rhetorical teachings have grown exponentially.
Johnson-Eilola et al. (2024) attempted to rhetorically train ChatGPT on how to create useful COVID-19 antigen test instructions for humans. They found that, despite the careful input given to the AI (e.g., instructing it to give sequential tasks in the imperative mood and to avoid spelling mistakes), ChatGPT produced instructions that were inferior to human-authored ones. The AI instructions even had spelling mistakes, an error that contradicted the input prompt that the authors gave. The authors concluded that ChatGPT is not ready to replace human-authored instructions, but the technology might be seen as a collaborator to those who need a starting place for writing instructions. Further, they argued, when generative AI makes a mistake, the solution is not more generative AI; instead, the human user is responsible for curtailing errors and revising the generative AI product until it is rhetorically appropriate to the audience and task.
Many human actors, however, still try to tackle the problems of generative AI with more generative AI. Kong and Ding (2024), for example, noted how companies use third-party AI screening software to assist them with the screening, or “social profiling,” of applicants’ social media (p. 35). But as Gallagher (2020) reminded us, the structural human biases that go into the algorithms that dictate the products of technologies such as generative AI can result in real-world replications of those biases—whether intentionally or not. Graham and Hopkins (2022), for example, recognized that AI technologies are trained by human interactions and that many humans interact with digital technologies in racist and sexist ways. Social justice in the age of AI, then, is teaching humans how to responsibly interact with digital technologies in order to lessen the production of harmful content by AI automation (Aguilar, 2024; Graham & Hopkins, 2022; Shelton, 2020).
Using AI to correct AI may exacerbate the production of harmful content. Kong and Ding (2024) found that these AI screening algorithms can unfairly flag minoritized communities by giving lower scores to applicants who use nontraditional forms or vernaculars of English. Using more AI screening could further flag marginalized communities, so the authors recommended solutions that highlight the importance of human preparation from applicants (e.g., encouraging job applicants to use privacy settings and to audit their social media accounts for any potentially risky language) and human-centered guidelines for employers (e.g., urging employers to create guidelines on the currency of social media posts and to implement their own human audits when AI screening flags candidates). In either case, humans have the responsibility to rhetorically situate the products of AI in the creative partnership.
To rhetorically train AI, humans must have a dialogue with it. A special issue on AI in Computers and Composition (Ranade & Eyman, 2024a) contains articles showcasing how instructors and students reimagine the writing process by communicating in dialogue with AI. To conceptualize the collaboration between humans and AI in writing, many articles refer to humans in the loop (Wang, 2019), an expression for the role that humans play in automated AI generation; that is, humans are a part of that content creation at various stages of the product—from brainstorming, to drafting, to revising, to amending. Li (2024) demonstrated such an approach by encouraging students to give feedback to AI such as ChatGPT in ways similar to in-class peer review of drafts. Also, to ensure that AI-generated content is without the bias that Gallagher (2020) and others have warned about, I have encouraged instructors to have their students reflect on social justice values (Aguilar, 2024). Such articles make clear that treating AI as a collaborative partner, one that can be corrected and negotiated with, is an emerging practice in technical writing instruction.
Latines, Automation, and Concerns About AI
The work we do in the classroom happens amid political, cultural, and societal concerns about how AI will change the labor landscape. There are concerns about what uncritical usage of the technology could do to technical writing, such as reproduce bias and fabricate false events (Ranade & Eyman, 2024b); however, there is also an awareness that AI could soon replace humans in the workplace. TPC research has assured technical communicators that AI is not and may never be ready to replace us. But students show general concern that they will be replaced by AI or that it will somehow negatively affect their quality of life (Carradini, 2024b). Knowles (2024) covered how nefarious actors can use AI or even create their own large language model (LLM), the predictive autoregressive technology that generative AI currently relies on, to spread misinformation that can hurt certain communities. The author noted that legislation can do little to curtail misinformation from generative AI; rather, people need to be educated in order to achieve AI literacy. Both Carradini and Knowles suggested that AI's automated nature (e.g., its ability to work around the clock without human intervention) could threaten student job prospects. Students, then, are worried that their labor will become obsolete in the near future because a robot could do their job for less pay and without labor protections.
People in the Latine community have increasing concerns about job replacement. Since the COVID-19 pandemic, the labor landscape has drastically changed. More workers are using digital platforms to find work, and more are looking for remote work (Kong & Ding, 2024; Mallette, 2024; Ravenelle et al., 2022). In the United States, this transition to digital and remote work comes at a time in which 53% of Latine workers are worried that AI will replace their current jobs, a significantly higher percentage than those for Black U.S. workers (46%) and White non-Hispanic U.S. workers (34%; Franco, 2023). Also, working Latines in both the United States and beyond tend to be more connected to informal-gig economies than are other races and ethnicities (Grohmann et al., 2022; Pew Research Center et al., 2021; UnidosUS & Lake Research Partners, 2019), which could make Latines even more uncertain about what types of jobs will be available for them in a shrinking labor market that is shifting toward automated work.
Digital platforms do little to help with the unease Latines face about AI, and scammers have been emboldened to post risky links on reputable platforms. According to an ABC News (2016) report, 1% of all job listings on LinkedIn were fraudulent, and recent data provided by Forbes suggest that that percentage has held steady or increased since the pandemic (Gurinaviciute, 2024; Ravenelle et al., 2022). Thus, in 2016 over 60,000 job listings on LinkedIn were fraudulent, and given the rise of job listings on the platform since then, that number has likely skyrocketed (Osman, 2024; Ravenelle et al., 2022). The sheer number of fraudulent links on sites such as LinkedIn makes it difficult for anyone to tell real opportunities apart from fake ones, but the financial pressures that many Latines face make them especially vulnerable to clicking on such fraudulent links (AARP, 2020; AP News, 2023; Federal Trade Commission, 2022; Petrie & AARP, 2021; Truecaller, 2022). These pressures that the Latine community faces are well known to those who post fraudulent links on digital platforms.
Latines in communities with citizenship insecurities might not have the digital literacy to help them discern real websites and opportunities from fake ones. Pimentel and Balzhiser (2012) reported that government documents such as the U.S. Census have historically been inaccessible to many Latines (officially termed Hispanics in the Census) due to language barriers, ambiguous ethnic and racial categories, and citizenship insecurities. In fact, according to the National Network for Immigrant and Refugee Rights (NNIRR, 2019), in any given census year, 34% of Latines are foreign born, a fact that compounds the problem of accurately counting Latines, documented or otherwise. Consequently, language barriers, citizenship insecurity, the threat of deportation, and unfamiliarity with U.S. government documents—along with the tendency for the undocumented to not seek legal representation or advice (Pedroza et al., 2023)—leave a significant portion of the Latine community vulnerable to digital scams.
Digital scams target citizenship-insecure Latines through documents and protocols that mimic those of the U.S. government but that are not related to any government entity. For example, according to Pedroza et al. (2023), scammers have presented themselves online and through mail as a notario (a term that, to many Spanish speakers, is synonymous with lawyer) to Spanish-speaking noncitizens. Thus, many Spanish-speaking Latines with citizenship insecurities might be more trusting of scammers who falsely present themselves as a notario. Further, U.S. Citizenship and Immigration Services (2023) reported that in order to impersonate government officials, scammers will use digital platforms to create fake websites that end in dot org (.org) as opposed to dot gov (.gov). Victims are instructed to pay legal or processing fees through digital payment channels that cannot be easily traced.
Latines who face financial insecurities might be more prone to take financial risks on these fake digital platforms. Both older and younger Latines are vulnerable to digital scams, albeit in different ways. Although older Latines may be susceptible to scams that impersonate legitimate websites, younger Latines are more likely to fall for fake-employment, passive-income, and mule scams. It is well documented that the Latine community is among the demographics most heavily targeted by multilevel-marketing schemes (MLMs). Studies have found that predatory MLMs target Latines because of perceived stereotypes: their close community and family ties, familiarity with gig and informal labor, and ethnically marked financial needs (e.g., caring for elderly and infant family members; AARP, 2021; Federal Trade Commission, 2021a; Walsh, 2022). Using language that targets Latines, scammers use digital platforms to pitch MLMs as an opportunity for earning passive income. The Federal Trade Commission (FTC, 2021a) has noted the rise of passive-income scams on digital platforms and has issued warnings to the Latine community. For example, the FTC obtained a $7 million monetary judgment against the “operators” of a passive-income scam for fraudulently claiming to Latinas that they would earn passive income by reselling luxury items.
As a member of the Latine and Chicane communities, I understand that the Latine experience is not a monolith. Latines are a diverse population with varying degrees of AI literacy and experience with digital technologies. There are Latines who are proficient with AI, and some members of the Latine community no doubt also harm other Latines through digital means—the FTC settlement I just mentioned was orchestrated by Latines who specifically targeted Latinas. But nefarious human actors play a numbers game. Because significant portions of the Latine community face citizenship and financial insecurities, scammers can justify the risk–reward ratio of engaging with potential cybercrime. The broad strokes I use to describe the insecurities of Latine communities are the ones that scammers assume as they develop their scams. Now, with the rise of user-friendly AI, the digital scams I have covered are transitioning—and so must the efforts to increase Latines’ digital literacy.
Nefarious Humans, Scams, and AI Literacy
Nefarious humans take advantage of Latines’ uncertainty in order to exploit the community for monetary gain. To describe these bad human actors, I draw the word “nefarious” from Knowles (2024), who described nefarious actors as those who use AI technologies to target certain communities (p. 2). I define nefarious humans, however, as those who intentionally use AI to exploit a targeted community through misleading, dishonest, and otherwise bad-faith actions because they perceive that the community is technologically vulnerable. Actions such as unknowingly sharing links to a risky AI that uses a nefarious LLM and unknowingly sharing misinformation because of the uncritical use of AI do not count as nefarious under my definition. That is, my use of “nefariousness” and “bad faith” requires a human to knowingly and intentionally use AI to deceive other users who might be less technologically literate and therefore at risk of AI exploitation.
The key to understanding AI scams is to see that nefarious human actors are collaborating and developing creative partnerships with AI—just like us. Such nefarious partnerships may involve collaborating with AI to find the best ways to exploit people, produce materials that exploit people, and automate scams that spread links to paywalled courses. The scams are numerous and always evolving, and so are the partnerships that nefarious humans create with AI. Here are just a few examples that have become popular due to the growth of user-friendly generative AI. Since 2020, scammers have used generative AI to mimic people's voices in order to gain the trust of a targeted community, automate paywalled classes that promise passive income to financially desperate communities, flood search engine optimization algorithms with risky links, and create fake job postings and interviews (AARP, 2020; Arenas, 2023; FTC, 2022, 2023b; Truecaller, 2022). Thus, scammers are inventing new ways to exploit vulnerable populations through a technology that those populations may not fully understand.
The tactics that scammers use to exploit Latines through AI are based on the historical scamming of Latines on digital and nondigital platforms, and these scams are built on stereotypes that scammers deployed before the emergence of AI. It is difficult to get a full picture of how scammers are using AI to scam Latines because the data are still emerging and Latines tend to underreport being scammed. But the scams we do know about seem to reflect the digital scams I have mentioned here. For example, AI scammers who target older Latines tend to impersonate public lawyers or government officials. Such scammers who target younger Latines tend to use MLM tactics and passive-income scams. I will now outline some of the most popular AI scams that target certain demographics of Latines.
AI Scams That Target Older Latines
The most reported AI scam targeting older Latines (ages 50 and older) is the deepfake, voice-cloning scam that impersonates a public figure or even the voice of a relative. According to a series of FTC reports, scammers use an AI technology called chatbots, or automated accounts using LLMs to appear to be real users on various platforms, in order to generate seemingly real content that will lure users into clicking their links (FTC, 2021b). Chatbots are known to distribute links through platforms such as email, Facebook Messenger, WhatsApp, and other outlets in which it is fairly easy to communicate with large numbers of users. The links may ask for sensitive information such as names, addresses, bank numbers, and so on under the guise that the chatbots are trusted individuals or institutions. Scammers then use the information given by unknowing users to generate profiles in ways similar to how the AI software for social media screening generates profiles on job applicants (Kong & Ding, 2024). Both AI programs—one used nefariously by scammers and the other used legitimately by companies—create profiles on users based on information provided through the internet. AI scammers may then continually build their user profiles to determine how to move forward with their scams.
Older Latines may disclose information through phishing schemes, and that information builds social profiles that leave them susceptible to further AI scamming. Since at least 2021, the FTC and various news outlets have brought attention to AI deepfake scams in which voice data are fed into AI to generate speech that resembles a certain person. In past years, AI scammers have used the large amount of voice data on figures such as Joe Biden to target elderly citizens with fake phone calls that ask for sensitive personal information (CFTC, 2024; FTC, 2023a). But recent AI scams have been collecting voice data from the family members of AI scam victims who have had a social profile built from chatbot phishing attempts, and elderly Latines are especially at risk. Bethea (2024) reported that since 2022, there has been publicly available AI software that can create an impersonation from about 45 seconds of voice recording data; another program, which is not publicly available, needs only 3 seconds to do so. Older Latines might unintentionally share the names and phone numbers of loved ones, giving scammers opportunities to build social profiles of these loved ones in order to impersonate them. Scammers then use the social profiles along with AI voice cloning to call older Latines using impersonated voices of loved ones in order to ask for anything from bank passwords to ransom money.
AI Scams That Target Younger Latines
Younger Latines (under the age of 50) are more likely to fall victim to financial AI scams. The most common scams involve AI voice bots that call younger Latines to promise them opportunities to receive student debt relief or make passive income. The FTC (2021b) reported that scammers lure college-age Latines to disclose personal identification information through AI phishing schemes on social media sites, such as LinkedIn. Then, using data such as area codes, zip codes, and last names to build social profiles of those likely to have student loan debt, the scammers use automated AI to call these victims en masse in order to promise them student debt relief. Masquerading as legal counsel, the AI caller requires victims to provide an upfront payment before the caller will purportedly go through the legal channels so that the victim can receive the debt relief. According to the report, the call, which can be heard in either English or Spanish, specifically targets Puerto Ricans and may use common Latine experiences, such as first-generation college attendance or recent immigration, to establish an ethos with younger Latines (pp. 1, 11–12).
Another AI scam that targets younger Latines is the passive-income scam. Scammers have notoriously used MLMs to promise opportunities to earn passive income. But with these AI scams, which typically involve a paywall, the AI technology is packaged into the promise of receiving passive income. For example, users on X (formerly Twitter) or YouTube use AI to generate and share “courses” on websites like coursify.com that promise to teach users how to gain passive income through AI generation. Essentially, these AI-generated courses tell users to use AI to create courses on the website—a cycle that is hidden behind a paywall. Users who pay for the AI-generated course cannot get their money back, but they can receive passive income from using AI to generate their own course and then recruiting other unknowing users to pay money to see their AI-generated course. The course then instructs these users to do the same, and the cycle continues. These scams use MLM tactics by distributing the burden of pay to lower and lower levels. The problem with such MLMs is that the market for the AI-generated courses shrinks when the supply of the courses grows. These scams quickly come and go, but I encourage readers to learn more about these scams by searching YouTube for “AI Passive Income” through the Recently Uploaded filter.
Discussion: AI Scams, AI Literacy, and TPC
As TPC instructors, we are preparing our students to think critically about AI issues. In the same way, we must prepare students to think critically about AI scams. The scams I have listed were the most prevalent at the time I wrote this article, but AI scams quickly evolve. I have not detailed here AI scams involving money exchange or cryptocurrency or those that specifically target Latines who are on visas or have recently immigrated (CFTC, 2024). It is impossible to predict how AI scams will evolve; however, AI scams will likely continue to rely on the historic victimization of vulnerable communities such as the Latine community.
AI literacy in TPC is emerging to mean the ability to rhetorically use AI to generate content that is appropriate for the user's purposes. In many TPC contexts, these purposes are to generate ethical and responsible writing products for the workplace or the classroom. For example, Hillen (2024) described using ChatGPT to draft materials in nonprofit contexts. Key to the author's approach was understanding generative AI's ability to make (or, I would say, habit of making) misleading rhetorical choices. For example, although Hillen requested that the AI write about the causes and patterns of gun violence at the local level, ChatGPT included rhetorical appeals to national gun violence. Similarly, in a survey of students, Cummings et al. (2024) found that in many cases, students who uncritically used generative AI experienced a loss of rhetorical ownership of their writing. Students felt that their writing became impersonal and that generative AI could only give so much feedback before the drafting process became unproductive. Many students felt that the more feedback generative AI gave past the drafting process, the more they needed human peer review of their writing.
But I am finding that nefarious human actors are working rhetorically with AI much as we are teaching our TPC students to do. The rhetorical choices of the nefarious actors just happen to be different from those of the students. I return to the $7 million FTC settlement (FTC, 2021a). Although the role that AI had in creating the scam is unclear, the use of AI voice-over in the commercials aired on Spanish-language television suggests at least some AI involvement. According to the FTC official complaint, the male and female voice-overs made deliberate rhetorical choices that targeted financially insecure Latinas in the United States. Nefarious humans had to work with the AI to ensure that cultural signifiers such as accent, cadence, and slang were appropriate for their rhetorical purposes—to trick Latinas into purchasing a fake product and then make harassing phone calls threatening litigation if the customers failed to order more.
We can imagine that AI scammers enter into dialogue with AI technologies to maintain their rhetorical purpose. That dialogue narrows rhetorical intention until the AI produces content targeted at Latines. For example, scammers need the AI to produce fraudulent links that appeal to Latines experiencing citizenship insecurity, so perhaps the AI is rhetorically trained to generate text with certain language, such as notario. Similar rhetorical training can be found in the AI voice-over that targets younger Latines: the AI appeals to younger Latines’ likely experiences as first-generation college attendees or recent immigrants. In any case, scammers are drawing on social profiling tactics, along with the historic victimization of Latines, to give rhetorical instructions to AI technologies in dialogue, going back and forth until the AI's output is rhetorically appropriate.
A Working Heuristic Toward Building AI Literacy for Latine Students
The working heuristic that I will outline now is meant for TPC instructors who are teaching a course about AI or have a section about AI in their syllabus. Each part of the heuristic covers an area that I have described here and provides suggestions on how to apply the lessons in the classroom. Parts 1–3, respectively, cover how AI and humans are in creative partnerships, how the nefarious use of AI is a process and product of creative partnerships, and how the nefarious use of AI continues the legacy of practices that exploit certain communities. Part 4 is a call for students to inform their communities about the rhetorical nature of the nefarious use of AI. Although there is no specific timeline as to when to implement the heuristic in a lesson plan, in my classroom, I tend to space Parts 1–3 across 3 weeks of instruction and incorporate Part 4 (the call for action) into the third week. But other instructors might want to spend more or less time on the heuristic depending on the course material, frequency of the class, student demographics, and curriculum. In any case, I would like to reiterate the working nature of the heuristic. Instructors are welcome to add, subtract, replace, or borrow any part of this heuristic in order to fit the needs of their classroom.
Part 1. AI and Humans in Creative Partnerships
The first step in increasing AI literacy for marginalized students is to review the TPC field's nearing consensus that humans and AI are in creative partnerships together. Students need to understand that they are responsible for ensuring that the rhetorical output of generative AI is correct and appropriate. AI can be framed in many ways—for example, as a peer writing tutor (Aguilar, 2024) or a research collaborator (Anderson, 2023; Duin & Pedersen, 2023; Hillen, 2024; Johnson-Eilola et al., 2024). Although the AI can help students organize their thoughts, brainstorm, or even research topics that the AI suggests, students should expect that the labor of revising, drafting, and correcting is on them and not the AI (Hart-Davidson, 2018). Students should communicate with the AI in dialogue, always taking care to give the best input and review the output generated by the machine.
Part 2. The Nefarious Use of AI as Process and Product of Creative Partnerships
Students should next learn about the nefarious use of AI. Nefariousness can be generally defined as I have here (i.e., as the knowing and intentional use of AI to deceive other users) with room to personalize aspects of the definition. For example, I find it useful to emphasize the intent of humans to exploit marginalized people, but I can see a Robin Hood type of scenario (i.e., robbing the rich to help the poor) in which marginalized humans exploit more privileged humans. Using AI to scam a Wall Street banker can also be considered nefarious, but it is outside the scope of my research. In any case, I believe that intent and collaboration are two key aspects of this part of the heuristic. Students must understand that the creative practices they are building in the classroom are similar to the practices that nefarious users of AI are building. That is, scammers are revising, drafting, collaborating, and building products that are then used to exploit people. For example, I like to compare how the AI social profiling that some employers use to screen employees (see Kong & Ding, 2024) is scarily similar to the profiles being built by scammers. In both cases, a party is using AI to sort through data to make assumptions about a human. In one case, the party is a potential employer judging the ethics of a potential employee; in the other case, the party is a scammer judging the likelihood of a person falling victim to a scam. Both parties are using rhetorical processes in partnership with AI in order to create a product, in these cases, an advancement decision: to advance an application or to advance a scam, respectively.
Part 3. The Nefarious Use of AI as a Continuation of a Legacy of Practices That Exploit Certain Communities
Students should understand how the nefarious use of AI is a continuation of a legacy of exploitative practices. In this part of the heuristic, instructors are likely to have the most variation. Although the literature I have reviewed here mostly pertained to emerging AI research, my approach to AI is informed by the social justice turn in the field. For example, Part 3 of the heuristic was inspired by Dayley's (2023) suggestion to make more apparent how TPC personally affects students in order to increase student retention in TPC programs. I found that it is powerful to show Latine students how TPC can equip them to understand, avoid, and advocate against AI exploitation of their community. Although social justice applications in AI studies are still developing and much of the literature does not directly address AI, the Appendix contains a reading list of social justice research that I have found particularly helpful in my research of the nefarious use of AI.
Because of my ethnicity as a Latino–Chicano, the demographics of my university and classroom, and my research expertise, I find it most beneficial for my class to discuss the risks to Latine communities in the age of AI. We first discuss the vulnerabilities of the typical age range of Latine students (18–30 years of age, in my experience) as reported by the FTC and other reputable outlets. The reports show that these younger adult Latines are most susceptible to passive-income, student loan–forgiveness, and money-mule scams. Our class reviews how passive-income scams are an extension of Ponzi schemes or multilevel marketing (MLM) schemes that exploit stereotypes that Latines are family focused and financially illiterate. I emphasize that AI scams, despite their sudden rise, did not emerge in a vacuum. Rather, they are carefully crafted to continue the legacy of scams that have targeted Latines for generations.
I then cover how older Latines are susceptible to other types of scams. We review how older Latines have always been the targets of phishing schemes, either by phone or on the internet. Then I describe how AI technologies have extended previous scams by mimicking the voices of family members to gain the trust of older Latines or pressure them for ransom. Scammers also use AI to create fake personae of government officials or lawyers in order to obtain personal information from older Latines. Overall, my goal is for students to understand that Latine communities are targeted for AI scams according to a digital divide between their own likely age range as young adults and that of older members of their family and community.
Part 4. A Call for Action
I ask students to take what they have learned into their communities. The rhetorical nature of the nefarious use of AI needs to be understood by as many Latines as possible. As Knowles (2024) argued, there is little recourse available for a person who falls victim to an AI scam. Although such scams are rapidly emerging and evolving, teaching students that AI–human collaborations are creative partnerships, that nefarious humans also collaborate with AI, and that AI scams are built on historic exploitative practices will increase students’ AI literacy. Students can then take action by raising awareness of the nefarious use of AI in their community. This action could be as simple as notifying older family members about common AI scams, or it could involve holding community workshops, reporting AI scams to the appropriate public or private governing bodies (e.g., the FTC or Facebook if a scam is found on the site), or sharing their own experiences with AI scams.
Here are some projects that could help TPC students follow through on this call for action. For example, instructors could integrate this heuristic into infographic projects that disseminate the class's findings to the community. Students could explore various design options for the different demographics of Latines, such as using Spanish-language text or cultural signifiers that rhetorically appeal to the recently immigrated. Students could even be encouraged to use generative AI to find the most effective outreach for their target Latines, an exercise that would contribute to the meta-analysis of the nefarious versus the well-intentioned use of AI. Or students could create podcasts that investigate the lessons offered in this heuristic. Media such as podcasts are particularly important for reaching younger, and perhaps more masculine, members of the Latine community.
Conclusion
The nefarious use of AI will continue to evolve as scammers innovate their creative partnerships with emerging AI technologies in order to continue the historic exploitation of marginalized users. TPC's efforts to rhetorically situate AI as a writing process can help frame the rhetorical nature of the nefarious use of AI. Thus, even as scams continue to evolve, technical communicators will be better equipped to anticipate the rhetorical motives behind the nefarious use of AI. The heuristic I have offered demonstrates my approach to raising students’ AI literacy by reading AI research in the field, applying the research on the nefarious use of AI, and calling for students to take action that will raise awareness in their community, particularly of the risk of exploitation. To these ends, this working heuristic is available for other researchers to adopt, adapt, and apply in different contexts and for different user demographics. My version of the heuristic focuses on Latine students and how scammers partner with AI to continue the long history of scams aimed at exploiting Latines. I hope that researchers find my heuristic useful while they explore how the field's emerging AI research can continue to build our collective understanding of the nefarious use of AI.
So what is next for technical communicators interested in researching the nefarious use of AI? The trend I see is similar to trends established in TPC research on social justice and, to a lesser extent, decolonial research. It will take several scholars to critically investigate how different minoritized communities experience the nefarious use of AI. It will then take several more scholars to research how to lessen the risk that minoritized communities face from the nefarious use of AI. I suspect that doing so will require including members of minoritized communities as research participants in order to understand the nefarious use of AI and its effects in these communities. For example, there has been a decade of social justice research on methods of participatory localization (Agboka, 2013). One of TPC's latest studies contributing to participatory localization research, by Gonzales et al. (2022), urges researchers to include some of the most disenfranchised communities in the design of health communication documents. Similarly, I think the next step is to find ways to collaborate with minoritized communities in order to best examine the nefarious use of AI. My next projects will explore how to do so with Latine communities.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
