Abstract
Much of the philosophical discussion of video game ethics is dominated by the literature on the Gamer's Dilemma, which forces us to focus on the ethics of certain forms of extreme virtual content in video games, such as virtual murder or molestation. While a focus on the ethics of video game content is important, we argue that scrutinizing the ethics of video game systems is needed to properly capture the full range of ethical concerns raised by video games. Drawing on a distinction between intravirtual and extravirtual effects, we identify ethical issues with video game content and, by linking to the dark patterns literature, video game systems. To illustrate our view, we give examples of how a game can appear to have morally objectionable content without the game being, at least clearly, morally objectionable, and how a game can appear to be morally unobjectionable despite having morally objectionable systems.
Introduction
Much of the philosophical discussion of video game ethics is dominated by the literature on the Gamer's Dilemma (Luck, 2009, 2023). This dilemma forces us to focus on the ethics of certain forms of extreme virtual content in video games, such as virtual murder or molestation. However, while this focus on the ethical issues raised by extreme virtual content is important, we argue in this article that it is also important to focus on the ethical issues raised by video game systems to properly capture the full range of ethical concerns raised by video games. We make that argument as follows. First, in Virtual Actions as Systems and Content, after differentiating between virtual and digital entities, we build off the video game studies literature to argue that to properly understand video game actions ontologically, we need to see that they have a dual nature consisting of both content and systems layers. Similarly, we argue that to understand the ethical nature of video games, we need to assess them ethically in terms of both content and systems layers. To do that, drawing on a distinction between intravirtual and extravirtual effects, we explore some ethical issues with video game content in Ethical Issues With Content-Based Wrongs and, by linking to the dark patterns literature, some ethical issues with video game systems in Ethical Issues With Systems: Dark Patterns. Finally, in Content Versus Systems-Based Wrongs: Two Examples, to illustrate our view we give examples of how a game can appear to have morally objectionable content without the game really being morally objectionable, and how a game can appear to be morally unobjectionable despite having morally objectionable systems.
The upshot of our argument is that, while the existing ethical focus on video game content makes sense given that it is typically the content of a game that we direct our attention towards, it is important to supplement this focus by also considering the ethical dimensions of video game systems to get a more complete picture of the ethical issues raised by video games.
Virtual Actions as Systems and Content
Before distinguishing ethically between the systems and content of video game actions (“virtual actions” henceforth), it is worth first understanding their relationship ontologically. Many argue that to fully appreciate the character of video game activity, we need to understand virtual actions as grounded not only by virtual gaming systems, or what has been described via the ludological or ergodic mode of discourse (Aarseth, 1997; Juul, 1998, 2005; Klevjer, 2002). By “systems,” we are referring to the mechanics, processes (Sicart, 2013), ludic elements (Tavinor, 2017), internal codes (Aarseth, 1997), or rules (Juul, 2005) that allow for, or give rise to, virtual actions. While systems are important, to focus only on systems is to fail to appreciate the fictional dimension of virtual actions as interactive representational content (Aarseth, 2014). Similarly, many argue that we cannot understand virtual actions only as interactive representational content (Atkins, 2003; Brey, 2003; Murray, 1997; Tavinor, 2014), as this fails to appreciate the centrality of systems to a virtual action's meaning (Juul, 2005). By “content,” we are referring to the scripts (Formosa et al., 2016), semiotics (Sicart, 2013), narrative elements (Tavinor, 2017), external expressions (Aarseth, 1997), or fictions (Juul, 2005) that instantiate a virtual action's underlying systems. What follows from this, as others have argued, is that we ought to see both systems and content as essential components of virtual actions in most video games (Aarseth, 2014; Formosa et al., 2016; Juul, 2005; Sicart, 2013; Tavinor, 2017). That is, when we talk of virtual acts in video games, we may be simultaneously referring to both the content and systems of the acts. The “in most” caveat is needed to account for video games in which there is either very little or no content on top of their systems, or very few or no systems underlying their content.
Virtual actions and virtual objects furnish or compose virtual environments (Brey, 2014). My turning left as Mario in Mario Kart is a virtual action, while Mario and his kart are virtual objects. Both may be understood as virtual entities that make up the virtual environment. However, virtual entities are distinct from digital entities. Digital entities are the physical, material realizations of virtual entities; for example, via a computer screen and various pieces of computer code stored on a hard drive (Chalmers, 2018). Mario is a virtual entity, while Mario's image on the screen in front of me is a digital entity. Digital entities act as what Walton (1990) calls “props,” being the material entities which authorize the fictional truth of propositions concerning virtual entities. For example, when we talk about Mario driving a red kart, we are referring to the fictional character Mario who drives a kart, the virtual entity, and claims concerning Mario's kart will be made true by the digital entity, the image projected to us of Mario's kart, being in fact red. Therefore, virtual entities are materially constrained by their digital counterparts, yet exist immaterially. This ontological account of virtual entities is intuitive as it captures the sense in which Mario is fictional (i.e., Mario doesn’t actually exist), as well as the sense in which we could get things right or wrong when talking about him (e.g., it is wrong to say that Mario drives a green kart). This view grants a kind of “ludo-fictionalism” regarding the nature of virtual entities (see McDonnell & Wildman, 2019) and is in part grounded on Walton's (1990) irrealist conception of fiction, known as “virtual Walt-fictionalism” (Friend, 2008).
While we are not necessarily committed to this approach, since the arguments we develop here can accommodate other conceptions of virtual actions, it will be helpful to employ it here as a means of illustrating a useful way of carving up the nature of virtual actions.
Virtual actions can be further viewed through two distinct lenses: their content and their systems. To clarify this distinction, consider how a virtual action can be broken down into three layers, each sitting on top of the other. The top layer understands a virtual action qua digital entity, that is, as already outlined, a virtual action as materially realized; for example, via a computer screen displaying the image of Mario turning his kart. This is the immediate subject of our perception. We can set this layer aside as we have just established that, while importantly related, digital entities are not virtual entities (see Chalmers, 2018; McDonnell & Wildman, 2019). The middle layer understands a virtual action qua content, being a virtual action as a fictional representation that is referred to and engaged with by a player; for example, when you make Mario turn his kart to the left, you are engaging with Mario as a fictional character or content, and not merely as a digital entity. The content of the virtual action is the virtual action's “skin.” The bottom layer understands a virtual action qua systems, being the underlying ludic elements that ground what is and is not afforded to the fictional representation of Mario; for example, the systems dictate how sharply Mario can turn and the size of his hitbox, but also do not allow Mario to get out of his kart. When we talk about what we can do in Mario Kart, we refer both explicitly to the middle “content” layer—“I turned the kart to the left”—and implicitly to the bottom “systems” layer, as well as (perhaps) to the physical-digital entities and game hardware that realize those content and systems layers.
Both content and systems layers play a central role in determining the character of virtual actions. To illustrate the role that systems play, consider two shooter games much like Counter-Strike. Both games are identical in terms of content; they look as though they are the same game. But imagine that in one of these games the hitboxes of the virtual avatars (the underlying three-dimensional area that must be hit to register a response) are reduced to a very small size, making it very difficult to shoot your opponent. Your opponent is the same size qua content, but the underlying ludic elements have significantly changed. While both games are identical in terms of content, we could sensibly comment that the games are different in terms of their systems, and this would result in different ways of treating each game, perhaps to the extent that they would be justifiably considered different games, or different “modes” of the same game (e.g., differing by difficulty setting). For example, each game would warrant different kinds of gameplay, such as incentivizing close-contact weapons rather than sniper rifles. Likewise, consider games which offer “hardcore” or “survival” modes, where one is unable to save one's progress or is challenged in other ways (see Copcic et al., 2013). In these cases, the content layer of the game is identical, but the kind of game being played is significantly different due to a change in its systems.
Systems therefore play a central role in determining the character of virtual actions, so much so that some argue that video games, and by extension, virtual actions, are only a product of their systems (Juul, 2005). Juul (2005), for example, building off an analogy from narratology (see Chatman, 1978), argues that it is a game's immaterial systems that are sufficient for defining the nature of a game, and this explains the transmediality of many games (i.e., the fact that a game can be realized in different media). For example, a game of chess played over email and across multiple boards, where one board is virtual and has Star Wars-themed pieces, and the other board is nonvirtual and has traditionally themed wooden pieces, would still count as a game (even the same game) of chess, even though it is realized in different media. Klevjer (2002, pp. 191–192) refers to this systems-only argument as radically ludological, since on this view “[e]verything other than the pure game mechanics of a computer game is essentially alien to its true aesthetic form.” What follows from this conception is that determining the meaning of virtual actions becomes what Aarseth (1997) refers to as ergodic, since the user has a configurative relationship with the systems of the game and thereby is partly responsible for constructing its meaning (see Eskelinen, 2001).
However, to consider video games (or games generally) as defined only by their systems is a mistake because it fails to recognize the role that a game's content also plays in determining the meaning of a player's actions. Consider, for example, two identical games qua systems which differ substantially in terms of their content. Aarseth (2014) illustrated this by comparing Bogost and Frasca's game Howard Dean for Iowa with the hypothetical Kaboom: The Suicide Bombing Game. Both games are two-dimensional side-scrolling games where the player's character moves from side to side on a populated street. The goal is to press a button when in proximity to the most people, which determines your score. In Howard Dean for Iowa, pressing the button triggers the avatar to show an election campaign placard, and those in proximity will pay attention to the placard. In Kaboom: The Suicide Bombing Game, pressing the button triggers the avatar to detonate a bomb attached to them, and those in proximity are killed or injured (Aarseth, 2014). To insist that the two games in this example are really the same game because they have the same systems with incidentally different skins is to fail to appreciate the extent to which content shapes the meaning of gameplay. For example, pushing a button at the right time in one game produces an action that kills NPCs, whereas doing the same thing in the other game produces an action that helps to increase attention on a candidate running for political office. Determining what one's virtual action means is therefore not only a configurative process involving interacting with systems, but also an interpretive one involving understanding content (Klevjer, 2002).
Following Brey (2014), we can further distinguish virtual actions in terms of their “intravirtual” and “extravirtual” effects. Intravirtual effects refer to the outcomes of a virtual action which occur only within a virtual environment, that is, at the content level of a game (and which can be described at a virtual, rather than a physical, mental, or digital, level of abstraction (Floridi, 2008; Søraker, 2012)). For example, overtaking Yoshi may be an intravirtual effect of turning Mario's kart to the left. The virtual action of turning Mario's kart in this way is clearly fictional, in the sense outlined above, insofar as no actual kart has been turned to the left. In contrast, extravirtual effects are the outcomes of a virtual action which occur externally to a virtual environment, or nonvirtually (Søraker, 2012). Brey (2014) argues, building off Searle (1995), that extravirtual effects can be institutional or physical. Institutional extravirtual effects are nonvirtual effects which have institutional social entities as their targets. For example, if turning Mario's kart results in you, via Mario, winning the race, this virtual action could be described in reference to its institutional effect in causing you to win. This virtual action is therefore not only fictional, but it also has an actual effect; you have actually won a race. Physical extravirtual effects are nonvirtual effects on the player (and others, such as the audience) of a video game that impact them in a physical, physiological, or psychological way, such as increased heart rate, emotional distress, or pain in the fingers from pushing a button too often. For example, one may feel upset because they lost the Mario Kart race and want to play the game again to do better. Virtual actions can thus have intravirtual effects within the game, which are fictional impacts, and extravirtual effects on actual people and institutions, which are real impacts.
In terms of digital entities, Brey (2014) offers the example of a virtual flashlight being able to both light up a virtual environment, and incidentally, via its being digitally realized on a computer screen, light up an actual room. In this example a virtual action, turning on a flashlight in a virtual environment, has the extravirtual effect of lighting up a real nonvirtual room.
It is useful to distinguish between virtual entities that have only intravirtual effects captured via their content, and those that also have extravirtual effects captured via their systems or digital outputs (such as lighting up a room). This is useful because it limits the kinds of virtual entities that can be said to be purely fictional, which in turn can help us to ethically distinguish between different kinds of wrongs; namely, those that are purely fictional and those that are not. We can understand virtual wrongs per se to be those which are captured via their intravirtual effects alone. These wrongs are purely fictional wrongs and can only be made sense of as wrong within the context of the virtual world (Young, 2020). These wrongs are captured via their content layer. For example, the moral claim that “virtual murder is morally wrong” might only concern the permissibility of the virtual wrong per se, that is, the moral permissibility of virtual murder in the gameworld. Alternatively, this moral claim may instead focus on the ethics of its extravirtual effects or digital outputs. For example, virtual murder might be wrong because a player's moral character is harmed by engaging with it (see Bartel, 2020; McCormick, 2001). In this case, it is not the virtual action per se—its being a virtual murder—that realizes a wrong-making feature, but rather the virtual act captured via its extravirtual effects or digital outputs, such as how the content impacts a real person who engages with it, which makes it morally salient. Further ethical consideration of virtual wrongs qua content will be the focus of Ethical Issues With Content-Based Wrongs. It is our claim that ethically relevant extravirtual effects extend beyond virtual entities qua content to include the systems that underlie virtual entities. This claim will be the focus of Ethical Issues With Systems: Dark Patterns.
Ethical Issues With Content-Based Wrongs
In the last section, we argued that understanding the nature of virtual actions involves a consideration of both their systems and content layers. In the next two sections, we demonstrate how this distinction can be extended to a consideration of the moral status of virtual actions. That is, understanding the full moral character of a virtual action involves ethically considering both its content and systems layers. In this section, our focus is aligned with much of the video game ethics literature, and particularly the Gamer's Dilemma literature, as it concerns itself with the moral aspects of the content layer of virtual wrongs. What aspects of a virtual action's “skin,” such as its depiction of murder, might be subject to moral criticism? As we will show, the various morally salient aspects of the content of virtual actions help us to understand why the literature has been primarily focused on the content layer of virtual actions. However, as we will argue in Ethical Issues With Systems: Dark Patterns and Content Versus Systems-Based Wrongs: Two Examples, other aspects of a virtual action, such as those which arise within its systems layer, are also morally salient and worthy of analysis when considering the moral status of virtual actions.
To help determine the moral character of a virtual action via its content layer, it is useful to distinguish between its extravirtual and intravirtual effects. This is because doing so tracks the intuitive amorality of many virtual actions per se (as well as fictional actions generally), whereby many virtual actions which produce only intravirtual effects are seen as harmless and thereby morally permissible (see Luck, 2009; Montefiore & Formosa, 2022; Ostritsch, 2017; Young, 2017). For example, if virtual murder in a video game appears to have only intravirtually immoral effects (it is only immoral in the fiction), then it seems to be a morally unobjectionable kind of act (not morally objectionable outside the fiction). However, some moralist approaches assess the ethical character of virtual actions by focusing on the act's intravirtual character (see Bartel, 2012). For example, Bartel (2012) argues that even if there are no immoral extravirtual effects, some virtual actions, such as those that amount to virtual child pornography, are wrong insofar as they are instances of child pornography, which is itself wrong.
Distinguishing between the intravirtual and extravirtual effects of a virtual act also accords with many moralist views in the video game ethics and Gamer's Dilemma literatures about what would be required of a virtual action for it to constitute immoral virtual activity, namely its realizing negative extravirtual effects (see Brey, 2014; Luck, 2009; Montefiore & Formosa, 2022; Ramirez, 2020; Young, 2020). For example, if enacting virtual murder in a video game produces immoral extravirtual phenomena, such as increasing the likelihood of aggression and real-world violence, then it would be (potentially) morally objectionable. Of course, this rests on whether it is true that enacting, in this case, virtual murder in video games has any negative extravirtual moral impacts, a claim that the empirical literature on the impacts of violent video games on aggression suggests might be true (Anderson et al., 2010; Ferguson, 2007; Prescott et al., 2018).
To further spell out the amoralist and moralist positions towards the morality of virtual actions qua content, and their relationship to intravirtual and extravirtual effects, consider how one might apply this distinction to the following analysis from Young (2020). Young considers the conditions required to transform a virtual act of immorality (such as virtual murder) into an immoral virtual action (such as morally impermissible virtual murder) by identifying four features of fictional immorality (which we will refer to as virtual wrongs): what the fiction is depicting per se; the meaning of the fictional content within the fiction taken as a whole; the motivation of the person engaging with the fiction; and the medium through which the fiction is being depicted. Let's consider each in turn.
The amoralist view towards the morality of virtual depictions of immorality is that the virtuality of a virtual wrong renders it morally asymmetrical to its nonvirtual counterpart, partly because virtual wrongs qua content do not realize any actual moral harms, as they are merely depictions of wrongs. Therefore, while murder is morally impermissible, it does not follow that its mere depiction is also morally impermissible, unless that depiction itself realizes immoral effects. For example, a game (or any medium) that involves depictions of extremely violent murder might harm one's moral character through engaging with it (see Bartel, 2020; McCormick, 2001; Ostritsch, 2017). On this view, the virtual wrong is impermissible via its extravirtual effect of harming the moral character of the person playing the game.
Alternatively, bringing about the depiction of certain virtual wrongs may be impermissible regardless of any further extravirtual effects beyond the realization of the depiction. For example, bringing about the depiction of a virtual wrong that is morally repugnant might be immoral, even if this has no further negative effects on anyone's moral character (see Coghlan & Cox, 2023; Montefiore & Luck, 2024). In this case, one's extravirtual act of pressing a particular button is immoral because it realizes the morally repugnant depiction of an action.
A further strategy is to focus on the moral status of a work's meaning. A virtual wrong may be morally permissible per se, but if it contributes to the game's overall endorsement of an immoral worldview, then that may make it morally impermissible (see Levy, 2002; Patridge, 2011; Young, 2020). For example, a game which allows one to humiliate homeless people and treats doing so as genuinely funny could be wrong to engage with, as the game is endorsing the idea that “humiliating homeless people is funny.” By contrast, if the game allowed one to humiliate homeless people and treated doing so as funny in order to ironically make the player sensitive to the serious social justice issues regarding homelessness, and as a result endorsed the idea that “humiliating homeless people is not at all funny,” then engagement with that virtual wrong would not seem to be morally impermissible. Again, what makes the virtual wrong immoral is the extent to which its extravirtual effects are wrong, such as its contributing to harmful real-world norms. In this case, we would need to examine evidence that playing video games really does have some tangible impact on real-world norms, such as norms around humiliating homeless people, or whether the impacts of playing video games, given all the other influences on real-world norms, are negligible or nonexistent.
A virtual wrong may be morally impermissible if it is being enacted on immoral motivational grounds. For example, if one engages in virtual murder because it stands in as a surrogate for their fantasy to enact actual murder, then it may be morally impermissible to engage that fantasy (Bartel & Cremaldi, 2018; Young, 2020). By contrast, if an esports practitioner engages in virtual murder simply to win a competition, then this kind of moral criticism is unavailable, as no such fantasy is present (Montefiore & Formosa, 2023b). Again, this may not bear on the moral status of the virtual action per se; rather, its (potential) immorality will depend on extravirtual features of the world, such as the presence and enactment of immoral fantasizing by an individual.
Determining the overall meaning of a video game, let alone interpreting what motivates an individual to enact virtual wrongs, is not always clear and can be difficult to infer from the content being engaged with alone (see Gribble, 1983; Nader, 2020; Woodcock, 2013; Young, 2013). The medium through which a fictional wrong is depicted may also bear on the interpretive flexibility of one's engagement with virtual wrongs (see Patridge, 2011). For example, reading a book in which a fictional murder occurs may be less likely to be interpreted as indulging in a harmful surrogate fantasy than playing with a virtual reality murder simulator. This may be because of the capacity of different media to afford agential and immersive experiences for the user (see Montefiore et al., 2024; Montefiore & Formosa, 2023a). Beyond limiting interpretive flexibility, the immersive and agential affordances of the media may bear on other morally relevant extravirtual effects, such as a virtual wrong amounting to what Ramirez (2020) calls a “virtually real experience,” whereby choosing to enact the virtual wrong produces an “as if” experience in relation to its nonvirtual counterpart (see also Formosa et al., 2024). That is, it produces similar physical, psychological, and behavioral effects as if it were real. Again, what may be grounding the wrongness of the virtual wrong is its extravirtual effect on the user. In this case, we would need to look at evidence that enacting some wrongs virtually has similar impacts as enacting the same wrongs in the real world, or evidence that some media afford immoral motivational states more so than others.
This section has attempted to show that many approaches to justifying what is wrong about virtual wrongs focus on the extravirtual features of those wrongs. There are of course also approaches that focus on the morally relevant intravirtual features of virtual actions. However, when we appeal to the intravirtual features of virtual wrongs by themselves—their purely fictional features—it is less clear what, if anything, could make a virtual wrong justifiably impermissible. This is not to argue that the amoralist approach is therefore vindicated, since some intravirtual features of virtual actions qua content, such as their constituting virtual child pornography, seem at least in principle capable of grounding a moral justification for the impermissibility of the action. But it is to argue that extravirtual effects at least typically play a substantial role in grounding most claims about the immorality of virtual actions. As such, the focus of both moralists and amoralists in this context is on the presence (or not) of immoral extravirtual features of the actions depicted in the content layer of games. Given this focus, we now turn to exploring whether we can locate the impermissibility of some virtual actions in their systems layer by examining the extravirtual features of those systems.
Ethical Issues With Systems: Dark Patterns
As the previous section has suggested, most of the discussion around the immorality of video game actions has tended to focus on the content layer of a video game. For example, the (potential) immorality of virtual murder seems to depend on the fact that enacting that content (i.e., virtual murder) has immoral extravirtual effects, such as worsening your moral character or negatively impacting real-world views about the wrongness of murder. However, as shown in Virtual Actions as Systems and Content, games are composed of both content and systems layers, as well as the digital entities that realize these. We have already seen the sorts of moral issues raised by the content layer of games. But what about the systems layer? In this section, we explore this comparatively underexplored area in video game ethics by arguing that there are extravirtual effects of a game's systems layer which can be largely insulated from the game's content layer, and which raise a different set of important agential and moral concerns. It is to these cases that we now turn.
There are several ways in which wrongs (or at least conventionally impermissible violations) may occur via a game's systems layer. Commonly discussed examples include acts of cheating or exploiting gaming hacks (see Consalvo, 2009, 2014; Thompson, 2007). For example, using external software such as an aimbot to manipulate the systems of a game to gain an advantage in a multiplayer setting is a form of cheating which might be considered immoral. This is akin to cheating in traditional board or card games, and is wrong for similar reasons, such as because it involves deception, lying, not abiding by agreed-upon rules, and so on. Similarly, the phenomenon of “camping”—where a player stays in one strategic position for an extended period to gain an advantage over other players—while not strictly cheating, is often considered a conventional violation (see Hailey, 2009) as it does not constitute “fair play.” Further, acts involving using a game's systems as a means to immoral nonvirtual ends, such as online abuse in multiplayer games (Mildenberger, 2017), are also clearly immoral for the same reasons their offline equivalents are immoral, namely because they involve the abuse of real people and the associated harm that brings with it. In these cases, it is not the content of the virtual act, but the ways in which the systems of the game are being used immorally or for immoral purposes, that is morally concerning.
Another major way in which the systems layer of a virtual act might be morally objectionable is through what are known as “dark patterns.” “Dark patterns” is a term that emerged in 2010 (Mathur et al., 2021) and, since then, the use of dark patterns has been explored in an expanding literature across games, advertising, social media, e-commerce, web services, and mobile contexts (Gray et al., 2023; Mathur et al., 2021). There are a range of definitions of dark patterns, and the various aspects of dark patterns in different media have been documented. Typically, dark patterns are understood as the intentional design of a user interface to trick, exploit, or manipulate users into doing something that benefits the designers but not the users (Mathur et al., 2021). In the context of dark patterns in games, Zagal et al. (2013) outlined a helpful taxonomy, which includes temporal (e.g., grinding to level up or requiring play by appointment), monetary (e.g., pay to skip and pay to win), and social capital-based (e.g., using social pressure to force continued play or spending) dark patterns. Beyond these clearly exploitative dark patterns are more ambiguous “shades of gray,” two of which are of importance here, namely “encouraging anti-social behavior” (e.g., affording virtual immoral actions), which points back to the issues around content discussed above, and the use of games for nonentertainment purposes, such as being “tricked into learning something” through gameplay (Zagal et al., 2013). This last gray pattern challenges the common definition of dark patterns as necessarily involving something that is not in the user's best interests, since being tricked into learning something beneficial could, in fact, be argued to align with a user's interests.
What are the core ethical concerns with dark patterns in games? At their core, dark patterns raise serious issues of manipulation. Such manipulation can be unethical as it expresses disrespect for players’ autonomy and their capacity for self-determination (Formosa, 2017). Dark patterns treat players as mere means to the game creators’ ends rather than as ends in themselves. Manipulation works by bypassing a person's capacity for autonomous decision-making. It seeks to exploit decision-making vulnerabilities that users may not even be aware they have, in ways that are often opaque to users (Susser et al., 2019). Even if players are on some level aware of these vulnerabilities, dark patterns are designed to make it difficult for them to avoid having those vulnerabilities exploited. Just as intentionally manipulating and deceiving others for personal gain is unethical outside of games, it is likewise wrong to use game systems to trick and subvert player autonomy for the game creators’ benefit.
To illustrate, consider some common dark patterns in games. Temporal dark patterns, such as daily check-in rewards and time-limited offers, exploit our fear of missing out to keep us playing longer than we otherwise would (Foxman, 2014). Pay-to-win mechanics and microtransactions for in-game advantages manipulate players into spending more money than they intend by making the game increasingly difficult or impossible to enjoy without spending (Neely, 2021). A recent study has shown a large increase since 2010 in the number of popular games with cosmetic microtransactions and loot boxes, the latter of which have been likened to gambling (Zendle et al., 2020). Progress gates that increasingly, but often imperceptibly, slow your advancement in order to steer you towards paying for progress can manipulate gamers by exploiting the sunk cost fallacy—the desire to avoid feeling that their prior time investment was wasted (Allison, 2017). Social capital-based dark patterns are also ethically concerning. For example, displaying emotionally manipulative messages from in-game characters, who beg you not to abandon them when you try to stop playing, subverts the player's free choice to disengage, while leaderboards showing that your friends have surpassed your level can subtly manipulate you by exploiting your desire for status and fear of missing out socially. These kinds of dark patterns are often employed by “freemium” games that are explicitly designed to capture the players who will spend considerable amounts of money on the game, known as “whales,” and exploit them as much as possible, while relying on a large base of nonpaying players to make the game appear more popular than it really is (Alha et al., 2014; James & Tunney, 2017).
Beyond these clear moral concerns with exploiting the vulnerabilities of users, dark patterns can also have more subtle negative impacts on player agency and autonomy. The concern here is not that exposure to particular content is itself harmful, as in the case of content-based wrongs, but rather that the systems themselves are harmful irrespective of the content. For example, it is not the content involving the crushing of virtual candy that raises moral concerns with games such as Candy Crush Saga, but rather the way the systems of these games can impact the agency and associated agential powers and autonomy competencies of players. For instance, implementing “near-miss” mechanics that stimulate feelings of frustration can drive longer play sessions than the player would independently choose (Clark et al., 2009; Larche et al., 2017). Where players are induced to keep playing a game solely due to dark patterns, this can inculcate compulsive patterns of engagement similar to those associated with problem gambling (Zendle et al., 2020).
Related concerns can arise even in gamified apps designed for beneficial purposes, such as the language-learning app Duolingo, which employs streak mechanics and other gamification techniques to motivate persistent use (Mogavi et al., 2022). While such systems can help scaffold the user's autonomous commitment to a valued goal, in this case learning a language, they can also come into tension with user autonomy, particularly if they impair autonomy competencies, such as the user's ability to reconsider their ends on reflection. There are interesting comparisons here to philosophical debates over the relationship between autonomy and pre-commitment devices or “Ulysses contracts” (Standing & Lawlor, 2019; Van Willigenburg & Delaere, 2005). But whereas the autonomous agent binds herself voluntarily through a Ulysses contract, dark patterns typically constrain the user's choices without their prior fully informed consent. In this way, manipulative gamification techniques may ultimately detract from the user's autonomy, even if they provide some benefits to them. Gamification can also generate agential conflicts where the user's second-order desires (e.g., to spend their time differently, such as to exercise more) clash with their first-order compulsions to keep engaging with the gamified system (cf. Frankfurt, 1971). There is also a risk of the user rationalizing their continued engagement in ways that paper over their deeper ambivalence.
Drawing on relational accounts of autonomy (Anderson & Honneth, 2005; Mackenzie, 2008; Mackenzie & Stoljar, 2000), we can see how dark patterns threaten to erode the social conditions for autonomy by undermining key self-relating attitudes. Relational accounts focus on the constitutive role that several socially situated self-attitudes or autonomy competencies, such as self-respect, self-trust, and self-esteem, have for autonomy (Mackenzie, 2008). Self-respect involves regarding yourself as a moral equal and a source of normative authority. Self-trust involves a basic confidence in your powers of agency, as well as your convictions, responses, and judgments. Self-esteem requires seeing yourself and your ends and commitments as meaningful, worthwhile, and valuable. Dark patterns can potentially hinder and impair each of these self-attitudes, and thereby express disrespect for a person's autonomy.
Dark patterns work by manipulating, exploiting, or tricking users into doing something that they may not, on reflection, really want to do. For example, temporal and social capital-based dark patterns can manipulate us into playing a game much longer than we would, on reflection, really want to. This can easily lead to us breaking our own explicit commitments or ends (e.g., “I will only play for 10 minutes,” which turns into an hour—see Clark et al., 2009; Larche et al., 2017; James & Tunney, 2017), or interfering with other ends or commitments which we reflectively endorse as much more valuable (e.g., when game play negatively interferes with work, school, or family life). This not only directly interferes with our autonomous pursuit of ends we value more highly, but also undermines our self-respect (e.g., I see myself as being pushed around by dark patterns rather than directing my own actions), self-trust (e.g., I can no longer trust myself to follow through on my own commitments about how long I will play a game), and self-esteem (e.g., I am spending too much time playing a game that I don't really see as highly meaningful or valuable). While gaming addiction fueled by dark patterns is an extreme version of all these negative impacts (Xiao, 2021), even at levels far below what would constitute gaming addiction, less extreme versions of these negative impacts will already be present. In this way, game systems with dark patterns can both interfere with the exercise of autonomy and negatively impact the autonomy competencies that help ground a person's autonomy.
In summary, this section has argued that beyond the much-discussed ethical hazards of certain video game content, the manipulative systems underlying some games also raise serious moral issues around manipulation and the erosion of player autonomy. Even gameplay elements that appear benign in isolation can be ethically problematic when they exploit decision-making vulnerabilities to undermine the user's capacity for self-determination.
Content Versus Systems-Based Wrongs: Two Examples
It is unsurprising that much of the literature on the ethics of virtual acts in video games is concerned with what is depicted in video games, rather than the ethics of the underlying systems of video games that give rise to what is depicted. This may partly be because virtual wrongs are most apparent as content; the depiction of the virtual wrong is much more immediately apparent than any ethical concerns raised by a game's systems. As a result, we are typically most morally sensitive to a game's content and its depiction of virtual acts. Nonetheless, it remains important to pay attention to the ethical issues raised by both the content and systems layers, and to consider the extravirtual effects of both content and systems. For example, a game which involves extremely violent virtual wrongs may produce a kind of ludic resistance for the player, in which they are unable to play the game by engaging with its systems because of how unpleasant, distasteful, or morally problematic its content is perceived to be by the player (Montefiore & Formosa, 2023b). In this case, the narratological qua content, rather than the ludological qua systems, becomes the primary subject of ethical consideration. However, simply focusing on content-based wrongs can be misleading, both because what seem to be content-based virtual wrongs may not in fact be morally problematic, and because games which are not morally problematic at the content layer may still be morally problematic at the systems layer. We now provide some examples to illustrate this important point.
Consider the moral difference between the infamous and banned iPhone app Baby Shaker and a hypothetical puzzle or tile-matching game, much like Candy Crush Saga. The former, because of its content, raises all sorts of moral red flags, but may in fact not be morally problematic, whereas the latter, despite not clearly raising any similar red flags because of its content, may in fact be much more morally problematic because of its systems layer. In terms of its content, Baby Shaker, as described by Young (2020), “involved shaking a (virtual) noisy baby in order to stop it from crying, potentially shaking it until it died: an outcome represented by ‘Xs over the baby's eyes’” (Young, 2020, p. 6). Described in terms of its systems, Baby Shaker incorporated the, at the time, innovative use of the device's in-built accelerometer, which allowed the app to respond to being shaken. The grounds on which one might morally object to Baby Shaker are content-based and focus on certain extravirtual effects, such as the content endorsing an immoral world view about the appropriateness of shaking babies or the content being offensive, repugnant, or obscene. However, it is unclear whether these morally problematic extravirtual effects of the content are in fact likely to occur in this case. It seems unlikely that the developer of Baby Shaker is genuinely hoping to convince those who play it to adopt the world view that they ought to kill babies, and playing a silly game such as this is unlikely by itself to cause anyone to adopt such a view or meaningfully change existing social norms. While we may level moral concerns at the developer of the app for making such a tasteless or even potentially offensive app, it is far from clear whether there really is anything morally problematic with using such an app, despite the moral red flags that the app's content seems to clearly and loudly raise.
In terms of the systems layer, while the novelty (at the time) of the app's shaking mechanic might arouse some interest, the app lacks any obvious dark patterns that might manipulate or exploit the user.
In contrast, consider a hypothetical game much like Candy Crush Saga. This game is a simple puzzle game where a player finds patterns of candy to earn points. It features generic elevator music, is very colorful, and when you complete a puzzle, a voice declares “well done!” In terms of the content layer, the game appears morally unproblematic, beyond perhaps mild concerns that it might somehow promote the consumption of unhealthy foods such as candy. However, in terms of the systems layer, the game contains several dark patterns. One dark pattern of central concern is the implementation of the mechanic of “near-misses,” where the player, by design, falls just short of winning, a design feature common in gambling machines. This pattern arouses and frustrates the user and manipulates them into playing much longer than they would, on reflection, really like to (see Larche et al., 2017). Other noteworthy dark patterns include the game incentivizing the player to make in-game purchases, making it hard to “win” without buying upgrades, implementing loot boxes, incentivizing play through the use of sound and other feedback mechanisms, the use of a friends leaderboard, and prompting the player to keep playing when they attempt to leave by asking if they are sure they want to stop. The game thus has elements of temporal, monetary, and social capital-based dark patterns which, as argued above, are morally problematic and can negatively impact the player's autonomy competencies.
From our analysis, it seems that while Baby Shaker appears prima facie to be morally objectionable because of its content, whether moral condemnation of it is really justified is far from clear. In contrast, the game much like Candy Crush, while not immediately distasteful like Baby Shaker in terms of its content layer, employs several dark patterns in its systems layer which much more clearly warrant moral condemnation. The dark patterns in the systems of the Candy Crush-style game are just that, dark, and while they might not be obvious to the casual user of the game, they are nonetheless clearly morally problematic. Baby Shaker is the opposite case: it is distasteful on the surface, but it is far from clear whether it really is morally problematic. We see this surface-level difference play out in terms of public reactions, with one of these games being banned (Baby Shaker was removed from the iTunes Store) whereas the other is completely excused: Candy Crush Saga, for example, is consistently one of the most popular mobile apps in the world, having produced more than 20 billion dollars in revenue (Sandle, 2023).
What seems to be driving this real-world response is that there are diverging folk intuitions regarding the moral permissibility of games which have seemingly morally objectionable content and those which have morally objectionable systems, although empirical verification of this claim would be helpful. This divergence may be the outcome of an epistemic failing: either an unawareness of the harms of dark patterns, or an awareness of those harms combined with an overestimation of one's ability not to succumb to them. This is the sense in which the gambler feels at fault for continuing to gamble, while not recognizing the role that the dark patterns (or “dark nudges”) employed by the gambling industry play in fueling their addiction (Newall, 2019). This is an epistemic failing insofar as individuals do not know the extent to which dark patterns are harming them; if they did, they would find them more morally objectionable.
Conclusion
Video games are ontologically composed of content and systems layers realized by digital entities. However, when thinking about the ethical issues raised by video games, the literature has tended to focus primarily on the content layer, such as whether a video game includes content that involves murdering or molesting virtual agents. While this focus is indeed important in many cases, it can be misleading, since morally confronting content may not raise any genuine moral concerns if it has limited or no extravirtual effects, and it can be incomplete, since we also need to focus on the extravirtual impacts of the systems layers of video games. Further, by considering both content-based and systems-based wrongs, we can broaden our areas of concern beyond the immoral real-world consequences that a content-based wrong may lead to, by also considering the possible negative impacts on autonomy that can emerge from systems-based wrongs. Our approach thus promises to offer a more complete framework for thinking about the ethical concerns raised by video games and other forms of interactive digital content.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
