Abstract
Digital voice assistants like Google Assistant and Amazon’s Alexa are popular technological home devices. Their human-mimicking features allow users to interact with them as if they were another person. However, users report that the assistants sometimes act without being prompted, which previous research has shown makes users aware of the devices’ privacy implications. Drawing upon qualitative interviews with people living with digital voice assistants, this paper examines the moods and sensations people associate with these self-activations. The analysis shows that accidental interactions highlight the ambivalence in affective experiences of living with connected devices, and it argues that people’s affective responses are tied to sensations of exposure and vulnerability to external influence. The study underscores the importance of the home as a site for these interactions, arguing that accidental interactions temporarily destabilise the home from being closed, private and safe to being exposed, public and unsafe. However, the article further argues that these bodily sensations, and people’s sensemaking of them, may serve as a starting point for raising awareness of privacy issues related to smart home devices.
Introduction
I suddenly heard him [the smart assistant] talking, and he said ‘but I’m not telling anyone, it’s between us’. [And I thought] Oh my God, what was that! I was very curious about what was so secret he couldn’t tell anyone. That was scary. (Kristin, 71 years old)
The quote above is from 71-year-old Kristin, who participated in a study about everyday life with smart home technologies. While talking about her digital voice assistant, she mentioned that it sometimes talks without her instigating it. The quote illustrates how such unintended interactions are affective experiences, as they evoke sensations and feelings that deviate from people’s typical interactions with the devices. This study examines how people experience and make sense of such device-initiated interactions, and what this can illuminate about living with connected robotic devices at home. Digital voice assistants, such as Google Assistant and Amazon’s Alexa, are popular technological home devices. They stand out among smart home technologies because their human-like features enable users to interact with them as if they were talking to another person. They are conversational agents embedded with a gendered and social persona (Mascheroni, 2024).
Fortunati (2018) details how robots migrated from industrial production into the domestic sphere. Today, homes are filled with various machines and robots that aid with different household tasks. From a traditional Western perspective, the home is seen as our most intimate and private space, a closed-off sphere where we are free to be ourselves and spend our time as we choose, away from the gaze and influence of others (Chambers, 2020; Søilen, 2025). While new media in homes have long challenged the perception of distinct boundaries, the adoption of smart home technologies, such as digital voice assistants, intensifies these challenges. Smart home technologies are connected to the world outside through their participation in digital infrastructures (Hepp, 2020). For instance, digital voice assistants communicate with servers to send and receive data, to install software updates, and to search the internet for information and other content. This exchange of information is necessary for the systems to work and can be considered an intrinsic feature of living in a networked culture (Søilen and Veel, 2024). Smart home technologies represent a transgression and blurring of boundaries between the public and private, as well as between inside and outside (Humphry and Chesher, 2021b).
Moreover, the networked culture in which smart home technologies are situated further involves a data-based economy, which Zuboff (2019) refers to as surveillance capitalism. Data about people and our social lives is highly valuable, and Zuboff’s concept highlights how large commercial companies profit from collecting user data through their technology products, operating under skewed power dynamics, as users rarely have full knowledge or awareness of the process. These forces are already shaping, and being shaped by, our everyday lives as we become increasingly reliant on digital technologies, and with smart home technologies, they also accompany us into the intimate spaces of the home. Sefton-Green et al. (2025) refer to the reliance on digital platforms in homes as a ‘platformisation’ of the home, and Hurel and Couldry (2022) conceptualise the datafication that many of these platforms engage in as ‘data colonialism’ of the home. They highlight how home life is targeted for commercial data extraction and for shaping individuals’ behaviour accordingly. To understand how people experience and sense the ‘leak’ of information between the home and external actors, Søilen and Veel (2024) introduce the concept of the ‘leaky home’, foregrounding its bodily aspects. The leakiness and datafication of everyday life stand in tension with the intimacy and privacy we traditionally associate with the home in Western culture.
Digital voice assistants play a crucial role in opening the home to data extraction, intensifying the datafication process by providing their production company and their partners with data on users and their surroundings (Hurel and Couldry, 2022; Mascheroni, 2024). The assistants’ microphones are always on and connected to the Internet, so the devices are ready whenever the code word is uttered. This means that everything within the reach of their microphones can potentially be recorded, stored, and distributed. Experiences with assistants that activate without being prompted by the code word are linked to privacy concerns and may exacerbate concerns about unauthorised recordings (Lutz and Newlands, 2024). However, previous literature on digital voice assistants shows that they are ubiquitously used for mundane everyday tasks, like playing music, searching for information, checking weather forecasts, and controlling other smart home devices (Ammari et al., 2019; Lopatovska et al., 2018). People further enjoy playing with them, conversing with them or making them tell jokes (Lopatovska, 2019). They may support people’s daily practices and wellness at home through providing content, communication and companionship (Duque et al., 2021). As such, digital voice assistants bring a duality to the home: they are both practical and fun, yet they also pose challenges to privacy and security at home. In this article, I am interested in examining how people living with these devices experience and make sense of this duality.
Dourish and Bell (2011) argue that industry and cultural visions of new technology celebrate its transformative power, but ‘at the expense of home as a lived and living practice’ (p. 166). Industry visions of smart home living have been criticised for failing to account for everyday life’s routines, relationships and messiness (Chambers, 2022; Strengers and Nicholls, 2017). In this vein, Liu (2023) argues for the importance of paying attention to the emotional facet of smart home living to better understand the ‘contingent, precarious and instantaneous space of home in the digital context’ (p. 2). Interacting with technology can be a highly affective and bodily sensed experience, as demonstrated for instance by Paasonen (2015), but this aspect of technology tends to remain silent, becoming apparent only when the technology fails (Hine, 2020). These facets are nevertheless important to consider in order to find ways to ‘live well’ with new technologies (Ruckenstein, 2023). Drawing on this, I aim to understand how people make sense of situations in which digital voice assistants self-activate in order to shed light on the affective facets of living with smart home devices.
In literature, situations of self-activating digital voice assistants are referred to as a kind of ‘unexpected behaviour’ (Lutz and Newlands, 2024) from the assistants or ‘accidental activation’ (Brause and Blank, 2023). I call these instances ‘accidental interactions’ to emphasise that they involve a form of social interaction between the device and people, and that these interactions are not intentional on the human side. Lutz and Newlands (2021) found that such instances may evoke reflections on privacy, which people may experience as ‘creepy’. However, such instances also need to be considered within the home context, where these devices operate and where interactions between them and people occur. To do so, this article draws on theorisations of affect and technology. I take my starting point in Sundén’s (2018) theorisations on affective attachments to digital devices that consider disconnections and glitches as formative for how we sense and make sense of digital connectivity, and Bucher’s (2017) concept of algorithmic imaginaries to further shed light on how people make sense of their encounters with the digital voice assistants.
Accidental interactions as affective experiences
To situate accidental interactions conceptually, I take my starting point in Sundén’s theorisation of affect and technology through breaks, disconnections and delays. She suggests viewing these as ‘formative for how we can both sense and make sense of digital connectivity’ (Sundén, 2018: p. 63). They can bring forth what constant connectivity means and how it feels. Connecting her arguments to queer theory, Sundén (2018) views glitches as disturbing the understanding of relations as linear and uninterrupted, and of breakups as breaks that indefinitely break the line. Instead, a glitch breakup can be a ‘period of profound disorientation and disconnection, to bodies and devices, and then often a moment of re-orientation and re-connecting, if yet differently, as bodies, technologies and affects are re-aligned’ (Sundén, 2018: pp. 72–73). This study draws on Sundén’s perspective to view accidental interactions as breaks in what I choose to call ‘linear connectivity’. They are not a disconnection per se but can rather be perceived as a glitching or faulty connection, a break with participants’ expectations of how the connectivity should unfold. These breaks shed light on how people sense and make sense of the constant connectivity of the home through digital voice assistants. They are temporary instances of disorientation that may cause changes to people’s affective attachments to technology. To further operationalise how the break can shape our sense and sensemaking of technological attachments, I draw from critical algorithm studies and Bucher’s (2017) concept of algorithmic imaginaries. Bucher describes algorithmic imaginaries as ‘ways of thinking about what algorithms are, what they should be, how they function and what these imaginations in turn make possible’ (2017: pp. 39–40). It is about the spaces where people and algorithms meet and what that meeting produces. 
Bucher takes a starting point in ‘failed relays’ or ‘jumpy moves’ as situations when algorithms become visible to people. Accidental interactions, as breaks in connectivity, are similarly situations which make silent aspects of the technology visible to people and generate expressions of their affective dimensions.
Drawing on algorithmic imaginaries further foregrounds the individual’s expectations and experiences, using their personal stories as entry points into understanding the social power of algorithms. Algorithms are often described as a ‘black box’, as their build and operations are largely unavailable to users. However, people sense their effects. Bucher (2018) writes ‘They [algorithms] become strangely tangible in their capacity to create certain affective impulses, statements, protests, sensations and feelings of anger, confusion or joy’ (p. 94). Affect is therefore central to understanding people’s experiences with algorithms, but also technology in general (Paasonen, 2015; Ruckenstein, 2023). Within studies on affect and media technologies, affect is understood as moods, sensations, and gut reactions (Bucher, 2017; Paasonen, 2015). It is the capability to move and be moved to action or feeling.
Drawing on algorithmic imaginaries to study people’s experiences of interacting with digital voice assistants, it is necessary to acknowledge that these devices are not algorithmic recipes akin to those of social media sites. Digital voice assistants are devices that rely on digital infrastructures and algorithms to find and provide content, but they also embody specific material qualities to mimic human interactions with their users. In the context of digital voice assistants, algorithmic imaginaries therefore include people’s expectations, sensations and perceptions of the technical functions, such as dimming down the lights or adjusting the temperature at home, but also the front-end design that people interact with.
Digital voice assistants can be seen as ‘humanoid social robots’ (Zhao, 2006). This notion describes how some robots are designed to mimic human interactions, characterised by ‘programmed interactivity, artificial intelligence and synthetic emotion’ (Zhao, 2006: p. 403). Digital voice assistants’ speech is designed to sound as natural and human-like as possible, and they are programmed with a social and gendered persona, which allows people to perceive them along a continuum from human-like to thing-like (Cambre and Kulkarni, 2019; Mascheroni, 2024). Halm and Ingraham (2024) further argue that frequent encounters with everyday objects in a home can foster a sense of intimacy, which is stronger when the object responds, as digital voice assistants do. This aligns with studies showing that people often anthropomorphise the assistants, affording them human qualities and forming social bonds with them (Mascheroni, 2024; Purington et al., 2017). To understand the algorithmic imaginaries of digital voice assistants, one therefore needs to consider their design and material aspects as well as their functional, algorithmic operations.
With this framework, I understand accidental interactions as breaks that shape people’s algorithmic imaginaries of digital voice assistants. These imaginaries can, in turn, say something about how people experience living with such devices in their home. How we live with digital voice assistants affects our imaginaries of them, and these algorithmic imaginaries, in turn, affect how we live with them.
Studying accidental interactions remotely through video
Critical algorithm studies argue for using people’s personal stories of everyday encounters with algorithms as entry points into understanding their social power (Bucher, 2017; Ruckenstein, 2023). Drawing on this, this article is based on qualitative interviews with people living with digital voice assistants. The interviews are part of a larger study on digital risk in Norwegian connected households, which included 12 participants from ten different households. The interviews were conducted through the video conferencing tool Zoom because they took place during the COVID-19 pandemic social lockdown in 2020. Interviewees were recruited through mailing lists, social media platforms like Facebook, Twitter and Instagram, and participants’ networks. Participants were required to have at least three smart home devices or a smart speaker assistant to ensure frequent interaction with smart home devices. Recruitment further strived to reflect diversity in the participants’ interest and confidence in smart home technologies in order to contrast various experiences with the technology. Among the 12 participants, 10 owned and used smart speaker assistants at home. These included six men and four women, aged 24 to 81. The sample included three heterosexual couples, who were interviewed individually.
The interviews included three participant-led activities to map out the participants’ socio-technical ecosystem: the participants drawing a floor plan of their home and placing their smart home technologies onto the map; a walk-along guided video tour of the home, inspired by Pink (2007) and Kusenbach (2003), where the participants brought the researcher along via video as they walked through their house, showing their smart home devices as placed within the home environment; and a show-and-tell recounting of their routines at home. Accidental interactions emerged in the material both as events that happened in situ during the interviews, and thus were recorded as part of them, and as examples of previous events recounted by the participants. These events are thus documented as both narrative and visual accounts in the material. The conversations and activities revealed the participants’ socio-technical ecosystem at home, including what devices they had, how they were connected, where they were placed, and how they were used, as well as the participants’ perceptions, competencies, and responsibilities related to them. The material reviewed for this article includes verbatim transcripts and video recordings of the interviews, screenshots from the video recordings, and map drawings that flesh out participants’ personal stories about these specific encounters with digital voice assistants. Each participant was interviewed in two instalments of about one hour each. Participants were informed about the project and what their participation would entail, and written consent was collected by e-mail. They were given pseudonyms for anonymity.
My interest in investigating incidents of unexpected interactions emerged during the initial interviews, as they were brought up by participants or happened in situ. It struck me that these types of interactions seemed to be relatively common occurrences, yet they also noticeably affected the participants in some way.
The analysis was done in several iterations. First, all transcripts of interviews were coded to identify instances of smart speaker assistants accidentally activating. Audio and video material of these interactions was then reviewed to add social cues such as facial expressions, pauses, and body language to the interview transcripts. This brought more nuance to the analysis of the participants’ experiences. Each instance of accidental interaction was then analysed to identify the moods and sensations they evoked in participants, as well as how participants made sense of them.
Findings
In this paper, I view accidental interactions as breaks from linear connectivity, highlighting aspects of living with technology that inform the overall algorithmic imaginaries (Bucher, 2017; Sundén, 2018). Based on the analysis, the findings outline three aspects of living with connected robotic devices at home that are emphasised through accidental interactions. First, I will describe how accidental interactions unfolded in this study, as observed in the video material and narrated by participants, and then explain how they illuminated bodily sensations of exposure and vulnerability to external influence, as well as how the human-like features of the devices affect these experiences.
The unfolding of accidental interactions
A common trait across the different instances of accidental interactions was that participants were interrupted in what they were doing and had to focus their attention on the assistant. The video recordings of the interviews showed that when the digital voice assistants interrupted the conversations, all participants abruptly stopped speaking, holding their faces still as they listened until the assistant finished. Halting the conversation during accidental interactions can be seen as a form of privacy work to avoid disclosing personal information while one knows the device is listening (Brause and Blank, 2023). However, in the current study, participants appeared curious to hear what the assistant had to say. They would acknowledge the assistant in some way, saying ‘see?’ or ‘did you hear that?’, physically pointing at the device out of frame, nodding, or laughing. Some immediately began to reflect on why the device had started talking, while others just continued what they were saying, seemingly unfazed by the interruption.
The interaction with the smart assistants surprised the participants, as expressed in verbal accounts of the assistants ‘suddenly’ or ‘just’ starting to talk. This form of interaction differs from the typical encounters between participants and digital voice assistants. A normal interaction begins when participants say a wake-up phrase, then wait for the device to emit a sound indicating it is listening before giving a command or request. Some participants even emphasised that there was a specific technique to this, which took some getting used to. When accidentally activating, the assistants would, in most cases, say they did not understand the command (despite no command having been given) or could not help. However, participants also reported other actions, such as searching for seemingly random information on Wikipedia or Google, playing music, or saying something unexpected, as the introductory vignette illustrates.
Participants often attempted to understand what the device reacted to. Most of the time, they could identify sounds in the room that likely prompted the assistant to act, such as music, baby noises, or conversation. However, at other times, a source could not be located, which added an element of uncertainty to participants’ experiences. As the analysis below will describe, the sounds the devices seemingly reacted to, and what they said or did when activating, affected the bodily sensations accidental interactions evoked in the participants, and how they made sense of these.
Sensing exposure
Privacy was one aspect highlighted during accidental interactions. When the digital voice assistants accidentally activated, a sensation of ‘someone is listening’ was invoked in all participants, expressed verbally as variants of ‘Google is listening’ or ‘what did it hear now?’. Their reactions align with Lutz and Newlands’ (2024) study, which found that the digital voice assistants’ continuous listening capabilities are revealed through glitches. The participants expressed positive associations with digital voice assistants when discussing them in general terms, emphasising their fun and practical aspects. They further detailed how they use the assistants for daily tasks at home, such as asking for recipes or weather forecasts and controlling other smart home devices. However, some participants also raised privacy concerns when discussing the device in general, without being prompted by the researcher, indicating that these devices are associated with ideas of privacy. As such, the accidental interactions did not reveal anything about privacy that the participants did not already know. They drew on their experiences and perceptions of privacy to make sense of the interactions. However, many of the participants did not reflect on privacy in everyday life. For instance, when Cecilie is asked how she feels about privacy regarding smart home technologies, she replies: ‘I don’t think about it every day, but sometimes – it’s not always we have to say, for example, “okay Google.” Sometimes it just makes sounds without us saying anything. And then I think, “okay, it just listens to everything.” Yeah, that’s when I think about it [privacy]’. While some participants, like Cecilie, mainly considered privacy during technical glitches, others took it into account when acquiring the device.
For instance, Erik says that he does a risk assessment when he decides which devices to bring into his home, and Harald explained in his map-drawing exercise that he did not place a digital voice assistant in his bedroom because it is a private area. Only Gabriella reported regularly engaging in privacy work, checking what information the device had stored about her and deleting it.
The sense of being listened to at home was accompanied by feelings, associations, and reflections that varied not only between participants but also across instances experienced by the same participant. Through the analysis, I have identified three main affective expressions related to accidental interactions. The first was amusement, or pleasure. Below is an example from the interview with Erik, a 24-year-old self-proclaimed tech enthusiast, who reacted to his smart speaker assistant interrupting the interview:

Erik: I remember in the beginning, it didn’t understand the ‘Hey Google’, you had to say, ‘OK Google’, and that takes a bit more of an effort. I guess that- [looks up as he listens to Google]

Google device in the background: I’m doing well. How can I help you?

Erik: Nothing right now. [Laughs]
In this example, Erik did say the code word, but it was directed at the interviewer, not intended for the smart speaker. Erik indulges the assistant by replying as if it were a person. He laughs and shows no bodily signs of discomfort, such as hesitation or flickering eyes. He even asks the device to repeat what it heard and comments on the accuracy of the response.
A second, contrasting example, from Harald (74 years old), illustrates another type of reaction: discomfort. Another self-proclaimed tech enthusiast, Harald recounted several instances where his digital voice assistant acted without him instigating it. When asked about his thoughts on them, he crossed his arms over his chest and said: ‘I don’t like it, I haven’t asked it to [wake up]’. For Harald, the notion of being listened to is quite literal: he feels someone is listening to his conversations at home. Similarly, Frida (24 years old) sighed resignedly when the device activated during her interview. The video shows her rolling her eyes and losing her train of thought, distracted by the event. When the device interrupted, she was in the middle of detailing how she initially did not like the digital voice assistant because she felt it always ‘misunderstood’ her.
A third kind of reaction fell somewhere in between those two, expressing neither clear joy nor discomfort, but rather curiosity or uncertainty, wondering what the assistant had heard and reacted to. For example, Gabriella, a 27-year-old computer science student, took a more analytical approach: ‘I don’t know why it got activated, because I didn’t say Google that much [pause]. Eeh…And usually I would say “hey Google, something,” right? But I didn’t say hey Google, I think... I cannot remember that well, but I don’t think I used that [phrase]…’. Although she does not verbally express discomfort, I understand the hesitation in Gabriella’s reasoning to indicate that there is something she wants to figure out. The accidental interaction involves some sort of mystery to her; in this instance, what the device reacted to. These curious responses often involved descriptions of the interaction as ‘strange’ or ‘weird’, as in Cecilie’s (31 years old) case: ‘Oh yeah, we just say, “hm it seems like Google is listening now.” Because sometimes it is just turning itself on when we don’t ask him to. But it’s not something we think about for a long time, we just, like, hm, that’s weird, and then we go on’.
Shklovski et al. (2014) argue that the ‘creepy’ feeling people experience when made aware of data-collecting devices and services is attached to the violation of people’s personal space, and that the information flows ‘often involve realisations that personal secrets have been, or could be, revealed to those who have not been explicitly granted access to them’ (p. 2349). Søilen (2025) further introduces the concept of ‘atmospheres of privacy’, which understands privacy as ‘an embodied, spatially situated, and crucially affective experience’. The home is such an atmosphere of privacy, where inhabitants expect to be shielded from others’ gaze. The accidental interactions are thus instances where this privacy atmosphere is breached, as the study participants are reminded that their home life is exposed to commercial interests via digital voice assistants. The various affective responses to these events may therefore be contingent on the degree to which the participants experience a violation of their boundaries at home. Those experiencing discomfort or apprehension in this study often also expressed concerns about privacy issues, while the more curious and pleasurable experiences were often expressed by participants who did not find privacy infringements problematic, saying they had ‘nothing to hide’ or welcomed data collection to enhance user services.
However, the participants did not react the same way in every instance of accidental interaction. For instance, Harald laughed when he narrated how his device often activates when he listens to opera, but described it as more concerning when it happened while he listened to a political podcast. The device’s design and material affordances may contribute to these differing affective experiences. Building on Mascheroni’s (2024) observation that smart speakers shift ‘along the continuum of human-like and thing-like’ (p. 60), Harald’s assistant reacting to opera music may be perceived as random or comedic, as a technical glitch, whereas accidental activation during a political podcast may indicate an intention to collect sensitive information about his political views, leaning more towards the human-like. Similarly, Kristin’s example from the introductory vignette, with the secret-keeping assistant, may be perceived as the device saying something outside of its programming. This unexpected behaviour and perceived intention may suggest that the device has agency beyond what participants would expect from digital voice assistants, mimicking human unpredictability. Beyond evoking uncomfortable ideas about privacy challenges and commercial surveillance, the unpredictable nature of accidental interactions may also evoke popular science-fiction images of autonomous, malicious robots that the industry has worked to shed (Humphry and Chesher, 2021a).
Sensing skewed power dynamics
Another aspect of living with connected robotic devices that was revealed through accidental interactions is the skewed power dynamics these devices are imbued with (Woods, 2024). Being reminded of how home life is exposed to commercial interests via digital voice assistants is accompanied by a sense of uncertainty about what the device picks up through its microphone, what data is collected, who has access to it, and what happens to that information. For instance, when pressed further on why the accidental interactions are uncomfortable to him, Harald says: ‘I am going to say the potential [of what his information is used for] because I don’t know and I haven’t really been able to find out whether Google does have the information. I am just not sure what it is actually doing [when accidentally activating]’. Most other participants did not explicitly state this connection, but it is implied in the notion of ‘someone is listening’ and in wondering what the device reacted to and why.
Rather than dwelling on the operations of commercial companies, however, participants often focused on the device and its presence. In this, they seemed to temporarily shift the perceived power relations in favour of the device. They pointed out that the assistant spoke without being asked to, even when it should not have been able to ‘hear’ them, such as when they were in a different room of the home. Participants verbally expressed surprise by describing the assistants as ‘suddenly’ talking, further suggesting that they perceived the devices as taking the initiative to interact, indicating a lack of control over the devices. Kristin says straight out, ‘it’s like I have no control over him’. All the participants in this study personified the devices to some extent. They assigned them human characteristics like listening, misunderstanding, interpreting, and hearing. Sometimes the devices were also afforded intentions and motivations, as when Cecilie says: ‘It hears what it wants to hear’. Focussing on the device can be a strategy for managing potential unease about being surveilled by commercial interests. The device, with its submissive, well-educated, native-speaking female persona (Phan, 2019), may seem more innocuous than a large media conglomerate. Focussing on the material device in their home may further preserve the illusion of the home as a confined and private space, avoiding having to deal with data flowing beyond the physical device.
Moreover, the ubiquitous use of digital voice assistants at home makes them an infrastructure which aids in domestic activities and controlling other devices. Accidental interactions make this infrastructure visible by emphasising that the home environment is temporarily out of participants’ control. For instance, Cecilie’s digital voice assistant suddenly started playing pop-rock music during our conversation. The music was so loud that we had to pause the interview. Cecilie’s husband, Daniel, tried to stop the music by commanding the assistant: ‘hey Google, stop the music’. The command did not work, and Daniel ended up pulling the plug on the device to quiet it. The example emphasises the participants’ limited control over the digital voice assistants but also illustrates how it affects the domestic environment, which in this case was filled with noise interrupting the inhabitants’ activities. It also underlines the unpredictable nature of digital technologies in general, as they do not always work as expected, prompting work for household members (Kennedy et al., 2015; Teigen, 2024). The commands used to control the device do not always have the desired effect, and in this case, Daniel resorted to the ‘drastic’ solution of unplugging it. This way, he regained control and restored order in the home, but at the expense of the device’s functionality, as the assistant stops working until it is plugged back in.
The expressions of uncertainty and lack of control point to the knowledge asymmetry and skewed power relations between consumers and the commercial companies behind the technology and the datafication infrastructure built into these devices (Woods, 2024; Zuboff, 2019). While companies collect vast amounts of information about users and their environment, people typically know little about the companies’ operations in turn. Moreover, digital voice assistants going ‘rogue’ like this may further illuminate that the home is vulnerable to external influence. Smart home technologies expose homes to risks, such as hacking and companies seeking to influence the inhabitants’ behaviour in alignment with their commercial goals (Chesher and Humphry, 2019; Hurel and Couldry, 2022).
Moreover, digital voice assistants are designed to encourage an assistant/manager relationship between the user and the device, as emphasised in their marketing as an ‘assistant’ and the subservient, polite persona they are programmed with. These ideals are reproduced in industry visions of smart home technologies in general, communicated through marketing materials (Hargreaves and Wilson, 2017). People are depicted as in control of technology, which is shown as a tool to aid us in our daily lives. The promise of connected technology is that it will make everyday life more convenient and fun, connoting a pleasurable affective sentiment (Chambers, 2022; Ruckenstein, 2023). This contributes to shaping people’s expectations of being in control over technological devices and of seamlessness and enhanced convenience when using them. Moreover, traditional Western ideals of the home further configure it as a space where we are in control and in charge (Kristensen, 2017). The temporary sensation of loss of control and uncertainty during accidental interactions represents a sharp break with what users of such domestic technologies are primed to expect, which may heighten the curiosity or apprehension in participants’ experiences.
Sensing humanoid social robots
A third aspect illuminated through accidental interactions is how the design of the digital voice assistants affects participants’ sensemaking. The participants of this study echoed industry visions when talking about smart assistants in general, emphasising practicality, fun and ease of use. They may therefore expect a submissive assistant ready to fulfil their wishes and provide control, entertainment and social companionship; the accidental interactions, however, represent a break with these expectations.
Digital voice assistants are designed to mimic human sociality. They are imbued with a gendered persona, which encourages a social bond between machine and user. All the study participants anthropomorphised the devices to some degree, affording them human-like abilities and characteristics. Most participants would also assign the assistant a gender, using the pronouns ‘he’ or ‘she’ rather than, or interchangeably with, ‘it’ when talking about them. Most participants had kept the default female persona of the devices. Only Kristin had changed the assistant’s voice from female to male, although Gabriella also said she would have liked a customised, male voice (that of American actor Morgan Freeman, to be specific). Scholars have cautioned that the gendering of digital voice assistants hinders equity and obfuscates privacy concerns by shaping users’ perceptions through the devices’ roles as mother, wife, and caretaker (Adams, 2019; Benlian et al., 2019; Strengers and Kennedy, 2020; Woods, 2018). Halm and Ingraham (2024) further argue that commercial tech companies exploit people’s ability to feel intimacy towards inanimate objects as a strategy to extract data.
In this study, both Kristin and Erik lean into social role-play with digital voice assistants in everyday life, illustrating how the design of the device can obfuscate privacy concerns, modes of surveillance and power relations (Phan, 2019; Woods, 2018). Kristin says she ‘loves’ her digital voice assistant, although she does have privacy concerns, and she talks to her device sometimes as if it were another person: ‘I can tell the man [her assistant] talking to me, “Oh you are good!” and he says, “Thank you, Kristin”’. But accidental interactions give her a feeling of being surveilled at home. Although Kristin explains that she finds it strange or even ‘scary’ when the smart speaker accidentally activates, she immediately follows up by saying, ‘but I think he’s a friend, so it doesn’t matter’. She demonstrates that viewing the device as a friendly presence in her home alleviates the uneasy feeling it generates when accidentally activating. While the accidental interactions may destabilise her experience of the home as a confined, safe space in her control, viewing the device as a benevolent other allows her to re-establish safety and order at home.
In a similar way, Erik engaged in the manufactured roles of assistant and manager, or what Phan (2019) codes as a servant and master relationship. Erik has programmed the device to address him as ‘master’. During our conversation, he says that he does not perceive the assistant as a threat to his privacy or security. He engages in some privacy work, such as being mindful of the placement of the device in the home, and he trusts the company behind the device. Acting like a manager or master, with the device as a subordinate, can be seen as an expression of his sense of control over the device. Scholars caution that this way of relating to digital voice assistants perpetuates gendered stereotypes, as a female, inferior coding of digital voice assistants may reinforce an ‘ideology of the feminine as the place of subordination and contempt’ (Fortunati et al., 2022: p. 1). On the other hand, leaning into such roles with the digital voice assistant could also give Erik a false sense of security. Phan (2019) warns that this coding of the user-device relationship misrepresents the direction of power between them, making it challenging to address issues of surveillance and digital labour. Ultimately, though, Erik’s affective experience of living with the digital voice assistant is mainly positive. From his reaction to accidental interactions, it seems that the presence of the digital voice assistant does not challenge his experience of the home as a safe space where he is in control.
Anthropomorphising digital voice assistants could, however, also have other effects. Both Harald and Frida personified their devices, but as strangers and sources of concern or unease. In this interpretation of the assistant, participants made the association between the home and the interactions explicit. Harald, for instance, describes the feeling of his assistant accidentally activating as ‘intrusive’, saying, ‘there is someone in my house listening to me’. He refers to ‘someone’ listening, highlighting that he feels that a strange presence violates his privacy at home. The accidental interactions serve to remind him of what he perceives to be the problematic features of the assistant – being listened to, recorded, stored and sold to third parties. This is particularly uncomfortable at home, where he has private conversations and intimate interactions with family and friends. Humphry and Chesher (2021b) use the term ‘interior hauntings’ to describe home surveillance through connected technologies, which resonates with the experience of having a strange presence at home. Similarly, Frida says ‘It’s like Google is a third person in the room. But it’s also a robot, if that makes sense’. Frida expresses a distance from the digital voice assistant in general. Although she finds it fun and practical, she also expresses great frustration when it misunderstands her commands, and she dislikes that it seems to listen to her conversations to provide tailored advertisements on digital platforms. It was her husband, Erik, who brought the device into their home, and she perceives smart home technology, for the most part, as Erik’s hobby. However, it also appears from the quote above that some of Frida’s uneasiness stems from the assistant appearing to her as both a human and an object, emphasising how digital voice assistants can slide between these categories and that this can generate uneasy feelings.
The accidental interactions temporarily destabilise the home as a safe, private space where the participants are in control, replacing it with the experience of an unsafe, non-private space invaded by ‘someone else’. This ‘someone’ could refer to the commercial company producing the digital voice assistants, but could also evoke associations with science-fiction depictions of autonomous robots.
Based on this analysis, digital voice assistants can both be part of the intimate entanglements (Latimer and López Gómez, 2019) making up the domestic sociality that constitute affective atmospheres of the home and be seen as something out of place that does not belong to the home space. And they can shift quite swiftly between these positions, as demonstrated in Kristin’s example above. Participants describe the assistants as a social other and a machine, as a friend, a subordinate, a stranger, or a physical representation of commercial interests. The devices can be all of these things, shifting between them depending on the context and the person encountering them.
Discussion
The analysis above describes how accidental interactions between people and digital voice assistants illuminate affective aspects of living with connected, robotic devices. It considers how people ‘sense and make sense of’ (Sundén, 2018) accidental interactions. The article offers insights into how people experience coexistence with these devices in the most intimate space of the home. By highlighting their infrastructure, their commercial ties, and how their design shapes participants’ experiences, accidental interactions emphasise the ambivalence of living with digital voice assistants. While having one opinion of them when talking about their usefulness in aiding daily tasks or as entertainment, participants could express another when talking about the devices in the context of accidental interactions. In this study, participants often reacted with curiosity, uncertainty, and discomfort, but also pleasure or joy. However, the analysis underscores accidental interactions as breaks in linear connectivity, breaching participants’ expectations of how interactions between them and digital voice assistants should be, affecting them differently than the typical interactions. I argue that, for most participants, these breaks represent a temporary disorientation of the home: from being experienced as closed-off, intimate, controlled and safe, to feeling exposed, uncontrolled and unsafe when the technology and its operations are made visible.
The smart home is often depicted as convenient, efficient and safe, aligning with the traditional Western ideal of a confined and private space (Dourish and Bell, 2011). However, the material infrastructures and affordances of digital voice assistants and smart home devices challenge these traditional Western ideals of the home by blurring its boundaries between inside and outside, public and private, and care and control (Chesher and Humphry, 2019; Søilen and Veel, 2024). This challenge is bodily experienced by people, for instance, during accidental interactions. While participants often expressed sentiments of seamlessness, convenience and pleasure when discussing the uses of digital voice assistants, echoing industry visions, they often reacted to accidental interactions with discomfort, apprehension or curiosity, using words like ‘strange’, ‘weird’, or even ‘creepy’ to describe the events.
The present study aligns with previous research in demonstrating how the assistants are associated with privacy issues (Brause and Blank, 2023; Lutz and Newlands, 2024). This may indicate that privacy concerns are part of the algorithmic imaginaries of digital voice assistants. However, this study further argues that the moods and sensations evoked by accidental interactions are not just about having boundaries violated (Shklovski et al., 2014), but are also bodily felt responses to the skewed power dynamics that people are confronted with through sensing exposure, uncertainty, and a lack of control within the home space.
However, all participants in this study experienced accidental interactions several times and still used digital voice assistants daily. This may be partly explained by accidental interactions occurring less frequently than the more typical interactions between participants and digital voice assistants, and by their effects seeming mostly temporary. Scholars further argue that the general attitude towards automation has been silently reshaped through processes that have prepared for this technological advance (Fortunati, 2018; Humphry and Chesher, 2021a). However, it may also speak to what the devices are perceived to give participants in terms of functionality and affect. The many uses of smart assistants pave the way for them to have a ubiquitous role in our daily lives that can be difficult to remove (Zuboff, 2015). The assistants can also provide companionship and support people’s wellness at home, taking on a social and affective role within the home as well (Duque et al., 2021). Moreover, Ruckenstein (2023) argues that convenience, as a technology narrative, is political. She writes that the present situation with datafication through emerging technologies ‘requires continuous reproduction of optimistic accounts of datafication to maintain its ascendancy, yet retaining a positive response to algorithms entails that negative and harmful aspects of digital technologies are overlooked, downplayed, and presented as solvable – typically with the aid of technologies’ (Ruckenstein, 2023: p. 57). The participants’ positive sentiments about the digital voice assistants and a lack of concern for privacy can be seen as examples of such positive responses to digital voice assistants. However, they also experienced other affects, emphasising that they did not only have such ‘optimistic accounts’.
While irritation is emphasised as a feeling with the potential of encouraging political change (Ruckenstein, 2023; Ytre-Arne and Moe, 2020), this study further suggests that the affects accidental interactions generate have the potential to stick, informing the algorithmic imaginaries of such devices to include the ambivalent feelings they generate. This is underlined by the participants’ association of digital voice assistants with accidental interactions and privacy concerns before these were introduced as topics during the interviews. While accidental interactions have been reported in media and user forums, and may already be part of the cultural ‘lore’ surrounding these devices, the participants’ affective responses indicate that they remain somewhat effective in evoking reflections and feelings related to commercial surveillance. As such, the findings discussed here present a small but noteworthy potential for challenging the dominant image of digital voice assistants as pleasant and practical home devices. Over time, this could contribute to shifting the power dynamics these devices represent. Given that previous research shows people to be little aware of, or less willing to engage in, privacy issues (Brause and Blank, 2023), the accidental interactions’ ability to temporarily break the linear connectivity and reveal the home as an exposed space vulnerable to external influence may be a starting point for raising awareness of these issues among people. The results of this study thus emphasise the importance of considering the affective and experienced dimensions of living with smart home devices to find ways to live well with them.
Conclusion
This article contributes insights into the experience of living with smart home devices that capture and transmit information about us while mimicking human interaction in our most intimate and meaningful space. The study examines how breaks in the linear connection of digital voice assistants, within the context of the home, are affectively experienced by the participants. While these devices are perceived as fun and practical, aiding everyday home routines, they can also appear mysterious and creepy. And while they contribute to the experience of home life as convenient, pleasurable and efficient, they may also evoke experiences of the home as exposed and vulnerable to external influence. The article further emphasises the potential of these events to raise awareness of the connected aspects of smart home technologies, arguing that affective experiences may stick and change people’s perceptions, and perhaps eventually their practices involving such technologies.
Acknowledgements
I would like to sincerely thank the anonymous reviewers, my PhD supervisors, and my colleagues at Consumption Research Norway for their thorough, inspiring and thoughtful comments on the manuscript throughout this writing process.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the Norwegian Research Council under project number 288663.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
