Abstract
In this article, we travel back to the early days of experimental use of cochlear implants (CIs) in the 1970s, when unsettled expectations of the device and broad investigations of its effects began to settle and center on speech outcomes. We describe how this attention to speech outcomes coalesced into specific understandings of what CIs do, and how implicit or explicit understandings of CIs as bionic devices that normalize hearing influenced research on and expectations of CIs into the present. We conclude that accumulated evidence about what is known and unknown about experiences and materialities with CIs calls for a decisive break from the metaphor of the bionic ear. This shift would create a space to reconsider the “deafness of history and the present,” as well as experiences of brain–computer interfaces that are inclusive of nonnormative life. This article is based on fieldwork in research and clinical facilities in Australia, Canada, and the United States. It included forty-three interviews with clinical experts and leading researchers in the fields of audiology, psychoacoustics, and neuroscience, among them scientists involved in the development and commercialization of one of the first CIs.
The cochlear implant (CI) is a prosthetic device that produces the sensation of hearing. It is primarily used by people who are deaf or who have hearing levels above ninety decibels (de Kleijn et al. 2018). For people with the latter hearing threshold, speech would not be audible and only very loud sounds would be heard (CDC 2022). During experimental use of the device 1 in the late 1970s, the CI was regarded as a potential aid to lip-reading, and research was oriented to receptive and perceptual processes—what people might hear with the CI. However, the teams carrying out early trials quickly noted that some people using the devices could hear more than expected. For instance, researchers recounted their surprise at observing that one woman using the device could hear and converse with her husband on the phone in the absence of visual language cues. This experimental research consolidated the already circulating metaphor of the bionic ear, affecting explicit and implicit conceptions of the device as having the potential to replace the functions and outcomes of a “normal” hearing system.
In this article, we travel back to the early days of CI experimental use in the 1970s in Australia, when unsettled expectations of the device and broad investigations of its effects began to settle. We explore the “interpretive flexibility” surrounding CIs as “a process through which bodily faculties and technologies are shaped together and brought into mutually constitutive technological frames” (Marshall 2014, 950-51; Mauldin 2016). Anthropologist Michele Friedner has explored this interpretive flexibility, specifically the extent to which hopes and expectations about the normalizing potential of CIs have converged and materialized in the habilitation practices that accompany their use. She argues that these hopes and expectations are intimately tied to speech development, which is seen as essential to become “near to or adjacent to normality” in a broad normative sense that includes, but extends beyond, speech (e.g., mainstream schooling; Friedner 2022, 7).
Building on Friedner’s work, we examine how the hopes aligned with the bionic metaphor led to subsequent research and clinical attention being focused on a constrained set of outcomes for which the device was valued. We contend that beliefs and values associated with the bionic ear led to the collapse of the space between hopes and expectations regarding CIs and their potential to normalize hearing and speech. Not long into experimentation with the CI, it was no longer simply hoped but expected that deaf people might hear and speak through use of the device. These expectations were materialized in different ways. If Friedner has documented how educators and clinicians materialized hopes and expectations of CIs in habilitation practices, we further argue that these hopes have been materialized in CI users’ sensory development. By examining research on CIs from the 1970s to the present, we offer insights into how this happened.
First, we consider the articulation of history and biology, drawing on the work of sociologist Hannah Landecker, tracing different therapeutic approaches and auditory prostheses that shaped perceptions of deafness as an increasingly malleable state. We then turn our attention to how these understandings of deafness led to a novel hearing system in the form of electric stimulation, producing new sensory processes for CI users. Overall, this first section explores how hopes, expectations, and their convergence crosscut the history of CIs conceptually and materially.
Next, we explore the consequences of CI research that invests in the notion that neuroplasticity will allow users to overcome the device’s limitations and reach normative outcomes expected of the “bionic ear.” We map out how much of this research disregards the distinctive neural processes of CI users and different profiles within this group. Drawing on the example of “unexplained variability” in CI research regarding speech outcomes, we argue that this variability might be understood instead as an artifact of a research process that largely black boxes CI users’ perceptual experiences.
In conclusion, we argue for abandoning the normative hopes and expectations condensed into the bionic metaphor. We contend that this would improve engagements with the materiality and experiences associated with CIs to produce different deafnesses of the future.
We draw on fieldwork carried out over two years in research and clinical facilities in Australia, Canada, and the United States. 2 This included forty-three interviews with clinical experts and leading researchers in the fields of audiology, psychoacoustics, and neuroscience, among them scientists involved in the development and commercialization of one of the first CIs. Our research also included observations and interviews at a major conference that brought together therapists, educators, fundamental researchers, and CI users and their families. Data obtained from fieldwork were complemented by literature reviews of historically significant scientific and clinical understandings of hearing, research, and interventions associated with deafness.
The History of Deafness and the Deafness of History
Deafness and the experience of deafness are shaped by factors ranging from social to political, legislative, educational, and medical (Mauldin 2016; Snoddon and Paul 2020). To understand the factors that have influenced CI users’ experiences in the present, we draw on the concept of the “biology of history” (Landecker 2016). In her study of the history of antibiotic resistance, Landecker (2016, 21) advanced a novel approach to explore the relationship between forms of life and history. She asserted that, traditionally, in studies of “the history of biology, ideas of bacteria change.” In other words, humanities and social science scholars interested in science, technology, and medicine tend to study how knowledge is produced and how it changes over time.
By contrast, the biology of history focuses on how knowledge or ideas produce changes in the materiality of life over time. Consequently, Landecker (2016, 21) traced how “the bacteria of ideas change,” first through greater knowledge of bacteria, followed by the development and use of antibiotics. Their use was associated with material changes in microbial life and the emergence of antibiotic resistance. The bacteria of history were changed. Knowledge, practices, and materiality are fundamentally linked in studies of the biology of history and are understood as producing new realities. As Landecker (2016, 23) succinctly concludes, “We used to think a certain way about antibiosis and pathogens. And then we changed the future.” So, too, in the case of deafness and CIs.
Adapting Landecker’s argument, we propose that in the history of deafness, ideas or understandings of deafness change. In the deafness of history, the deafness of ideas changes: that is, the materiality that shapes and reflects, for instance, sensory and cognitive processes. In other words, throughout history, many deaf people’s experiences have taken shape at the intersection of biological models, biotechnology, embodied sensory processes, and pressures toward normative (re)habilitation. Over the next three subsections, we consider what we can learn about deafness in the present from these two perspectives.
Producing Deafnesses in the History of Deafness
Attention to the history of deafness situates CIs as one of many devices in a long history of interventions meant to repair a perceived sensory deficit and normalize deaf people’s communication strategies. The CI is currently the most widely used brain–computer interface, with experimental models developed in the mid-twentieth century at different research centers around the world (Mills 2011). A turning point in the history of the device occurred when, following an increasing number of experimental implantations in the 1970s and after an independent study of its effectiveness and safety in 1977 (Hannaway 1996), the US Food and Drug Administration (FDA) approved the first commercialized device in 1984. 3 Though still controversial in deaf communities, CIs are now used by approximately 736,900 people worldwide (NIH 2021), in increasingly diverse populations ranging from infants to the elderly; people who are pre- and postlingual (in oral or signed languages, or both); and people with congenital or acquired deafness, reduced hearing levels, or single-sided deafness. The device’s external sound processor converts incoming sounds into electric signals that stimulate a surgically implanted electrode array in a person’s cochlea. The electrodes then activate different regions of the frequency-sensitive cochlea and the stimulation is transduced into nerve impulses.
The term “bionic ear” was invoked to describe the CI early in its development. The term positioned the device as an example of the “limitless potential of science” in order to promote use of the CI and boost the public’s trust in the device (Bosteels and Blume 2014; Lloyd and Bonventre 2021). An early European patent for CIs reflected these aspirations, describing the device as “a cochlear prosthesis, or implantable hearing prosthesis system, or bionic ear” (Crosby et al. 1985). Although bionics are devices designed to replicate biological systems (Vincent et al. 2006), in scientific and medical literature, the signal sent by CIs has long been described as “impoverished” and “degraded” (Moore and Shannon 2009). This ambivalent positioning of the device fueled ambitions to improve the technology and associated clinical interventions to overcome its limitations, with the goal of replicating hearing outcomes associated with a typical biological hearing system to the greatest extent possible (Rhoades and Glade 2020; for further details, see Lloyd and Bonventre 2021). Altogether, this rhetoric positions hearing with CIs as simultaneously “more than” deafness and “less than” typical hearing, affecting what hearing with CIs is thought to be at neurobiological to experiential levels. By focusing on replicating typical biological hearing, this rhetoric diverts attention from people’s sensory experiences with the devices, as “variations of the lived body” (Einstein and Shildrick 2009, 298), and instead follows long-standing deficit-based visions of deafness and efforts to “cure” it through “technological salvation” (Haraway 1991).
Deaf scholar H-Dirksen Bauman traces the roots of this negative understanding of deafness to a century-old metaphysical framework that links humanness with spoken language (Bauman 2004). In the late nineteenth century, Darwin’s studies of language as an evolutionary trait contributed to the idea that we should understand “ourselves as becoming human through speech” (Bauman 2004, 243). Darwin’s theory of language evolution posited the interdependence of thoughts and words, specifically linking the possibility of conceptual thinking with the development of spoken language. Both were thought to be part of the evolutionary process considered specific to humans (Bauman 2004).
The reception of Darwin’s studies in the field of education consolidated ideas of an essential difference between deaf and hearing individuals based on the ability to understand and reproduce spoken language. This resulted in the development of educational practices to correct deafness and overcome perceived differences through the development of speech regardless of a person’s hearing levels (Bauman 2004). With this focus, ideas about language and speech development became metonymic for curing deafness, progressively reinforcing the association between the two through educational and clinical interventions, as well as technologies. Historian Rebecca Edwards cites Horace Mann, in his role as the secretary of the Massachusetts Board of Education in 1844, as the first U.S. educator to promote oralism (favoring the communication of deaf people through speech and lip-reading) rather than manualism (which favors signing as the primary means of communication) to this end. Oralism was henceforth promoted in mainstream classrooms (Edwards 2005).
The oralist approach that was initially oriented toward expressive speech progressively incorporated considerations of receptive speech, following twentieth-century developments in hearing technologies (e.g., hearing aids). Development of these devices drew insights from amplification technologies dating back to the seventeenth century, such as ear trumpets, and more recent technologies such as the telephone (Mills 2011). Over time, a broader range of hearing levels were considered targets for technological “repair” through increasingly miniaturized devices (for further details, see Mills 2011). Among these, nineteenth- and twentieth-century technologies including the hearing aid and CI, combined with oralist interventions, increasingly framed deafness as a sensory profile that could be altered on expressive and receptive levels, with the effect of making deaf people more like their hearing peers. Deafness, from this perspective, was no longer conceived of as an irreparable state but as a condition requiring treatment through medical, educational, or technological interventions (Mauldin 2012).
The impact of technologies on people’s experiences of sound made it more clinically relevant to parse out different types of deafness and hearing levels based on the extent to which they could be modified by the use of devices. Different categories of hearing levels—mild, moderate, severe, and profound “hearing impairments” (de Kleijn et al. 2018)—were developed, with each defined based on the decibel and frequency thresholds at which people accessed sound. Clinical and educational interventions were increasingly interested in people with significantly reduced hearing levels, including people for whom sound amplification through hearing aids had little or no effect due to their biological hearing systems (e.g., atypical outer ear anatomy, nonresponsive sensory organelles in the cochlea).
Over this history, deafness and significantly reduced hearing sensitivity came to be seen as a range of states or deafnesses (Mills 2015). This was a turning point in ideas about deafness. Nonetheless, research and clinical practices coalesced to focus on speech regardless of a person’s unaided hearing levels or, increasingly, the biology of their auditory system. 4 This orientation toward plural deafnesses differed from earlier oralist approaches in that deaf people were increasingly conceived of as potential hearers, reinforcing expectations about communication (Kolb 2021, 234).
Producing Electric Hearing through the Deafness of History
Following the development of amplification technologies, early twentieth-century audiology research began to consider the possibility of electrical stimulation of the ear (Mudry and Mills 2013). Drawing on twentieth-century models of sound and hearing, physiologist Stanley S. Stevens (1937) experimented with “electrophonic perception,” using electric amplifiers and oscillators to document the underlying mechanisms of audition in cats. He described the conversion of mechanical vibrations into electrical stimulation in the ear, as well as the reverse operation (see Lloyd and Tremblay [2021] for further details on the history of hearing as a transductive process). His findings on the role of electrical stimulation in the ear opened a path for the development of devices such as CIs that no longer amplified sound but produced auditory stimulation directly in the inner ear. This offered the possibility of producing hearing for people with little or no previous experience of the auditory elements of sound.
CIs represented a considerable departure from previous auditory prostheses because they bypass most of the auditory system prior to the auditory nerve (unlike hearing aids, for instance, which amplify sound). For the adults implanted in the early days of the device, there was hope that the CI might, to an unknown degree, replace the peripheral parts of the biological auditory system. Yet, at the same time, early researchers and clinicians knew that the CI functioned substantially differently from the biological ear and existing amplification technologies. Like most bionics, CIs resemble the biological system they seek to reproduce only to a limited degree, given that many processes are impossible to transfer from living to nonliving technological systems (Vincent et al. 2006).
For instance, CIs stimulate the auditory nerve more slowly than aided or unaided acoustic input. While acoustic hearers draw on the speed of sound reception to localize sounds, bilaterally implanted people tend to localize sound based on loudness rather than temporal cues (Dincer d’Alessandro et al. 2015; Senn et al. 2005). Furthermore, experiences of sound frequencies are affected by the device. CI users have twelve to twenty-four electrodes (replacing more than 10,000 cilia) to stimulate the auditory nerve. In part because of this, incoming sounds are quantized, meaning that a range of frequencies is reduced to a single value, because it is believed the components of the device cannot pick up subtle differences in sounds (Baskent and Shannon 2003). In addition, many electrode arrays do not reach the apex of the cochlea, where lower frequency sounds are detected. Beyond effects on localization and frequency, researchers suggest that CI users draw on rhythm or timing more than acoustic hearers to understand speech (Zhou et al. 2020), use information on the “brightness” of sounds more than melodic pitch (Swanson et al. 2009), and judge music as happy or sad based on tempo rather than mode (e.g., major vs. minor keys; Hopyan et al. 2016). From the early days of CIs, then, the devices were known to be associated with a novel sensory experience.
During early experimentation, expectations of the device were purposefully kept low. An audiologist who collaborated with researchers and clinicians assembling the dossier to submit to the FDA for approval explained in an interview conducted as part of this research:

Such was the attitude that [the idea was] so pie in the sky…to think that you’ll ever be able to get a deaf person to hear. I guess people tried to keep the expectations within boundaries, and there were so many challenges and so many difficulties associated with moving from the first ideas to something clinically applicable. I guess, also, to get funding…a conservative approach was taken.
Initial research participants were late-deafened adults who had developed receptive and expressive speech prior to a reduction in hearing levels. An audiologist and member of the Australian team that developed one of the first commercial CIs explained in an interview, “those early speech processing strategies were designed to supplement lip-reading and then people did [so much better] than expected” in terms of responsiveness to sound and speech perception. Results made it clear that the CI provided input on sound in such a way that some people could access oral language. The scientists leading the experimental CI programming hoped the effects of people’s increase in receptive hearing would broaden users’ communication options.
The beginning of a shift from hopes to expectations was evidenced in a series of dossiers submitted to the FDA for commercial CIs, starting in the 1980s (Hannaway 1996). The scientist who compiled the dossier explained that, beyond demonstrating the safety of the device, CI users’ results on basic speech tests were impressive and an important part of their data. A collaborating audiologist specified:

We tested [the participants from the first trial] weekly after the operation so we got some idea of how they improved…You were able to show statistically that no, you didn’t need 10,000 [research participants] to show something because they could all hear something, and they could at least do some simple [receptive] speech tests, you know, not very well compared to what people do these days but some of them did pretty well, some could do a little bit. This is what went into the FDA report, that each person benefited.
Deafnesses of the Present
The population of postlingual adults targeted by early CI experimentation conditioned the type of research that was carried out, what CIs did, and what they were documented as doing. As other groups of people began to use the devices, researchers sought additional and eventually broader explanations of why certain people within different subpopulations had different outcomes. Researchers increasingly frame the possibility of good outcomes with CIs as linked to the neuroplastic potential of candidates (Campbell and Sharma 2016). However, these neuroplastic processes are believed to occur differently in different subpopulations, such as postlingual adults compared to children (Han et al. 2019).
The auditory systems of postlingual adults who transition to CI use are considered “well developed” due to earlier hearing, and the CI is understood as providing input to an “acoustic auditory system.” These people’s previous sensory and cognitive experiences with and memories of sound and spoken language influence how CI input affects them. For instance, in online support groups for adults with CIs, people often report that the input initially sounds robotic or difficult to interpret, but with time it begins to sound “normal” (e.g., family members’ voices correspond with the voices they remember). This population continues to comprise approximately 60 percent of CI users (NIH 2021). Compared to adults who have never heard, they are generally considered good CI candidates, albeit with variable outcomes as measured through speech perception (Han et al. 2019). 5 Yet there is only limited understanding of how the distinctive input of CIs (Moore and Shannon 2009) interacts with people’s acoustic hearing systems or changes how they experience sound. Existing studies suggest that effective use of CIs is shaped in part by memories of past sounds (James et al. 2019). These memories interact with specific cognitive traits such as working memory. Researchers hypothesize that increased working memory capacity is required to meet the elevated demands that “result from a mismatch between long-term stored phonological memories and the CI input signal” (Zhou et al. 2018). In other words, postlingual adults may regain access to the auditory elements of sound through use of the CI because their memory of spoken language facilitates CI use, but the device’s input also leads to new experiences and changes in sound processing and cognitive strategies for communication.
Children became candidates for CIs approximately a decade after the FDA approved their use by adults. Described in the research as “early-implanted” (May-Mederake 2012), children who get the devices between six months and two years old are generally considered “excellent candidates” (Winter and Philipps 2009). Compared to how CIs were studied and thought to operate in populations of postlingual adults, studies of speech outcomes in prelingual children expanded their focus beyond receptive speech to include expressive speech development. In other words, this research began to frame CIs not just as tools for adults to recover their prior modes of communication but also as devices that might provide sufficient sound input for children to orally acquire language (e.g., phonology, morphology, syntax, semantics, and pragmatics; Gleason and Ratner 2022), alongside their hearing peers. Although variable outcomes have been documented, the receptive and expressive speech of early implanted children can approach those of typically hearing children (Schorr et al. 2008). In this context, expressive speech came to be closely tracked as evidence of normalized hearing and a good outcome for children with CIs.
Frequently, the auditory nerve of prelingually deaf children has never been stimulated prior to the use of CIs. This group is distinguished from adult CI users by their auditory experience of sound, given that their auditory systems develop based on (rather than just adapting to) the distinctive information sent from the device. Research suggests that children who use CIs can have neurophysiological responses (such as patterns of cortical activity in somatosensory cortices) that resemble those of typically hearing children (Cardon and Sharma 2019). While such studies provide information on the existence and timing of responses to sound in superficial regions of the brain (Ni et al. 2021), there remains much to be understood about the perceptual experiences of CI users who have never heard acoustically. For instance, adults who transition to CIs are described as drawing on their past experiences of sound to “fill in the blanks” for information absent from CI input compared to acoustic input (Lloyd and Bonventre 2021). Children who use CIs often do not have previous experiences of sound to draw upon. To them, CI input is not “hearing with blanks” but simply hearing. The use of CIs by people with no previous experience of the auditory elements of sound further multiplied deafnesses (Mills 2015) and their associated range of sensory and material processes.
Both adults’ and children’s outcomes with CIs are at least in part attributed to neuroplasticity. However, adults’ neuroplasticity is considered distinct from children’s in at least two ways: it is considered more limited and is framed in terms of the adaptations it might allow. By contrast, children are believed to have substantial neuroplastic potential, understood as permitting them to develop effective hearing and speech with input from devices (as opposed to prelingually deaf adult CI users, for instance). The selective attention to speech outcomes within these populations at once acknowledges the different sensory and cognitive roles that CI input might have for different groups of people, while casting aside the details of these processes, glossing them over as a result of neuroplasticity. Neuroplasticity is, to a large extent, black boxed and viewed as an abstract form of potentiality. In this way, the materialities of CI users became part of how the deafness of history was conceived of in clinical, research, and educational spaces. However, the significance of specific materialities and their associated perceptual experiences remained secondary to normative functional expectations of the devices. Following Landecker, we argue that the deafness of the present is inseparable from the devices and scientific/clinical infrastructures that aim to cure it, the body of knowledge produced about CIs, and the people who use the devices. If the bionic ear metaphor had circulated since the early days of the CI, evidence that adults could regain speech reception and children could develop expressive speech with the device consolidated the metaphor—even though these same observations raised and left unanswered many questions about how it occurred. This consolidation constrained interpretations of CIs, thus reducing the “flexibility” of possible understandings of the devices.
Reimagining Potentiality and Habilitation beyond the Bionics Metaphor
In her work on the experiences of deaf people and the structural, political, and social effects of the possibilities associated with CIs, Friedner (2022, 144) argues that habilitation is “a process and practice in general [that] foregrounds the ways that potentiality attaches to certain kinds of devices, therapeutic methods, and people because of the presumed existence of malleability.” Contemporary ideas about deafness draw on the combined potential of the CI and a person’s neuroplasticity: for adults to “get back” their audition, and for children to track as closely as possible to the speech development of their hearing peers. 6 These perspectives on potentiality and habilitation selectively set aside the distinctive sensory and cognitive processes that underlie CI users’ functional outcomes. Instead, they are concerned with the promotion of speech outcomes and preoccupied by their high degree of “unexplained variability.” In this section, we argue that attention to perceptual experiences that underlie functional outcomes with CIs offers a perspective from which to reconsider how “unexplained variability” is understood.
Auditory Systems and Perceptual Experiences
Historically, research on hearing has distinguished between the peripheral and central auditory systems. The peripheral auditory system includes the structures and functions of the inner, middle, and outer ear, while the central auditory system includes those of the auditory nerve and the brain. For CI users, the peripheral auditory system is largely made up of the device, since it bypasses the peripheral processes noted above. The CI then sends information to the auditory nerve, at which point a person’s central sensory and other (e.g., cognitive) systems process the information. Studies have provided robust information on the development of the typical peripheral auditory system (such as the timing of key developmental processes in utero) and the effects of its specificities, such as relative sensitivity to frequency, timbre, and pitch. But the relationship between the structures and functions of the peripheral auditory systems of CI users and their hearing experiences has received less attention.
Twentieth-century studies of the typical central auditory system developed alongside those of the peripheral auditory system. 7 Central auditory studies can be distinguished by their attention to the effects of sound processing and how sounds are made meaningful to a person (Litovsky 2015). While these studies are often interested in the hearing traits of different populations (e.g., children vs. adults), this information is often used to assess how these traits affect the intelligibility of receptive speech.
Central auditory research is characterized by its clinical interests and its integration in educational and therapeutic milieus. This may be in part because compared to the peripheral auditory system, which is more mature at birth (Litovsky 2015), the central auditory system and related cognitive systems are associated with extended periods of plasticity. Clinical and educational milieus read promise into this plasticity, which is seen as a key resource to reach habilitation goals of language acquisition and the broader normative goals documented by Friedner. One consequence of the perceived promise of plasticity is that the known perceptual differences that result from the peripheral auditory systems of people with and without CIs seem relatively unimportant.
Through the history of deafness, clinical and research interests in central processing strategies have overshadowed fundamental psychoacoustics and perceptual research. Yet the latter has persisted since the early days of CI research into the present, focused in part on documenting the effects of the peripheral auditory system associated with CIs on the experience of sound (e.g., the ways CIs manipulate sound input and transmit it to the auditory nerve). Albeit scarce, data exist on the perceptual experiences of CI users. Yet, when these findings are translated to clinically oriented literature, CI users’ experiences of sound are often glossed over as the result of the “degraded” nature or “impoverishment” of the CI signal, accompanied by hopes that people’s central auditory processing systems will overcome these limitations. Difference is read as deficit.
Overall, in clinical spaces, the neuroplasticity of central auditory processing systems is valued for its potential to affect a narrow (unimodal) set of sensory processes rather than, for instance, broader multimodal processes (e.g., the use of sight and touch in communication practices). This is despite research documenting how CI users successfully draw on neural processes associated with multimodal (e.g., audiovisual) integration, cross-modal neuroplasticity, and the use of redundant sensory input, among others, to process sound (e.g., Shatzer 2020; Lickliter 2011).8 The extent to which habilitation aimed at potentiating multimodal processing could nurture rather than diminish CI users’ nonnormative perceptual experiences remains relatively untested. Moreover, compared to the detailed findings emerging from fundamental research (Shatzer 2020), simplistic and negative assessments of people’s sensory experiences with CIs add little analytic benefit to outcome studies and provide no road map for habilitation based on the specificity of CI input and its processing.
Considering Unexplained Variability as an Artifact of the Research Process
Over the history of deafness, electrical stimulation came to be perceived as a reliable tool to produce specific normative communication outcomes (Blume 1997). As a result, CIs are often placed front and center in reparatory narratives in which the devices offer the “gift of hearing” to deaf people, and in similar narratives of technological salvation (Haraway 1997, 8). In these portraits, the role of CIs is often restricted to that of a bionic device that single-handedly reproduces hearing. Therefore, when variable outcomes are documented for these “reliable tools,” they tend to be framed as “unexplained,” and researchers seek fragmentary answers in CI users’ personal traits.
A variety of audiological tests are used to assess outcomes. Some aim to determine the audibility of words to CI users in controlled settings (e.g., in audiological booths, under conditions of silence and of background noise). Others assess different aspects of language development through validated questionnaires (Schorr et al. 2008). Still others assess neurophysiological traits (Campbell and Sharma 2016). These assessments attempt to identify variations in outcomes by comparing populations of CI users, including studies of pre- versus postlingual populations (e.g., Zarowski et al. 2020), adults versus children (e.g., DiNino et al. 2019), and children of various ages (e.g., from differences in age of implantation of six vs. twelve months to broader age ranges; Dettman et al. 2016; Rice 2016). Studies of very early implanted children are at times also framed as examinations of variability within a population.
Within CI research, pediatric CI users are considered a relatively homogeneous group because the devices are core parts of their sensoria from very early in life. Variability of outcomes is considered particularly unexplained because of this apparent homogeneity. The same “unexplained variability” is identified in outcome studies of adult populations, though it is considered less surprising because of this group’s heterogeneity, in which the transition to CI use follows a range of different trajectories (e.g., years of hearing-aid use, single-sided deafness, hearing loss relatively late in life).
In terms of pediatric populations, speech-language pathologist Mabel Rice (2016, 129) asks, “why do some children not benefit from CIs to the extent other children do when all related factors are similar?” To answer this question, Rice draws on longitudinal studies of language outcomes for children with CIs, carried out by developmental psychologist Ann Geers and collaborators (2016, 128), which documented factors associated with “unexplained individual differences in outcomes.” In Geers’s and others’ research, basic factors generally include age of implantation and length of CI use. However, given the limited explanatory value of these basic factors, efforts to account for variable outcomes have included an increasingly broad range of factors over time (Lloyd and Tremblay 2021). Studies now examine the effects of listening effort and fatigue thought to result from reliance on the CI signal (e.g., Dwyer et al. 2019); levels of maternal education or family income (Välimaa et al. 2018); effects of or on cognitive functioning, such as reduced executive functioning among young and old people, or the strain on executive functioning associated with CI use (e.g., Moberly et al. 2016); cross-modal sensory reorganization (Cardon and Sharma 2019); and other physical and perceptual traits (e.g., DiNino et al. 2019).
Including these factors in research about central auditory variability constitutes a break from research that situates deaf people and typical hearers in distinct subpopulations: studies of outcomes, and of variability in outcomes with CIs, come to resemble research on hearing in the general population (i.e., typical hearers). The latter research documents variability in hearing among children (Van Deun et al. 2009), as well as changes over the life span (Lee et al. 2015), often grounded in a consideration of many of the same family-oriented factors measured in CI research. Like research on CI users, research on typical hearers often relies on speech-based tests (Welch and Dawes 2007) or cognitive constructs as explanatory factors for identified differences (Van Deun et al. 2009). In some respects, and given the emphasis on central processing, this is not surprising. As noted by one Australian audiologist who has been involved in CI research since the development of one of the first commercial models, children with CIs “are going to have the normal range of abilities that any kid does.” In terms of linguistic skills, a Canadian audiologist who works with adult CI users explained that some spoke one language prior to CI use, some spoke five. While this underscores heterogeneity among CI users, it does not differentiate them from the general population cognitively or linguistically (De Giacomo et al. 2013). Although variability of speech outcomes is common among all subtypes of hearers, variability of outcomes among CI users is much greater than in the general population (Ertmer and Goffman 2011). Some degree of variability may be explained through existing research agendas, yet a great deal remains “unexplained.” But is it appropriate to refer to it as unexplained if these studies do not consider the perceptual experience of hearing with CIs?
If we want to understand the deafness of history and the present, we might benefit from a fundamental recalibration of our focus. Rather than describing variable CI outcomes as unexplained, it may be more appropriate to describe them as relatively unstudied. While speech outcomes and their association with certain neurocognitive traits are documented, the variability identified in CI research may remain unexplained, in part, because of questions that have not been asked in research agendas that view CIs as bionic devices to normalize hearing. To date, a great deal of research is characterized by inattention to what occurs at the brain–computer interface and to the resulting sensory processes. Thus, we suggest that unexplained variability may be an artifact of the research process.
Certain research agendas across audiology, engineering, and the neurosciences are attempting to overcome variable outcomes with CIs by trying to make the devices “more bionic,” a more faithful reproduction of typical hearing. Some argue, for instance, that if CIs were more “naturally” responsive to incoming sounds, similar to the cochlear amplifier effect in the biological ear identified by Stevens in 1937, CI users’ experiences of sound could be improved (Davaria and Tarazaga 2017). For these and other researchers, hopes and expectations remain centered on the CI and the possibility that it will activate a narrow form of potentiality. Such an orientation is aligned with practices that encourage CI users to capitalize on their unisensory auditory potential and to celebrate their outcomes to the extent that their performances resemble those of typical hearers (Friedner 2022). Through these technologies and therapeutic infrastructures—as well as in people’s neurophysiology—hopes and expectations of normalization converge.
Doing Habilitation and Potentiality Differently
Friedner (2022, 21) has asked how we might think and do habilitation and potentiality differently: in particular, how they might be inclusive of “communicating, sensing, and relating, and…understanding potential” in ways that may be related to, but do not try to replicate, normalization, or that may even be nonnormative. An expanding range of possibilities exist that frame communication, potentiality, and sensing differently. These include bilingual educational environments to promote inclusive communication in signed and spoken languages (Snoddon 2023); DeafSpaces that facilitate signing and capitalize on the “sensory reach” of signers across space (Edwards and Harold 2014); environment-oriented and community-based technologies such as SoundPrint, an app that crowdsources information on sound levels of cafés, restaurants, and other public settings, turning attention to the role of space and personal preferences in communication (Matthews 2022); self-report audiology questionnaires on sound perception, speech production, self-esteem, social interaction, and satisfaction with sound in daily life that could resituate understandings of experience and expertise as they relate to CIs, but that are not routinely used in clinical settings; efforts toward multisensory integration and how, for example, people with simultaneous vestibular difficulties and reduced hearing levels might increase their agency associated with bodily movement (Campos et al. 2021); and firsthand accounts by CI users about the pleasure of multisensory experiences of sound (Kolb 2017). All of these practices and approaches share a point of departure in which CIs are not a bionic solution, in which the focus is not exclusively on normalization, and which offers other means of living sensorially with CIs. Language also matters for thinking and doing habilitation differently. In this text, we adopted language to discuss hearing, deafness, and experiences with auditory devices proposed by audiologist Sarah Sparks.9 This includes discussing hearing levels or sensitivities rather than “hearing loss,” circumventing the negative inference and the inaccurate assumption that all people have “lost” their hearing. Additionally, Sparks (2022) describes unaided acoustic audition as “typical” rather than “normal,” turning from modes of thought grounded in statistics, normativity, and normalization (Lloyd and Moreau 2011). The term “typical,” by contrast, opens a space to consider the different perceptual experiences of people with CIs. These shifts in language are provocations to ask different questions about auditory devices, their use, and the type of habilitation desired by users. This move is coherent with statements by groups such as the National Association of the Deaf, which refer to CIs as “tools” (Christiansen and Leigh 2004), highlighting their ambiguous place in deaf communities and destabilizing narratives of salvation. Although CIs are increasingly used by deaf people, they are not uniformly seen as a cure or solution for deafness, but as one communication strategy among many.
Nonnormative knowledges and practices offer different ideas about deafness, with potential consequences for the deafness of ideas. Because of inattention to perceptual processes that do not contribute directly to normative ideas of “good outcomes” for CI users, many aspects of users’ sensory and cognitive experiences are relegated to the category of unexplained variability. This black-boxing of their experiences, and of the complex relationship between experiences and materiality, renders invisible the multiple potentialities associated with CI use. Attention to the deafness of history, and arguments based on both experience and materiality, offer a reset in CI research. Displacing or expanding standard clinical care and education by adopting nonnormative approaches could change the deafness of the present and the future.
Conclusion: Deafnesses of the Present and Future
Through this article, we have drawn on Landecker’s attention to the fundamental interconnectedness of knowledge, practices, and materiality throughout history. Like Landecker, we describe how historical understandings of biology led to the development of technologies to modify that biology. The resulting human–technological assemblages—whether use of antibiotics or CIs—aim to remedy what are defined as pathological states. Antibiotics have had sweeping consequences, both anticipated and unanticipated, from effects on therapeutic activities to antibiotic resistance, affecting practices from standard surgery to industrial animal breeding. But if Landecker describes a runaway train in the case of widespread consequences of antibiotic resistance, the unanticipated consequences of CI use have participated in the convergence of hopes and expectations of the devices.
The initial anticipated consequence of CIs in the 1970s was increased access to receptive speech to support lip reading. The degree to which many CI users rely on receptive speech in the relative absence of visual cues was not anticipated. If later in the history of CIs the development of expressive speech was hoped for, the extent to which it is expected today was likewise not anticipated. Through these processes, unanticipated consequences of CIs shifted hopes, which settled into new expectations. These hopes and expectations were materialized in the history of CIs, inscribed into habilitation practices, serial changes in devices, and sensory experiences of deafness over time. For instance, with each new device or even device programming, added frequencies and functionalities produce new embodied experiences of hearing that people use to different ends and that result in new experiences and cognitive or sensory processes.
Other effects include changes in modes of communication, as many deaf people now communicate orally rather than manually. Family dynamics are affected, as families of people with CIs often effectively live as hearing families (Mauldin 2016). Moreover, an increasing number of deaf children now attend mainstream schools, rather than schools for the deaf. This directly impacts their education, language training, and sociality. This shift has been paralleled by a decrease in early intervention sign language programs, with profound effects on language acquisition, alongside social and mental health, for deaf children who sign (Murray et al. 2019). Social services for deaf populations in general have also shifted over recent decades, with fewer sign language interpreters available, and decreasing investment in environments rich in deaf culture that adopt a deaf gain approach.
Forty years into the history of CIs, evidence about what is known and unknown about experiences and materialities with CIs compels us to move beyond the metaphor of the bionic ear. This move could have effects in venues as diverse as marketing campaigns and audiology research, in which CIs have become known and understood through the bionic metaphor rather than as technologies that produce new kinds of sensory experiences. More than a mere metaphor, the ideas about bionic devices are biologically and ideologically embedded in the deafness of the present, perpetuating normalizing interventions and beliefs that CI users are no longer deaf (Friedner 2022). In calling for a break from the enduring influence of the bionic metaphor, we seek to capture some of the openness and willingness for surprise that existed early in the history of the CI and to foreground the normalizing framework within which this research was carried out. We take to heart writer, humanities scholar, and disability studies advocate Rachel Kolb’s (2021, 234-35) caution that “Navigating new forms of connection across different bodies may require deeper consideration about the sensory expectations we bring to a conversation, which surpass the operations of a particular technology.”
Abandoning long-standing presumptions about senses, communication, and technologies would open a space to flexibly reassess the biology of history, freed from positions that simultaneously presume and ignore what is known about the sensory experiences of people. We follow Friedner (2022, 21) in arguing for “a more expansive and capacious” approach to CIs that is inclusive of “nonnormative life” and to “foreground multiple paths for being deaf, disabled, and normal.” This form of nonnormative habilitation need not relinquish current targets of speech outcomes but would be at least as interested in how people reach a variety of outcomes, not just if they do—and what other outcomes CI users might value. To this end, we draw on Kolb’s (2021, 231) deaf-centric approach to sound as a means to “counter common assumptions of hearing as ‘an invariable physiology’ [Schwartz 2011]…and explore a more fluid and physically distributed understanding of perception, not unlike the ‘heterogeneity of ear-listening’ and the ‘always already multimodal’ types of hearing that media historian Mara Mills has described in her discussions of sound and deafness.” A habilitation of the future might hope for “good outcomes” but would also create the space to expect difference in the fundamental sensory experiences: from this perspective, variable outcomes may no longer be “unexplained.” In sum, we suggest leaving the bionic ear in the past because to create deafnesses of the future, we need a new metaphor.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Social Sciences and Humanities Research Council of Canada (430-2017-00209) and Canadian Institutes of Health Research (179827 and 184033). We also received funding from the Faculty of Social Sciences at Université Laval (Social Sciences and Humanities Research Council Institutional Grant).
