Abstract
Do people prefer that artificial intelligence (AI) aligns with gender stereotypes when requesting help to answer a question? We found that people preferred gender stereotypicality (over counterstereotypicality and androgyny) in voice-based AI when seeking help (e.g., preferring feminine voices to answer questions in feminine domains; Studies 1a–1b). Preferences for stereotypicality were stronger when using binary zero-sum (vs. continuous non-zero-sum) assessments (Study 2). Contrary to expectations, biases were larger when judging human (vs. AI) targets (Study 3). Finally, people were more likely to request (vs. decline) assistance from gender stereotypical (vs. counterstereotypical) human targets, but this choice bias did not extend to AI targets (Study 4). Across studies, we observed stronger preferences for gender stereotypicality in feminine (vs. masculine) domains, potentially due to examining biases in a stereotypically feminine context (helping). These studies offer nuanced insights into conditions under which people use gender stereotypes to evaluate human and non-human entities.
