Siri's Identity

Reimagining Inclusive AI and Critiquing Today's Identity-Blindness

by Will Penman, Princeton University

Scholarly context

This project analyzes parodies of Siri that people made in order to see themselves more represented in AI. In doing so, this project adds significantly to rhetorical work on identity and artificial intelligence (AI), which have largely been independent bodies of research to this point. Parts 1 and 2 were designed with this scholarly context in mind.

Discursive construction of identity


"I went in there and customized it a little bit, to make it more personal for me. So, I present to you, black Siri." (Black Siri)

In sociolinguistics, the "discursive construction of identity" (e.g. Benwell & Stokoe, 2006, p. 4) contrasts with conceiving of people's identities as static, private, and innate. One of the primary methods for analyzing this, critical discourse analysis, has been productively used in rhet/comp scholarship, including to understand identity construction (Huckin, Andrus, & Clary-Lemon, 2012).

Rhet/comp scholars have argued that identity construction happens in significant ways in digital contexts. Danielle Nielsen (2015), for instance, examined how roleplaying games offer players the ability to perform identities of gender, race, and sexual orientation (and even species!) by designing their own avatars. These gameplay performances extend or accentuate who players are offline. And Estee Beck (2015) noted that digital identity construction can also run the other way: with routine web browsing, automatic tracking processes impute market-based identities to users (e.g. type of house lived in, desire to purchase concert tickets, age range) that are troublingly "invisible" to users themselves.

This project contributes to this important thread of digital identity work by investigating how people (could) build their identity in interaction with voice-driven AI. Siri is neither a digital mapping of the self (à la Nielsen) nor a hidden identifying agent (à la Beck), but is an other, a fellow interactant that is all the more interesting for its strange embodiment.

Hesitations with "identity"

That said, some rhet/comp scholars are hesitant to focus on "identity" as such, preferring the more flexible concept of "difference." Difference, Stephanie Kerschbaum (2012) argued, helpfully avoids stabilizing or "fixing" identities like white, working class, or female (p. 619, with "fixing" carrying a pejorative pun on "trying to solve"). "Difference" also allows critical and pedagogical attention to what students actually mark as relevant, which often exceeds big identity categories. (In her example, one student marked where she learned a grammatical rule - namely, at that university - as a relevant difference from her peer, and as a source of authority during peer review. This location-based appeal, Kerschbaum argued, would be hard to access in an identity-based analytic framework.) Thus, identity is "to be understood both through the contexts in which we communicate and act and by our embodiments of it" (p. 617, emphasis in original).

Similarly, Jonathan Alexander and Jacqueline Rhodes (2014) suspected that identity categories, which ostensibly mark difference, actually presume a fundamental similarity among people. In making identity categories linguistically parallel (white, black, Asian, etc.), we may inadvertently create an epistemological parallel and think that we can really know others through comparison to our own experiences. Focusing on identity categories, then, can "flatten" difference.

I find these to be important qualifications to identity-based inquiries. And the direction taken in Part 1, of sketching ways that an identity-based Siri could play out, does naturally lead toward short-term regularity. However, this project responds to Kerschbaum's concern insofar as it recommends that implementing Siri in this way be treated as necessarily contingent and open to ongoing revision. Likewise, I would submit that Part 2's arguments, which de-naturalize the comfort (or lack thereof) we may feel when interacting with voice assistants like Siri, also speak to Alexander and Rhodes' claim that we are somewhat opaque to each other.

At the same time, this project addresses an element of identity construction not discussed by Kerschbaum or Alexander and Rhodes: how identity categories can themselves be a resource for self- and group-identification processes. In the videos analyzed here, identities are communicable and up for discussion, negotiation, and contestation. In other words, the parodies analyzed here end up taking the identity categories of "black," "gay," "Mormon," and "immigrant" (among others) as ongoing questions of public interest.

Political implications of identity construction

One identity-related challenge that's central to this project concerns who is doing the identifying. Sociolinguists Bucholtz and Hall (2005) provided a helpful heuristic for understanding the issues at play. Identity work can be:

[a] in part intentional, [b] in part habitual and less than fully conscious, [c] in part an outcome of interactional negotiation, [d] in part a construct of others’ perceptions and representations, and [e] in part an outcome of larger ideological processes and structures. (p. 585)

This five-part division gets at both the benefits and the challenges of the Siri parodies investigated here. On one hand, almost all of the parodies foreground the creators' intentional identity work (i.e. [a]). Because this work is self-directed, it is easy to interpret as empowering - as when a lesbian woman characterizes lesbians (via Siri) as people who value healthy living.

However, this quickly becomes murky. Such a characterization of lesbian people's lifestyles would be problematic when made by others [d] - for example, when a straight person (or Siri) tells someone they should do certain things because they're lesbian. It might also be problematic for a lesbian person to reinforce those characterizations to outsiders [c], e.g. "Well, I would get up and get out of the house tonight, since I'm lesbian." Moreover, oppressive (especially essentializing) accounts of identity [e], such as the very idea that "lesbians" are a certain kind of thing, might be accessed even through the intentional aspects of these parodies [a]. Similarly, semi-conscious or habitual reflexes [b] might be at work in the parodies people create.

In other words, there are complicated political questions here about stereotypes and about how far they can be subverted through the kind of self-conscious, intentional [a] use of them found in the parodies. From the perspective of this project, this is an important question, but it is not treated as a show-stopper. As they pertain to implementing an identity-based Siri, these questions are largely beyond the scope of this proof-of-concept paper.

Still, a few preliminary notes are in order to head off objections and provide possibilities for future research. First, the conclusion to Part 1 suggests that treating these parodies as parodies means being willing to bracket some of these challenges; it means attempting to revel in the self- and world-building done in each video. This methodological deferral can itself be viewed as a scholarly contribution: namely, this project claims that parodies can, when considered rhetorically in terms of genre and audience, temporarily simplify complex questions of identity construction. This project also attempts to hold any particular characterization lightly: health efforts may or may not be something a lesbian Siri would/could/should adopt specifically, but any identity-based Siri would likely involve promoting certain values. Finally, from a rhetorical perspective, it would seem that (AI's) ethos and audience would be useful mediating concepts in working these questions through further - who is authoring and responding to these videos?

Interacting with artificial intelligence


By imagining himself as a "Senior Vice President" and styling his video as an Apple commercial, YouTuber Davy So puts himself in a position to comment on Siri's scope and inclusivity. So's video reminds us that AI technologies like Siri are not isolated technical achievements; they exist within technosocial systems of development and promotion. That is, our interactions with AI as users are laden with power relations. ("Introducing the iPhone 5s and 5c")

A second area of scholarly contribution regards our interactions with AI and related technologies. By complicating our sense of who/what can be a rhetor, AI technologies have been of interest to rhetoric scholars across English and Communication (Ingraham, 2014; Coleman, 2018; Brock & Shepherd, 2016; Fancher, 2018; Gallagher, 2017; Holmes, 2018; Arola, 2012; Eyman, 2015; Brown, 2014; Elish & boyd, 2018). One strong contribution to this discussion comes from Miles Coleman (2018), who theorized "machinic" rhetorics. Machinic rhetorics take an AI's appearance of autonomy seriously. Such a rhetoric focuses on "the 'hidden layer' of agency between human and machine, which allows for a given machine to be imagined as its own interlocutor, replete with its own ethos" (p. 337). In fact, Coleman even speculated that Siri is a productive entry point for this: "Are Speech/Voice User Interfaces possible sites for productive disruption and resistance? Are they yet another instance in which ideology lurks in our “neutral” interfaces? Machinic rhetoric helps us realize that yes, they are" (p. 347). This project can be seen as adding texture and specificity to this claim that systems like Siri operate rhetorically and ideologically. In particular, Part 2 addresses how Siri is laden with racially loaded ideologies of communication.

It's also worth positioning this project relative to other disciplinary approaches to AI. In the field of machine learning, questions about AI's impact on society have been framed with more attention to algorithmic implementation. Sometimes this is framed as a technical problem of securing rigorous definitions for publicly valued terms, such as "fairness." For instance, after a ProPublica article (Angwin et al., 2016) examined AI software being used to assess a person's chance of recidivism (a problematic construct already) and found that its decisions were racially unfair, machine learning scholars Pleiss et al. (2017) published a paper comparing how competing definitions of fairness manifest mathematically in such algorithms. Naively read, that paper found that "fairness" is mathematically unclear, and therefore incoherent. The rhetorical approach taken here attempts to bring public debate over key terms (in their case, "fairness"; in this case, "inclusivity" and "representation") back into relevance as debate - that is, as up for contestation - insisting that computer scientists are also necessarily participating in structures of identity, fairness, and representation.
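To make the stakes of that comparison concrete, the following is a minimal illustrative sketch of two group-fairness criteria commonly discussed in this literature (a generic illustration, not a reproduction of Pleiss et al.'s formalism): each criterion is individually precise, yet they constrain an algorithm in different and sometimes incompatible ways, which is part of why securing a single rigorous definition of "fairness" proves elusive.

```latex
% Illustrative sketch only: two commonly discussed group-fairness criteria,
% not Pleiss et al.'s (2017) exact formalism.
% \hat{Y} = the algorithm's prediction, Y = the observed outcome,
% A = a protected attribute (e.g., race), with groups a and b.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
\text{Demographic parity:} \quad & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
\text{Equalized odds:} \quad & P(\hat{Y}=1 \mid Y=y,\, A=a) = P(\hat{Y}=1 \mid Y=y,\, A=b), \quad y \in \{0,1\}
\end{align*}
\end{document}
```

A classifier can satisfy one of these criteria while violating the other (for instance, predicting recidivism at equal rates across groups while producing unequal error rates within them). That tension is what the machine learning literature formalizes, and what this project instead treats as material for public, rhetorical debate.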

Other machine learning research applies questions of equity to algorithms that are opaque to users. In a widely cited article, Buolamwini & Gebru (2018) created an ingenious way to assess commercial face classifiers (i.e. products put out by Microsoft, IBM, and others that identify whether a face is present in an image and perform simple analyses, such as classifying that person's gender). Face classifiers shed light on people's (literal) "visibility" to AI, especially with regard to emerging AI-driven surveillance, e.g. Amazon selling AI-trained face classifiers to police (Del Rey 2018). This project approaches AI through a similar focus on how people might use it - in this case as explored by potential users themselves.

Finally, some scholars use more personal methods for investigating our interactions with AI. For instance, in a set of artistic interactions, Stephanie Dinkins explored how a black robot might allow her to have conversations about racism that a white robot might not (Pardes 2018). Through the lens of this project, Dinkins was also engaged in exploring the possibilities of identity-based AI. Her intervention - interacting with AI in order to unpack the effects of racism - points to a potent possibility for identity-based AI, one that dovetails with what the Siri parodies examined here address.

The choice of Siri specifically in this project reflects the fact that, in the public eye, Alexa and Siri have begun to be recognized as sites of cultural transmission (Tsukayama 2018, Newman 2014) - and as potentially problematic sources of cultural transmission, at that (Harwell 2018, Paul 2017, Shulevitz 2018, Stern 2017). In other words, this project helps articulate a nascent public skepticism toward Siri and Alexa.