Siri's Identity

Reimagining Inclusive AI and Critiquing Today's Identity-Blindness

by Will Penman, Princeton University

Conclusion


"The 'S' is for Siriqua, motherfucka." (Black Siri aka Siriqua)

A project of this type is meant to raise new questions. Parts 1 and 2 explored alternative identities for Siri and critiqued the real Siri's identity; this conclusion briefly points to practical challenges and considerations for implementing some of these ideas.

Social identities are naturally more value-charged than other kinds of identity, so a toned-down application of the parodies examined here would be to simply give Siri different "personalities." Alternatively, among the different ways that YouTubers imagined Siri changing (commercial activity, knowledge, values, ways of interacting; see part 1), people might mix and match which features they hold or want Siri to adopt, e.g. the value of being respectful of family alongside a Yelp-based ordering of restaurants.
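To make the mix-and-match idea concrete, here is a minimal sketch in Python (illustrative only; the class and field names are my own assumptions, not any actual Siri interface) that treats the four dimensions from part 1 as independently configurable settings rather than a bundled identity:

    from dataclasses import dataclass

    @dataclass
    class AssistantPersona:
        commercial_activity: str   # e.g., which services ordering and recommendations draw on
        knowledge: str             # e.g., which cultural references the agent recognizes
        values: str                # e.g., what the agent treats as worth encouraging
        interaction_style: str     # e.g., tone and conversational norms

    # Mixing and matching: family-respecting values, but a Yelp-based ordering of restaurants.
    my_persona = AssistantPersona(
        commercial_activity="yelp_restaurant_ranking",
        knowledge="default",
        values="respectful_of_family",
        interaction_style="default",
    )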

Although part 1 dealt briefly with intersectionality, the parodies don't extensively take up how multifaceted people are. People of any single identity aren't monolithic, and everyone has multiple identities that can be more or less relevant at a given time and at different points in a person's own journey. One way to deal with this would be to have an AI agent ask how the user identifies. These self-identifications would need to be especially private (to take up the premise of one of the videos, imagine coming out to Siri). Some corporations may be better positioned than others to implement this; Amazon's Alexa, for instance, saves everything you say, whereas Apple has developed "differential privacy" as a way to use machine learning techniques while helping individuals remain anonymous. A self-identification set-up process might also address which Siri (assuming there would be many) to use as a "default." Ongoing updates or check-ins might address the challenge of an identity-based Siri reifying a person's nascent identity (e.g. as a Republican).
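As a thought experiment only, the set-up process described above might look something like the following sketch, in which self-identifications stay on the device, one persona is selected as a default, and a periodic check-in invites revision; the file path, interval, and function names are invented for illustration and do not reflect Apple's or Amazon's actual designs:

    import json
    from datetime import date, timedelta
    from pathlib import Path

    PROFILE_PATH = Path.home() / ".assistant_profile.json"   # stored locally, never uploaded
    CHECK_IN_INTERVAL = timedelta(days=90)

    def run_setup(identifications, default_persona):
        # Ask the user how they identify and which persona to treat as the default.
        profile = {
            "identifications": identifications,        # kept especially private
            "default_persona": default_persona,
            "last_check_in": date.today().isoformat(),
        }
        PROFILE_PATH.write_text(json.dumps(profile))
        return profile

    def check_in_due(profile):
        # Ongoing check-ins let users revise how they identify, rather than
        # having an early self-identification reified by the assistant.
        last = date.fromisoformat(profile["last_check_in"])
        return date.today() - last >= CHECK_IN_INTERVAL

    profile = run_setup(["respectful_of_family"], default_persona="default")
    if check_in_due(profile):
        print("Would you like to revisit how your assistant reflects who you are?")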

Finally, as parodies, the YouTube corpus fictionalizes what it would be like to actually interact with an identity-based Siri. A few of the parodies express ambivalence about the identity-based version or treat its silliness as the ground for the video. Studies may ultimately find that people don't like identity-based AI, or benefit from such interactions only over a long time scale. Still, as AI agents take on more and more social functions, identity is unavoidable and should be viewed as an ongoing rhetorical concern. The findings from part 2 indicate that, in that empirical process, we should also not leave today's real Siri untouched, but rather investigate ways to make it less powerful, possessive, and misguided.

Opening these questions means dwelling on how to design Siri with more intention.