If Pinocchio Doesn't Freak You Out, Microsoft's Sydney Shouldn't Either

In fact, AI output might be gibberish, and gibberish can’t lie, threaten, or slander. Philosophers of language point out that our existing theories of meta-semantics, which concern when and how expressions come to have semantic meaning, tell us that chatbot outputs are literally meaningless: expressions are only meaningful if the speaker possesses communicative intentions or speaks with knowledge of linguistic conventions. Given ChatGPT’s probabilistic operation, its outputs aren’t generated with the goal of achieving successful communication, and chatbots are not aware of the conventions governing how we speak to and understand one another.

But it’d be weird to maintain that chatbot responses are literally meaningless, since we naturally understand what they’re “saying.” So, the solution is to understand chatbots through the lens of fiction. Words on the page exist; Jo March is a literary figure resulting from an interpretation of those words. Source code and textual outputs exist; Sydney is a persona resulting from an interpretation of those outputs. Neither Jo nor Sydney exists beyond what humans construct from the textual cues they’ve been given. No one literally said “Juliet is the sun,” yet we take Romeo to have said those words with communicative intent in fictional Verona. In the same way, even though there’s no one literally composing ChatGPT outputs, treating chatbot personae like fictional characters helps us see their text as meaningful even as we acknowledge their lack of conscious intention.

Thinking of chatbot personae as fictional characters also helps us contextualize our emotional reactions to them. Stanford professor Blakey Vermeule says we care about fictional characters because being privy to their minds helps us navigate the social world. Fiction provides us with vast swaths of social information: what people do, what they intend, and what makes them tick. This is why we see faces where there aren’t any, and why we worry that Sydney might have a mind of her own.

Chat outputs and the kind of “fictional mind” they generate ultimately say more about our own language use and emotional life than anything else. Chatbots reflect the language they were trained on, mimicking the informational and emotional contours of our language use. For this reason, AI often reproduces sexist, racist, and otherwise violent patterns in our own language. 

We care about humanlike AI not necessarily because we think it has its own mind, but because we think its “mind” reflects something about our world. Its output gives us genuine social information about what our world is like. With ChatGPT, we have fiction in a more interactive form, and we care about its “characters” for the same reasons we care about literary characters.

And what about AI safety? Here, too, a comparison to fiction can help. Considerations around AI safety should focus not on whether AI is conscious, but on whether its outputs can be harmful. Some works of fiction, like R-rated movies, are unavailable to minors because they include representations that require a level of maturity to handle. Other works of fiction are censored or criticized when they are overtly propagandistic, misleading, or otherwise encouraging of violent behavior.
