Your ChatGPT Relationship Status Shouldn’t Be Complicated

My own research has shown the importance of clear social roles and boundaries for social AI. For example, in a study with my colleague Samantha Reig, we found that an AI agent that tried to fill multiple, vastly different roles (say, providing users with a beauty service and later giving them health advice) lowered people’s trust in the system and made them skeptical of its reliability.

In contrast, by studying families with teenagers who used conversational AI, we found that the AI agent needs to clearly communicate its affiliation—who does the agent answer to, the teenager or the parents?—in order to gain the trust of users and be useful to them. When families didn’t have that information, they found it difficult to anticipate how the system would act and were less likely to give the AI agent personal information. For example, teenagers were concerned that agents would share more with their parents than they would have liked, which made them hesitant to use the agents at all. Having an AI agent’s role clearly defined as affiliated with the teen, and not their parents, would make the technology more predictable and trustworthy.

Assigning a social role to an AI agent is a useful way to think about designing interactions with a chatbot, and it would help overcome some of these issues. If a child has an AI tutor, its language model should align with that role. Specific boundaries could be defined by the educator, who would adjust them to educational goals and classroom norms. For instance, the tutor might be allowed to ask guiding questions but not give answers outright; it could help correct improper grammar but not write entire texts. The conversation would stay focused on the educational material and steer clear of profanity, politics, and sexual language.
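To make this concrete, here is a minimal sketch in Python of how such educator-defined boundaries might be written down as explicit, inspectable constraints and rendered into a high-level instruction for a chat model. The SocialRole class and the specific rules in it are hypothetical illustrations, not a description of any existing product.

```python
# A hypothetical sketch of representing a social role as explicit constraints.
# None of these names come from the article; they are illustrations only.
from dataclasses import dataclass, field


@dataclass
class SocialRole:
    name: str                                # e.g. "tutor" or "confidant"
    topic_focus: str = ""                    # what the conversation should center on
    allowed: list[str] = field(default_factory=list)
    forbidden: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render the role and its boundaries as a high-level instruction."""
        lines = [f"You are acting as a {self.name}."]
        if self.topic_focus:
            lines.append(f"Keep the conversation focused on {self.topic_focus}.")
        lines += [f"You may {item}." for item in self.allowed]
        lines += [f"You must not {item}." for item in self.forbidden]
        return "\n".join(lines)


# Boundaries an educator might set, adjusted to classroom norms.
tutor = SocialRole(
    name="math tutor for a middle-school student",
    topic_focus="the current homework assignment",
    allowed=["ask guiding questions", "point out grammar mistakes"],
    forbidden=[
        "give final answers directly",
        "write entire texts for the student",
        "discuss profanity, politics, or sexual content",
    ],
)

print(tutor.to_system_prompt())
```

Keeping the role in one structured place means that switching to a different role would mean swapping in a different set of constraints rather than rewriting the system from scratch.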

But if the agent were in a confidant role for this child, we might expect different guardrails. The constraints might be more broadly defined, giving more responsibility to the child. Perhaps there would be more room for playful interactions and responses. Still, some boundaries should be set around age-appropriate language and content, and around protecting the child’s physical and mental health. Social contexts are also not limited to one-human/one-agent interactions.

Once we acknowledge that agents need social roles and boundaries, we must also accept that AI enters a complex social fabric in which multiple stakeholders can have diverging and even conflicting values. In the AI tutor example, the goals of the educator may differ from those of the child, their parents, or the school’s principal. The educator may want the student to get stuck in a productive way, while the parents may prioritize high performance. The principal, on the other hand, might be more concerned with average class outcomes and tutor costs.

This kind of constraint-centered thinking is not just about limiting the system; it is also about guiding the user. Knowing an AI’s social role and context can shape users’ expectations and influence the kinds of questions and requests they bring to the AI in the first place. Placing boundaries on expectations, too, can therefore help set the stage for safer and more productive interactions.

A Path Forward

How can companies begin to adopt social constraints in the design of AI agents? One small example is a feature that OpenAI introduced when launching GPT-4. The new demo has a “System” input field in its interface, giving users an option to add high-level guidance and context to the conversation—or, as this article suggests, a social role and interaction boundaries. This is a good start, but it is not enough: OpenAI is not transparent about how that input changes the AI’s responses, and the System field is not necessarily concerned with the social aspects of the AI’s role in interactions with users.
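For readers who want to see what that looks like outside the demo interface, the sketch below passes a role-and-boundaries description as the system message of a chat request using OpenAI’s Python SDK. The wording of the system message is a hypothetical example, and, as the previous paragraph notes, exactly how it constrains the model’s responses is not publicly documented.

```python
# Hypothetical example of setting a social role via the system message.
# Assumes the openai Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

system_role = (
    "You are a homework tutor for a 13-year-old student. "
    "Ask guiding questions instead of giving answers, help correct grammar "
    "without writing whole texts, stay on the course material, and avoid "
    "profanity, politics, and sexual content."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_role},
        {"role": "user", "content": "Can you just write the essay for me?"},
    ],
)
print(response.choices[0].message.content)
```

Keeping the role description in one place, rather than scattering it across user prompts, at least makes the intended boundaries explicit and easier to audit.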

A well-defined social context can help structure the social boundaries that we are interested in as a society. It can help companies provide a clear framing of what their AI is designed for, and avoid roles that we deem inappropriate or harmful for AI to fill. It can also allow researchers and auditors to keep track of how conversational technology is being used and the risks it poses, including those we may not be aware of yet. Without these constraints, thoughtless attempts to create an omniscient AI with no specific social role will have effects that can quickly spiral out of control.
