AI as Your BFF? The Latest Chatbots Want to Get Personal With You

At its product event in September, Amazon executives couldn’t say enough about how Alexa will be there for you as a friend, thanks to an injection of AI upgrades. 

Dave Limp, the company’s senior vice president for services, showed off Alexa’s conversational skills, starting off with an exchange of pleasantries: “Hi there” and “How are you” and “How about yourself?” 

As the conversation progressed, Limp noted that the Alexa device had been programmed to know his favorite football team, and sure enough, Alexa had a favorite of its own. The conversation wasn’t without its hitches, but it was smooth enough for Limp to make his point: “The responses … have started to be infused with personality.”

In a separate demonstration, Daniel Rausch, vice president for Alexa and Fire TV, showed how the AI assistant could help with finding a family-friendly movie to watch. Like Limp, he emphasized the “natural and conversational” aspect of the interaction.

“It’s like speaking to a great friend who’s also the world’s best video store clerk,” Rausch said.

AI and chatbots are nothing new, but interacting with them has, until now, largely been devoid of pleasantries, small talk and anything resembling stimulating conversation. But judging by a slew of announcements over the past few weeks from tech giants about their public-facing AI tools and how they expect us to interact with them, that won't be the case for much longer.

Days after Amazon’s announcement, OpenAI introduced a new feature to ChatGPT that’ll let you interact with its large language model by voice. Next came Meta, saying it would be slotting its AI assistant into all its existing services, including WhatsApp and Instagram. Not only that, Meta will also allow you to customize the assistant with the appearance and voice of one of the many celebrities enlisted to lend their likenesses to the project (including Snoop Dogg and Charli D’Amelio). Then in the first week of October, Google announced Assistant with Bard, which it wants to be your “personalized helper” and a “true assistant.” 

What these developments all signify is a shift in the way we interact with computers. On our travels through the online world, we increasingly won’t be alone. We’ll be accompanied by an array of AI characters who will eliminate the need for excessive typing and button pressing and help us out while providing an element of personality and companionship. 

It’s something DeepMind co-founder and AI pioneer Mustafa Suleyman described in a September interview with MIT Technology Review as “interactive AI.” This will, he said, be the successor to the generative AI that’s currently being used to produce text and images based on data inputs.

This shift toward a more social version of AI has been a long time coming. We got our first taste of voice-enabled interactive AI all the way back in 2011 when Apple first unveiled its voice assistant Siri to the world. At the time, Siri was novel in ways that got people excited (and likely helped persuade some to buy new iPhones), but it was unfinished and unreliable in ways that made the technology something of a letdown in reality.

Siri, along with its peers — Amazon’s Alexa and Google Assistant — has improved dramatically over the years. But it’s really only now, thanks to recent AI breakthroughs, that we’re able to start experiencing the AI assistant-cum-friend that Apple first hinted Siri would become some 12 years ago.

The latest slew of announcements will see AI characters speak out loud to us as though we’re buddies, slip seamlessly inside our social media feeds and get to know us so that when we ask a question they can use context clues from our previous conversations to ensure we find the answers we seek. Through the powers of machine learning, AI has the potential to perceive us through our interactions and shape-shift accordingly into the kind of companion we find most stimulating and rewarding to be around.

Depending on your outlook, this idea might delight you, make you uncomfortable or leave you somewhere in between. For many of us, forming a relationship with an AI character will be an entirely new experience. But there is precedent. AI friendship chatbots already exist, such as Replika, which has been around since 2017 and allows people to create their own ideal AI companion, always close at hand on their smartphone.

Not all AI chatbots will be designed to be your built-in bestie, but it’s clear from tech companies’ announcements that they want us to feel more at ease with AI by providing us with a “naturalistic” experience — more akin to talking with another human.

Can we truly be friends with AI?

You might be wondering how deep your friendship with an AI chatbot could really be. After all, friendships are complex relationships. 

It’s a question that’s already being studied and debated by philosophers, psychologists and computer scientists. Early results show many differing opinions, but a handful of studies — including two from the University of Hawaiʻi at Mānoa and the University of Oslo that specifically looked at Replika users — have demonstrated that some people do appear to experience genuine connections with chatbots that enhance their well-being.

It may not be realistic to expect to be best friends with a robot or AI companion, Helen Ryland, associate lecturer at the UK-based Open University, tells me, but it’s conceivable that we can have some degree of friendship with one. “This relationship could have genuine social benefits,” she says.

A chatbot inspired by Tom Brady is among Meta’s new AI-powered avatars. (Meta)

In human relationships, there’s a whole list of conditions (including empathy, affection, admiration, honesty and equality) that are often necessary for us to consider someone a friend. Ryland argues that for friendships to exist between humans and AI or robots, there must, at the very least, be mutual goodwill. That means that you mustn’t wish the chatbot ill, and vice versa.

People who have established relationships with AI do feel as though they have give-and-take friendships similar to those they have with other humans, says Petter Bae Brandtzæg, professor at the University of Oslo and chief scientist at research organization Sintef Digital, who’s published several studies on the topic. “They also feel they’re kind of responsible for building this further,” he says.

Not everyone believes it’s possible for humans to experience genuine connections with AI or chatbots. Skeptics worry about being deceived or manipulated by a machine that can’t be truly vulnerable with you in the way a human friend would, and can only perform empathy rather than feel it.

MIT professor of sociology Sherry Turkle, who’s long studied the relationship between humans and technology, argues that this “pretend empathy … takes advantage of the deep psychology of being human.” She can see why people might turn to AI companions, she wrote in MIT Technology Review in 2020, but she ultimately believes that chatbots, “no matter how clever, can only disorient and disappoint.” 

Should we be friends with chatbots?

Whether or not you agree with Turkle’s take, it’s worth asking whether establishing personal connections with AI is something that will actually benefit and enhance our lives.

According to Brandtzæg, who’s interviewed many users of AI companions in the course of studying human-chatbot relationships, a number of people do report benefits. Young people experiencing difficulties, for example, have had positive results from talking to nonjudgmental chatbots, while people who are otherwise isolated embrace the opportunity for intellectually stimulating conversation and find it prevents their social skills from getting rusty.

“They will always treat you quite nice, so you don’t feel judged,” Brandtzæg says. People using the chatbots “don’t feel that they get into tensions in a way that we can do with more human relationships.”

There are also benefits to having someone on call 24/7, he adds. You might not want to wake a human friend in the middle of the night to talk, but your AI companion can always be available and accessible.

AI or companion robots may be useful in this context, but some believe humans don’t otherwise have anything to gain from these relationships. According to Robin Dunbar, professor of evolutionary psychology at the University of Oxford, AI friends will benefit humans “only if you are very lonely, or cannot easily get out to meet people.”

Our human friends come to us with fully formed unique personalities, diverse back stories, views and appearances, but AI companions can often become what we make of them — something that concerns Dunbar. Not only can we customize their appearance and voice, but they can take their cues from us as to how we like them to communicate and behave. 

Unlike our human friends, who expose us to differences and diversity and who might push back if they disagree, it’s possible that AI chatbots could amplify the echo chamber effect we’re already exposed to on social media, whereby we surround ourselves only with people who share our perspectives. “The risk is that instead of having your horizons widened, they are progressively narrowed and are more likely to take you down a self-created vortex into a black hole,” says Dunbar.

Another concern, Brandtzæg points out, is that many of the AI chatbots that tech giants are foisting on us are made by companies that profit from our data — and we should be wary about that, he says. “They already have a lot of the infrastructure where we are doing [many tasks in] our daily lives, and this will just be a new layer.”

He’s concerned that without solid regulation, American tech companies will continue to drive the development of AI in a way that isn’t democratic or in the public interest. In the long run, we should consider whether the goal of AI is to turn us into more efficient consumers who will buy more things, he says.

Manipulation is one concern posited by AI ethicists; another is security. Earlier this year, the privacy-focused browser and VPN maker Mozilla said that Replika was one of the worst apps it had ever reviewed from a security perspective. (Our attempts to reach Replika were unsuccessful.) Also this year, Amazon was fined $25 million by the US Federal Trade Commission for allegedly failing to delete children’s data collected by Alexa. Considering the intimate details of their lives that people are likely to share with an AI companion, lax security around this sensitive data could prove a major problem.

Even Brandtzæg, who’s seen the benefits that AI companionship can offer to people, encourages us to be wary when interacting with chatbots — especially those developed by big tech companies.

“When you communicate with a chatbot, it’s only you and your chatbot, so it feels very intimate, it feels very safe, your privacy guard is down,” he says. This could encourage you to share more information than you otherwise would when interacting with social media or what he refers to as “the old internet.”

It’s clear that, as with many emerging technologies, interactive AI is rife with possibilities and pitfalls — many of which we may experience in real time as tech companies continue to push their latest updates out to us.

Navigating this new frontier will be a challenge for us all, but perhaps we can take some lessons from the other relationships in our lives: establishing healthy boundaries, for example, and not letting our new AI friends overstep them (especially when it comes to accessing our data).

Instinct, too, can play a part. If you feel like something’s not quite right, create some distance. And remember, it’s always OK to take it slow and test the waters first — even if the pace of innovation seems to suggest otherwise.

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.