
technology behind ChatGPT has been around for several years without much notice. It was the addition of the chatbot interface that made it so popular. In other words, it was not an evolution of AI per se, but rather a change in how AI interacts with people that captured the world’s attention.
Very quickly, people started to think of ChatGPT as an autonomous social entity. This is not surprising. As early as 1996, Byron Reeves and Clifford Nass looked at the personal computers of their day and found that "equating mediated and real life is neither rare nor unreasonable. It is very common, it is easy to foster, it does not depend on fancy media equipment, and thinking will not make it go away." In other words, people's default expectation of technology is that it behaves and interacts like a human being, even when they know it is "just a computer." Sherry Turkle, a professor at MIT who has studied AI agents and robots since the 1990s, stresses the same point and argues that lifelike forms of communication, such as body language and verbal cues, "push our Darwinian buttons": they can make us experience technology as social, even when we rationally understand that it is not.
If these scholars saw the social potential, and the risks, in decades-old computer interfaces, it is reasonable to assume that ChatGPT can have a similar, and perhaps stronger, effect. It uses first-person language, retains context, and answers in a persuasive, confident, conversational style. Bing's implementation of ChatGPT even uses emojis. This is quite a step up the social ladder from the more technical output one gets from searching, say, Google.
Critics of ChatGPT have focused on the harms its outputs can cause, such as misinformation and hateful content. But there are also risks in the mere choice of a social conversational style and in the attempt to make the AI mimic people as closely as possible.
The dangers of social interfaces
New York Times journalist Kevin Roose got caught up in a two-hour conversation with Bing's chatbot that ended with the chatbot declaring its love for him, even though Roose repeatedly asked it to stop. This kind of emotional manipulation would be even more harmful to vulnerable groups, such as teenagers or people who have experienced harassment. It can be deeply distressing for the user, and using humanlike language and emotion signals, such as emojis, is itself a form of emotional deception. A language model like ChatGPT has no emotions. It does not laugh or cry. It does not even understand the meaning of such actions.
Emotional deception in AI agents is not only an ethical problem; humanlike design can also make these agents more persuasive. Technology that behaves in humanlike ways is more likely to convince people to act, even when the requests are irrational, come from a faulty AI agent, or arise in emergency situations. This persuasiveness is dangerous because companies can exploit it in ways users do not want or do not even know about, from convincing them to buy products to influencing their political views.
As a result, some have taken a step back. Robot design researchers, for example, have promoted a non-humanlike approach as a way to lower people's expectations of social interaction. They propose alternative designs that do not replicate the ways people interact with one another, thereby setting more appropriate expectations of a piece of technology.
Define the rules
Some of the risks of social interactions with chatbots can be addressed by designing clear social roles and boundaries for them. Humans choose and switch roles all the time. The same person can move back and forth between roles as parent, employee, or sibling. As people switch from one role to another, the context and the expected boundaries of interaction change as well. You would not use the same language when talking to your child as when talking to a co-worker.
In contrast, ChatGPT exists in a social vacuum. Although there are some red lines it tries not to cross, it does not have a clear social role or expertise. It has no specific goal or predetermined intention either. Perhaps this was a conscious choice by OpenAI, the creator of ChatGPT, to promote a multitude of uses or a do-it-all entity. More likely, it reflects a lack of understanding of the social reach of conversational agents. Whatever the reason, this open-endedness sets the stage for extreme and risky interactions. A conversation can go in any direction, and the AI can take on any social role, from efficient email assistant to obsessive lover.