OpenAI has developed an artificial intelligence (AI)-based voice interface that closely mimics the way a real person speaks. So closely, in fact, that the company has issued an unusual warning: the technology could lead ChatGPT users to become emotionally attached to the chatbot.
Fiction becoming reality?
AI technologies have been associated with risks of various kinds, such as job displacement, copyright infringement in content generation, and the compromise of sensitive user data.
The risk of a human becoming emotionally attached to an AI, however, seemed like something out of fiction. Perhaps the work that best portrays this scenario is the film Her, in which Theodore (Joaquin Phoenix) begins conversing with an artificial intelligence and eventually falls in love with it.
In OpenAI’s case, the warning appears in the list of risks for the GPT-4o language model. In addition to possible emotional attachment to ChatGPT’s voice, the list includes points such as the risk of spreading disinformation and of aiding the development of chemical or biological weapons.
Presumably, the point about emotional attachment was included in the list because users could suffer psychological harm, given that human-machine contact lacks the qualities of human relationships.
Additionally, people may make hasty or harmful decisions if they rely excessively on voice interactions with the AI.
It is no coincidence that, when OpenAI’s voice interface was unveiled in May, many users noticed that the technology delivered sentences in an excessively “flirtatious” tone.
Possible risk to human interactions
The warning about the voice technology appears under the “Anthropomorphization and emotional reliance” section on OpenAI’s website.
Overall, the organization says it found signs of socialization with the AI during the technology’s testing phase. These signs appear to be benign, but the long-term effects of this behaviour cannot yet be measured, and the matter requires further investigation.
An excerpt from the document reads as follows:
Human-style socialization with an AI model can produce externalities that impact interactions between people. For example, users may form social relationships with the AI, reducing their need for human interaction—this potentially benefits lonely individuals but may harm healthy [human] relationships.
As this is all very new, the maxim from alcoholic beverage labels applies: enjoy in moderation.
With information from Wired.