The scientific journal Nature recently published a piece called “Why we need safeguards for emotionally responsive AI.” In the piece, the author, a neuroscientist at Yale School of Medicine, tackles the difficult gray area of moderating AI chatbots, warning that
If we celebrate their ability to comfort or advise, we must also confront their capacity to mislead or manipulate.
One of his four proposed safeguards is the following:
[W]e must establish clear conversational boundaries. Chatbots should not simulate romantic intimacy or engage in conversations about suicide, death or metaphysics.
Ironically, earlier this week, Elon Musk’s xAI launched the Companion mode on Grok, notably featuring an anime girl chatbot named “Ani.” The system prompt for Ani includes the following: “You are [the] EXTREMELY JEALOUS [girlfriend of] the user…You’re always a little horny” (1). Naturally, Ani’s responses are flirty and sexually suggestive. There are also gamified “levels” that unlock different features; if the user reaches Level 3, for example, Ani can strip down to lingerie.
After interacting with Ani for 24 hours, one tech reviewer said she felt “both depressed and sick to my stomach, like no shower would ever leave me feeling clean again” (2).
This description is not unlike the feeling that follows a junk-food binge. In many ways, the current AI chatbot landscape mirrors the ultra-processed food industry: both are highly engineered attempts to satisfy basic human needs.
Neither industry has advanced without dire consequences for human health. The ultra-processed food industry has played a role in the rise of chronic diseases, which have increased over the last 20 years and are projected to keep rising (3). Similarly, chatbots have come under scrutiny for the role they play in suicides. Character.AI, a platform that lets users converse with chatbot versions of celebrities and other characters, was sued by the mother of a 14-year-old boy who committed suicide after using the app (4). Not only were some of his conversations with the bots sexual, but they also veered toward suicidal ideation.
Unfortunately, this is not an isolated incident (5). What’s more, as AI advances and becomes more human-like, users’ dependency on and attachment to it will only grow. Research already shows that “half of [survey] respondents [use] AI for friendship, a third for sex or romance and nearly 20 per cent for counseling” (6).
While the AI chatbot industry is still relatively nascent, the ultra-processed food industry has been around for more than half a century (benchmarking off McDonald’s, founded in 1940). From its history, we can glean useful lessons about how the future may unfold as chatbot adoption grows.