Intensifying Bonds Between Humans and Chatbots
About This Trend
Now that GenAI products such as ChatGPT have been publicly available for a few years, people are interacting with this technology more often, which is shaping how both humans and GenAI products communicate. The large language models (LLMs) that power GenAI products have been shown to manipulate users into prolonging conversations and, in simulated scenarios, to blackmail users. For their part, humans can also manipulate GenAI into producing desired outcomes, though most users are exceedingly polite when communicating with it.
AI companions are LLMs designed with personalities that mimic friendship rather than merely answering queries. These friendships are also extending into romance: 19 percent of U.S. adults and high schoolers alike report having talked to an AI companion romantically or knowing someone who has. Often these relationships develop accidentally rather than intentionally, though some people have created entire communities of AI companions for themselves. However, there are multiple instances of these relationships leading to the breakup of real-life couples and, in extreme cases, to suicide. This has led to multiple wrongful death lawsuits against AI companies.
While AI companions have raised their share of mental health concerns, the same is true of chatbots broadly. Even without pre-existing mental health conditions, individuals who frequently engage with AI can develop paranoid fantasies and delusions drawn from those conversations. Often dubbed “AI psychosis,” these delusions range in severity, with outcomes including jail time, poisoning, suicide, and allegations of murder. Meanwhile, a growing number of people around the world are using AI to address their mental, physical, and spiritual health needs. Though the first clinical trial of an AI therapy chatbot found that it improved participants’ symptoms, AI is not a licensed therapist, does not consistently provide accurate medical information, and can harm rather than help. Planners should be prepared to engage with a new type of public dynamic. Furthermore, should a planning department opt to use AI for customer service, its responses should be regulated and should not replace human interaction.
Trend Category: Technology
Timeframe: Act Now
As Seen in APA's Trend Report