In the summer of 2019, a group of Dutch scientists conducted an experiment to collect “digital confessions.” At a music festival near Amsterdam, the researchers asked attendees to share a secret anonymously by chatting online with either a priest or a relatively basic chatbot, assigned at random. To their surprise, some of the nearly 300 participants offered deeply personal confessions, including admissions of infidelity and experiences of sexual abuse. While what they shared with the priests (in reality, incognito scientists) and the chatbots was “equally intimate,” participants reported feeling more “trust” in the humans but less fear of judgment with the chatbots.
This was a novel finding, explains Emmelyn Croes, an assistant professor of communication science at Tilburg University in the Netherlands and lead author of the study. Chatbots were then primarily used for customer service or online shopping, not personal conversations, let alone confessions. “Many people couldn’t imagine they would ever share anything intimate to a chatbot,” she says.
Enter ChatGPT. In 2022, three years after Croes’ experiment, OpenAI launched its artificial intelligence–powered chatbot, now used by 700 million people globally, the company says. Today, people aren’t just sharing their deepest secrets with virtual companions; they’re engaging in regular, extended discussions that can shape beliefs and influence behavior, with some users reportedly cultivating friendships and romantic relationships with AIs. In chatbot research, Croes says, “there are two domains: There’s before and after ChatGPT.”
Take r/MyBoyfriendIsAI, a Reddit community where people “ask, share, and post experiences about AI relationships.” As MIT Technology Review reported in September, many of its roughly 30,000 members formed bonds with AI chatbots unintentionally, through organic conversations. Elon Musk’s Grok offers anime “companion” avatars designed to flirt with users. And “Friend,” a new, wearable AI product, advertises constant companionship, claiming that it will “binge the entire [TV] series with you” and “never bail on our dinner plans”—unlike flaky humans.
The chatbots are hardly flawless. Research shows they are capable of talking people out of conspiracy theories and may offer an outlet for some psychological support, but virtual companions have also reportedly fueled delusional and harmful thinking, particularly in children. At least three US teenagers have killed themselves after confiding in chatbots, including ChatGPT and Character.AI, according to lawsuits filed by their families. Both companies have since announced new safety features, with Character.AI telling me in an email that it intends to block children from engaging in “open-ended chat with AI” on the platform starting in late November. (The Center for Investigative Reporting, which produces Mother Jones, is suing OpenAI for copyright violations.)
As the technology barrels ahead—and lawmakers grapple with how to regulate it—it’s become increasingly clear just how much a humanlike string of words can captivate, entertain, and influence us. While most people don’t initially seek out deep engagement with an AI, argues Vaile Wright, a psychologist and spokesperson for the American Psychological Association, many AIs are designed to keep us engaged for as long as possible to maximize the data we provide to their makers. For instance, OpenAI trains ChatGPT on user conversations (though there is an option to opt out), while Meta intends to run personalized ads based on what people share with Meta AI, its virtual assistant. “Your data is the profit,” Wright says.
Some advanced AI chatbots are also “unconditionally validating” or sycophantic, Wright notes. ChatGPT may praise a user’s input as “insightful” or “profound,” and use phrases like, I’m here for you—an approach she argues helps keep us hooked. (This behavior may stem from AI user testing, where a chatbot’s complimentary responses often receive higher marks than neutral ones, leading it to play into our biases.) Worse, the longer someone spends with an AI chatbot, some research shows, the less accurate the bot becomes.
People also tend to overtrust AI. Casey Fiesler, a professor who studies technology ethics at the University of Colorado, Boulder, highlights a 2016 Georgia Tech study in which participants consistently followed an error-prone “emergency guide robot” while fleeing a building during a fake fire. “People perceive AI as not having the same kinds of problems that humans do,” she says.
At the same time, explains Nat Rabb, a technical associate at MIT’s Human Cooperation Lab who studies trust, the way we develop trust in other humans—perception of honesty, competence, and whether someone shares an in-group—can also dictate our trust in AI, unlike other technologies. “Those are weird categories to apply to a thermostat,” he says, “But they’re not that weird when it comes to generative AI.” For instance, he says, research from his colleagues at MIT indicates that Republicans on X are more likely to use Grok to fact-check information, while Democrats are more likely to go with Perplexity, an alternative chatbot.
Not to say AI chatbots can’t be used for good. For example, Wright suggests they could serve as a temporary stand-in for mental health support when human help isn’t readily accessible—say, a midnight panic attack—or to help people practice conversations and build social skills before trying them out in the real world. But, she cautions, “it’s a tool, and it’s how you use the tool that matters most.” Eugene Santos Jr., an engineering professor at Dartmouth College who studies AI and human behavior, would like to see developers better define how their chatbots ought to be used and set guidelines, rather than leaving it open-ended. “We need to be able to lay down, ‘Did I have a particular goal? What is the real use for this?’”
Some say rules could help, too. At a congressional hearing in September, Wright implored lawmakers to consider “guardrails,” which she told me could include things like stronger age verification, time limits, and bans on chatbots posing as therapists. The Biden administration introduced dozens of AI regulations in 2024, but President Donald Trump has committed to “removing red tape” he claims is hindering AI innovation. Silicon Valley leaders, meanwhile, are funding a new PAC to advocate for AI industry interests, the Wall Street Journal reports, to the tune of more than $100 million.
In short, we’re worlds away from the “digital confessions” experiment. When I asked Croes what a repeat of her study might yield, she noted that the basic parameters aren’t so different: “You are still anonymous. There’s still no fear of judgment,” she says. But today’s AI would likely come across as more “understanding” and “empathetic”—more human—and evoke even deeper confessions. That aspect has changed. And, you might say, so have we.
