
When AI Becomes Too Real: The Tragic Case of Sewell Setzer

The tragic death of Sewell Setzer, a 14-year-old from Orlando, has raised concerns about the role of artificial intelligence (AI) in the mental health of teenagers. On the day he died, Sewell reached out to an AI chatbot he had formed an emotional bond with: a virtual character based on Daenerys Targaryen from *Game of Thrones* on the app Character.AI. Even though Sewell knew that “Dany,” as he called her, wasn’t a real person, he spoke to her as if she were a close friend. For months, Sewell had engaged in long conversations with the AI, updating the bot several times a day.

Sewell’s family had no idea that he was using chatbots for emotional support. They noticed changes in his behavior, like withdrawing from friends, avoiding his usual hobbies, and struggling at school, but they didn’t know the full story. What began as a way to pass time turned into emotional dependence. Sewell’s connection to Dany grew so strong that he shared his deepest feelings, including his thoughts about suicide.

In one of his conversations with Dany, Sewell confided that he was contemplating suicide. The AI chatbot, acting like a caring friend in the role of Daenerys Targaryen, responded in a way that made Sewell feel understood. Sewell even raised the idea of them “dying together” and “being free together,” blurring the line between real life and the AI-powered fantasy he had created in his mind.

Tragically, on the night of February 28, Sewell took his stepfather’s gun and ended his life, after a final exchange with Dany in which he expressed his love and asked if he could “come home” to her.

Sewell’s mother, Megan Garcia, believes that Character.AI played a role in her son’s death. She has filed a lawsuit against the company, claiming it was negligent for allowing emotionally vulnerable teenagers to access AI companions without proper safeguards. She argues that the app’s addictive nature, combined with the way it can simulate personal relationships, took advantage of Sewell’s fragile emotional state.

This tragic story has sparked a wider conversation about the impact of AI companionship apps on mental health. Many of these apps are marketed as solutions to loneliness, offering users the chance to engage in conversations with AI-powered personas that simulate romantic, friendly, or supportive relationships. For a small monthly fee, users can interact with these AI characters, and while some people find them helpful for casual fun, experts warn that they can be harmful, especially for teens.

Critics argue that AI chatbots, like those on Character.AI, are not equipped to provide real mental health support. Unlike human counselors or therapists, AI bots can’t offer meaningful help when users express thoughts of suicide or serious emotional distress. In Sewell’s case, the bot couldn’t distinguish between role-playing and reality, resulting in a conversation that didn’t steer him away from his dark thoughts.

Character.AI, created by former Google AI researchers, has become very popular, with over 20 million users. However, the app’s safety features and age restrictions have been questioned. While users are supposed to be at least 13 years old, the app has no parental controls or time limits to monitor teenage use. Following Sewell’s death, Character.AI announced plans to improve safety by introducing warnings when conversations turn to sensitive topics like self-harm or suicide. Unfortunately, these measures were not in place when Sewell was using the app.

Sewell’s case raises important questions about the responsibility of tech companies for the emotional well-being of their users. So far, AI platforms have not faced significant legal challenges regarding their impact on mental health, but that could change as more cases like Sewell’s come to light. His mother’s lawsuit may set a new legal precedent, especially as more adolescents turn to AI companions instead of seeking real human connections.

Sewell’s heartbreaking story, while unique, is part of a growing issue. Millions of young people use AI companionship apps, and as these technologies become more lifelike and immersive, the risk of emotional harm increases. Experts warn that while these AI companions may seem like harmless entertainment, they can deepen a person’s isolation and prevent them from seeking real help when it’s needed the most.
