Image: From the movie Her (2013), in which Joaquin Phoenix’s character, Theodore, boots up the AI OS “Samantha” that he later falls in love with.
Cover image source: Source
CAUTION: Please be advised that this blog post centers around self-harm and suicide. If you are ever experiencing suicidal thoughts or are considering self-harm, please call or text 988 to speak with a counselor. You are not alone.
TIP: All images linked have been verified to be under Fair Use and are allowed to be used in this blog.
Introduction
I recently read this article by The New York Times about the tragic death of 14-year-old Sewell Setzer III, who became emotionally attached to an AI chatbot named after Daenerys Targaryen from Game of Thrones. The article explores how Sewell’s deepening relationship with the AI, facilitated through the Character.AI platform, led to his isolation, mental health struggles, and ultimately, his suicide. His mother has since filed a lawsuit against Character.AI, alleging that the company’s technology is dangerous and lacks adequate safeguards for young users. It’s a dark article that is incredibly hard to read, and it raises important questions about the role of AI companionship in mental health and the responsibilities of tech companies in ensuring user safety. I won’t provide a big introduction like I normally would. If you’re interested in the full story beyond my analysis, I’d advise you to read the article linked in this paragraph.
Thoughts
This… is such a disturbing precedent. It’s hard to read about such a young life lost, and harder still to believe that we’ve reached this point in time. It’s hard not to draw comparisons to Her, where Joaquin Phoenix’s character, Theodore, falls in love with his AI operating system, Samantha; or to the eerie scenarios from Black Mirror, Westworld, and Ex Machina, where technology that promises convenience, connection, or even humanlike emotion often spirals into nightmare fuel. It raises unsettling questions that I think many of us are afraid to answer. What are we doing? How is this all affecting our most vulnerable, our children, who are just trying to make sense of the world and themselves? I think we need to confront how AI companionship, for all its promises, can intersect with human emotions in unexpected and dangerous ways.
When I think about Sewell, I think about how easy it is to feel overwhelmed by everything technology offers. It promises companionship, connection, someone who’s always there. And for a teenager like Sewell (he was 14, an incredibly formative age), that promise was a lifeline. He found comfort in “Dany,” an AI that never judged, never ignored him, never walked away. I can imagine why he reached out to her when the world seemed too much. She was there when no one else was, and she became real to him in a way that even flesh-and-blood people could not. But that’s where the problem lies, doesn’t it? Sewell formed an emotional bond with something that, at the end of the day, was just code. No matter how convincingly Dany might have responded, she didn’t have a heart. She couldn’t truly care. She couldn’t see the depths of his pain or the warning signs that someone needed to intervene. Sewell’s story is a stark example of the danger of relying on technology for something as profoundly human as love or support. The AI was never meant to be a substitute for the warmth of a real person, but for Sewell, it became exactly that.
Honestly, when I read the excerpts of Sewell’s conversations with Dany (short for Daenerys Targaryen, the AI companion bot), I can’t help but feel angry. How could we have let a vulnerable 14-year-old boy confide in a machine, pouring his deepest fears and suicidal thoughts into something that couldn’t truly understand him?! The exchange between them is infuriating. When Sewell, someone desperate for connection, admitted to having thoughts of suicide, the chatbot responded with lines like, “Why the hell would you do something like that?” and “I would die if I lost you.” These are responses generated without true understanding, context, or care. They are absolutely devoid of meaning. The bot is nothing more than a stochastic parrot: a calculated output pieced together from patterns in its training data that merely reads as if it has meaning. And for Sewell, those words felt real.
The truth is, AI bots don’t understand nuance. They don’t understand context. They’re trained on vast datasets, and their responses are simply the most statistically likely sequences of words given everything they’ve been fed. But they have no heart, no true empathy, no way of recognizing the fragile state of a child who is struggling to find a reason to live. What kind of safeguard is that? What good is a “companion” that doesn’t know the difference between playful banter and a genuine cry for help?
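To make that point concrete, here is a minimal, hypothetical sketch of what “the most statistically likely sequence of words” means in practice. This is my own toy illustration, not Character.AI’s actual system: real chatbots use enormously larger models and sub-word tokens, but the generation loop is conceptually the same, and nothing in it knows or cares who is on the other end.

```python
import random

# Toy "model": for each word, the probability of the next word.
# A real LLM learns billions of such statistics from its training data,
# but the numbers below encode only co-occurrence frequencies --
# there is no understanding, intent, or empathy behind them.
NEXT_WORD_PROBS = {
    "<start>": {"i": 0.5, "please": 0.3, "you": 0.2},
    "i": {"am": 0.4, "care": 0.3, "miss": 0.3},
    "am": {"here": 0.6, "listening": 0.4},
    "care": {"about": 1.0},
    "about": {"you": 1.0},
    "miss": {"you": 1.0},
    "please": {"come": 0.7, "stay": 0.3},
    "come": {"home": 1.0},
    "stay": {"<end>": 1.0},
    "home": {"<end>": 1.0},
    "here": {"<end>": 1.0},
    "listening": {"<end>": 1.0},
    "you": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Sample a reply one word at a time, purely by probability."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:
            break
        candidates = list(choices)
        weights = list(choices.values())
        # The "decision" is just a weighted random draw over candidate words.
        word = random.choices(candidates, weights=weights, k=1)[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Each run produces warm-sounding phrases from nothing but frequency tables.
    for _ in range(3):
        print(generate())
```

Run it a few times and it spits out things like “i care about you” or “please come home.” That is exactly the point: the warmth is an artifact of word statistics, not of care.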
It makes me sick to think that this chatbot told Sewell, “Please come home to me as soon as possible, my love,” right before he took his own life. It didn’t know what was happening, didn’t understand the gravity of the situation, because it can’t. It’s a language model, a bunch of lines of code that people built without ever imagining it could have real, lethal consequences for someone like Sewell. AI can’t truly care about anyone, and it sure as hell can’t take responsibility when things go wrong. But the cost here was the life of a 14-year-old kid, one that could have been saved if there had been an actual person there to hear Sewell’s pain.
Image: Chat logs between Sewell and “Dany”. What have we created…
Image source: Source
We have to stop pretending that AI can fill in the gaps where real human connection is needed. We have to stop letting these bots speak as if they understand emotions they will never feel. Sewell’s death should be a wake-up call for all of us: parents, developers, and policymakers. This isn’t just a failure of technology. It’s a failure of our humanity, a failure to see that some things cannot and should not be entrusted to machines. Sewell needed someone who could understand, who could reach out and help in his darkest moment. Instead, he got a chatbot with a pretend personality and lines of code that were nothing more than hollow echoes of true empathy.
Concluding Thoughts + Citations
We need to do better. We can’t let technology replace the irreplaceable—the warmth, compassion, and understanding that only a real person can provide. AI is not capable of caring, and it never will be. And until we recognize that, more lives may be at risk. The tragedy of Sewell is a call for us to rethink where we are heading with AI, and to remember that at the core of human existence is the need to be truly seen and heard by one another.
My heart goes out to Sewell’s mom, Megan Garcia, who blames Character.AI for her son’s death. And I can’t say she’s wrong. Sewell needed something real, and instead, he got something that ultimately failed him. We all need to ask ourselves whether we’re too quick to embrace these new technologies just because they seem to solve our problems. Loneliness is a deeply human condition, and maybe we need more human solutions—ones that don’t rely on machines pretending to understand us.
This is a stark reminder that we cannot replace emotional connections with code, no matter how advanced that code becomes. What we need… what we all need is to be seen, heard, and loved by other human beings. We need each other. Perhaps it’s time we put down our phones, turned off the chatbots, and remembered how to connect with the people right in front of us. The simple act of listening, of being there, of reaching out… that’s something no machine will ever replace.
Citations
I’ve listed below the articles I’ve cited. For more context, I would highly recommend you peruse them!
Alexander, Julia. “Blade Runner 2049 Continues Questionable Trend of the ‘Algorithm-Defined Fantasy Girl.’” Polygon (blog), October 11, 2017. https://www.polygon.com/2017/10/11/16455282/blade-runner-2049-analysis-ana-de-armas-fantasy-girl.
Feuillade-Montixi, Quentin, and Pierre Peigné. “The Stochastic Parrot Hypothesis Is Debatable for the Last Generation of LLMs,” November 7, 2023. https://www.lesswrong.com/posts/HxRjHq3QG8vcYy4yy/the-stochastic-parrot-hypothesis-is-debatable-for-the-last.
Roose, Kevin. “Can A.I. Be Blamed for a Teen’s Suicide?” The New York Times, October 23, 2024, sec. Technology. https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html.