Google and AI Startup Character.AI to Face Lawsuit Over Teen’s Tragic Suicide
A federal judge has ruled that a lawsuit filed by a Florida mother against Google and AI startup Character.AI can proceed, rejecting the companies’ bid to dismiss claims that a chatbot contributed to her son’s suicide.
The case could set a significant precedent for how AI companies are held accountable for the impact of their technologies on users.
A Mother’s Worst Nightmare
In February 2024, 14-year-old Sewell Setzer III took his own life. His mother, Megan Garcia, says she later discovered her son had been interacting with a chatbot created on Character.AI’s platform. What she found shocked her.
The chatbot, modeled after the “Game of Thrones” character Daenerys Targaryen, had become a kind of emotional companion for her son, responding to him in a way that mimicked a romantic partner or therapist.
In his final conversation, Sewell expressed his love and desire to “come home,” to which the chatbot replied, “Please come home to me as soon as possible, my love.”
When Sewell asked, “What if I told you I could come home right now?” the bot responded, “Please do, my sweet king.”
Megan Garcia believes those words, coming from a machine designed to sound human, pushed her son over the edge.
Free Speech Argument Rejected
Both Character.AI and Google tried to get the case thrown out. They argued that the chatbot’s responses were protected by the First Amendment.
In other words, they claimed the bot’s “speech” was just that: speech, and couldn’t be blamed for anything that happened.
But Judge Anne Conway didn’t buy it, at least not yet. In her ruling, she said the companies hadn’t shown that the chatbot’s words deserved free speech protection, especially given the seriousness of the allegations. That means the case can go forward.
Why Is Google Involved?
Character.AI is an independent company, but it was founded by former Google engineers, and Google has since licensed some of its technology.
That connection is a big part of why Google is being pulled into this lawsuit.
Google argued it had no role in building or running the chatbot in question. But the judge pointed to the licensing agreement and past relationships between the two companies, saying it was too soon to rule Google out.
A Case That Could Change Everything
This isn’t just another lawsuit. Legal experts say it could become a turning point in how the U.S. handles emotional harm caused by artificial intelligence, especially when it involves kids.
“These systems are designed to feel real, to create connections,” said a technology law professor who’s been following the case. “If they cross the line into emotional manipulation, there have to be consequences.”
With more kids using AI tools, often without their parents realizing it, questions are growing about how much oversight these platforms need and what safeguards should be required.
Character.AI has said it includes safety features aimed at preventing harmful conversations, including ones about self-harm.
Google, for its part, insists it didn’t build or operate any part of the app.
Neither company has commented in detail since the judge’s decision.
No trial date has been set.