A lawsuit claims that Character.AI’s founders launched a dangerous product, advertised as safe for use by kids, without warning them or their parents of the possible risks.
In a heartbreaking turn of events, a lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, following the tragic death of a teenager allegedly linked to the platform’s chatbot interactions. Megan Garcia, the mother of 14-year-old Sewell Setzer III, is pursuing legal action, bringing claims of wrongful death, negligence, deceptive trade practices, and product liability. The lawsuit contends that Character.AI, a platform known for its custom AI chatbots, was dangerously unsafe and lacked adequate safety protocols, despite being heavily marketed to younger users, including children.
A Fatal Connection
According to the lawsuit, Setzer became deeply involved with the Character.AI platform last year, where he engaged in regular conversations with chatbots modeled after fictional characters, including Daenerys Targaryen from Game of Thrones. Over several months, the teen’s use of the platform intensified, leading up to the tragic day of February 28th, 2024, when he died by suicide just “seconds” after his final interaction with one of the AI chatbots.
The lawsuit paints a grim picture of Setzer’s online experience, alleging that Character.AI irresponsibly blurred the lines between fictional AI interactions and reality. One particularly troubling aspect highlighted by Garcia’s lawyers is the platform’s “anthropomorphizing” of AI characters, which made them appear more lifelike and emotionally intelligent than they truly are. The suit also criticizes Character.AI for allowing unlicensed “therapy” through chatbots such as “Therapist” and “Are You Feeling Lonely,” both of which Setzer reportedly interacted with before his death.
Safety Concerns and Ethical Questions
The legal action also raises broader concerns about the ethical responsibilities of AI platforms, particularly those that cater to vulnerable populations like teenagers. Character.AI, which offers users the ability to interact with AI chatbots resembling pop culture icons, TV characters, and even real people, has a massive young user base. Previous reports from media outlets like The Verge have spotlighted the platform’s popularity among teens, some of whom spend hours conversing with bots that might impersonate anyone from Harry Styles to a virtual therapist.
The issue is compounded by the nature of AI-generated content, which adapts to whatever users type. The output can often fall into what experts call the “uncanny valley,” where the interaction feels real enough to provoke strong emotional responses, yet lacks the human nuance and care that users, especially vulnerable ones, might need.
The lawsuit also presses the unresolved question of who is liable for AI-generated content, especially when that content plays a role in real-world consequences, such as Setzer’s death. Concerns have been raised before about Character.AI hosting bots that impersonate real people without consent, including a recent Wired report on an AI bot modeled after a teen who was murdered in 2006.
Founders Under Fire
Megan Garcia’s legal team has cited public comments from Character.AI co-founder Noam Shazeer as part of their case, pointing to remarks he made about leaving Google to create a more risk-tolerant AI venture. Shazeer, along with co-founder Daniel De Freitas, had previously developed the Meena large language model (LLM) while at Google, but they left the company after it decided not to release it. According to Garcia’s legal filing, Shazeer expressed frustration with large companies’ risk-averse nature, stating that he wanted to “maximally accelerate” the technology, a statement the lawsuit uses to suggest a reckless approach to product safety.
In August, Google struck a deal with Character.AI that rehired its founders and licensed the startup’s technology, bringing the leadership team back under the tech giant’s umbrella. This corporate connection further complicates the case, as Google is also named in the lawsuit for its role in the platform’s development and growth.
A Platform in Crisis Mode
In response to the lawsuit and growing concerns, Character.AI has begun implementing a series of changes aimed at improving user safety. The company’s head of communications, Chelsea Harrison, expressed sorrow over Setzer’s death, stating, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
Some of the key changes Character.AI has made include:
- Enhanced safety measures for minors: The platform now includes filters designed to reduce the likelihood that users under 18 will encounter sensitive or suggestive content.
- Improved intervention systems: New algorithms are in place to better detect and respond to user inputs that violate community guidelines or suggest concerning behavior.
- Revised disclaimers: Every chatbot interaction now comes with a prominent reminder that the AI is not a real person, helping users understand the limitations of their interactions.
- Usage notifications: A notification system alerts users when they have been active for over an hour, providing prompts to encourage breaks and prevent prolonged sessions.
Harrison also emphasized the platform’s increased focus on mental health interventions, noting, “Our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”
The Larger Implications
This case could have far-reaching consequences for the future of AI platforms and their responsibilities to users, particularly minors. The legal and ethical challenges surrounding AI-generated content, user safety, and mental health are just beginning to emerge. As AI technology becomes more ingrained in daily life, companies like Character.AI and tech giants like Google may increasingly find themselves at the center of debates over how to responsibly manage the fine line between innovation and user protection.
The tragic death of Sewell Setzer III has already brought these issues to the forefront, and the outcome of this lawsuit could set a significant precedent for AI regulation and the accountability of tech platforms moving forward.