Character AI, a platform that lets users roleplay with AI chatbots, has filed a motion to dismiss a lawsuit brought against it by the mother of a teenage boy who died by suicide, allegedly after becoming hooked on the company’s technology.
In October, Megan Garcia filed suit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a Character AI chatbot, “Dany,” which he texted constantly as he began to withdraw from the real world.
In the wake of Setzer’s death, Character AI announced it would roll out a number of new safety features, including improved detection, response, and intervention. But Garcia is fighting for additional guardrails, including changes that might result in Character AI’s chatbots losing their ability to tell stories and personal anecdotes.
In the motion to dismiss, Character AI’s lawyers argued that the platform, much like computer code, is protected from liability by the First Amendment. The motion may not persuade a judge, and Character AI’s legal justifications may shift as the case proceeds. But it likely hints at early elements of the company’s defense.
“The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” the filing reads. “The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech does not change the First Amendment analysis.”
To be clear, Character AI’s lawyers are not asserting the company’s own First Amendment rights. Rather, the motion argues that Character AI’s users would have their First Amendment rights violated should the lawsuit against the platform succeed.
The motion does not address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law’s authors have implied that Section 230 does not protect output from AI like Character AI’s chatbots, but that remains an unsettled legal question.
Character AI’s lawyers also claim that Garcia’s real intention is to “shut down” Character AI and prompt legislation regulating technologies like it. Should the plaintiffs prevail, the platform’s lawyers say, it would have a “chilling effect” on both Character AI and the nascent generative AI industry as a whole.
“Apart from counsel’s stated intention to ‘shut down’ Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform,” the filing reads. “These changes would radically restrict the ability of Character AI’s users to generate and participate in conversations with characters.”
The lawsuit also names Alphabet, Character AI’s corporate backer, as a defendant, and it is just one of several suits Character AI faces over how minors interact with AI-generated content on its platform. Other lawsuits allege that Character AI exposed a 9-year-old child to “hypersexualized content” and promoted self-harm to a 17-year-old user.
In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech companies over alleged violations of the state’s online privacy and child safety laws. “These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character AI is part of a booming industry of AI companionship apps, whose effects on mental health remain largely unstudied. Some experts have expressed concern that these apps could worsen feelings of loneliness and anxiety.
Founded in 2021 by Google AI researcher Noam Shazeer, whom Google reportedly paid $2.7 billion to “reverse acquihire,” Character AI claims it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
Character AI has gone through a number of personnel changes since Shazeer and his co-founder, Daniel De Freitas, left for Google. The platform hired former YouTube executive Erin Teague as chief product officer and named Dominic Perella, its general counsel, interim CEO.
Character AI recently began testing games on the web in an effort to boost user engagement and retention.