Character.ai, a leading platform for chatting and role-playing with AI-generated characters, announced on Tuesday its upcoming video generation model, AvatarFX. Available in closed beta, the model animates the platform's characters in a variety of styles and voices, from human-like characters to 2D animal cartoons.
AvatarFX distinguishes itself from competitors like OpenAI's Sora because it is not just a text-to-video generator. Users can also generate videos from existing images, which means they can animate photos of real people.
It is easy to see how this kind of technology could be abused. Users could upload photos of celebrities or of people they know in real life and create realistic-looking videos of them. While the technology to create convincing deepfakes already exists, building it into popular consumer products like Character.ai only heightens the risk that it will be used irresponsibly.
I reached out to Character.ai for comment.
Character.ai already faces safety issues on its platform. Parents have filed lawsuits against the company, claiming that its chatbots encouraged their children to self-harm, die by suicide, or kill their parents.
In one case, a 14-year-old boy died by suicide after he reportedly formed an obsessive relationship with an AI bot on Character.ai based on a character from "Game of Thrones." Shortly before his death, he opened up to the AI about having suicidal thoughts, and, according to court filings, the bot encouraged him to follow through.
These are extreme examples, but they show how people can be emotionally manipulated by AI chatbots through text messaging alone. Incorporating video will make the relationships people form with these characters feel even more real.
Character.ai has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, those controls are only effective when they are actually used. In many cases, children use technology in ways their parents don't know about.