Researchers from ByteDance, TikTok's owner, have demonstrated OmniHuman-1, a new AI system that can generate perhaps the most realistic deepfake videos yet.
Deepfake AI has become a commodity. There's no shortage of apps that can insert someone into a photo or make them appear to say something they never actually said. But most deepfakes, and video deepfakes in particular, fail to clear the uncanny valley. There's usually some telltale sign that AI was involved somewhere.
Not so with OmniHuman-1 – at least judging from the cherry-picked samples the ByteDance team released.
Here's a fictional Taylor Swift performance. Here's a TED Talk that never happened. And here's a deepfaked Einstein lecture:
According to the ByteDance researchers, OmniHuman-1 needs only a single reference image and audio, such as speech or vocals, to generate a clip of arbitrary length. The output video's aspect ratio is adjustable, as is the subject's "body proportion" – that is, how much of the body is shown in the fake footage.
Trained on 19,000 hours of video content from undisclosed sources, OmniHuman-1 can also edit existing videos, even modifying the movements of a person's limbs. It's truly astonishing how convincing the results can be.
Granted, OmniHuman-1 isn't perfect. The ByteDance team says that "low-quality" reference images don't produce the best videos, and the system seems to struggle with certain poses. Note the odd gestures with the wine glass in this video:
Still, OmniHuman-1 is easily head and shoulders above previous deepfake techniques, and it may be a sign of what's to come. ByteDance hasn't released the system, but the AI community tends not to take long to reverse-engineer models like this.
The implications are worrisome.
Last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a group affiliated with the Chinese Communist Party posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate. In Moldova, deepfaked videos depicted the country's president, Maia Sandu, resigning. And in South Africa, a deepfake of rapper Eminem supporting a South African opposition party circulated ahead of the country's election.
Deepfakes are also increasingly being used to carry out financial crimes. Consumers are being duped by deepfakes of celebrities offering fraudulent investment opportunities, while businesses are being swindled out of millions by deepfake impersonators. According to Deloitte, AI-generated content contributed to more than $12 billion in fraud losses in 2023, a figure that could reach $40 billion in the U.S. by 2027.
Last February, hundreds of people in the AI community signed an open letter calling for strict deepfake regulation. In the absence of a federal law criminalizing deepfakes in the U.S., more than 10 states have enacted statutes against AI-aided impersonation. California's law – currently stalled – would be the first to empower judges to order those who post deepfakes to take them down or potentially face monetary penalties.
Unfortunately, deepfakes are difficult to combat. While some social networks and search engines have taken steps to limit their spread, the volume of deepfake content online continues to grow at an alarmingly fast rate.
In a May 2024 survey from ID verification company Jumio, 60% of people said they had encountered a deepfake in the past year. Seventy-two percent of respondents said they worried about being fooled by deepfakes on a daily basis, and a majority supported legislation to address the spread of AI-generated fakes.