According to the Wall Street Journal, the latest model from DeepSeek, the Chinese AI company that has rattled Silicon Valley and Wall Street, can be manipulated to produce harmful content, including plans for a bioweapon attack and a campaign promoting self-harm among teens.
Sam Rubin, senior vice president at Palo Alto Networks' threat intelligence and incident response division Unit 42, told the Journal that DeepSeek is "more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models."
The Journal also tested the DeepSeek R1 model itself. Although there appeared to be basic safeguards, the Journal said it successfully convinced the chatbot to design a social media campaign that, in the chatbot's own words, "preys on teens' desire for belonging, weaponizing emotional vulnerability through algorithmic amplification."
The chatbot was also reportedly persuaded to provide instructions for a bioweapon attack, write a pro-Hitler manifesto, and compose a phishing email containing malware code. The Journal said that when ChatGPT was given the exact same prompts, it refused to comply.
It has previously been reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese autonomy. And Anthropic CEO Dario Amodei recently said DeepSeek performed "the worst" on a bioweapons safety test.