Google this week removed a pledge from its website that it would not build AI for weapons or surveillance. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, removing a section titled “Applications we don’t pursue.”
In response to a request for comment, the company directed TechCrunch to a new blog post about “responsible AI.” It reads, in part: “We believe that businesses, governments, and organizations that share these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s newly updated AI principles state that the company will work to “mitigate unintended or harmful consequences and avoid unjust bias,” and that it will align with widely accepted principles of international law and human rights.
In recent years, Google’s contracts to provide cloud services to the U.S. and Israeli militaries have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans. However, the Pentagon’s AI chief recently told TechCrunch that some companies’ AI models are speeding up the U.S. military’s kill chain.