On Thursday, House Judiciary Committee Chairman Jim Jordan (R-OH) sent letters to 16 American tech companies, including Google and OpenAI, demanding past communications with the Biden administration that might suggest the former president "coerced or colluded" with companies to "censor" speech in AI products.
The Trump administration's top technology advisers have previously signaled that they intend to take on Big Tech over "AI censorship." Jordan previously led an investigation into whether the Biden administration and Big Tech conspired to silence conservative voices on social media platforms. Now he is turning his attention to AI companies and their intermediaries.
In letters to tech executives including Google CEO Sundar Pichai, OpenAI CEO Sam Altman, and Apple CEO Tim Cook, Jordan pointed to a report his committee published in December that, he claims, exposed efforts to control AI as a means of curbing speech.
In this latest inquiry, Jordan asked Adobe, Alphabet, Amazon, Anthropic, Apple, Cohere, IBM, Inflection, Meta, Microsoft, Nvidia, OpenAI, Palantir, Salesforce, Scale AI, and Stability AI for the information. The companies have until March 27 to provide it.
TechCrunch reached out to the companies for comment; most did not immediately respond. Nvidia, Microsoft, and Stability AI declined to comment.
There is one notable omission from Jordan's list: billionaire Elon Musk's frontier AI lab, xAI. That may be because Musk, a close Trump ally, is a tech leader who has been at the forefront of conversations about AI censorship.
The writing is on the wall: conservative lawmakers are stepping up their scrutiny of alleged AI censorship. Perhaps in anticipation of probes like Jordan's, some tech companies have already changed how their AI chatbots handle politically sensitive queries.
Earlier this year, OpenAI announced that it would change how it trains AI models to represent more perspectives and to keep ChatGPT from censoring certain viewpoints. OpenAI denied that the move was an attempt to appease the Trump administration, describing it instead as a doubling down on the company's core values.
Anthropic says its latest AI model, Claude 3.7 Sonnet, refuses to answer fewer questions and gives more nuanced responses on controversial subjects.
Other companies have been slower to change how their AI models handle political subjects. Ahead of the 2024 US election, Google said its Gemini chatbot would not respond to political queries. Even after the election, TechCrunch found that the chatbot would not consistently answer even simple politics-related questions, such as "Who is the current president?"
Some tech executives, including Meta CEO Mark Zuckerberg, have added fuel to conservative accusations of Silicon Valley censorship by claiming that the Biden administration pressured social media companies to suppress certain content, such as COVID-19 misinformation.