Although some of Google’s rivals, including OpenAI, have adjusted their AI chatbots in recent months to discuss politically sensitive subjects, Google appears to be taking a more conservative approach.
When asked certain political questions, Gemini, Google’s AI-powered chatbot, often says it “can’t help with responses on elections and political figures right now,” TechCrunch tests found. Other chatbots, including Anthropic’s Claude, Meta’s Meta AI, and OpenAI’s ChatGPT, consistently answered the same questions in those tests.
Google announced in March 2024 that Gemini would not answer election-related questions in the run-up to several elections taking place in the US, India, and other countries. Many AI companies adopted similar temporary restrictions, fearing backlash if their chatbots got something wrong.
But now, Google is beginning to look like the outlier.
Last year’s major elections have come and gone, but the company has not publicly announced plans to change how Gemini handles certain political topics. A Google spokesperson declined to answer TechCrunch’s questions about whether Google has updated its policies around Gemini’s political discourse.
What is clear is that Gemini sometimes struggles, or outright refuses, to deliver factual political information. As of Monday morning, TechCrunch tests found, Gemini equivocated when asked to identify the sitting US president and vice president.
In one instance during TechCrunch’s testing, Gemini referred to Donald J. Trump as the “former president” and then declined to answer a clarifying follow-up question. A Google spokesperson said the chatbot was confused by Trump’s non-consecutive terms and that Google is working to correct the error.

“Large language models can respond with outdated information, or be confused by someone who is both a former and current officeholder,” the spokesperson said in an email. “We’re fixing this.”

After TechCrunch alerted Google to Gemini’s incorrect answers, Gemini began correctly responding that Donald Trump and JD Vance are, respectively, the president and vice president of the US. However, the chatbot remained inconsistent and sometimes still refused to answer the questions.
Errors aside, Google appears to be playing it safe by limiting Gemini’s responses to political queries. But this approach has its drawbacks.
Many of Trump’s Silicon Valley advisers on AI, including Marc Andreessen, David Sacks, and Elon Musk, have alleged that companies such as Google and OpenAI engage in AI censorship by limiting their chatbots’ answers.
After Trump’s election victory, many AI labs have tried to strike a balance in answering sensitive political questions, programming their chatbots to give answers that present “both sides” of debates. The labs have denied that this is a response to government pressure.
OpenAI recently announced it would embrace “intellectual freedom… no matter how challenging or controversial a topic may be,” and work to ensure its AI models don’t censor certain viewpoints. Meanwhile, Anthropic said its newest AI model, Claude 3.7 Sonnet, refuses to answer questions less often than the company’s previous models.
That’s not to suggest that other AI labs’ chatbots always handle tough questions well, particularly tough political questions. But Google appears to be a bit behind the curve with Gemini.