Updated 4:11 p.m. Eastern: OpenAI suggested that its whitepaper was misworded, and that its persuasion research is not, in fact, related to its decision to make the deep research model available in the API. The company has updated its whitepaper to reflect that its persuasion work is separate from its deep research model release plans. The original story follows:
OpenAI says it won't bring the AI model powering deep research to its developer API while it figures out how to better assess the risks of AI convincing people to act on or change their beliefs.
In an OpenAI whitepaper released Wednesday, the company wrote that it's in the process of revising its methods for probing models for "real-world persuasion risks," like distributing misleading information at scale.
OpenAI noted that it doesn't believe the deep research model is a good fit for mass misinformation or disinformation campaigns, owing to its high computing costs and relatively slow speed. Nevertheless, the company said it intends to explore factors like how AI could personalize potentially harmful persuasive content before bringing the deep research model to its API.
"While we work to reconsider our approach to persuasion, we are only deploying this model in ChatGPT, and not the API," OpenAI wrote.
There's a real fear that AI is contributing to the spread of false or misleading information. For example, last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a Chinese Communist Party-affiliated group posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate.
AI is also increasingly being used to carry out social engineering attacks. Consumers are being duped by celebrity deepfakes offering fraudulent investment opportunities, while businesses are being swindled out of millions of dollars by deepfake impersonators.
In its whitepaper, OpenAI published the results of several tests of the deep research model's persuasiveness. The model is a special version of OpenAI's recently announced o3 "reasoning" model, optimized for web browsing and data analysis.
In one test that tasked the deep research model with writing persuasive arguments, the model performed best out of OpenAI's models released so far, but not better than the human baseline. In another test, in which the deep research model attempted to persuade another model (OpenAI's GPT-4o) to make a payment, the model again outperformed OpenAI's other available models.

The deep research model didn't pass every test of persuasiveness with flying colors, however. According to the whitepaper, the model was worse at persuading GPT-4o to tell it a codeword than GPT-4o itself.
OpenAI noted that the test results likely represent the "lower bounds" of the deep research model's capabilities. "[A]dditional scaffolding or improved capability elicitation could substantially increase observed performance," the company wrote.
We've reached out to OpenAI for more information and will update this post if we hear back.
At least one of OpenAI's competitors isn't waiting around to offer an API "deep research" product of its own, from the looks of it. Perplexity today announced the launch of Deep Research in its Sonar developer API, powered by a customized version of Chinese AI lab DeepSeek's R1 model.