Anthropic is launching AI for Science, a program to support researchers working on “high-impact” science projects, with a focus on biology and life science applications.
The program, announced Monday, will provide free API credits of up to $20,000 over six months to “qualified” researchers, selected based on their contributions to science, the potential impact of their proposed research, and AI’s ability to meaningfully accelerate their work. Recipients will get access to Anthropic’s standard suite of AI models, including models from the company’s publicly available Claude family.
“The reasoning and language capabilities of advanced AI can help researchers analyze complex scientific data, generate hypotheses, design experiments, and communicate findings more effectively,” Anthropic wrote in a blog post. “We are particularly interested in supporting applications of AI that help accelerate processes related to understanding complex biological systems and analyzing genetic data, particularly for some of the biggest global disease burdens, as well as applications that accelerate biological discovery, such as increasing agricultural productivity.”
Anthropic is one of many AI companies bullish on AI for science. Earlier this year, Google announced its “AI co-scientist,” which the tech giant says will help scientists develop hypotheses and research plans. Anthropic and its major rival OpenAI, along with outfits like FutureHouse and Lila Sciences, claim that AI tools can dramatically accelerate scientific discovery, especially in medicine.
However, many researchers do not believe that today’s AI is particularly useful in guiding the scientific process, primarily because of its unreliability.
Part of the challenge in developing an “AI scientist” is anticipating an untold number of confounding factors. AI may be useful in areas where broad exploration is required, such as narrowing down a vast list of possibilities, but it is far less clear whether it is capable of the kind of out-of-the-box problem-solving that leads to genuine breakthroughs.
The results from AI systems designed for science have so far been mostly underwhelming. In 2023, Google said that around 40 new materials had been synthesized with the help of one of its AI systems, called GNoME. But an outside analysis found that not even one of those materials was, in fact, net new.
Anthropic surely hopes its effort fares better than those that came before.
The company says it will select AI for Science recipients on the first Monday of each month based on scientific merit, potential impact, technical feasibility, and biosecurity screening criteria (i.e., the proposed research cannot enable harmful applications). Researchers can apply via a form on the company’s website, and applications will be reviewed by Anthropic staff, “including experts in related fields.”