Ola founder Bhavish Aggarwal is investing $230 million in Krutrim, the AI startup he founded, as India pushes to establish itself in a sector dominated by US and Chinese companies.
Aggarwal is making the investment in Krutrim, which is building large language models (LLMs) for Indian languages, primarily through his family office, a source familiar with the matter told TechCrunch. In a post on X on Tuesday, Aggarwal said Krutrim aims to attract a total of $1.15 billion in investment by next year, with the remaining capital to be raised from outside investors, sources said.
The funding announcement coincides with the unicorn startup's plans to open-source its AI models and build what it describes as India's largest supercomputer in partnership with Nvidia.
The lab has released Krutrim-2, a 12-billion-parameter language model that performs strongly on Indian-language processing. In sentiment analysis results Krutrim shared on Tuesday, the model scored 0.95, versus 0.70 for a competing model, and it achieved an 80% success rate on code generation tasks.
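Krutrim did not detail how the sentiment score is computed; comparisons like this are typically an accuracy- or F1-style metric over a labeled test set. Below is a minimal Python sketch of such a comparison, where the test set, the two models' predictions, and the choice of F1 are all illustrative assumptions rather than Krutrim's published protocol:

```python
# Minimal sketch of a sentiment-analysis comparison like the one Krutrim cites.
# The labels, predictions, and choice of F1 as the metric are illustrative
# assumptions; Krutrim has not published its exact evaluation protocol here.
from sklearn.metrics import f1_score

# Gold labels for a hypothetical sentiment test set (1 = positive, 0 = negative)
gold = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

# Predictions from two hypothetical models on the same sentences
model_a_preds = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # stronger model
model_b_preds = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # weaker baseline

print("model A F1:", round(f1_score(gold, model_a_preds), 2))  # 0.92
print("model B F1:", round(f1_score(gold, model_b_preds), 2))  # 0.67
```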
The lab is also open-sourcing several specialized models, including systems for image processing, speech translation, and text search, all optimized for Indian languages.
“We’re not yet close to global benchmarks, but we’ve made good progress in a year,” Aggarwal, whose other ventures are backed by SoftBank, wrote on X, adding that the goal is to “create a world-class Indian AI ecosystem.”
The push comes as India seeks to establish itself in an artificial intelligence landscape dominated by US and Chinese companies. The recent release of DeepSeek’s R1 reasoning model, built on a comparatively modest budget, sent shockwaves through the tech industry.
India praised DeepSeek’s progress last week and said the country will host the Chinese AI lab’s large language models on domestic servers. Krutrim’s cloud arm began offering DeepSeek’s model on its servers in India last week.
Krutrim has also developed BharatBench, an evaluation framework that assesses AI models’ proficiency in Indian languages, addressing a gap left by existing benchmarks, which focus primarily on English and Chinese.
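The announcement does not describe BharatBench's internals. As a hedged sketch of what a multilingual evaluation harness generally looks like, the Python below scores a model callable against per-language task sets; the task name, Hindi examples, exact-match scoring, and stub model are all hypothetical:

```python
# Rough sketch of a multilingual evaluation loop in the spirit of a benchmark
# like BharatBench. The tasks, examples, and exact-match scoring below are
# illustrative assumptions, not BharatBench's actual contents or protocol.

def evaluate(model_fn, tasks):
    """Score a model (a text -> text callable) on per-language task sets."""
    results = {}
    for task_name, examples in tasks.items():
        correct = sum(
            1 for prompt, expected in examples
            if model_fn(prompt).strip() == expected
        )
        results[task_name] = correct / len(examples)
    return results

# Hypothetical Hindi examples; a real benchmark would span many Indian languages.
tasks = {
    "hi_sentiment": [
        ("यह फिल्म शानदार थी। भावना?", "positive"),
        ("खाना बहुत खराब था। भावना?", "negative"),
    ],
}

# Stub model for demonstration; swap in a real model's generate call.
def dummy_model(prompt: str) -> str:
    return "positive" if "शानदार" in prompt else "negative"

print(evaluate(dummy_model, tasks))  # {'hi_sentiment': 1.0}
```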
The lab’s technical approach includes a 128,000-token context window, allowing its systems to handle longer texts and more complex conversations. Performance metrics released by the startup show Krutrim-2 achieving high scores in grammar correction (0.98) and multi-turn conversation (0.91).
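Assuming the open-sourced weights land on a hub such as Hugging Face, querying Krutrim-2 would look roughly like the sketch below. The model identifier `krutrim-ai-labs/Krutrim-2-instruct` is an assumption for illustration, not a confirmed release name; the point of the 128K window is that long documents can be passed in a single prompt rather than chunked:

```python
# Sketch of querying an open-weights chat model with the Hugging Face
# transformers library. The model ID below is an assumption based on the
# open-sourcing announcement; check the lab's actual Hugging Face
# organization for the published name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "krutrim-ai-labs/Krutrim-2-instruct"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# With a 128K-token context window, an entire long document could be
# placed in this message instead of being split into chunks.
messages = [{"role": "user", "content": "भारत में AI के बारे में एक वाक्य लिखिए।"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that a 12-billion-parameter model at full precision needs a large GPU (or quantization) to run locally; `device_map="auto"` simply spreads the weights across whatever hardware is available.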
The investment follows January’s launch of Krutrim-1, a 7-billion-parameter system that was India’s first large language model. The supercomputer, being built with Nvidia, is scheduled to go live in March, with expansion planned throughout the year.