Swiss startup LogicStar is betting on joining the AI agent game. Founded in the summer of 2024, the startup is pursuing not the more typical AI agent use case of co-developing code, but is building tools for the developer market that can perform autonomous maintenance of software applications. It has raised $3 million in pre-seed funding.
LogicStar CEO and co-founder Boris Paskalev (pictured top right in the feature image, with his fellow co-founders) suggests the startup’s AI agent could even end up partnering with code-writing agents like Cognition AI’s Devin, in what could be a business win-win.
Code quality is a problem for AI agents building and deploying software, just as it is for human developers, and LogicStar wants to grease the development wheels by automatically picking up and pinning down bugs wherever they lurk in deployed code.
As it stands, Paskalev suggests, “even the best models and agents cannot solve” the majority of the bugs they’re presented with, leaving hands-off app maintenance a dream for now.
To that end, the startup is taking a model-agnostic approach, building its platform on top of large language models (LLMs) such as OpenAI’s GPT and China’s DeepSeek. This lets LogicStar plug into different LLMs and maximize the utility of its AI agents.
Paskalev claims the founding team has the technical and domain-specific knowledge to build a platform that can solve programming problems that would challenge or stump a standalone LLM. They also have past entrepreneurial success behind them: in September 2020 he sold his previous code-review startup, DeepCode, to cybersecurity giant Snyk.
“In the beginning we were thinking of actually building a large language model for code,” he told TechCrunch. “Then we realised it would quickly become a commodity… Now we’re building on the assumption that all these big language models are there, that they’re really good at code, that there are (AI) agents. So how do you extract the greatest business value from them?”
He said the idea builds on the team’s understanding of how to analyze software applications. “When you combine that with large language models, the focus is on grounding and verifying what those large language models and AI agents actually propose.”
Test-driven development
So what does that actually mean in practice? Per Paskalev, LogicStar uses “classic computer science methods” to analyze each application its technology is deployed on, building a “knowledge base” that gives its AI agents a comprehensive map of the software’s inputs and outputs, how variables link to functions, and other links and dependencies.
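To give a flavor of the kind of analysis involved, classic static-analysis tooling can map which names each function depends on. The sketch below is purely illustrative and is not LogicStar’s method; it uses Python’s built-in `ast` module on a made-up source file:

```python
import ast

# Hypothetical example source, invented for illustration.
SOURCE = """
def total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

def report(prices):
    return f"total: {total(prices, 0.2)}"
"""

def dependency_map(source: str) -> dict[str, set[str]]:
    """Map each top-level function to the names referenced in its body."""
    tree = ast.parse(source)
    deps = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            names = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            deps[node.name] = names
    return deps

deps = dependency_map(SOURCE)
# 'report' references 'total', so a bug reported in report() narrows
# the search to those two functions.
```

A real knowledge base of the kind Paskalev describes would track far more (inputs, outputs, and cross-module dependencies), but the principle of narrowing a bug to its relevant functions is the same.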
Then, for any given bug, the AI agents can determine which parts of the application are affected. This lets LogicStar narrow down the functions that need to be simulated in order to test scores of potential fixes.
This “minimized execution environment,” per Paskalev, lets the AI agents run “thousands of” tests aimed at reproducing the bug and identifying “failing tests” to lock onto.
The actual bug fixes, he confirms, are sourced from LLMs. But because LogicStar’s platform enables this “very fast execution environment,” its AI agents can work at scale to separate the wheat from the chaff and serve users the best suggestions LLMs have to offer.
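Taken together, the loop Paskalev describes resembles classic test-driven bug fixing: reproduce the failure as a test, then filter LLM-proposed patches by whether that test now passes. Here is a minimal toy sketch of that filtering idea, with an invented bug and invented candidate functions (not LogicStar’s code):

```python
# Toy sketch: keep only the LLM-proposed fixes that make a failing test pass.
# The bug, the candidates, and all names here are invented for illustration.

def failing_test(divide):
    """Reproduces the reported bug: dividing by zero should return None."""
    assert divide(2, 0) is None
    assert divide(6, 3) == 2.0  # existing behavior must survive the fix

def buggy_divide(a, b):
    return a / b  # raises ZeroDivisionError instead of returning None

# Candidate patches, as an LLM might propose them:
def candidate_a(a, b):
    return None if b == 0 else a / b

def candidate_b(a, b):
    return a / max(b, 1)  # avoids the crash but returns a wrong value

def validate(candidates, test):
    """Keep only the candidates for which the failing test now passes."""
    accepted = []
    for fix in candidates:
        try:
            test(fix)
        except Exception:
            continue  # this candidate does not truly fix the bug
        accepted.append(fix)
    return accepted

good_fixes = validate([candidate_a, candidate_b], failing_test)
# Only candidate_a survives: it handles b == 0 and preserves 6 / 3 == 2.0.
```

Running thousands of such checks cheaply is presumably where the “very fast execution environment” earns its keep: the more candidates you can afford to test, the better the surviving fix.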
“What we’re seeing is (LLMs are) great for prototyping, testing and so on, but they’re definitely not good for (code in) production, commercial applications. I think we’re far from there. This is what our platform offers,” he argued. “Being able to extract these capabilities from today’s models so we can safely extract commercial value and save developers time to focus on what’s really important.”
Enterprises are set to be LogicStar’s first target. Its “silicon agents” are intended to work alongside corporate development teams, handling various app maintenance tasks for just a small fraction of the pay required to hire human developers, and freeing up engineering talent for more creative and/or challenging work. (Or, well, at least until LLMs and AI agents gain more abilities.)
While the startup’s pitch promotes “fully autonomous” app maintenance, Paskalev says the platform lets human developers review (and sign off on) the fixes its AI agents propose, so that it can earn their trust first.
“The accuracy delivered by human developers is in the 80-90% range. Our goal (for our AI agents) is to be exactly there,” he adds.
It’s still early days for LogicStar. An alpha version of its technology is being tested by a number of private companies Paskalev calls “design partners.” For now the tech only supports Python, but support for TypeScript, JavaScript, and Java is billed as “coming soon.”
“The main goal (of the pre-seed funding) is to really show how the technology works with design partners, focusing on Python,” adds Paskalev. “We’ve already spent a year on it, and there are a lot of opportunities to really expand, which is why we’re trying to focus on that first.”
The startup’s pre-seed raise was led by European VC firm Northzone, with angel investors from DeepMind, Fleet, Sequoia scouts, Snyk, and Spotify also taking part in the round.
In a statement, Northzone partner Michiel Kotting said: “AI-driven code generation is still in its early stages, but the productivity gains we’ve seen are transformative. The potential for this technology to streamline development processes, reduce costs, and accelerate innovation is immense. The team’s vast technical expertise and proven track record position them to deliver real, impactful results. As the future of software development is rebuilt, LogicStar will play an important role in software maintenance.”
LogicStar is running a waitlist for potential customers who want to register interest in getting early access. A beta release is slated for later this year.