Recursive Superintelligence, a four-month-old AI research lab with roughly 20 employees and no product, has raised at least $500 million at a $4 billion pre-money valuation. GV (formerly Google Ventures) led the round, with NVIDIA joining as a strategic investor, according to the Financial Times. The round was oversubscribed and could ultimately reach $1 billion.

The Founding Team

The company was incorporated on December 31, 2025, by Richard Socher (former chief scientist at Salesforce), Tim Rocktäschel (AI professor at University College London and previously principal scientist at Google DeepMind), Josh Tobin, Jeff Clune, and Tim Shi. The roughly 20-person team also includes former OpenAI researchers along with alumni from Google and Meta, according to The Decoder.

The Thesis: Automate AI Development Itself

The pitch is straightforward and ambitious: build agentic research systems that automate the entire AI development pipeline, from evaluation design and dataset curation through model training and post-training optimization to setting research direction, without requiring human intervention at each step. The concept, often described as recursive self-improvement, is viewed by many researchers as the key mechanism for reaching superintelligence, per The Decoder.
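The company has published no technical details, so the pipeline can only be illustrated schematically. The sketch below is purely hypothetical: every stage name is invented here to show how the stages listed above could chain into a closed loop, with a toy diminishing-returns training step standing in for real model training.

```python
# Illustrative sketch of a recursive self-improvement loop.
# All stage names are hypothetical; nothing here reflects Recursive's actual system.

from dataclasses import dataclass


@dataclass
class Model:
    score: float  # stand-in for capability on the current evaluation suite


def design_evals(model: Model) -> list[str]:
    """Agent proposes new evaluations targeting the model's weak spots."""
    return [f"eval_targeting_gap_{i}" for i in range(3)]


def curate_data(evals: list[str]) -> list[str]:
    """Agent assembles training data aimed at the proposed evaluations."""
    return [f"dataset_for_{e}" for e in evals]


def train_and_posttrain(model: Model, data: list[str]) -> Model:
    """Toy training step: close 20% of the remaining capability gap per cycle."""
    return Model(score=model.score + (1.0 - model.score) * 0.2)


def improvement_cycle(model: Model, cycles: int) -> Model:
    """One full pass per cycle: evals -> data -> training, no human in the loop."""
    for _ in range(cycles):
        evals = design_evals(model)                # evaluation design
        data = curate_data(evals)                  # dataset curation
        model = train_and_posttrain(model, data)   # training + post-training
    return model


final = improvement_cycle(Model(score=0.5), cycles=5)
print(round(final.score, 3))  # prints 0.836
```

The diminishing-returns step is one deliberate modeling choice; whether real self-improvement loops plateau like this or compound is exactly the open research question the article describes.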

So far, the concept remains at the research stage and has not been validated over long time horizons. The company plans a public launch around mid-May 2026, according to TechFundingNews.

The Competitive Landscape

Recursive is not alone in betting on the next wave of AI architectures. AMI Labs, started by Meta's chief AI scientist Yann LeCun, is focused on world models. Ineffable Intelligence, founded by DeepMind's David Silver, is centered on reinforcement learning. As reported by TechFundingNews, both are pursuing fundamentally different answers to the same question: what comes after scaling transformer models?

The Capital Floor Keeps Rising

The raise underscores a continuing pattern in frontier AI: the minimum capital required to compete keeps climbing. Committing $500 million to a pre-product lab of roughly 20 people signals that investors see self-improving AI systems as a category worth funding at venture scale before any demonstrated results. NVIDIA's participation adds an infrastructure dimension: the company is simultaneously backing the labs that will consume its chips and the platforms (like OpenShell and NeMo) that govern how agents run.

For API-dependent startups, the implication is practical: if self-improving systems compound even modestly per cycle, the capabilities of your underlying model provider become unpredictable on 12-month timelines. Teams building on top of frontier models are increasingly planning for shorter lock-in periods and budgeting for model obsolescence cycles rather than stable multi-year roadmaps.
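The unpredictability claim is just compounding arithmetic. Under an assumed (not sourced) gain of a modest 10% per monthly improvement cycle, a provider's model would be roughly 3x more capable after a year:

```python
# Hypothetical compounding illustration; the 10% per-cycle gain and monthly
# cadence are assumptions for arithmetic, not figures from the article.
per_cycle_gain = 0.10
cycles_per_year = 12
relative_capability = (1 + per_cycle_gain) ** cycles_per_year
print(round(relative_capability, 2))  # prints 3.14
```

Even small changes to the per-cycle gain swing the 12-month outcome widely, which is why teams building on frontier APIs struggle to plan against stable multi-year roadmaps.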