DARPA announced MATHBAC (Mathematics of Boosting Agentic Communication) on April 7, a 34-month research program seeking proposals to develop the mathematical and scientific foundations for how AI agents communicate and coordinate with each other. The agency is offering up to $2 million per team in Phase I funding, with research abstracts due April 30, according to The Register and the SAM.gov solicitation.
Two Technical Tracks
The program is structured around two tracks. The first focuses on developing mathematics for understanding and designing agent communication protocols: how agents should exchange information, how those exchanges should be structured, and what makes one protocol more effective than another, according to The Register.
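Structured agent messaging is not a brand-new idea; speech-act protocols such as FIPA-ACL already type each message with a "performative" (inform, request, propose, and so on). A minimal sketch of that style of structured exchange, using hypothetical names (`Performative`, `ProtocolMessage`) rather than any real framework's API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Performative(Enum):
    # Speech-act message types, loosely modeled on FIPA-ACL's
    # performatives; simplified for illustration.
    INFORM = "inform"
    REQUEST = "request"
    PROPOSE = "propose"

@dataclass(frozen=True)
class ProtocolMessage:
    """One structured agent-to-agent exchange."""
    performative: Performative
    sender: str
    receiver: str
    content: str
    in_reply_to: Optional[str] = None  # links replies into a conversation

# Example: a planner agent asking a solver agent to do work.
msg = ProtocolMessage(
    performative=Performative.REQUEST,
    sender="planner",
    receiver="solver",
    content="verify lemma 3",
)
```

Typing every message this way is what makes protocol-level questions tractable at all: you can only reason mathematically about "what makes one protocol more effective than another" once exchanges have a formal structure to analyze.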
The second track examines the content of agent-to-agent interactions. DARPA wants researchers to figure out whether groups of specialized AI agents can infer general scientific principles from data, extracting “compact, generalizable nuggets” that become shared knowledge across cooperating agents, as The Register reported.
DARPA’s own example of a hard goal for Phase I: starting from data-driven analysis, rediscover something equivalent to Mendeleev’s periodic table for atoms, then extend it to a “multidimensional analog” for molecules.
No Incremental Work Accepted
DARPA is explicit about the bar. The solicitation states that research “that primarily results in incremental improvements to the existing state of practice” will not be funded, according to The Register. Phase II raises the ambition further, asking teams to build AI tools “that enable systematic evolution and invention of new science.”
“While AI excels at navigating solution spaces, it struggles to systematically explore hypothesis spaces, which are essential for generating transformative and generalizable scientific insights,” DARPA stated in the program announcement, as reported by The Register.
Why This Matters for Agent Builders
Most multi-agent frameworks today, including LangChain, CrewAI, and AutoGen, handle agent communication through ad hoc prompting and message passing. There is no mathematical theory governing when agents should share information, what format that information should take, or how to verify that agents are actually collaborating rather than duplicating work. DARPA is essentially arguing that the current approach of trial-and-error prompting will not scale.
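The ad hoc pattern looks roughly like the sketch below, assuming a toy message bus; the names (`Agent`, `Bus`, `Message`) are hypothetical and not drawn from LangChain, CrewAI, or AutoGen. Note what is missing: nothing constrains when a message is sent, what its content must look like, or whether two agents are redundantly doing the same work, which is exactly the gap MATHBAC targets.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipient: str
    content: str  # free-form text; no schema, no guarantees

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[Message] = []

    def receive(self, msg: Message) -> None:
        self.inbox.append(msg)

    def handle(self) -> list[str]:
        # Ad hoc: each agent independently decides what a message
        # "means" -- typically by feeding it back into a prompt.
        return [f"{self.name} processed: {m.content}" for m in self.inbox]

class Bus:
    """Direct delivery with no protocol: any agent can say anything."""
    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def send(self, msg: Message) -> None:
        self.agents[msg.recipient].receive(msg)

# Usage: two agents exchange an unstructured string.
bus = Bus()
planner, coder = Agent("planner"), Agent("coder")
bus.register(planner)
bus.register(coder)
bus.send(Message("planner", "coder", "write the parser"))
```

This works for demos, but because `content` is an arbitrary string, there is no formal basis for proving anything about the exchange, which is why DARPA frames the problem as one of missing mathematics rather than missing engineering.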
If MATHBAC produces results, the implications extend beyond defense applications. Enterprise agent orchestration platforms, multi-agent coding environments, and autonomous research systems all face the same coordination problem: how do you get multiple AI agents to communicate efficiently enough to be more useful together than apart? DARPA is betting that the answer requires new mathematics, not just better engineering, according to Computerworld.