AI agents can now generate a research grant application, review it against funder criteria, and submit it with minimal human intervention. Writing in Nature, Geraint Rees (vice-provost of research at University College London) and James Wilsdon (executive director of the Research on Research Institute) argue this capability threatens to make the existing grant-funding system unworkable.
Their analysis spans data from 12 multidisciplinary funders across Australia, Belgium, Canada, China, Spain, the UK, and the EU, including the Australian Research Council, the European Research Council, and Wellcome.
The Volume Problem
All 12 funders saw application volumes rise between 2022 and 2025, according to Nature, with increases ranging from 14% for postdoctoral fellowship applications at the British Academy to 142% for EU Marie Skłodowska-Curie Actions fellowships.
Quality is rising alongside volume. In 2025, just 5% of Marie Skłodowska-Curie fellowship applications fell below the quality threshold for further consideration, compared with 20% in 2018. A 2025 Elsevier survey of 3,234 researchers across 113 countries found 58% had used AI tools in their work (up from 37% in 2024), with 41% using AI specifically to help draft grant proposals.
The combination is corrosive. As Research Professional News reported, Rees described the situation bluntly: “Funding panels have always faced hard choices, but they could at least claim to be distinguishing excellent ideas from merely good ones. Agentic AI is making that claim increasingly hollow.”
Agents vs. Chatbots: A Different Problem
The editorial draws a sharp distinction between researchers who use LLMs to polish drafts and those who deploy agentic tools to optimize proposals end-to-end. An LLM improves craft. An agent, trained on a researcher’s publication history, the funder’s criteria, and the text of recently funded grants, can produce dozens of fully formatted applications optimized to a specific call in minutes.
As Rees and Wilsdon write in Nature: “When a researcher asks an agent to produce the strongest possible application for a specific funding call, the proposal that emerges is not the researcher’s argument shaped by AI. It is a fully AI-generated proposal optimized to the funders’ brief.”
This creates what they call a collective action problem. Each researcher deploying agents rationally maximizes their own success probability. The aggregate result is a system where reviewers face enormous volumes of indistinguishable, high-quality submissions and must make “largely arbitrary choices about what or who to fund.”
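The incentive structure can be made concrete with a toy model. Everything numeric here is an illustrative assumption for the sketch, not data from the Nature analysis: a funder with a fixed number of awards, a fixed pool of researchers, and an agent that multiplies one researcher's submissions tenfold.

```python
# Toy model of the collective action problem: agents multiply submissions,
# but not the number of awards. All constants are assumed for illustration.
SLOTS = 10           # awards the funder can make (assumed)
RESEARCHERS = 100    # applicants in the pool (assumed)
MULTIPLIER = 10      # proposals an agent generates per adopter (assumed)

def success_rate(adopters: int) -> float:
    """Chance that any single application is funded, if awards were a
    uniform draw over all submissions, when `adopters` researchers
    each submit MULTIPLIER proposals and the rest submit one."""
    total_apps = (RESEARCHERS - adopters) + adopters * MULTIPLIER
    return SLOTS / total_apps

# One adopter sharply improves their own expected awards
# (10 applications against a barely larger pool)...
solo_adopter = success_rate(adopters=1) * MULTIPLIER

# ...but universal adoption returns every researcher to roughly the
# baseline odds, while reviewers now face 10x the volume.
everyone_adopts = success_rate(adopters=RESEARCHERS) * MULTIPLIER
baseline = success_rate(adopters=0)
```

Each individual move is rational; the equilibrium leaves success probabilities where they started and multiplies the review burden, which is the editorial's point.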
Bans Are Not Working
Several major funders have already tried restricting AI use. The US National Institutes of Health declared in July 2025 that applications substantially developed by AI tools would be ineligible. UK Research and Innovation prohibits reviewers from uploading proposal content into generative AI tools.
The editorial argues these bans are both unenforceable and counterproductive. Detection of AI-generated text in proposals is unreliable, the incentive to use agents is strong, the probability of getting caught is low, and, according to Nature, prohibitions disadvantage researchers whose first language is not English.
Times Higher Education noted that even the authors acknowledge the causal link between AI use and volume increases can’t yet be definitively proven, but the trajectory is clear enough to demand action now.
The Proposed Remedies
Rather than detection and enforcement, Rees and Wilsdon propose structural changes: moving toward funding models that rely less on written proposal quality and more on track record, interviews, or randomized allocation among proposals that meet a quality threshold. They also call for funders to invest in understanding how agentic AI changes the proposal landscape before it overwhelms the review infrastructure entirely.
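The randomized-allocation idea is simple enough to sketch. This is a minimal, hypothetical illustration of a threshold-plus-lottery scheme, not an implementation any funder has published; the function name, score scale, and threshold value are all assumptions.

```python
import random

def fund_by_lottery(proposals: dict, threshold: float, slots: int,
                    seed=None) -> list:
    """Fund `slots` proposals drawn uniformly at random from those whose
    review score meets `threshold`, rather than rank-ordering them.
    Fine-grained distinctions above the bar no longer decide outcomes,
    which removes the payoff from agent-optimized marginal polish."""
    eligible = [pid for pid, score in proposals.items() if score >= threshold]
    rng = random.Random(seed)
    return rng.sample(eligible, k=min(slots, len(eligible)))

# Usage (hypothetical scores on a 0-10 scale): three of four proposals
# clear the bar, and two are funded at random from that set.
scores = {"P1": 8.2, "P2": 9.1, "P3": 6.0, "P4": 8.9}
funded = fund_by_lottery(scores, threshold=7.0, slots=2, seed=0)
```

The design choice matters: reviewers still do quality screening, but the arbitrary final cut the editorial describes is made honestly arbitrary instead of being dressed up as fine-grained ranking.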
For the agent-building community, the editorial is a case study in systemic disruption. The same pattern is emerging across legal filings, patent applications, and regulatory submissions: autonomous agents scaling an activity beyond what human review processes can absorb. Grant funding is where the data is clearest, but the dynamic applies anywhere a gatekeeper system depends on evaluating written output.