Cursor shipped a significant upgrade to Bugbot, its AI code review agent: the tool now learns from developer feedback on pull requests and uses those signals to improve future reviews automatically.
The feature, called Learned Rules, works by analyzing three signals from merged PRs: emoji reactions on Bugbot comments (downvotes flag unhelpful findings), developer replies explaining what was wrong, and comments from human reviewers flagging issues Bugbot missed. Bugbot processes these signals into candidate rules, promotes ones that accumulate positive signal, and disables rules that generate consistent negative feedback.
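The promote/disable lifecycle described above can be sketched as a simple scoring loop. This is a minimal illustration, not Cursor's implementation: the thresholds, the `CandidateRule` class, and the +1/-1 signal encoding are all assumptions made for the example.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration; Cursor has not published its actual
# promotion/demotion criteria.
PROMOTE_THRESHOLD = 3    # net positive signal needed to promote a candidate
DISABLE_THRESHOLD = -3   # net negative signal that disables a rule

@dataclass
class CandidateRule:
    """A candidate rule distilled from PR feedback signals
    (emoji reactions, developer replies, missed-issue comments)."""
    description: str
    score: int = 0             # running net feedback signal
    status: str = "candidate"  # candidate -> promoted | disabled

    def record_feedback(self, signal: int) -> None:
        """Fold one PR's feedback (+1 helpful, -1 unhelpful) into the rule."""
        self.score += signal
        if self.score >= PROMOTE_THRESHOLD:
            self.status = "promoted"
        elif self.score <= DISABLE_THRESHOLD:
            self.status = "disabled"

rule = CandidateRule("flag unchecked error returns")
for signal in (+1, +1, +1):    # three PRs of positive signal
    rule.record_feedback(signal)
print(rule.status)             # -> promoted
```

The key design point the article implies: rules are never hand-configured; they enter as candidates, and only accumulated feedback moves them into or out of the active set.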
## The Numbers
Since launching Learned Rules in beta, over 110,000 repositories have enabled the feature, generating more than 44,000 learned rules, according to Cursor’s blog post.
The resolution rate tells the clearest story. Cursor published a comparison across public repositories using an LLM judge to evaluate whether AI code review comments were addressed before merge:
| Product | Resolution Rate | PRs Analyzed |
|---|---|---|
| Cursor Bugbot | 78.13% | 50,310 |
| Greptile | 63.49% | 11,419 |
| CodeRabbit | 48.96% | 33,487 |
| GitHub Copilot | 46.69% | 24,336 |
| Codex | 45.07% | 19,384 |
| Gemini Code Assist | 30.93% | 21,031 |
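The metric itself is straightforward: the share of AI review comments the LLM judge marked as addressed before merge. A minimal sketch, using invented judgments for illustration rather than Cursor's actual data:

```python
def resolution_rate(judgments: list[bool]) -> float:
    """Fraction of review comments judged as addressed before merge."""
    return sum(judgments) / len(judgments) if judgments else 0.0

# Hypothetical per-comment verdicts from an LLM judge, for illustration only.
verdicts = [True, True, True, False]
print(f"{resolution_rate(verdicts):.2%}")  # -> 75.00%
```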
When Bugbot launched out of beta in July 2025, its resolution rate was 52%. The jump to 78% represents gains from both offline model improvements and the new online learning loop.
## MCP for Code Review
The same release adds MCP server support for Teams and Enterprise customers. Teams can now attach custom tools to Bugbot, giving the agent additional context during code reviews. This mirrors the broader MCP adoption wave across developer tooling: Atlassian, Confluent, and dozens of Cursor Marketplace plugins now use the protocol to extend agent capabilities.
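Under MCP, an agent discovers custom tools over JSON-RPC 2.0 via the protocol's `tools/list` method. The sketch below shows the shape of such an exchange; the tool itself (`fetch_service_owners`) is hypothetical, and this is a hand-rolled illustration of the message format, not Cursor's or the MCP SDK's implementation.

```python
# Minimal sketch of an MCP-style tools/list exchange (JSON-RPC 2.0).
# The advertised tool "fetch_service_owners" is invented for illustration.
def handle_tools_list(request: dict) -> dict:
    """Answer a JSON-RPC 2.0 tools/list request with one advertised tool."""
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {
            "tools": [
                {
                    "name": "fetch_service_owners",
                    "description": "Look up code owners for a changed file",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                }
            ]
        },
    }

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
response = handle_tools_list(request)
print(response["result"]["tools"][0]["name"])  # -> fetch_service_owners
```

During a review, the agent can then call such a tool to pull context (here, file ownership) that the diff alone does not carry.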
## Feedback Loops as Architecture
The shift from static rules to learned feedback loops marks a broader pattern in agent tooling. Cursor’s approach treats every merged PR as a training signal. The agent observes developer behavior, extracts patterns, and adjusts its own rules without manual configuration. For teams evaluating AI code review tools, the question is no longer just accuracy on day one, but how fast the tool adapts to their specific codebase and conventions over time.