GitHub’s MCP Server can now scan code changes for exposed secrets before developers commit or open pull requests from AI coding agent sessions. The feature, announced on March 17 and now in public preview, works in any MCP-compatible IDE or AI coding agent environment with GitHub Secret Protection enabled. The launch signals that GitHub now treats AI coding agent sessions as a distinct attack surface requiring dedicated credential scanning.

The timing is pointed. GitGuardian’s 2026 State of Secrets Sprawl report found 24,008 unique secrets exposed in MCP-related configuration files across public GitHub, including 2,117 unique valid credentials. AI-service secret leaks surged 81% year-over-year, with 29 million secrets hitting public repositories overall.

How the MCP Scanning Works

When an AI coding agent has the GitHub MCP Server configured, it can invoke secret scanning tools during a coding session. The agent sends code to GitHub’s secret scanning engine, which returns structured results with the locations and types of any secrets found. Developers can prompt their agent with something like: “Scan my current changes for exposed secrets and show me the files and lines I should update before I commit.”
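The round trip can be sketched as follows. The structured result shape and the rendering helper below are illustrative assumptions for the sake of the example, not GitHub's documented schema:

```python
# Illustrative sketch of the MCP scan round trip: the agent sends changed
# files to the secret scanning engine and surfaces the structured results
# as a file/line summary for the developer. The result fields ("path",
# "line", "type") are assumed for illustration, not GitHub's actual format.

def render_findings(findings):
    """Turn structured scan results into the file/line list a developer acts on."""
    return "\n".join(
        f"{f['path']}:{f['line']}  {f['type']}" for f in findings
    )

# Example of a structured response the scanning engine might return (assumed shape).
example = [
    {"path": "src/config.py", "line": 12, "type": "openai_api_key"},
    {"path": ".env", "line": 3, "type": "github_personal_access_token"},
]

print(render_findings(example))
```

The point of the structured format is that the agent can act on it directly, for example by offering to move each flagged value into an environment variable before the commit proceeds.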

For GitHub Copilot CLI users, the tool is activated via copilot --add-github-mcp-tool run_secret_scanning. In VS Code, the advanced-security agent plugin provides a /secret-scanning command in Copilot Chat.

37 New Detectors in March

Separately, GitHub’s March 31 changelog update added nine new secret detectors from seven providers, including Langchain, Salesforce, and Figma. DevOps.com reports the monthly total across all March updates reached 37 new detectors from 22 providers. Push protection now covers 39 token types by default, with new defaults added for Figma, Google, OpenVSX, and PostHog tokens. Validity checks for npm access tokens were also added, allowing automatic verification of whether detected secrets are still active.
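A validity check of this kind amounts to calling an authenticated endpoint with the detected token and seeing whether it is accepted. The sketch below uses npm's registry identity endpoint to illustrate the idea; it is a minimal approximation, not GitHub's implementation:

```python
# Minimal sketch of an npm token validity check: send the token to the
# registry's authenticated whoami endpoint and classify the response.
# This illustrates the mechanism only; GitHub's actual checker is not public.
import urllib.error
import urllib.request

REGISTRY_WHOAMI = "https://registry.npmjs.org/-/whoami"

def classify(status_code):
    """Map the HTTP status of the whoami call to a validity verdict."""
    if status_code == 200:
        return "active"               # token is live: rotate it urgently
    if status_code in (401, 403):
        return "revoked_or_invalid"   # registry rejected the credential
    return "unknown"                  # transient error; retry later

def check_token(token):
    req = urllib.request.Request(
        REGISTRY_WHOAMI, headers={"Authorization": f"Bearer {token}"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)

print(classify(200))
```

The distinction matters for triage: an "active" finding is an incident requiring rotation, while a revoked token is mostly hygiene cleanup.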

The Langchain additions are notable for the agent ecosystem: Langsmith license keys and SCIM bearer tokens are now push-protected by default, covering one of the most widely used agent orchestration frameworks.
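Push protection detectors of this kind are, at their core, pattern matchers run against outgoing pushes. The regex below is an illustrative approximation of a LangSmith-style key shape (keys commonly carry an lsv2_ prefix); it is not GitHub's actual detector, which is not public:

```python
import re

# Illustrative pre-push check for LangSmith-shaped keys. The lsv2_ prefix
# pattern is an approximation of the key format for demonstration only;
# GitHub's real detector and its false-positive handling are not public.
LANGSMITH_KEY = re.compile(r"\blsv2_(?:pt|sk)_[A-Za-z0-9_]{10,}\b")

def find_keys(text):
    """Return any LangSmith-shaped key strings found in the given text."""
    return LANGSMITH_KEY.findall(text)

sample = 'LANGSMITH_API_KEY = "lsv2_pt_0123456789abcdef_extra"'
print(find_keys(sample))
```

A blocked push would then report the matched file and line, mirroring the structured output the MCP scanning tools return in-session.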

For anyone running AI coding agents in production, the message from GitHub is explicit: your agent is a potential credential leak vector, and the security tooling is now being built to match that reality.