Anthropic and FIS announced a partnership on Monday to build autonomous AI agents for financial institutions, starting with a Financial Crimes AI Agent that independently investigates potential drug trafficking, terrorism financing, and money laundering across bank systems. FIS shares rose approximately 7% in after-hours trading on the news, The Wall Street Journal reported.

How It Works

The agent combines Claude’s reasoning with FIS’s financial data infrastructure to autonomously gather evidence for potential cases. It pulls transactions, account information, and other data spread across multiple systems without requiring a human to manually compile each piece, according to FIS CEO Stephanie Ferris. Human investigators still make final decisions on cases.
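Neither company has published implementation details, but Claude’s standard tool-use loop gives a rough sense of what this architecture looks like. The sketch below is an assumption, not FIS’s code: the tool names (get_transactions, get_account_profile), the stub dispatcher, and the alert format are all hypothetical, while the anthropic SDK calls follow the public Messages API.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical tools exposing FIS data systems to the model. In a real
# deployment each tool would front a separate system of record.
TOOLS = [
    {
        "name": "get_transactions",
        "description": "Fetch transactions for an account over a date range.",
        "input_schema": {
            "type": "object",
            "properties": {
                "account_id": {"type": "string"},
                "start_date": {"type": "string"},
                "end_date": {"type": "string"},
            },
            "required": ["account_id", "start_date", "end_date"],
        },
    },
    {
        "name": "get_account_profile",
        "description": "Fetch KYC and ownership details for an account.",
        "input_schema": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
]


def run_tool(name: str, args: dict) -> str:
    """Hypothetical dispatcher into FIS data systems; stubbed with canned output."""
    return f"[records from {name} for {args}]"


def investigate(alert: str) -> list[dict]:
    """Let the model decide which systems to query until evidence is assembled.

    Returns the full message trail, which a human investigator reviews
    before any decision is made on the case.
    """
    messages = [{"role": "user", "content": f"Gather evidence for this alert: {alert}"}]
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model choice
            max_tokens=2048,
            tools=TOOLS,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # No more queries requested: hand the assembled trail to a human.
            return messages + [{"role": "assistant", "content": response.content}]
        # Execute every tool call the model requested, then loop.
        messages.append({"role": "assistant", "content": response.content})
        results = [
            {"type": "tool_result", "tool_use_id": block.id,
             "content": run_tool(block.name, block.input)}
            for block in response.content if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

The property behind Ferris’s “no manual compilation” claim is the loop itself: the model, not an analyst, chooses which system to query next.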

Anthropic’s Applied AI team and forward-deployed engineers are embedded within FIS to co-design the agent, confirmed Jonathan Jager-Hyman, Anthropic’s head of industries. Jonathan Pelosi, Anthropic’s head of financial services, told PYMNTS that “every conclusion the agent reaches links back to its source data,” positioning explainability as a core design principle for the regulated environment.
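Pelosi’s source-linking requirement can be read as a schema constraint: every finding carries references to the records that support it, and anything unsourced is rejected before it reaches a reviewer. The shape below is a hypothetical illustration of that principle, not a published FIS or Anthropic interface.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SourceRef:
    system: str      # e.g. "wire_transfers"
    record_id: str   # primary key of the underlying record


@dataclass(frozen=True)
class Finding:
    conclusion: str
    sources: tuple[SourceRef, ...]  # every conclusion must cite its evidence


def require_sourced(findings: list[Finding]) -> list[Finding]:
    """Reject any conclusion that does not link back to source data."""
    unsourced = [f for f in findings if not f.sources]
    if unsourced:
        raise ValueError(f"{len(unsourced)} finding(s) lack source citations")
    return findings
```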

First Deployers

Bank of Montreal and Amalgamated Bank will be the first institutions to use the financial crimes agent, Ferris said, per the WSJ. Broader availability to FIS clients is expected in the second half of 2026, according to the companies’ Business Wire release.

Why Financial Crime First

Financial crime investigation is operationally intensive and dominated by evidence-gathering, making it a natural fit for autonomous agent deployment. Investigators spend most of their time compiling data from disparate sources before they can evaluate a case. An agent that handles evidence assembly while humans retain judgment authority reduces the cost per case without raising the regulatory question of whether an AI is making enforcement decisions.

Claude’s Regulated-Industry Position

This marks Anthropic’s most significant deployment in a high-compliance banking vertical to date. The partnership structure, with Anthropic engineers embedded directly at the enterprise software vendor rather than selling API access at arm’s length, mirrors the “forward-deployed” model that Palantir pioneered for government contracts. It also positions Claude specifically in a domain where explainability and source-linking are regulatory requirements, not optional features. For Anthropic, the deal validates Claude’s acceptance in environments where model outputs must be auditable and traceable to source data.