Goldman Sachs has removed access to Anthropic’s Claude for its bankers in Hong Kong, according to Reuters, Bloomberg, and the Financial Times. The decision stems from the bank’s strict reading of its contract with Anthropic, under which it concluded that employees in Hong Kong should not use any Anthropic products.
What Happened
Goldman employees in the territory had previously accessed Claude through an internal AI platform. In recent weeks, that access was cut, a source with direct knowledge told Reuters. The restriction followed a consultation between Goldman and Anthropic about the terms of their agreement.
Other mainstream AI models remain available on the internal platform. Google’s Gemini and OpenAI’s ChatGPT were not affected, according to The Business Times.
Anthropic told the FT that its Claude models had never been officially “supported” in Hong Kong but declined to comment further. Goldman Sachs declined to comment at all.
The Contract Question
The trigger was contractual, not regulatory. Goldman’s legal team interpreted its agreement with Anthropic as prohibiting use in Hong Kong, according to the FT. AI services like ChatGPT and Claude are unavailable in mainland China, but Hong Kong has mostly remained outside those controls; the usage limits that do exist there are set by the US companies themselves rather than by government mandate.
This creates an awkward gap. Hong Kong operates under different rules than the mainland, but US AI vendors set their own geographic restrictions through terms of service. When a bank decides to enforce those terms conservatively, it produces the same outcome as a formal ban: tool access disappears.
The Larger Pattern
Goldman’s CIO Marco Argenti said in February that the bank was working with Anthropic to develop AI-powered agents for automating internal functions, according to Reuters. That partnership continues at the firm level. The Hong Kong restriction is jurisdictional, not a wholesale break with Anthropic.
But the precedent matters. Banks are increasingly conducting internal audits of AI tool provenance and data residency. The question is no longer just “does this model perform well?” but “where was this model built, where does the data flow, and does our contract allow use in this jurisdiction?”
Geopolitical Risk as Compliance Category
Enterprise AI procurement has always been about capability, cost, and security. Goldman’s move adds a fourth variable: geopolitical risk. In markets where US-China tensions create compliance liabilities, banks face a choice between using the best available tool and avoiding the contractual or reputational exposure that comes with using a US-built AI product in a China-adjacent jurisdiction.
The fact that ChatGPT and Gemini remain available while Claude was cut suggests the issue is vendor-specific contract language, not a blanket policy against US AI tools in Hong Kong. That distinction matters for enterprises evaluating multi-model strategies: the same jurisdiction may be cleared for one provider and restricted for another, depending entirely on how each vendor’s terms of service treat geographic scope.
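To make that distinction concrete, here is a minimal sketch of how an internal AI platform might encode per-vendor jurisdiction rules under a default-deny, strict reading of contract terms. Every vendor name, jurisdiction list, and function below is hypothetical, invented for illustration; this is not a description of Goldman’s platform or any vendor’s actual terms of service.

```python
# Hypothetical sketch: gating model access by vendor and jurisdiction.
# All names and rules are illustrative assumptions, not real contract terms.
from __future__ import annotations
from dataclasses import dataclass


@dataclass(frozen=True)
class VendorPolicy:
    vendor: str
    # Jurisdictions the contract explicitly permits. A conservative
    # reading treats anything absent from this set as prohibited.
    permitted_jurisdictions: frozenset[str]


POLICIES = {
    "anthropic": VendorPolicy("anthropic", frozenset({"US", "GB", "SG"})),
    "openai": VendorPolicy("openai", frozenset({"US", "GB", "SG", "HK"})),
    "google": VendorPolicy("google", frozenset({"US", "GB", "SG", "HK"})),
}


def model_allowed(vendor: str, user_jurisdiction: str) -> bool:
    """Allow access only if the vendor contract explicitly covers the
    user's jurisdiction -- the strict interpretation described above."""
    policy = POLICIES.get(vendor)
    if policy is None:
        return False  # no contract on file: default deny
    return user_jurisdiction in policy.permitted_jurisdictions


# Under this invented rule set, a Hong Kong user keeps ChatGPT and
# Gemini but loses Claude -- the pattern reported in the article.
assert model_allowed("openai", "HK")
assert model_allowed("google", "HK")
assert not model_allowed("anthropic", "HK")
```

The point of the default-deny choice is that the outcome turns entirely on how each vendor’s terms enumerate geographic scope: identical jurisdictions, different contracts, different results.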
For Anthropic, the immediate financial impact is likely minimal. Goldman’s Hong Kong staff are a small fraction of its global headcount. But the signal is significant: enterprise customers are starting to audit AI tool chains for geopolitical exposure, and vendors whose terms leave ambiguity about supported jurisdictions risk being the first ones cut.