A malicious Hugging Face repository that impersonated OpenAI’s legitimate Privacy Filter project reached #1 on the platform’s trending list and accumulated 244,000 downloads before being removed. Researchers at HiddenLayer discovered the campaign on May 7 and found that the repository delivered information-stealing malware to Windows users.

How the Attack Worked

The repository, named Open-OSS/privacy-filter, typosquatted OpenAI’s real Privacy Filter release and copied its model card nearly word for word, according to BleepingComputer. The only substantive addition was a loader.py file containing fake AI-related code designed to look harmless.

In the background, loader.py disabled SSL verification, decoded a base64 URL pointing to an external server, and fetched a JSON payload containing a PowerShell command. That command, executed in a hidden window, downloaded a batch file (start.bat) that escalated privileges, added an exclusion to Microsoft Defender, and executed the final payload.
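Each of the loader behaviors described above leaves a recognizable trace in source code. As a purely illustrative sketch (the function name and pattern list are my own, not from HiddenLayer's report, and real malware can trivially obfuscate past them), a reviewer could statically grep a downloaded repository's Python files for these red flags before executing anything:

```python
import re
from pathlib import Path

# Hypothetical red-flag patterns mirroring the loader behavior described
# above: SSL verification disabled, base64-decoded URLs, and hidden
# PowerShell execution. Illustrative only; easily evaded by obfuscation.
RED_FLAGS = {
    "ssl_disabled": re.compile(r"_create_unverified_context|verify\s*=\s*False"),
    "base64_decode": re.compile(r"base64\.(b64decode|b85decode)"),
    "powershell": re.compile(r"powershell|WindowStyle\s+Hidden", re.IGNORECASE),
}

def scan_repo(path: str) -> dict[str, list[str]]:
    """Return {flag_name: [files that matched]} for every .py file under path."""
    hits: dict[str, list[str]] = {name: [] for name in RED_FLAGS}
    for py in Path(path).rglob("*.py"):
        text = py.read_text(errors="ignore")
        for name, pattern in RED_FLAGS.items():
            if pattern.search(text):
                hits[name].append(str(py))
    return hits
```

A loader like the one in this campaign would trip all three checks at once; any single match in a "model" repository is worth a manual look.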

The final payload is a Rust-based infostealer that harvests browser credentials and cookies from Chromium- and Gecko-based browsers; Discord tokens and databases; cryptocurrency wallets and browser extensions; SSH, FTP, and VPN credentials, including FileZilla configs; local files containing wallet seeds; and multi-monitor screenshots. Stolen data was compressed and exfiltrated to a command-and-control server at recargapopular[.]com.

Scale and Detection Evasion

The actual victim count remains unclear. HiddenLayer found that the vast majority of the 667 accounts that liked the repository appear to be auto-generated, and the 244,000 download count may have been artificially inflated. The malware also included extensive anti-analysis features: checks for virtual machines, sandboxes, debuggers, and analysis tools, all designed to evade security researchers.

By examining the infrastructure behind the fake repository, HiddenLayer uncovered other repositories using the same malicious loader and noted overlaps with an npm typosquatting campaign distributing the Winos 4.0 implant.

The Supply Chain Pattern for Agent Builders

This is not the first time Hugging Face has been used to host malicious models, despite the platform’s security scanning. The attack exploits a fundamental trust assumption: developers pulling “official-looking” repositories from popular platforms expect the platform itself to have vetted the content. When a repository reaches trending status, that implicit trust multiplies.
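That trust assumption can be partially checked in code before anything is downloaded. The heuristic below is a sketch of my own (the function, its thresholds, and the allowlist idea are not a Hugging Face feature): compare the repository's namespace to organizations you actually trust, and treat a huge download count paired with very few likes, like the bot-inflated numbers in this campaign, as a warning rather than a credential:

```python
def looks_trustworthy(repo_id: str, trusted_orgs: set[str],
                      downloads: int, likes: int) -> tuple[bool, str]:
    """Crude pre-download check on registry metadata.

    repo_id is "namespace/name"; downloads and likes would come from the
    registry's API. Heuristic only: metadata can be forged, as the
    inflated counts in this campaign show, so a pass is not a clean bill.
    """
    org = repo_id.split("/", 1)[0]
    if org not in trusted_orgs:
        return False, f"namespace {org!r} is not on the allowlist"
    # Arbitrary illustrative threshold: flag >10k downloads with fewer
    # than one like per thousand downloads as out of proportion.
    if downloads > 10_000 and likes < downloads // 1_000:
        return False, "download count wildly out of proportion to likes"
    return True, "passed basic checks (still verify file contents)"
```

Applied to this campaign, "Open-OSS" fails the allowlist check outright, and 244,000 downloads against 667 likes fails the proportion check, despite the repository's trending status.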

For teams building agent systems that pull models, plugins, or dependencies from public registries, the pattern is the same whether the registry is Hugging Face, npm, or PyPI. A copied README, an inflated download count, and a single malicious file are enough to compromise downstream systems. Anyone who downloaded files from the malicious repository should reimage the affected machine, rotate all stored credentials, replace cryptocurrency wallets and seed phrases, and invalidate browser sessions and tokens.
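The remediation steps above can be paired with a quick indicator sweep. This sketch (the function name and file-size cutoff are mine; the indicators come from the reporting above, with the C2 domain un-defanged) searches a directory tree for known artifacts of the campaign; no hits proves nothing, but any hit makes reimaging non-negotiable:

```python
from pathlib import Path

# Indicators from the reporting above. The C2 domain appears defanged in
# the article as recargapopular[.]com; start.bat is a generic name and a
# weak signal on its own. Extend these sets as new IoCs are published.
CONTENT_IOCS = {"recargapopular.com"}
NAME_IOCS = {"start.bat"}

def sweep_for_iocs(root: str, max_bytes: int = 1_000_000) -> list[tuple[str, str]]:
    """Return (path, indicator) pairs for any IoC found under root."""
    findings = []
    for p in Path(root).rglob("*"):
        if not p.is_file():
            continue
        if p.name.lower() in NAME_IOCS:
            findings.append((str(p), f"filename:{p.name}"))
        if p.stat().st_size > max_bytes:
            continue  # skip large files to keep the sweep fast
        text = p.read_bytes().decode("utf-8", errors="ignore")
        for ioc in CONTENT_IOCS:
            if ioc in text:
                findings.append((str(p), ioc))
    return findings
```

Run it over user profiles, download folders, and any virtual environments that installed packages from the suspect repository; a content hit on the C2 domain in particular indicates the loader ran.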