Microsoft removed an AI-powered productivity app called Vibing.exe from the Microsoft Store on April 24 after security researchers disclosed that the application was silently harvesting screenshots, audio recordings, and clipboard data from users’ machines. The company has launched an internal investigation into how the app bypassed its Store security and compliance review processes, according to Cyber Press.

How It Worked

Vibing.exe launched automatically on Windows login and ran in the background without user notification, according to Cyber Press. Once active, it captured periodic screenshots of the user’s desktop, intercepted clipboard content (including passwords and internal communications), and activated the system microphone to record audio.

Captured screenshots were base64-encoded and bundled with a unique hardware GUID for each device, enabling persistent tracking of individual machines over time. Data was transmitted over WebSocket connections, a technique that bypasses many proxy-filtering mechanisms commonly used in enterprise environments.
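The reported payload structure can be sketched in a few lines. This is an illustrative reconstruction based only on the details above (base64-encoded screenshot plus a per-machine GUID); the field names and JSON framing are assumptions, not recovered from the binary.

```python
import base64
import json
import uuid

def build_payload(screenshot_bytes: bytes, machine_guid: str) -> str:
    """Sketch of the reported exfiltration payload: the screenshot is
    base64-encoded and bundled with a persistent hardware GUID so the
    server can correlate uploads from the same machine over time.
    Field names are hypothetical."""
    return json.dumps({
        "guid": machine_guid,  # stable per-device identifier
        "screenshot": base64.b64encode(screenshot_bytes).decode("ascii"),
    })

payload = build_payload(b"\x89PNG...", str(uuid.uuid4()))
```

A WebSocket session begins as an ordinary HTTP(S) request and then upgrades to a long-lived binary channel, which is why proxies that inspect only the initial request often let the subsequent frames pass unexamined.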

The Microsoft Connection

Security researcher Kevin Beaumont uncovered that despite being attributed to an unknown “Vibing-Team,” the application was digitally signed by Yaoyao Chang, a researcher associated with Microsoft’s GenAI labs in Beijing. Open-source intelligence tools revealed that the exfiltrated data was routed to a Microsoft-owned Azure tenant at vibing-api-ccegdhbrg2d6bsd7.b02.azurefd.net.

The project was listed on GitHub under the name "VibeVoice" and presented as open source, but the repository contained no actual source code, only an 80 MB executable binary. Developers who raised concerns about the app's behavior had their issue tickets closed without resolution, Cyber Press reported.

Store Review Failure

The incident exposes a gap in Microsoft Store’s review process for AI-enabled applications. Vibing.exe requested and received permissions to access the screen, microphone, and clipboard, capabilities that are increasingly common in legitimate AI productivity tools. The app’s malicious behavior was indistinguishable from normal AI assistant functionality at the permission level.

Cybersecurity News confirmed the app’s removal and noted that Microsoft has disabled the backend infrastructure associated with the application.

Part of a Pattern

The Vibing.exe incident arrives during a week of escalating security failures in AI-powered development and productivity tools. State of Surveillance reported three separate AI platform security breaches in a single week in April 2026, including Lovable exposing user source code and Vercel’s AI tools leaking environment variables.

The common thread: AI tools that require broad system access to function create attack surfaces that existing app review processes were not designed to evaluate. A screenshot capture that serves a legitimate AI assistant and a screenshot capture that exfiltrates data to a remote server look identical at the API level.

For enterprise security teams, the indicators of compromise include the presence of vibing.exe or Vibing Installer.exe on endpoints and outbound traffic to the Azure endpoint listed above. Organizations should review endpoint activity logs to confirm no residual data exfiltration is occurring.
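The two indicators above (the executable filenames and the Azure Front Door hostname) can be matched against endpoint file inventories and DNS logs. A minimal sketch, assuming log entries arrive as plain strings; the log formats and scan inputs are placeholders, not a confirmed detection recipe.

```python
from pathlib import PureWindowsPath

# Filenames and exfiltration host reported as indicators of compromise.
IOC_FILENAMES = {"vibing.exe", "vibing installer.exe"}
IOC_HOSTS = {"vibing-api-ccegdhbrg2d6bsd7.b02.azurefd.net"}

def match_files(file_paths):
    """Return paths whose filename matches a known IOC (case-insensitive).
    PureWindowsPath parses Windows-style paths regardless of host OS."""
    return [p for p in file_paths
            if PureWindowsPath(p).name.lower() in IOC_FILENAMES]

def match_hosts(dns_log_lines):
    """Return log lines that reference the known exfiltration endpoint."""
    return [line for line in dns_log_lines
            if any(h in line.lower() for h in IOC_HOSTS)]
```

For example, `match_files([r"C:\Users\a\AppData\Local\Vibing.exe"])` flags the entry, while ordinary system binaries pass through; a hit on either function warrants pulling the endpoint's full activity log.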