Alien, a startup building trust infrastructure for the agentic economy, raised $7.1 million in pre-seed funding, according to SiliconANGLE. The company’s thesis: existing identity and access management (IAM) frameworks were designed for human principals, and they break down when autonomous AI agents start acting on behalf of users across the web.
“Alien is building the trust infrastructure for the agentic economy,” founder and CEO Kirill Avery told SiliconANGLE in an exclusive interview.
How It Works
The system has two identity layers. Humans get an Alien ID through facial recognition on iOS and Android devices. The company says it does not permanently store biometric data or require government identification: facial-recognition data is kept only long enough to confirm the person is real. On top of that check, the system layers social-graph activity, connections to other verified humans, and probabilistic scoring that builds over time.
AI agents receive a corresponding credential called Agent ID. The purpose, according to Avery, is less about identifying agents for their own sake and more about tying them back to accountable humans. When an agent interacts with a service, that service can quickly determine whether a verified human is behind it.
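The relationship between the two layers can be sketched in a few lines of Python. This is a hypothetical illustration of the structure described above, not Alien's actual API; all names, fields, and the `accountable_human_behind` check are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AlienID:
    """Human identity layer: verified once via device-side facial recognition."""
    holder_id: str
    human_verified: bool   # liveness check passed; biometrics not retained
    trust_score: float     # probabilistic score built up over time

@dataclass
class AgentID:
    """Agent credential that points back to an accountable human."""
    agent_id: str
    bound_to: AlienID      # the human principal behind the agent

def accountable_human_behind(agent: AgentID) -> bool:
    """What a receiving service would check before letting an agent act."""
    return agent.bound_to.human_verified

owner = AlienID(holder_id="alien:0xabc", human_verified=True, trust_score=0.92)
shopper = AgentID(agent_id="agent:cart-bot", bound_to=owner)
print(accountable_human_behind(shopper))  # True
```

The key design point is that the service never inspects the agent itself; it only follows the credential back to the human and asks whether that human is verified.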
“You cannot just allow agents without reputation [to] simply do anything on your website,” Avery told SiliconANGLE.
Why Agent Reputation Is the Hard Problem
The weakness in most emerging agent reputation systems, as Avery frames it, is that they depend on slow accumulation of behavioral history. An agent needs months of activity before external services can assess whether it’s trustworthy. Alien’s approach shortcuts this by anchoring agent reputation to a known human identity from day one.
The timing aligns with a real architectural gap. Agentic browsers like OpenAI’s ChatGPT Atlas and Perplexity’s Comet are making purchases, filling forms, and navigating websites autonomously. Open-source frameworks like OpenClaw let anyone deploy agents that interact with external services. The services on the receiving end currently have no standardized way to verify that the agent making requests is controlled by a legitimate, accountable human.
Avery’s motivation is partially personal. He told SiliconANGLE that watching Kremlin bot propaganda tear his family apart in Russia during the Ukraine war drove him to work on identity verification. The scaling of agentic AI makes that problem exponentially worse: individuals no longer need to sit on social media typing manually when armies of agents can operate on their behalf.
At $7.1 million in pre-seed, Alien is a small bet on a category that could define the next decade of internet trust infrastructure.