The EU AI Act’s August 2, 2026 enforcement date is 105 days away, and one of its least-discussed provisions has the widest blast radius in enterprise HR: any AI system used in employment decisions now qualifies as high-risk under Annex III. That classification triggers mandatory annual third-party bias audits, full technical documentation, human oversight mechanisms, and transparency disclosures to every candidate evaluated by the system. The penalty for non-compliance: €15 million or 3% of global annual turnover, whichever is higher.

What Counts as High-Risk

The scope covers more ground than most HR teams realize. Resume-ranking algorithms inside applicant tracking systems, AI-powered interview scoring tools, and automated job ad targeting systems all qualify. According to Asanify’s compliance digest, the classification applies to any AI tool evaluating EU-based candidates, regardless of where the deploying company is headquartered. A US or India-based company with remote roles open to EU applicants faces the same obligations as a Berlin-based employer.

Raconteur’s technical audit guide confirms that any system touching employment decisions must be classified as high-risk, with tracking and audit processes available in real time.

The Audit Requirements

An Annex III bias audit is not a checkbox exercise. Certified auditors need access to a model’s training data composition, outcome data disaggregated by protected characteristics (gender, age, ethnicity), and evidence of continuous monitoring. Most companies have not built the data pipelines to produce these artifacts. Many don’t even know which AI components in their hiring stack make decisions versus surface recommendations.
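To make the audit artifact concrete, here is a minimal sketch of the kind of disaggregated outcome analysis an auditor would expect: selection rates broken out by protected group, plus an adverse-impact ratio. The 0.8 "four-fifths" threshold referenced in the comments is a US heuristic, not an EU AI Act requirement, and every name and data value below is illustrative, not drawn from the regulation.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs.

    decisions: iterable of (group_label, was_selected: bool).
    Returns {group: selected / total}.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of lowest to highest group selection rate.

    A ratio below 0.8 (the US 'four-fifths' heuristic) is a common
    red flag; the EU AI Act does not fix a numeric threshold.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Illustrative data only: (protected group, advanced past screening?).
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = selection_rates(log)
ratio = adverse_impact_ratio(rates)
```

The harder problem, as the paragraph above notes, is usually not the arithmetic but producing the `(group, outcome)` pairs at all: that requires capturing protected-characteristic data under appropriate legal bases and joining it to screening outcomes, a pipeline most deployers have not built.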

The deployer liability rule compounds the problem. According to Asanify, “we use a vendor’s tool” is not a valid defense. The obligation falls on the company deploying the AI system, not the company that built it. If your ATS vendor’s algorithm discriminates, you bear the regulatory consequence.

The Auditor Bottleneck

Certified third-party auditors qualified under the EU’s conformity assessment framework are limited in number. According to compliance practitioners tracking the deadline, companies targeting August 2 compliance need audit engagement letters signed now, not in July. The core requirements for high-risk AI systems, including documentation, human oversight, and audits, become enforceable on that date with no grace period.

For a 100-person company that adopted AI-based resume screening in the past year, the minimum requirements before August 2 include: a written risk assessment, documentation of training data sources, a human review process for rejected candidates, and a signed engagement with a certified auditor.

The Compliance Gap for Agent Deployments

The regulation extends beyond traditional hiring tools. Companies deploying AI agents for HR workflows, including onboarding bots, benefits enrollment agents, and performance review assistants, face overlapping obligations if those agents influence employment-related decisions. Any agent that scores, ranks, or filters employees or candidates falls under the same Annex III classification.

The practical question for enterprises running agentic HR systems: does your agent produce auditable decision logs that can be disaggregated by protected characteristics? If the answer is no, and the agent touches EU-based workers or applicants, the €15 million penalty clock starts on August 2.
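As a sketch of what an auditable decision log could look like in practice, each agent decision might be written as a structured record with enough fields to support later disaggregation and human-oversight review. The field names below are hypothetical assumptions, not mandated by the Act, and in a real system protected attributes would typically live in a separately access-controlled store rather than inline.

```python
import datetime
import json

def log_decision(candidate_id, decision, score, model_version,
                 protected_attrs, human_reviewed):
    """Serialize one agent decision as a JSON record (one line per event).

    protected_attrs: dict of protected characteristics; stored inline
    here for simplicity, but normally kept under strict access control
    and joined only at audit time.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "decision": decision,          # e.g. "advance" / "reject"
        "score": score,                # model score behind the decision
        "model_version": model_version,
        "protected_attrs": protected_attrs,
        "human_reviewed": human_reviewed,
    }
    return json.dumps(record)

# Hypothetical usage: an agent rejecting a candidate without human review
# produces exactly the kind of record an auditor would want to aggregate.
entry = log_decision("c-123", "advance", 0.87, "screener-v2",
                     {"gender": "unspecified"}, human_reviewed=False)
```

Append-only logs in this shape give a deployer two things the regulation pressures them to have: a record of which model version made each call, and a flag showing whether a human ever reviewed it.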