The EU AI Act’s Annex III logging obligations become enforceable on August 2, 2026. For any team deploying AI agents that touch high-risk domains in the EU, or serving EU customers in those domains, that deadline is now 107 days away. A Help Net Security guide published April 16 translates the regulation’s abstract requirements into concrete engineering tasks.

Which Agents Are Affected

The Act does not mention “AI agents” by name. What matters is what the system does. If an agent scores credit applications, filters resumes, decides who gets healthcare benefits, prices insurance, or triages emergency calls, it falls under Annex III and is classified as high-risk, according to Help Net Security.

Article 6(3) offers an exemption if the system does not materially influence decision outcomes. In practice, that exemption is difficult to claim for an agent that calls tools and acts on results autonomously, per the guide.

General-purpose AI models carry separate obligations under Chapter V. Those stay with the model provider, but the integrator who deploys the model in a high-risk context additionally picks up provider obligations under Article 25.

The Four Articles That Matter

Article 12 requires high-risk AI systems to “technically allow for the automatic recording of events (logs) over the lifetime of the system.” Two words carry weight: “automatic” means logs must be generated by the system itself, not through manual documentation, and “lifetime” means from deployment to decommissioning, according to Help Net Security.

Article 12(2) defines three categories logs must cover: situations where the system might present a risk or undergo substantial modification, data for post-market monitoring, and data for operational monitoring by deployers. The regulation does not prescribe a format or require specific fields, per the guide.
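Since the regulation prescribes neither a format nor specific fields, the record shape is up to the implementer. A minimal sketch of what an agent event record covering the three Article 12(2) categories could look like (all field and identifier names here are hypothetical, not drawn from the guide or the Act):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AgentLogRecord:
    """Illustrative log record; the Act prescribes no schema.
    Fields map loosely onto the three Article 12(2) categories."""
    timestamp: str                # ISO 8601, UTC: when the event occurred
    agent_id: str                 # which agent instance acted
    event_type: str               # e.g. "tool_call", "llm_response", "final_output"
    payload: dict                 # inputs/outputs of the event
    risk_flags: list = field(default_factory=list)   # category 1: risk / substantial modification
    monitoring: dict = field(default_factory=dict)   # categories 2-3: post-market + deployer monitoring

record = AgentLogRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    agent_id="credit-scoring-agent-01",
    event_type="tool_call",
    payload={"tool": "credit_bureau_lookup", "applicant_id": "A-1024"},
    risk_flags=["model_version_changed"],
)
print(json.dumps(asdict(record), indent=2))
```

Emitting records as structured JSON rather than free-text log lines keeps them machine-queryable, which matters for the deployer-monitoring and post-market categories.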

Article 13 requires documentation of how deployers can collect and interpret logs.

Articles 19 and 26 set a six-month minimum for log retention. Financial services firms can fold AI logs into their existing regulatory record-keeping; everyone else holds logs for at least six months, possibly longer depending on sector rules, according to Help Net Security.

Standard Application Logging Is Not Enough

The guide identifies a specific gap: standard application logging captures agent tool calls, sub-agent delegations, LLM responses, and final outputs without difficulty. The problem surfaces when a regulator asks, six months later, to prove the logs were not modified. Application logs live on infrastructure someone controls and can be edited or replaced without detection, per Help Net Security.

Article 12 does not explicitly say “tamper-proof.” But if logs can be silently altered and the deployer cannot demonstrate otherwise, their evidentiary value is zero.

The guide recommends cryptographic signing of agent logs: signing each agent action with a key the agent does not hold, chaining each signature to the previous one, and storing receipts outside the agent’s trust boundary. The specific cryptographic scheme matters less than the architectural principle, according to Help Net Security.
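Since the guide stresses the architectural principle over any particular scheme, a minimal sketch of the chaining idea follows, using an HMAC held by a hypothetical external signing service in place of an asymmetric signature. The class and function names are illustrative, not from the guide:

```python
import hashlib
import hmac
import json

class ReceiptSigner:
    """Hypothetical signing service: holds a key the agent never sees and
    chains each receipt to the previous one, making the log tamper-evident."""
    def __init__(self, key: bytes):
        self._key = key
        self._prev_sig = b"\x00" * 32  # genesis value for the chain

    def sign(self, log_entry: dict) -> dict:
        entry_bytes = json.dumps(log_entry, sort_keys=True).encode()
        # The MAC covers this entry AND the previous signature, so editing,
        # reordering, or deleting any earlier entry breaks every later receipt.
        sig = hmac.new(self._key, self._prev_sig + entry_bytes, hashlib.sha256).digest()
        receipt = {"entry": log_entry, "prev_sig": self._prev_sig.hex(), "sig": sig.hex()}
        self._prev_sig = sig
        return receipt

def verify_chain(receipts: list, key: bytes) -> bool:
    """Recompute the chain from the genesis value; any modification is detected."""
    prev = b"\x00" * 32
    for r in receipts:
        entry_bytes = json.dumps(r["entry"], sort_keys=True).encode()
        expected = hmac.new(key, prev + entry_bytes, hashlib.sha256).digest()
        if r["prev_sig"] != prev.hex() or r["sig"] != expected.hex():
            return False
        prev = expected
    return True

key = b"held-by-signing-service-not-by-agent"
signer = ReceiptSigner(key)
receipts = [signer.sign({"action": "tool_call", "tool": "credit_check"}),
            signer.sign({"action": "final_output", "decision": "refer_to_human"})]
print(verify_chain(receipts, key))               # → True: chain intact
receipts[0]["entry"]["tool"] = "something_else"  # simulate silent tampering
print(verify_chain(receipts, key))               # → False: chain broken
```

In a real deployment the signing key would live in an HSM or a separate service, and receipts would be shipped to storage outside the agent's trust boundary; an asymmetric scheme (e.g. Ed25519) would additionally let auditors verify receipts without holding the secret.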

No Finalized Standard Yet

No technical standard for Article 12 logging has been finalized. Two drafts are in progress: prEN 18229-1 (AI logging and human oversight) and ISO/IEC DIS 24970 (AI system logging), per the guide. Teams building compliance infrastructure now are designing to a regulation that defines outcomes without specifying how.

The 107-Day Window

At 107 days out, teams are still inside the engineering window for compliance work, but not by much. Building a logging architecture from scratch typically requires schema design, pipeline integration, retention policy configuration, and tamper-evidence mechanisms. For teams that have not started, the window is closing. For teams that have, the absence of finalized standards means their implementations may need revision once prEN 18229-1 and ISO/IEC DIS 24970 are published.