
What EU AI Act Article 13 requires, in plain English

The transparency rules that just kicked in for high-risk AI systems. What Article 13 says, what it means for your logs, and how we mapped it to kavachOS audit trails.


Gagan Deep Singh

Founder, GLINCKER

Published March 31, 2026 · 10 min read

Our compliance lawyer sent me a 40-page PDF on EU AI Act Article 13. Here is what it actually says, stripped of the footnotes, formatted so you can tell your engineer what to build, and honest about the parts where you still need qualified legal counsel.

Article 13 is the transparency article. It applies to providers and deployers of high-risk AI systems and it has been enforceable since August 2026 for new systems. The core ask is simple: if your system makes consequential decisions about people, those people, and the organizations overseeing them, must be able to understand how.


01

Who Article 13 applies to

Article 13 targets high-risk AI systems. The EU AI Act defines this category in Annex III. The list covers AI used in critical infrastructure, education, employment, essential services, law enforcement, border management, justice, and democratic processes. If your system makes hiring decisions, credit determinations, or influences access to healthcare or education, it almost certainly qualifies.

Two roles matter here. The provider builds and places the system on the market. The deployer uses it in an operational context. Both carry obligations. Providers must build transparency in. Deployers must operate it transparently and cannot delegate that responsibility to a vendor.

If you are building an AI product that enterprises will deploy against their employees or customers, you are likely the provider. Your enterprise customers are the deployers. Both of you need to read this.


02

What Article 13 requires

The article requires that high-risk AI systems be designed to allow deployers to correctly interpret the system's output. That sounds vague. The actual requirements break into three buckets: instructions for use, capability disclosure, and logging. Each has specific sub-requirements.

  • 10 years: log retention for critical infrastructure AI
  • 3 years: standard retention for most high-risk categories
  • 72 hours: serious incident reporting window
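The retention figures above reduce to a small lookup. A minimal sketch, assuming logs are retained per Annex III category (the category labels and `retention_period` helper are illustrative, not a kavachOS API or official Act identifiers):

```python
from datetime import timedelta

# Illustrative retention periods per system category, in years.
# "critical_infrastructure" gets the long period; everything else
# falls back to the standard 3-year rule described above.
RETENTION_YEARS = {
    "critical_infrastructure": 10,
    "default_high_risk": 3,
}

# Serious-incident notification window from the stats above.
INCIDENT_NOTIFICATION = timedelta(hours=72)

def retention_period(category: str) -> timedelta:
    """Return the log-retention period for a system category."""
    years = RETENTION_YEARS.get(category, RETENTION_YEARS["default_high_risk"])
    return timedelta(days=365 * years)
```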


03

The 5 sub-rules of Article 13

Article 13 is organized as five specific obligations. Here is each one, in plain language.

13.1

Transparency by design

High-risk AI systems must be built so their operation is sufficiently transparent for deployers to understand outputs and use the system appropriately. This is a design requirement, not just a documentation requirement. You cannot write a manual that explains a black box. The system itself must be explainable.

13.2

Instructions for use

Providers must supply instructions covering the system's identity and purpose, its capabilities and limitations, the circumstances under which it may produce unreliable results, the human oversight measures needed, and the technical measures for data protection. These must be written for the deployer, not a machine learning engineer.

13.3a

Identity and version disclosure

The instructions must identify the provider, their point of contact, the intended purpose, the date and version of the system, and the input data types it is designed to work with. Version tracking is a hard requirement, not a best practice.

13.3b

Performance and risk disclosure

Providers must disclose the system's performance on relevant benchmarks, its known limitations, and the circumstances that may affect reliability. This includes demographic and geographic performance gaps if they exist.

13.3c

Human oversight and logging

The instructions must describe human oversight measures, the technical capabilities needed to interpret outputs, and the logging requirements. Specifically, logs must enable post-hoc auditing of the system's operation. The regulation uses the phrase 'automatic recording of events' and specifies that retention periods vary by category.
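'Automatic recording of events' maps naturally onto an append-only event record. A sketch of the minimum a log entry might carry to support post-hoc auditing (the field names are our illustration, not the kavachOS schema):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry is immutable once written
class AuditEvent:
    actor: str            # agent or human identity that acted
    action: str           # e.g. "inference", "override", "permission_change"
    outcome: str          # e.g. "allowed", "denied", "escalated"
    system_version: str   # version tracking, per the 13.3a requirement
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record(log: list, event: AuditEvent) -> None:
    """Append-only: events are added, never edited or deleted in place."""
    log.append(asdict(event))
```

The `frozen=True` choice mirrors the auditing requirement: an entry that can be mutated after the fact cannot support post-hoc review.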


04

How kavachOS audit logs map to each sub-rule

The audit trail in kavachOS records every agent action, every authentication event, and every permission change, with full timestamps, actor identity, and delegation chain. Here is how that maps to each Article 13 sub-rule.

| Sub-rule | Requirement | kavachOS coverage |
| --- | --- | --- |
| Art. 13.1 | Transparent operation for deployers | Every agent action logged with actor, timestamp, input context, and outcome. Queryable via API. |
| Art. 13.2 | Instructions including oversight measures | Human oversight events logged separately: manual reviews, overrides, and escalations appear as first-class audit entries. |
| Art. 13.3a | System identity and version in instructions | Agent identity records include version, creation date, and the user identity that authorized creation. Immutable after creation. |
| Art. 13.3b | Performance and limitation disclosure | Not covered. kavachOS logs auth and delegation, not model performance metrics. You need to supply this from your model infrastructure. |
| Art. 13.3c | Automatic event logging with required retention | Configurable retention up to 10 years. Logs include all agent calls, permission checks, delegation events, and anomaly detections. |

The gap at 13.3b is deliberate. kavachOS handles identity and authorization. Model performance benchmarks, known failure modes, and bias disclosures come from your model infrastructure or your AI system documentation. We log who called what and when. We do not instrument the model itself.
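Closing the 13.3b gap means joining two sources: who-called-what-when records from the auth layer, and performance disclosures from your evaluation pipeline. A hypothetical sketch of assembling that evidence package (both input shapes are assumptions, not a kavachOS export format):

```python
def build_evidence_bundle(auth_events: list, performance_disclosure: dict) -> dict:
    """Combine the two halves of an Article 13 evidence package.

    auth_events: who-called-what-when records (covers 13.1 / 13.3c).
    performance_disclosure: benchmarks, limitations, known gaps (13.3b),
    supplied by the model owner, not the auth layer.
    """
    return {
        "event_log": sorted(auth_events, key=lambda e: e["timestamp"]),
        "performance": performance_disclosure,
        # The bundle is only complete when both halves are present.
        "complete": bool(auth_events) and "limitations" in performance_disclosure,
    }
```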


05

What you still need to do yourself

kavachOS covers the auth and delegation layer. Article 13 compliance for a real high-risk AI system is broader. The remaining pieces are yours to build.

  • Instructions for use document. Article 13.2 requires a written document that goes to deployers. kavachOS does not generate this for you. It needs to describe your system's purpose, limitations, and oversight requirements in language a non-technical deployer can act on.
  • Model performance disclosure (13.3b). If your system has known demographic or geographic performance gaps, those must be disclosed. This comes from your evaluation pipeline, not from your auth layer.
  • Incident reporting process. Serious incidents involving high-risk AI must be reported to national authorities within 72 hours. kavachOS anomaly detection can surface signals, but your organization needs a defined escalation path and a designated point of contact.
  • RBAC policy for human oversight. Article 13 requires that human oversight is possible. Use kavachOS RBAC (see the RBAC docs) to define which roles can review, override, or pause AI decisions. Those override actions then appear in the audit trail.
  • Data governance alignment. The audit logs contain personal data in the form of user IDs, agent IDs, and request metadata. Your organization's data governance and GDPR program applies to those logs as it would to any personal data store.
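The 72-hour reporting window in the incident bullet above is easy to make concrete. A sketch, assuming detection time is recorded in UTC (the function names are ours, not a kavachOS feature):

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Latest time a serious incident must reach the national authority."""
    return detected_at + REPORTING_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True once the reporting window has lapsed."""
    return now > reporting_deadline(detected_at)
```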

The permissions model covers role assignments in detail. If you are managing compliance-relevant roles from an identity provider, the SCIM provisioning feature handles that automatically.
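An oversight RBAC policy of the kind described above often reduces to a role-to-action map. A hypothetical sketch (the role and action names are ours; kavachOS's actual RBAC model may differ):

```python
# Which oversight actions each role may take on AI decisions.
# Role and action names are illustrative, not a kavachOS schema.
OVERSIGHT_POLICY = {
    "reviewer":   {"review"},
    "supervisor": {"review", "override"},
    "admin":      {"review", "override", "pause"},
}

def can(role: str, action: str) -> bool:
    """True if the role may perform the oversight action."""
    return action in OVERSIGHT_POLICY.get(role, set())
```

Keeping the policy declarative like this makes the oversight surface auditable in its own right: a reviewer can read the map without tracing code.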

Topics

  • #EU AI Act
  • #Article 13
  • #AI compliance
  • #AI transparency
  • #high-risk AI
  • #audit trail AI

