EU AI Act: the compliance gap upstream of the deadline

The EU AI Act becomes broadly enforceable on 2 August 2026. Fines for the most serious violations, the prohibited practices in Article 5, can reach €35 million or 7% of global annual turnover; non-compliance with high-risk obligations can reach €15 million or 3%.

These figures have generated significant board attention, legal briefings, and compliance programmes across Europe and internationally.

Most of those programmes are addressing the right problem in the wrong order.

The Act’s requirements — risk management systems, technical documentation, human oversight mechanisms, transparency obligations, data governance — are demanding but manageable. The harder problem sits upstream of them.

Before an organization can meet any requirement, it needs to know what AI systems it actually operates, how they’re classified, and who holds the compliance obligation for each one. Most organizations do not yet have those answers.


The inventory problem is larger than most organizations expect

At enterprise scale, AI is not deployed from a single point. It is embedded in vendor platforms, SaaS applications, business systems, and internally developed tools — procured across different functions at different times, often without centralized oversight.

General counsel has a different picture of what is running than the CTO. Operations has approved tools that IT didn’t know about. Procurement has signed vendor contracts with embedded AI capabilities that no one has yet reviewed through a regulatory lens.

Before any compliance documentation can begin, an organization needs a complete AI system inventory: every system, across every business unit, classified by function, data input, and decision impact.

This is not a desk exercise. It requires structured stakeholder interviews, vendor contract review, and cross-functional coordination. Organizations consistently underestimate the effort — and overestimate how much they already know.
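
To make the classification scheme above concrete, here is a minimal sketch of what one inventory row might capture. It is an illustration, not a prescribed schema; every field name here (system_name, business_owner, reviewed_for_act, and so on) is my assumption about what a useful record contains, not anything the Act mandates.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActorRole(Enum):
    """The organization's role under the Act for a given system."""
    PROVIDER = "provider"   # develops the system or places it on the market
    DEPLOYER = "deployer"   # uses the system in a professional context


@dataclass
class AISystemRecord:
    """One row in the enterprise AI inventory. Fields are illustrative."""
    system_name: str
    business_unit: str
    business_owner: str              # a named person, not a team
    vendor: str | None               # None for internally built tools
    role: ActorRole
    function: str                    # e.g. "claims triage", "CV screening"
    data_inputs: list[str] = field(default_factory=list)
    decision_impact: str = ""        # what decision the output influences
    reviewed_for_act: bool = False   # been through a regulatory review yet?
```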


“We’re using a vendor’s AI system” is not a compliance defence

The Act draws a clear distinction between providers (those who develop or place AI systems on the market) and deployers (those who use AI systems in a professional context).

Deployers of high-risk AI systems carry their own obligations under the Act — distinct from those of the provider, and not transferred by contract.

An organization using a vendor’s AI-powered credit decisioning tool, claims processing system, or recruitment platform is a deployer under the Act. It has obligations for human oversight, record-keeping, and transparency regardless of what the vendor’s documentation says.

Many compliance programmes are waiting for vendor certifications before moving forward. That is the wrong sequencing. Deployer obligations exist independently, and they need to be assessed now.


Risk classification requires judgment, not checklists

Annex III defines eight categories of high-risk AI systems: biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes.

Within those categories, classification depends on the specific use case and the nature of the decision being supported.

Whether a given system is high-risk is not always obvious. A fraud detection system may or may not be high-risk depending on whether its outputs affect access to financial services. An internal HR tool may or may not be high-risk depending on whether it materially influences recruitment or promotion decisions.

Classification requires someone with legal and technical judgment working through each system in context — not a compliance checklist completed in an afternoon.


The documentation obligation reaches board level

High-risk AI systems require technical documentation, logging, and record-keeping adequate for regulatory audit.

More significantly, the obligation to establish and maintain a risk management system for each high-risk AI system is organizational — it requires documented processes, named owners, and governance structures that leadership can stand behind.

This is not an IT project or a legal filing. It is a governance obligation. The board cannot credibly attest to AI compliance if the organization cannot demonstrate that it knows what systems it operates, how they are classified, and what controls govern them.


The real deadline is earlier than August 2026

A structured compliance engagement — inventory, classification, gap analysis, and remediation roadmap — takes time.

An organization beginning that work in June will be completing it as enforcement starts, with no margin for the stakeholder friction, rework, and sign-off cycles these projects invariably involve.

Organizations that begin before the end of May can realistically complete that compliance foundation before 2 August.


A structured path forward

The compliance gap has a manageable structure. Four workstreams address it.

AI System Inventory

A complete map of AI systems in use, under development, or procured — classified by function, data input, and decision impact. Built through structured stakeholder interviews and documentation review, not self-reporting alone.

Risk Classification

Each system assessed against Annex III categories. Prohibited, high-risk, limited-risk, and minimal-risk designations assigned with documented rationale. This is where expert judgment is most critical, and where errors create the greatest downstream exposure.
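
To show what "documented rationale" can look like in practice, here is one way a classification outcome could be recorded, continuing the illustrative inventory sketch above. The tier names follow the Act's risk categories; the remaining fields and the example values are assumptions, not a required format.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex III (or Annex I) systems
    LIMITED = "limited"         # transparency obligations apply
    MINIMAL = "minimal"


@dataclass
class Classification:
    """Classification outcome for one inventoried system (illustrative)."""
    system_name: str
    tier: RiskTier
    annex_iii_category: str | None   # e.g. "employment" when tier is HIGH
    rationale: str                   # the judgment call, written down
    assessed_by: str                 # who exercised that judgment
    assessed_on: str                 # ISO date, for the audit trail


# A hypothetical example of the level of rationale worth recording:
cv_screening = Classification(
    system_name="Vendor CV screening platform",
    tier=RiskTier.HIGH,
    annex_iii_category="employment",
    rationale="Outputs materially influence shortlisting decisions.",
    assessed_by="Legal + engineering joint review",
    assessed_on="2026-03-12",
)
```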


Compliance Gap Analysis

For each high-risk system, a systematic assessment against applicable Act requirements: risk management, data governance, human oversight, technical documentation, transparency, and record-keeping. Gap-by-gap, not category-by-category.
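
One way that gap-by-gap structure might look in a working record, in the same illustrative style as the sketches above. The requirement areas mirror the list in this paragraph; the status values and the helper function are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Requirement areas for high-risk systems, as listed above.
REQUIREMENT_AREAS = [
    "risk management",
    "data governance",
    "human oversight",
    "technical documentation",
    "transparency",
    "record-keeping",
]


class GapStatus(Enum):
    MET = "met"
    PARTIAL = "partial"   # a control exists but is undocumented or untested
    MISSING = "missing"


@dataclass
class GapFinding:
    """One requirement area assessed for one high-risk system."""
    system_name: str
    requirement_area: str   # one of REQUIREMENT_AREAS
    status: GapStatus
    evidence: str           # where the control is documented, if anywhere
    remediation_owner: str  # feeds the roadmap workstream below


def open_gaps(findings: list[GapFinding]) -> list[GapFinding]:
    """Everything not fully met: the raw material for the roadmap."""
    return [f for f in findings if f.status is not GapStatus.MET]
```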

Board-Ready Compliance Roadmap

A prioritized action plan with owner assignments, effort estimates, and a sequenced path to compliance before the August deadline, formatted for board or C-suite presentation rather than a compliance team's internal files.


If your organization has not yet completed its AI inventory and risk classification, the honest question is not whether you have a compliance programme. It is whether that programme is addressing the right problems in the right order — with enough time remaining.

If you'd like to discuss where your organization stands, I am available for a 30-minute scoping conversation, with no obligation.

eric@demorgoli.com