Lawful AI rollout in 2026 — DPIA, LIA, EU AI Act tier-mapping, in order

2026-05-02 · By Michael English

Most AI rollouts I've seen stall not because the model is wrong but because the paperwork is in the wrong order. A team builds the pilot, the legal review lands halfway through procurement, the DPIA gets started after the vendor is chosen, and then someone reads the EU AI Act and realises the use case is closer to "high-risk" than anyone wanted to admit. The work then loops back on itself for weeks. The fix isn't more documents. It's doing the same documents in the right sequence, with the right people in the room, before money is spent.

What follows is the sequence I'd run for a typical mid-market firm in 2026 — say, fifty to a thousand staff, a handful of AI use cases in flight, a DPO who is part-time or shared, and a board that wants this done properly without a six-month delay. The names of the artefacts are boring on purpose: tier-map, DPIA, LIA, model card, vendor file. The order is what saves you.

Step one: tier-map before you scope

The EU AI Act sorts systems into tiers — prohibited, high-risk, limited-risk with transparency duties, and minimal-risk — and the obligations differ sharply between them. The single most useful hour you can spend at the start of any rollout is mapping each proposed use case to a tier before you write the project brief. Not after. Before.

The reason is that the tier dictates almost everything downstream: whether you need a conformity assessment, whether human oversight has to be designed into the workflow rather than bolted on, whether you owe the user a disclosure that they're talking to a machine, and whether the system can lawfully be deployed at all. A CV-screening tool, a credit-decisioning assistant, and a customer-service chatbot are three different regulatory animals. Treating them as one project is how teams end up rebuilding the chatbot in month six because someone finally noticed it was making eligibility decisions on the side.

For AI Act tier mapping, I write a one-page table per use case. Columns: what the system does, who the subjects are, what decisions it influences, what tier it falls into, and which Article of the Act drives that classification. If the answer to "what tier" is "we're not sure", the use case isn't ready to be scoped, never mind procured.
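
If you'd rather keep that table as data than as a document, here's a minimal sketch in Python; the field names and the example row are mine, purely illustrative, not anything the Act prescribes:

    from dataclasses import dataclass

    @dataclass
    class TierMapEntry:
        """One row of the one-page tier-map table."""
        use_case: str         # what the system does
        subjects: str         # who the subjects are
        decisions: str        # what decisions it influences
        tier: str             # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk" | "unclear"
        driving_article: str  # which Article of the Act drives the classification

    cv_screening = TierMapEntry(
        use_case="Ranks incoming CVs for recruiter review",
        subjects="Job applicants",
        decisions="Who gets a first-round interview",
        tier="high-risk",  # recruitment sits in the high-risk annex
        driving_article="Annex III(4), via Article 6(2)",
    )

    # The rule from above: an unclear tier means the use case isn't ready to scope.
    assert cv_screening.tier != "unclear", "not ready to scope, never mind procure"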

Step two: data inventory, then DPIA

Once the tier is clear, the next document is the data inventory — what personal data flows into the system, where it came from, what the original purpose was, and whether the new AI purpose is compatible with that original purpose. This is dull but it's the spine of everything that comes after. Most DPIA failures I've reviewed trace back to a data inventory that was guessed rather than built.
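
A sketch of what one inventory row might capture, again with illustrative field names; the point is that compatibility is a field someone has to assess and fill in, not a guess:

    from dataclasses import dataclass

    @dataclass
    class DataFlow:
        """One personal-data flow into the AI system."""
        data_category: str      # e.g. "customer correspondence"
        source: str             # where it came from
        original_purpose: str   # why it was collected in the first place
        ai_purpose: str         # what the AI system will use it for
        compatible: bool | None # is the new purpose compatible? None = not yet assessed

    flows = [
        DataFlow("support tickets", "CRM export", "handling the customer's request",
                 "drafting reply suggestions", compatible=True),
        DataFlow("applicant CVs", "recruitment portal", "assessing one application",
                 "training a ranking model", compatible=None),  # guessed, not built
    ]

    # Any unassessed flow means the inventory isn't finished and the DPIA can't start.
    unassessed = [f for f in flows if f.compatible is None]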

With the inventory in hand, the DPIA for AI is straightforward to draft. The structure is the same as any GDPR DPIA — necessity, proportionality, risks to rights and freedoms, mitigations — but with three additions that matter for AI specifically: whether the provider trains or retrains on your inputs, and what that does to the data flows; how statistical error and bias in the outputs become risks to the people the system describes or scores; and whether any output feeds a decision that is automated enough to trigger Article 22 safeguards.

The DPIA isn't a one-shot artefact. It's a living document that gets re-opened when the model is retrained, when the vendor changes its terms, or when a new use case is added on top. Build it so it can be updated in an afternoon, not rewritten from scratch.
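
One way to make "updated in an afternoon" concrete is to track the re-open triggers explicitly. A minimal sketch, assuming the three triggers named above:

    from dataclasses import dataclass
    from datetime import date

    REOPEN_TRIGGERS = {"model retrained", "vendor terms changed", "new use case added"}

    @dataclass
    class DPIA:
        use_case: str
        last_reviewed: date
        needs_review: bool = False

        def reopen(self, trigger: str) -> None:
            """Flag the DPIA for review when one of the named events occurs."""
            if trigger in REOPEN_TRIGGERS:
                self.needs_review = True

    dpia = DPIA("customer-reply drafting", last_reviewed=date(2026, 4, 1))
    dpia.reopen("vendor terms changed")  # needs_review is now True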

Step three: legitimate-interest balancing, done honestly

For most internal AI use cases — productivity tools, summarisation, drafting, code assistance — the lawful basis under GDPR will be legitimate interests rather than consent. Consent doesn't fit an employment context cleanly, and contract isn't usually the right hook either. So you end up running a Legitimate Interests Assessment, and the LIA is where a lot of rollouts quietly cheat themselves.

An honest LIA for an AI use case has three parts and a tie-breaker. The purpose test asks whether the interest is real, specific, and lawful — "we want to be more efficient" doesn't pass; "we want to reduce time-to-first-draft on customer correspondence so agents can spend more time on complex cases" does. The necessity test asks whether you could achieve the same outcome with less data or a less intrusive method. The balancing test weighs the interest against the rights and reasonable expectations of the people whose data is being processed — staff, customers, third parties mentioned in the inputs.

The tie-breaker is the one most teams skip: what mitigations would tilt the balance back in favour of the rollout. Things like input redaction, no-training contractual clauses with the vendor, retention limits on prompts and outputs, opt-outs where they're meaningful, and clear internal guidance on what not to paste in. The LIA isn't a hurdle; it's the document that tells you what the deployment actually has to look like to be lawful.
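
Sketched as a structure, with the tie-breaker modelled the way the paragraph above describes it; the example values are invented:

    from dataclasses import dataclass, field

    @dataclass
    class LIA:
        purpose: str                  # specific and real, not "be more efficient"
        purpose_passes: bool
        necessity_passes: bool        # no less-intrusive way to the same outcome
        balance_favours_rollout: bool
        mitigations: list[str] = field(default_factory=list)  # the tie-breaker

        def lawful_basis_holds(self) -> bool:
            # Mitigations can tilt a failed balancing test back in favour of
            # the rollout, but they can't rescue a failed purpose or necessity test.
            if not (self.purpose_passes and self.necessity_passes):
                return False
            return self.balance_favours_rollout or bool(self.mitigations)

    drafting_assistant = LIA(
        purpose="Reduce time-to-first-draft on customer correspondence",
        purpose_passes=True,
        necessity_passes=True,
        balance_favours_rollout=False,
        mitigations=["input redaction", "no-training clause with vendor",
                     "retention limit on prompts and outputs"],
    )

    # If lawful_basis_holds() is True only because of mitigations, those
    # mitigations are now requirements on the deployment, not suggestions.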

Step four: the vendor file

By the time you're talking to vendors, you should already know the tier, the data flows, the DPIA risks, and the LIA mitigations. The vendor file is where you check whether the supplier can support what you've already decided you need. Not the other way round.

For a high-risk system the AI Act puts obligations on both providers and deployers, and the contract has to allocate them clearly. For limited-risk systems you still need transparency, logging, and a clean answer on training-data use. Either way, the questions are the same: where is the model hosted, where is the data processed, what sub-processors are involved, what's the position on training on customer inputs, what logging is available to you as the deployer, and what's the incident-notification path when something goes wrong.

If a vendor can't answer those in writing, that's the answer. Move on.
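
Those questions are finite, so the vendor file can be a checklist. A sketch, with my own key names:

    VENDOR_QUESTIONS = {
        "hosting":       "Where is the model hosted?",
        "processing":    "Where is the data processed?",
        "subprocessors": "What sub-processors are involved?",
        "training":      "What is the position on training on customer inputs?",
        "logging":       "What logging is available to us as the deployer?",
        "incidents":     "What is the incident-notification path when something goes wrong?",
    }

    def vendor_file_complete(written_answers: dict[str, str]) -> bool:
        """Every question needs a written answer; a blank is an answer too."""
        return all(written_answers.get(k, "").strip() for k in VENDOR_QUESTIONS)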

Step five: human oversight, written down

Article 14 of the AI Act treats human oversight as a design requirement for high-risk systems, not a poster on the wall. Even for systems below that threshold, writing down who is accountable for which decisions — and what they're empowered to override — is the cheapest insurance you can buy. I'd rather have a one-page oversight protocol that's actually used than a forty-page policy that nobody reads.

The protocol should name roles, not people: who reviews flagged outputs, who can pause the system, who signs off on retraining, who handles a data-subject request that touches model outputs. If the same person is named in all four boxes, the protocol isn't real.
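
The four-boxes test is mechanical enough to write down as a check. A sketch, with invented role names:

    # Roles, not people. The assert at the bottom is the four-boxes test.
    oversight = {
        "reviews_flagged_outputs":      "Support team lead",
        "can_pause_system":             "Head of Operations",
        "signs_off_retraining":         "DPO",
        "handles_dsr_touching_outputs": "DPO",
    }

    assert len(set(oversight.values())) > 1, "same person in every box: the protocol isn't real"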

Step six: the model card and the staff-facing note

Two documents close out the rollout. The internal model card describes what the system does, what it doesn't do, what its known limitations are, and what the human in the loop is expected to check. The staff-facing note — a page, in plain language — tells the people using the tool what they can and can't put into it, and what the firm is doing with their prompts.

These two documents are also what an auditor or a regulator will ask for first. Having them written before the system goes live, rather than reconstructed afterwards, is the difference between a calm conversation and a bad week.
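
For completeness, the internal model card reduces to a handful of fields. A sketch with invented example content:

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        """Internal model card: among the first things an auditor will ask for."""
        does: str
        does_not: str
        known_limitations: list[str]
        human_must_check: list[str]  # what the human in the loop is expected to verify

    card = ModelCard(
        does="Drafts replies to routine customer emails for agent review",
        does_not="Send anything, or decide refund eligibility",
        known_limitations=["can invent order numbers", "weaker on non-English queries"],
        human_must_check=["every factual claim", "tone before sending"],
    )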

The order, in one line

Tier-map → data inventory → DPIA → LIA → vendor file → oversight protocol → model card and staff note. Then deploy. Then revisit on a schedule.

If you do them in that order, no single document blocks the next one for long, because each one feeds the next with information it actually needs. If you do them out of order — DPIA before tier-map, vendor file before LIA — you'll redo work and the rollout will drift by months.

What to do this week

Pick the AI use case in your firm that's furthest along and write the one-page tier-map for it before Friday. If the tier is unclear, that's your finding for the week — stop the procurement conversation until it's settled. At IMPT we run this same sequence on our own internal AI tooling and on the AI-native booking agent we're building for hotels; the tier-map sits at the front of every project folder, the DPIA gets opened the day a new data flow is proposed, and the LIA is revisited whenever a vendor changes its terms. None of it is glamorous. All of it is what keeps the lawful AI rollout moving instead of stalling in legal review while the market moves on.

Letters from Clonmel

Quarterly long-form founder letters. Subscribe by email at mike@impt.io.
