If you're rolling out an AI tool inside an Irish business right now — whether that's a copilot for your finance team, a document classifier for legal review, or a clinical triage assistant — you need three documents on file before the first prompt is sent in anger: a Data Protection Impact Assessment (DPIA), a Legitimate Interests Assessment (LIA), and an EU AI Act tier classification. None of these are optional, none of them are interchangeable, and none of them are well served by a generic template downloaded from a vendor blog. This article walks through what each one actually contains, how they fit together, and where Irish-specific obligations from the Data Protection Commission (DPC) and the forthcoming national AI Act competent authority change the shape of the work.
Why three documents, not one
People conflate the DPIA and the AI Act tier classification because both ask "is this risky?" — but they answer different questions for different regulators. A DPIA is a GDPR Article 35 obligation, enforced in Ireland by the DPC. It assesses risk to data subjects from processing personal data. An AI Act tier mapping is a Regulation (EU) 2024/1689 obligation, enforced by whichever competent authority Ireland eventually designates (the legislative process is ongoing). It assesses risk from the AI system itself — including risks that have nothing to do with personal data, like manipulation of behaviour or unsafe automation in critical infrastructure.
The Legitimate Interests Assessment sits underneath the DPIA. If your lawful basis for processing is Article 6(1)(f) — legitimate interests — you need a documented LIA showing you've balanced your interest against the data subject's rights. For most internal AI rollouts (employee-facing copilots, document review tools), legitimate interests is the cleanest basis, but only if the LIA actually demonstrates the balancing test rather than asserting it.
You can't collapse these into one document. The DPC will look for a DPIA structured around Article 35 and its own published guidance. A future Irish AI authority will look for the tier classification, and for high-risk systems the conformity assessment, framed in the Act's own terms. Trying to merge them produces a document that satisfies neither regulator.
The DPIA — what actually goes in it
Article 35 GDPR and the DPC's own guidance set out the structure, but the substantive content for an AI rollout has six parts that matter:
- Systematic description of the processing. Not "we use AI to help staff." Spell out: what model, hosted where, what data goes in the prompt, what data is retained, what's logged, who can see the logs, what's used for training (ideally nothing), and what the output is used for. If you're using a hosted LLM API, the description has to include the processor's sub-processors and their locations.
- Necessity and proportionality. Why does this require AI rather than a rule-based system or a human? If you can't articulate this, the DPIA fails before it starts.
- Risks to data subjects. Be specific. "Hallucination producing incorrect personal data in a customer-facing reply" is a risk. "AI is risky" is not. For each risk, severity and likelihood, then mitigation.
- Transfers. If your model runs in a US region or the provider's sub-processors do, you need the SCCs, the Transfer Impact Assessment, and a clear note on whether the data ever leaves the EEA. On-premise deployment removes most of this section, which is one reason regulated Irish firms increasingly look at it.
- Data subject rights. How does someone exercise erasure when their personal data may have been embedded in a vector index? What's the rectification path when an AI output is wrong? Most DPIAs handwave this. Don't. One workable erasure pattern is sketched after this list.
- Consultation. Article 36 requires prior consultation with the DPC if residual high risk remains after mitigation. State explicitly whether you've concluded consultation is required.
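On the erasure question specifically, the request is only executable if you record at ingestion time which index entries derive from which person. Below is a minimal sketch of that pattern in Python; `store` and its `upsert`/`delete` methods are hypothetical stand-ins for whichever vector store you actually run.

```python
from collections import defaultdict

class ErasableIndex:
    """Keeps erasure requests executable against a vector index.

    `store` is a hypothetical stand-in for your real vector store; the
    only operations assumed are upsert(chunk_id, embedding) and
    delete(chunk_ids).
    """

    def __init__(self, store):
        self.store = store
        # Recorded at ingestion: which chunks derive from which person.
        # Without this map, honouring erasure means rebuilding the index.
        self._subject_chunks = defaultdict(set)

    def ingest(self, chunk_id, embedding, subject_ids):
        self.store.upsert(chunk_id, embedding)
        for subject_id in subject_ids:
            self._subject_chunks[subject_id].add(chunk_id)

    def erase(self, subject_id):
        """Delete every chunk tied to a data subject; returns the count."""
        chunk_ids = self._subject_chunks.pop(subject_id, set())
        if chunk_ids:
            self.store.delete(list(chunk_ids))
        return len(chunk_ids)  # worth logging against the DPIA rights section
```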
The DPIA template the DPC publishes is a starting point, not a finishing point. For AI specifically, the European Data Protection Board's guidance on automated decision-making and the DPC's own AI-related publications fill in gaps the generic template leaves open.
The LIA — three tests, written down
The LIA is shorter than the DPIA but trips up more rollouts because people skip the second test. The three parts are:
- Purpose test. What's the legitimate interest? "Improving operational efficiency" is real but weak. "Reducing the time staff spend manually classifying inbound contracts so they can focus on negotiation" is real and specific.
- Necessity test. Is the processing necessary for that interest? Could you achieve it with less data, anonymised data, or a non-AI approach? This is where most assessments are thin. If a regex would do the job, an LLM isn't necessary.
- Balancing test. Do the data subject's rights and interests override yours? Consider their reasonable expectations, the relationship (employee vs customer vs third party), and whether the processing is intrusive. Employees being told their drafts are processed by an internal AI assistant is one thing; clients whose privileged correspondence is sent to a third-party API without notice is another.
If the balancing test is close, you need additional safeguards: opt-outs, additional transparency, data minimisation. Document them in the LIA itself, not in a separate file nobody reads.
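One way to stop the necessity and balancing tests being skipped is to make the LIA a structured record that refuses to exist without them. A minimal sketch follows; the field names are illustrative, not any DPC-mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class LIARecord:
    # Purpose test: specific, not "improving operational efficiency"
    interest: str
    # Necessity test: why less data, anonymised data, or a non-AI
    # approach will not achieve the same interest
    necessity_rationale: str
    # Balancing test: expectations, relationship, intrusiveness
    balancing_rationale: str
    relationship: str                  # employee / customer / third party
    # Safeguards belong in the LIA itself, not a separate file
    safeguards: list[str] = field(default_factory=list)

    def __post_init__(self):
        for name in ("interest", "necessity_rationale", "balancing_rationale"):
            if not getattr(self, name).strip():
                raise ValueError(f"LIA incomplete: {name} is empty")
```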
AI Act tier mapping — the part most rollouts get wrong
The EU AI Act creates four tiers: prohibited, high-risk, limited-risk (transparency obligations), and minimal-risk. The mistake I see most often is assuming an internal productivity tool is automatically minimal-risk. It might be. But the high-risk list in Annex III includes systems used in employment (recruitment, performance evaluation, task allocation), access to essential services, and several other categories that catch internal tools by surprise.
A practical tier mapping for an Irish rollout has four steps, sketched in code after this list:
- Identify the AI system. The Act defines it broadly. A wrapper around an LLM is an AI system. So is a document classifier built on top of someone else's foundation model.
- Identify your role. Provider, deployer, importer, or distributor. Most Irish businesses rolling out a third-party tool are deployers, which is a lighter set of obligations than provider — but not zero. If you substantially modify a system, or put a fine-tuned model into service under your own name or trademark, you can become a provider, with a much heavier set of obligations.
- Check Annex III against your use case. Carefully. "Used for evaluating employee performance" catches a lot of tools that nobody initially thought of as HR tech.
- Document the conclusion. Even if the answer is "minimal risk, no obligations beyond transparency where applicable", write it down with the reasoning. When the Irish competent authority is established and starts issuing guidance, you'll want a contemporaneous record of why you reached your tier decision.
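Those four steps compress into a small decision record. The sketch below is deliberately simplified: the Annex III check is a hand-maintained list rather than a substitute for reading the Annex, Article 5 prohibited practices are assumed to be screened separately, and the Annex III carve-outs are ignored. What it does enforce is the written reasoning the final step asks for.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency)"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class TierDecision:
    system_name: str
    role: str                         # provider / deployer / importer / distributor
    annex_iii_categories: list[str]   # e.g. ["employment: performance evaluation"]
    user_facing_or_generative: bool   # triggers transparency obligations
    reasoning: str                    # the contemporaneous record to keep

    def tier(self) -> Tier:
        # Simplified on purpose: prohibited practices are screened before
        # this runs, and Annex III carve-outs are not modelled.
        if self.annex_iii_categories:
            return Tier.HIGH_RISK
        if self.user_facing_or_generative:
            return Tier.LIMITED_RISK
        return Tier.MINIMAL_RISK
```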
For deployers of high-risk systems, the obligations include human oversight, monitoring, logging, and notifying serious incidents. For limited-risk systems — including most chatbots and any system generating synthetic content — the transparency obligation means users must know they're interacting with AI or seeing AI-generated content. This applies regardless of whether the system handles personal data, which is why it sits outside the DPIA.
Where the three documents intersect — and where they don't
The DPIA and LIA overlap on lawful basis and risk to individuals. The DPIA and AI Act assessment overlap on system description, logging, and human oversight — but they're scored against different rubrics. The LIA and AI Act assessment barely overlap at all.
The practical consequence: build a single source of truth for the system description (architecture, data flows, retention, sub-processors, oversight mechanism) and reference it from all three documents. When the architecture changes, you update one document and the references stay consistent. When you don't do this, you end up with a DPIA describing one data flow and an AI Act assessment describing a slightly different one, and the regulator who notices the discrepancy will be the one you least want to notice it.
This is exactly the problem the Intelligence Brain methodology is designed to address: keeping the system description, the risk register, and the regulatory mapping in one place rather than three drifting Word documents. For firms running AI on premise, the architectural choice also collapses several of the transfer and sub-processor sections of the DPIA, which is a separate practical reason to consider it.
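Concretely, the single source of truth can be one versioned record that each document cites rather than restates. A sketch, assuming Python; the fields mirror the systematic description from the DPIA section above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemDescription:
    """Canonical record cited by the DPIA, the LIA, and the tier mapping."""
    version: str                     # bump on every architecture change
    model: str                       # which model, hosted API or on-premise
    hosting_region: str              # drives the DPIA transfers section
    prompt_data: tuple[str, ...]     # categories of data entering prompts
    retention: str                   # what is kept, and for how long
    log_access: tuple[str, ...]      # who can read the logs
    used_for_training: bool          # ideally False
    sub_processors: tuple[tuple[str, str], ...]  # (name, location) pairs
    oversight_mechanism: str         # feeds the AI Act human-oversight duty
```

When the architecture changes, the record gets a new version, and the DPIA's transfers section, the LIA's necessity test, and the tier mapping all cite the same fields instead of restating them.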
Irish-specific wrinkles
A few things that don't appear in pan-European templates but matter in Ireland:
- The DPC is active and detail-oriented. A thin DPIA will be picked up if there's a complaint. Sectoral guidance from the DPC on areas like CCTV, biometrics, and direct marketing carries over into AI use cases that touch those areas.
- Sectoral regulators are layered on top. The Central Bank for financial services, the Medical Council and HIQA for healthcare, the Legal Services Regulatory Authority for law firms. Each has its own posture on AI, and a rollout in those sectors needs the sectoral assessment as well as the GDPR and AI Act ones.
- Data localisation is not legally required, but is increasingly contractually required. Public sector tenders, financial services counterparties, and healthcare bodies are writing data residency clauses that go beyond what the law strictly mandates.
- The AI Act competent authority is not yet designated. Until it is, treat the obligations as live (the Regulation's phased application dates run regardless of national designation) but expect detailed national guidance to follow.
Where to start this week
Pick one AI tool currently in use or about to go live. Write the system description in plain English — what goes in, what comes out, where it runs, who sees the logs. From that single description, draft the three documents.