Intelligence Brain · founder

Twenty years of engineering and the Irish AI question


I spent twenty years inside the machine rooms of Irish enterprise — Tesco and Oracle — before I started IMPT in 2024. That's not a credential I lead with often, because the people who matter already know what those years actually involve: late-night cutovers, schema migrations that nobody documented, integration projects where the original architect left in 2011 and took the tribal knowledge with him. What I want to write about here is what that experience tells me about the AI question facing Irish firms right now, and why most of the advice circulating is either too American or too academic to be useful on the ground in Clonmel, Cork, or the IFSC.

What twenty years in Irish enterprise actually teaches you

The first thing you learn working on big Irish retail and database systems is that the interesting problems are almost never algorithmic. They're integration problems. They're data-quality problems. They're "this field means three different things depending on which subsidiary populated it" problems. When I was at Tesco, the systems we ran were not exotic — they were SAP, Oracle, bespoke point-of-sale stacks, and a constellation of feeds going in and out. The complexity was in the joins, the reconciliations, and the human processes wrapped around them.

That experience matters because the current wave of AI discourse — particularly the LLM discourse — is dominated by people who have never had to reconcile a stock ledger to a financial ledger across multiple legal entities. They talk about "the data" as if it were a single object. It isn't. In any real Irish enterprise, "the data" is forty systems, three of them mainframe-adjacent, two of them held together by a stored procedure written by someone who retired, and at least one Excel workbook that the finance team treats as the source of truth despite IT's protests.

An Irish AI engineer who hasn't done that work — who hasn't sat in a room at 2am while a batch job fails because a date format changed — will build AI systems that fall over the moment they meet production. This isn't a criticism of younger engineers. It's an observation that domain knowledge is the scarce resource, not model knowledge.

Oracle Ireland, and what databases taught me about LLMs

My years as an Oracle Ireland engineer shaped how I think about the current generative AI stack more than any of the recent ML literature I've read. Here's why.

Oracle work, for all the company's commercial reputation, is fundamentally a discipline. You learn ACID. You learn that consistency is not a nice-to-have. You learn to ask, before any architecture decision, what happens when this transaction half-completes. You learn that an index is a contract, that a query plan is a hypothesis, and that the difference between a system that works at ten users and one that works at ten thousand is almost always invisible to the person specifying the requirements.

Now look at how most organisations are deploying LLMs in 2024 and 2025. They're chaining API calls to OpenAI or Anthropic with no transactional semantics, no audit trail, no rollback, and no clear answer to "what happens when the third call in the chain returns garbage." They're treating non-deterministic systems as if they were deterministic ones. They're shipping AI features without the basic engineering hygiene that any Oracle DBA would demand of a stored procedure.
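To make that concrete, here is a minimal sketch of the missing hygiene — validate every output, retry on garbage, and write an audit entry either way. `call_model` is a hypothetical stand-in for whatever API the chain actually hits; the shape of the wrapper is the point, not the stub.

```python
import hashlib
import json
import time


def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call -- hypothetical, for illustration."""
    return json.dumps({"summary": "..."})


def guarded_step(prompt: str, validate, retries: int = 2, audit_log: list = None):
    """One step of a chain with the hygiene a DBA would demand of a stored
    procedure: validate the output, retry on garbage, record an audit entry."""
    audit_log = audit_log if audit_log is not None else []
    for attempt in range(retries + 1):
        raw = call_model(prompt)
        entry = {
            "ts": time.time(),
            "attempt": attempt,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output": raw,
        }
        try:
            result = validate(raw)  # raises if the output is garbage
            entry["status"] = "ok"
            audit_log.append(entry)
            return result
        except Exception as exc:
            entry["status"] = f"rejected: {exc}"
            audit_log.append(entry)
    # fail loudly rather than pass garbage to the next step in the chain
    raise RuntimeError("step failed validation after all retries")


log: list = []
summary = guarded_step("Summarise the contract.", validate=json.loads, audit_log=log)
```

The `validate` hook is where "what happens when the third call returns garbage" gets an actual answer, instead of being discovered in production.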

That's not the AI's fault. It's an engineering culture problem. And it's one of the reasons I think the next two years will be brutal for organisations that deployed AI fast and shallow, and quietly good for the ones that took it slow and built proper infrastructure around it.

The Irish enterprise AI question, in plain terms

Irish enterprise AI conversations tend to start in the wrong place. They start with "which model should we use" or "should we build or buy". Those are downstream questions. The upstream question is: what is your organisation's actual relationship with its own information?

In most of the firms I talk to — legal practices, accounting firms, mid-market property businesses, regional public bodies — the honest answer is "fragmented and partly tribal". Knowledge lives in people's heads, in email chains, in shared drives organised by whoever set them up in 2017, and in document management systems that nobody fully trusts. The AI question, properly framed, is not "how do we add a chatbot" but "how do we turn this fragmented knowledge into something an organisation, rather than an individual, can rely on".

That reframing matters because it changes what you build. A chatbot bolted onto a chaotic SharePoint will surface chaos faster. A retrieval system built on a properly curated, access-controlled, audited knowledge layer will surface answers. The work is in the layer, not in the chatbot.

This is why I built the Intelligence Brain as a founder-led product rather than a consulting offer. The repeatable engineering is in the layer. The bespoke work is in connecting it to whatever idiosyncratic stack the customer actually has.

Why on-premise matters more in Ireland than in San Francisco

One of the things that surprises people outside the regulated-industry conversation is how non-negotiable on-premise and sovereign deployment is becoming for serious Irish firms. There are three forces pushing this.

The first is regulation. GDPR has not gone away, the EU AI Act is now in force, and Irish professional bodies — the Law Society, Chartered Accountants Ireland, the Medical Council — all impose confidentiality obligations that sit uncomfortably with sending client data to a US-based inference API. "Uncomfortably" is a polite word. In several cases I'd argue it's flatly incompatible, and the firms doing it are running risk they haven't quantified.

The second is commercial. If your competitive advantage is your knowledge of a specific market — say, Irish commercial property law, or agricultural accounting, or the particular shape of HSE procurement — then putting that knowledge into a third-party model's training pipeline or retrieval cache is not a clever move. The firms that understand this instinctively are the ones who built their book over decades.

The third is operational. Cloud LLM APIs change underneath you. Models get deprecated. Pricing changes. Latency drifts. If you're running a regulated business that needs to reproduce, six months from now, exactly the answer a system gave a client today, you need control over the inference stack. That's an on-premise problem, or at minimum a sovereign-cloud problem with full version pinning.

What the engineering actually looks like

For anyone building in this space, the technical shape is reasonably consistent. You need a document ingestion pipeline that handles the messy reality of Irish enterprise documents — Word files with tracked changes from 2019, scanned PDFs from the courts, emails with attachments that reference other attachments. You need OCR that works on Irish-issued documents, including bilingual ones. You need entity extraction that knows the difference between a PPS number, a CRO number, and a VAT number, and knows the validation rules for each.
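For illustration, a format-level identifier classifier. The patterns below are deliberately simplified assumptions — real validation applies the PPSN check character and registry lookups, not shape alone — but they show why "knows the difference" is an engineering statement, not a slogan.

```python
import re

# Format-level checks only -- simplified patterns for illustration; the real
# rules (the PPSN check character, Revenue's full VAT formats) are stricter.
PPSN_RE = re.compile(r"^\d{7}[A-Z][A-Z]?$")     # 7 digits + 1-2 letters
CRO_RE = re.compile(r"^\d{5,7}$")               # numeric company number
VAT_IE_RE = re.compile(r"^IE\d{7}[A-Z]{1,2}$")  # common modern IE VAT shape


def classify_identifier(value: str) -> str:
    """Best-effort label; a production extractor would also apply checksum
    and registry validation, not just shape."""
    v = value.strip().upper().replace(" ", "")
    if VAT_IE_RE.match(v):
        return "vat"
    if PPSN_RE.match(v):
        return "ppsn"
    if CRO_RE.match(v):
        return "cro"
    return "unknown"
```

Note the ordering: an Irish VAT number can contain a PPSN-shaped substring, so the more specific pattern has to win first — exactly the kind of rule that only surfaces when you've handled the real documents.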

You need a vector store, yes, but you also need a relational store, because most useful queries are hybrid: "find me documents from this client, in this matter, mentioning this concept, between these dates". Pure vector search collapses on that. You need a permissions model that mirrors the firm's actual access rules, not a simplified version of them. You need an audit log that satisfies a regulator, not just a developer's curiosity.
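A sketch of what a hybrid query looks like in practice, using a toy schema and a stand-in embedding function (letter frequencies in place of a real model). The relational filter does the narrowing — client, matter, dates — and the vector score only ranks what survives:

```python
import math
import sqlite3

# Illustrative schema, not the product's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE docs
    (id INTEGER PRIMARY KEY, client TEXT, matter TEXT, filed DATE, body TEXT)""")
conn.executemany("INSERT INTO docs VALUES (?,?,?,?,?)", [
    (1, "acme", "m-101", "2023-05-01", "lease assignment clause"),
    (2, "acme", "m-101", "2024-02-10", "rent review memorandum"),
    (3, "other", "m-900", "2024-02-10", "rent review memorandum"),
])

def embed(text: str) -> list:
    """Toy embedding (letter frequencies) standing in for a real model."""
    return [text.count(c) / max(len(text), 1) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "documents from this client, in this matter, between these dates" -- SQL
rows = conn.execute(
    "SELECT id, body FROM docs WHERE client=? AND matter=? AND filed BETWEEN ? AND ?",
    ("acme", "m-101", "2023-01-01", "2024-12-31"),
).fetchall()

# "...mentioning this concept" -- similarity ranking over the survivors only
q = embed("rent review")
ranked = sorted(rows, key=lambda r: cosine(embed(r[1]), q), reverse=True)
```

Run the similarity search over the whole corpus instead and you lose the permissions boundary and the matter boundary in one move — which is the collapse the paragraph above describes.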

And you need an inference layer that runs locally — either on the firm's own hardware or in a sovereign tenancy — with model versions pinned, prompts versioned, and outputs logged. None of this is exotic. It's the boring engineering that the consumer AI world has been allowed to skip and that regulated firms cannot.
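A minimal sketch of what "pinned and logged" means in code, with illustrative field names rather than any particular product's schema: freeze the configuration, fingerprint it, and attach that fingerprint to every answer so the exact setup can be reproduced months later.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class PinnedConfig:
    """Everything needed to reproduce an answer later. Field names are
    illustrative, not a real schema."""
    model_name: str
    model_version: str       # an exact build identifier, never "latest"
    prompt_template_id: str
    prompt_template_rev: int
    temperature: float

    def fingerprint(self) -> str:
        """Stable hash stored alongside every answer given to a client."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def log_inference(cfg: PinnedConfig, prompt: str, output: str) -> dict:
    """One audit record per answer: pinned config, hashed prompt, full output."""
    return {
        "config_fingerprint": cfg.fingerprint(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }


cfg = PinnedConfig("local-llm", "2025-01-build-417", "contract-summary", 3, 0.0)
record = log_inference(cfg, "Summarise clause 4.", "Clause 4 assigns...")
```

The frozen dataclass and sorted-key hash are the whole trick: the same configuration always yields the same fingerprint, so "reproduce the answer we gave this client in March" becomes a lookup rather than an archaeology project.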

If you want the architectural overview without the founder-history detour, I've written it up at the Intelligence Brain overview.

What the next decade looks like for Irish AI engineers

I'm bullish on Irish engineering talent in this space, and not for nationalistic reasons. I'm bullish because the work that needs doing — careful integration with messy enterprise systems, regulatory-grade audit trails, multilingual document handling, deep domain modelling — is exactly the work Irish engineers have been quietly doing for the multinationals based here for thirty years. We have a generation of people who know how to build systems that don't fall over under audit. That's a rare and now-valuable skill set.

What I'd warn against is the temptation to compete on model novelty. Ireland is not going to out-train OpenAI. It can, however, out-engineer them on the deployment side, and out-domain them on regulated verticals. That's where the durable businesses will be built.

Where to start this week

If you're a founder, partner, or technical lead in an Irish firm trying to figure out where to begin: don't start with a model. Start with a single document workflow that currently costs you real time — a contract review, a case file summary, a tender response, a compliance check. Map it end to end. Find the actual decision points. Then ask what a properly built knowledge layer underneath it would change. That conversation is worth more than any vendor demo, and it's the one I'm always happy to have.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →