
Who the Clonmel AI Brain Workshops are for — and who they aren't


I run small-group AI workshops out of The Old Museum in Clonmel. They're not for everyone, and I'd rather say that upfront than have someone drive down from Dublin and feel they wasted a day. This piece is the honest version: who gets value from sitting in that room, who doesn't, and what the workshops actually cover at a technical level. If you're trying to figure out whether to book a seat or send a colleague, read this first.

The person these workshops are built for

The typical attendee runs something. They're a partner in a small firm, a finance director, an operations lead, a practice manager, or the founder of a business with somewhere between a handful and a few dozen people on the books. They've used ChatGPT or Copilot. They've felt the pull of "we should be doing more with this" and the equal-and-opposite pull of "I don't know what I don't know, and I'm responsible if it goes wrong."

That second feeling is the important one. The workshops are built for people who carry liability. If you sign off on client work, audit files, patient records, tenancy agreements, or board papers, you can't treat AI as a toy. You need to know where the data goes, what the model can and can't do, what a hallucination looks like in your domain, and what your regulator will say when they ask. That's the audience: the Irish AI training market in the broadest sense, but specifically the slice of it that has skin in the game.

The other group I get a lot of is the technically curious operator — people who can write a spreadsheet formula but aren't developers, who want to understand what an embedding is, why retrieval-augmented generation matters, and why running a model on your own hardware is different from calling an API. Those people get a lot from the day because we go into the mechanics, not just the outcomes.

Who shouldn't come

I'll be direct. If you're a senior ML engineer or you've shipped production LLM systems, the workshop will frustrate you. We cover ground you already know. You'd be better off with a one-to-one technical session focused on a specific deployment problem — get in touch separately for that.

If you're looking for a sales pitch dressed up as training, this isn't it. I demo the Intelligence Brain because it's the system I built and the easiest way to show on-premise inference, RAG over real documents, and audit logging in one place. But the workshop isn't a closing meeting. People leave with notes and a working mental model, not a contract.

If you want to learn how to write better prompts for marketing copy, there are cheaper and better options online. The workshops assume you care about regulated, evidence-bearing work — the kind where being wrong has consequences beyond a clumsy LinkedIn post.

And if you're sending a junior team member because "they're the techy one" while the decision-makers stay in the office, you'll get less from it. The workshop is structured around decisions: what to build, what to buy, what to ban, what to log. Junior staff can implement those decisions but rarely make them.

What the technical content actually covers

The morning is foundations. Not "what is AI" — I assume you've read at least one think-piece — but the architecture you need to reason about your own deployment. We cover:

  • Tokens, context windows, and why long documents break things. If you don't know why a 200-page contract makes a model behave differently from a 2-page email, the rest of the day won't land.
  • The difference between fine-tuning, RAG, and prompt engineering. Most people conflate these. They solve different problems and have very different cost and risk profiles.
  • Embeddings and vector search. Why "find me the clause that means X" works at all, and where it fails — usually on negation, dates, and cross-references.
  • On-premise versus API. What you actually give up when you call OpenAI or Anthropic, what you gain by running Llama or Mistral on your own box, and where the line is for GDPR, professional privilege, and sectoral rules.
  • Hallucination, grounding, and citation. Why a model that "shows its working" by quoting source documents is fundamentally safer than one that doesn't, and how to verify the citations are real.
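For the technically curious, the embeddings-and-vector-search idea fits in a few lines. This is a toy sketch only: the clause texts and three-number vectors below are invented for illustration, where a real system uses model-produced embeddings with hundreds of dimensions. The mechanism is the same: the chunk whose vector points in the most similar direction to the query's vector wins.

```python
# Toy vector search: each document chunk has an embedding (a list of
# numbers), and a query is matched by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented chunks and vectors, purely for illustration.
chunks = {
    "Clause 4.2: the tenant is liable for internal repairs.": [0.9, 0.1, 0.2],
    "Clause 7.1: rent reviews occur every five years.":       [0.1, 0.8, 0.3],
    "Schedule B: list of fixtures and fittings.":             [0.2, 0.2, 0.9],
}

# Stands in for the embedding of "who pays for repairs?"
query_embedding = [0.85, 0.15, 0.25]

best = max(chunks, key=lambda text: cosine(chunks[text], query_embedding))
print(best)  # the repairs clause scores highest
```

Notice what this buys and what it doesn't: "repairs" and "liable" land near the query even without shared keywords, but nothing in the similarity score understands negation or a cross-reference to another clause — which is exactly where we see vector search fail on the day.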

The afternoon is applied. We take real documents — anonymised contracts, sets of accounts, tenancy files, board packs, depending on who's in the room — and we run them through a working system. Attendees see the retrieval step, see the prompt the model receives, see the citations, and see what happens when we deliberately ask it something the documents don't contain. That last part is the most important demo of the day. Watching a model say "I don't have that information" instead of inventing an answer is the moment most people understand what grounding actually buys them.
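That "I don't have that information" behaviour isn't magic; it falls out of how the prompt is assembled around the retrieved documents, plus a check that any quoted citation really exists in the sources. A rough sketch, with invented file references and an example instruction (not the exact prompt the workshop system uses):

```python
# Illustrative grounding sketch: assemble a prompt that forbids answers
# from outside the retrieved sources, then verify a cited quote against
# those sources. File names and wording are made up for this example.
retrieved = [
    ("contract.pdf, p.12", "The tenant is responsible for internal repairs."),
    ("contract.pdf, p.30", "Rent is reviewed every five years."),
]

context = "\n".join(f"[{ref}] {text}" for ref, text in retrieved)
prompt = (
    "Answer ONLY from the sources below, citing the reference in brackets.\n"
    "If the sources do not contain the answer, say 'I don't have that "
    "information.'\n\n"
    "Sources:\n" + context + "\n\n"
    "Question: Who insures the building?"
)

def citation_is_real(quoted: str) -> bool:
    """Check that a passage the model quotes appears in a retrieved source."""
    return any(quoted in text for _, text in retrieved)

print(citation_is_real("internal repairs"))  # True: the quote exists
print(citation_is_real("landlord insures"))  # False: an invented citation
```

The question about insurance has no answer in the sources, so a well-grounded model is instructed to refuse rather than invent one — and the citation check catches it if it quotes something the documents never said.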

Why Clonmel, and why small groups

I keep groups small on purpose. Once you go past a certain size you've got a conference, not a workshop, and the value of being able to interrupt me with your specific situation drops to zero. People in regulated industries don't ask their real questions in front of fifty strangers. They ask them in front of six.

Clonmel works because it's neutral ground. Dublin attendees treat a trip down as a deliberate day out of the office, which is exactly the headspace you want. Cork, Limerick, Waterford and Kilkenny are all inside an hour. And running it from The Old Museum means I can show the actual hardware — the box the models run on, the network setup, the audit log — rather than handwaving at slides. For anyone serious about on-premise deployment, seeing a real rack matters more than people expect.

The Clonmel workshop attendees I've had so far have ranged from solo practitioners up to people running operations for groups with several offices. The mix is part of the point: a sole-trader accountant and a multi-partner law firm are solving different problems, but the underlying questions about data residency, model selection, and human-in-the-loop review are the same.

What you'll be able to do the next morning

I care about this part more than the workshop content itself. If you go back to your office and nothing changes, the day was wasted. So the workshops are designed to leave you with three concrete things.

First, a written-down understanding of where AI fits and doesn't fit in your specific workflow. Not a generic framework — your workflow, named processes, named risks. We do this on paper during the day.

Second, a vendor-evaluation checklist that will survive contact with a sales call. Most AI vendor pitches collapse under five specific questions about data handling, model provenance, update cadence, audit logging, and offboarding. You'll have those five questions and know what good answers sound like.

Third, a rough build-versus-buy view. For some firms, a properly configured Microsoft or Google tenant with sensible policies is enough. For others — typically those with sensitive document corpora they can't send to a US cloud — an on-premise system makes more sense. I'll show you how I think about that decision rather than telling you the answer, because the answer depends on your data, your clients, and your regulator. If you want to see the on-premise option in detail, the workshop track of the Intelligence Brain is the system we use for the live demos.

How this fits with the wider Intelligence Brain work

The workshops are deliberately separate from product sales. People do come out of a workshop and ask about deployment, and that's fine, but I'd rather someone leave understanding the problem space than leave with a quote. If you want the full picture of what I'm building and why, the Intelligence Brain overview sets out the architecture, the verticals, and the thinking behind on-premise organisational intelligence for regulated firms in Ireland.

What the workshops give you that the website can't is the conversation. Half the value on workshop days is hearing what other attendees are wrestling with. A solicitor asking about privilege, an accountant asking about audit trail, a property manager asking about tenancy data — those questions overlap more than people expect, and listening to them being worked through in real time is, honestly, the bit attendees mention most when they email afterwards.

Where to start this week

If you've read this far and you're nodding, do one thing this week: write down the top three documents or workflows in your business where AI would be useful if it were trustworthy, and the top three where you'd never want it near. Bring that list to the workshop. We'll work through it on the day, and you'll leave with a plan that's specific to your firm rather than a generic one. If the list is hard to write, that itself is the signal you need the day. Email me through the site and I'll tell you when the next session is and whether it's a fit.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.
