SATURDAY · 02 MAY 2026

Michael English

Clonmel · Co. Tipperary · Ireland
Essay

How to inject intelligence into a 50-person firm — the four-stage pattern


Most mid-market firms I meet have already bought the licences. Copilot is on, ChatGPT Enterprise is on, somebody on the leadership team has run a Claude trial, and yet the firm is no smarter than it was a year ago. The tools are everywhere. The intelligence is nowhere. What's missing isn't software — it's a method for turning a fifty-person firm into a single thinking organism, with shared memory, shared judgement, and a shared way of catching its own mistakes. We've now done this enough times to write the pattern down.

Why "more AI" doesn't make a firm smarter

The standard rollout in a mid-market firm goes like this. Someone in IT enables a model. A memo goes out. Two weeks later half the team is using it to rewrite emails, and a third of the team is quietly using it to do work they don't want their manager to know is being done by a model. Productivity goes up by some amount nobody can measure. The firm's actual decisions — pricing, hiring, what to build, which customer to fire — are made the same way they were before. With the same blind spots. By the same five people.

This is what I'd call AI-as-typewriter. It speeds up the output of individuals. It does nothing for the organisation. A firm gets smarter only when its collective memory, its policies, its history, its commercial context, and its decision rules are loaded into a system that can reason across them. That's the work. We call it intelligence injection, and it has four stages: ingest, structure, swarm, audit. Eight weeks, end to end, for a firm of about fifty.

Stage one: ingest

The first job is the most boring and the most decisive. You cannot get organisational AI to behave like an organisation if it has never read the organisation. So in week one we sit with the leadership team and list every place the firm's actual knowledge lives. That list is always longer and messier than they think.

The messiest entries on that list are the tacit ones, and they are what most rollouts skip. A fifty-person firm typically has three to five people whose heads contain the firm. We interview them on camera, transcribe, and feed it in. That alone shifts the result from a chatbot that can quote the staff handbook to something that can reason like a member of the senior team.

Ingest is not a search index. We are not building a chatbot that retrieves the right paragraph. We are building a corpus the firm can think with. Two distinctions matter. First, we separate fact from opinion at ingest time — a board paper is opinion, a signed contract is fact, a Slack message is context. Second, we keep provenance on every chunk. Every later answer must be able to say where it came from.
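For readers who like to see the shape of it, here is a minimal sketch of an ingest record, assuming the two distinctions above: a kind tag (fact, opinion, or context, decided by source type at ingest time, not at query time) and provenance kept on every chunk. The field names and source types are illustrative, not a real schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    text: str     # the ingested passage itself
    source: str   # provenance, e.g. "contracts/acme-2024.pdf#p3"
    kind: str     # "fact" | "opinion" | "context"

# Source type decides the kind once, at ingest time.
KIND_BY_SOURCE_TYPE = {
    "contract": "fact",
    "ledger": "fact",
    "board_paper": "opinion",
    "interview": "opinion",
    "slack": "context",
}

def ingest(text: str, source: str, source_type: str) -> Chunk:
    # Anything we can't classify stays mere context, never fact.
    kind = KIND_BY_SOURCE_TYPE.get(source_type, "context")
    return Chunk(text=text, source=source, kind=kind)

c = ingest("Termination requires 90 days' notice.",
           "contracts/acme-2024.pdf#p3", "contract")
print(c.kind)    # fact
print(c.source)  # contracts/acme-2024.pdf#p3
```

The point of the frozen dataclass is the discipline, not the code: a chunk without a source simply cannot be constructed, so no later answer can lose its chain back to where it came from.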

Stage two: structure

Raw text is not a brain. The second stage, weeks two and three, turns the corpus into something the firm can navigate. This is where most internal AI projects fail quietly, because the team doing it treats it as a data engineering job. It isn't. It's an ontology job, and it has to be done by people who understand the business.

We sit with each function — sales, ops, finance, delivery, people — and draw the entities that matter to them. A customer. An account. A project. A risk. A supplier. A control. A commitment. A cost line. We give each entity a definition the firm will actually live with, and we link the corpus to the entities. By the end of structure, you can ask the system "what do we know about Account X" and get a real answer that pulls from contracts, tickets, the CRM, the last three QBR decks, and the email thread where the relationship manager flagged a churn risk in March.
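The mechanics of that fan-out are simple enough to sketch. In this toy version, assuming invented entity names and sources, each entity gets a definition the firm has agreed to, and chunks from different systems are linked to it, so one question pulls from all of them at once.

```python
from collections import defaultdict

# Entity definitions the firm will actually live with.
entities = {
    "account:acme": "A customer with a signed, in-force contract.",
}

# entity id -> list of (source, text) chunks linked to it
links = defaultdict(list)

def link(entity_id, source, text):
    links[entity_id].append((source, text))

def what_do_we_know(entity_id):
    """Return every linked chunk, with its provenance, for one entity."""
    return links[entity_id]

link("account:acme", "crm", "Renewal due 2026-09-01.")
link("account:acme", "email/2026-03", "RM flagged churn risk in March.")

for source, text in what_do_we_know("account:acme"):
    print(f"[{source}] {text}")
```

A real build replaces the dictionaries with a graph or a relational store, but the ontology work, agreeing on what "account:acme" means, is the part no database buys you.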

This is the point at which AI in a mid-market firm starts to feel less like a search box and more like a colleague who's been there longer than anyone. It also tends to be the moment when the leadership team gets nervous, because they realise the system now knows things they had assumed were forgotten. That's a healthy fear, and it's the reason stage four exists.

Stage three: swarm

Stages one and two give you a brain. Stage three gives it hands. By weeks four and five we move from a single assistant to a small swarm of specialised agents, each with a narrow job, each with access to the structured corpus, each with the right to call the others.

For a typical fifty-person firm the swarm tends to look like this:

  1. A commercial agent that knows pricing, contracts, and the deal history.
  2. A delivery agent that knows projects, status, risks, and dependencies.
  3. A finance agent with read access to the ledger and the forecast.
  4. A people agent that knows the org chart, skills, and capacity.
  5. A research agent that brings in outside context on demand.
  6. A drafting agent that produces the email, memo, or deck once the others have agreed on the answer.

The trick is the orchestration layer. A question like "should we take this deal at the discount the customer is asking for" used to go to a pricing committee. In the swarm model it goes to the commercial agent, which pulls margin from finance, capacity from delivery, churn risk from people, and a comparable from research, then writes a one-page recommendation the human pricing committee can sign or reject. The committee still owns the decision. The agents own the assembly.
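The assembly step can be sketched in a few lines. The agent functions and thresholds below are placeholders standing in for real model calls; what matters is the shape: the commercial agent gathers each function's piece, attaches the evidence, and recommends, while the decision stays with the committee.

```python
# Stand-ins for the other agents' answers; a real swarm would call models
# with access to the structured corpus.
def finance_margin(deal):    return deal["price"] - deal["cost"]
def delivery_capacity(deal): return deal["hours"] <= 400
def people_churn_risk(deal): return deal.get("churn_flag", False)

def commercial_agent(deal):
    findings = {
        "margin": finance_margin(deal),
        "capacity_ok": delivery_capacity(deal),
        "churn_risk": people_churn_risk(deal),
    }
    # The agent recommends; the human committee signs or rejects.
    ok = findings["margin"] > 0 and findings["capacity_ok"] \
         and not findings["churn_risk"]
    return {"recommendation": "accept" if ok else "decline",
            "evidence": findings}

result = commercial_agent({"price": 120_000, "cost": 90_000, "hours": 350})
print(result["recommendation"])  # accept
```

Note that the recommendation never travels without its evidence dictionary: that is the one-page recommendation in miniature, and it is what makes the sign-or-reject step fast.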

This is the stage where the corporate AI strategy stops being a slide and starts being a thing the firm actually runs on. It is also the stage where you discover which of your processes were never really processes — they were one person remembering. Those are the ones the swarm fixes by making them legible.

Stage four: audit

If you stop at swarm you have built a confident liar. Models hallucinate. Corpora go stale. People paste private data into prompts. Agents call each other in loops. Without an audit layer none of it is safe to put in front of a customer or a regulator, and in a fifty-person firm a regulator-grade incident is a firm-ending event.

So weeks six to eight are about the audit layer, and it has three parts.

Provenance on every answer

Every output the swarm produces carries a chain back to source. If the commercial agent says margin on this account is below threshold, the underlying ledger rows are one click away. If the people agent says we have capacity in Q3, the named individuals and their current allocations are one click away. No black boxes.
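Enforcing that is one guard clause, not a framework. In this sketch, with invented ledger row identifiers, an answer is a claim paired with its sources, and an answer with no sources cannot exist at all.

```python
def answer_with_provenance(claim, sources):
    """Wrap a claim with its chain back to source; refuse bare claims."""
    if not sources:
        raise ValueError("no answer without a chain back to source")
    return {"claim": claim, "sources": list(sources)}

a = answer_with_provenance(
    "Margin on this account is below threshold.",
    ["ledger/2026-Q1/rows/812", "ledger/2026-Q1/rows/813"],
)
print(a["sources"][0])  # ledger/2026-Q1/rows/812
```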

A red team of its own

We run a second, smaller swarm whose only job is to challenge the first. It samples answers, looks for citations that don't support the claim, flags drift when the corpus moves under the model's feet, and escalates anything that smells like a hallucination. This is the part that is hardest to sell to a finance director and the part that pays for itself the first time it catches a wrong number before it leaves the building.
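A toy version of that check, with a substring test standing in for what would really be a model-based support judgement, and an invented two-document corpus: sample the first swarm's answers, and flag any whose cited sources don't actually back the claim.

```python
import random

# Invented corpus keyed by source id.
corpus = {
    "ledger/812": "Account margin: 12%",
    "ledger/813": "Account margin: 31%",
}

def supports(source_text, claim):
    # Stand-in for a model-based support check.
    return claim in source_text

def red_team(answers, sample_size=2, seed=0):
    """Sample answers; return those whose citations don't support them."""
    random.seed(seed)
    sampled = random.sample(answers, min(sample_size, len(answers)))
    return [a for a in sampled
            if not any(supports(corpus[s], a["claim"])
                       for s in a["sources"])]

flags = red_team([
    {"claim": "12%", "sources": ["ledger/812"]},   # supported
    {"claim": "40%", "sources": ["ledger/813"]},   # citation doesn't support
])
```

Everything interesting lives in `supports`; the rest is plumbing. The sampling is what makes it affordable to run continuously rather than only when someone complains.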

A human review queue with teeth

Not every output goes through a human, but the high-stakes ones do — anything that touches a customer commitment, a financial number that will be reported, a hiring or firing decision, a legal position. The queue is short by design. If it's long, you've over-trusted the swarm, and the audit layer tells you that too.
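The routing rule and the "queue is short by design" signal fit in a dozen lines. The high-stakes categories below come straight from the list above; the queue limit is an invented placeholder a firm would tune.

```python
HIGH_STAKES = {"customer_commitment", "reported_financial",
               "hiring_firing", "legal_position"}
QUEUE_LIMIT = 10  # illustrative; a long queue means we over-trusted the swarm

queue = []

def route(output, category):
    """Send high-stakes outputs to a human; release the rest."""
    if category in HIGH_STAKES:
        queue.append(output)
        return "human_review"
    return "auto_release"

print(route("Draft renewal quote", "customer_commitment"))  # human_review
print(route("Internal status note", "status"))              # auto_release
print("over-trusted" if len(queue) > QUEUE_LIMIT else "queue healthy")
```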

What changes in the firm

By the end of week eight the fifty-person firm doesn't have "AI". It has a brain. The leadership team starts every Monday with a one-page brief assembled overnight by the swarm: what moved on accounts, what's at risk in delivery, what the cash position will be in six weeks under three scenarios, who's overloaded, who's underloaded, and which three decisions need to be made this week. The brief is wrong about something roughly every two weeks. The audit layer flags it. They get used to that.

The cultural shift is bigger than the productivity shift. People stop hoarding information, because the system rewards the ones who feed it. Meetings get shorter. The five-person bottleneck at the top of the firm widens, because the firm's reasoning is no longer running through five skulls. New hires ramp faster. Customers notice that answers come back the same day, with the working shown.

What to do this week

If you run a mid-market firm and you're considering an SME-scale AI rollout, do one thing on Monday: list every system and every senior head that contains the firm's real knowledge, and ask honestly which of them an AI you've already paid for has ever read. If the answer is none of them, you don't have organisational AI. You have a typewriter. At IMPT we are running this same four-stage pattern on ourselves as we add the AI-native booking agent to the platform — ingesting our own commercial history, structuring it around hotels and partners, letting a swarm of agents handle search, sustainability, and settlement, and putting an audit layer between the agents and the customer. If you want to compare notes on what's actually working, my address is at the foot of the page.

Letters from Clonmel

Quarterly long-form founder letters. Subscribe by email at mike@impt.io.
