Most AI conversations in healthcare default to hospitals — radiology models, triage chatbots, big EHR vendors bolting on copilots. Allied health gets ignored. That's a mistake. A two-physio practice in Clonmel, a four-chair dental surgery in Cork, a small psychology clinic in Galway — these are the businesses doing the bulk of patient contact in Ireland, and they're drowning in admin that an on-premise intelligence layer can quietly remove. I've been building exactly that, and what follows is the engineering view of why allied health is a better fit for organisational AI than the hospital systems everyone keeps writing about.
Why allied-health clinics are a cleaner AI problem than hospitals
A hospital is a federation of departments, each with its own system of record, its own consultant politics, its own integration backlog. You can spend two years scoping an AI project before a single useful query runs. A physio clinic, a dental practice, or a small multidisciplinary clinic is the opposite — usually one practice management system (Cliniko, Dentally, Software of Excellence, Pabau, or similar), one shared drive of letters and reports, one diary, and a handful of staff who all know each other's names.
That tight scope matters technically. The data surface area is small enough to actually map. The document corpus — referral letters, treatment plans, consent forms, insurance correspondence, GDPR subject access logs — fits comfortably on a single machine. You don't need a cloud data warehouse, you need a well-structured index sitting on a box in the back office. That's the architectural difference between hospital AI and clinic AI, and it changes everything about cost, latency, and risk.
What an on-premise brain actually does for a physio practice
Take a physiotherapy clinic with three practitioners. The painful tasks are predictable: writing GP referral-back letters, drafting insurance reports for VHI or Laya, summarising a patient's history before a follow-up, chasing outstanding consent forms, reconciling DNA (did-not-attend) patterns. None of these need a frontier model with 200k context. They need a local model that has read every previous letter the practice has written, every treatment plan template, and the current patient's notes.
The technical pattern is straightforward. Documents get ingested into a local vector store. Practice management exports — anonymised where possible, kept inside the clinic where not — get parsed into a structured store. A retrieval layer pulls the right context for each query. A locally hosted model (something in the 8B–70B range depending on hardware) generates the draft. The clinician edits and signs off. No patient data leaves the building. The reason this works for physio specifically is that the writing style across letters is highly consistent — once the system has seen forty of your discharge letters, the forty-first draft is genuinely close to your voice.
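The retrieval half of that pattern fits in a few lines. Everything below is a deliberately toy sketch — the bag-of-words "embedding", the document IDs, and the prompt template are stand-ins; a real deployment would use a proper local embedding model and pass the assembled prompt to the locally hosted generator:

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real clinic deployment would use
    # a local sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

class LocalIndex:
    """In-memory stand-in for the clinic's on-premise vector store."""
    def __init__(self):
        self.docs = []  # (doc_id, text, vector)

    def ingest(self, doc_id, text):
        self.docs.append((doc_id, text, embed(text)))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
        return [(doc_id, text) for doc_id, text, _ in ranked[:k]]

def build_prompt(task, context_docs):
    # Assemble what the local model sees; the model call itself
    # (e.g. a quantised 8B model on the clinic workstation) is out
    # of scope for this sketch.
    context = "\n---\n".join(text for _, text in context_docs)
    return f"Context:\n{context}\n\nTask: {task}\nDraft:"

index = LocalIndex()
index.ingest("letter-041", "Discharge letter: knee rehab complete, ROM restored.")
index.ingest("plan-112", "Treatment plan: shoulder impingement, 6 sessions.")
hits = index.retrieve("draft a discharge letter for a knee rehab patient")
prompt = build_prompt("Write the GP referral-back letter.", hits)
```

The shape is the point, not the scoring function: ingest once, retrieve per query, keep the prompt assembly explicit so the clinician-facing draft is always traceable to the documents that fed it.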
Dental: where structured data meets unstructured narrative
Dental is interesting because it has more structured data than physio — chart positions, treatment codes, radiograph metadata — but the narrative side is just as messy. Treatment plan letters, finance discussions, orthodontic progress updates, lab communication, complaint handling. The dental AI problem is really two problems welded together.
The structured side is a database query problem. "Show me every patient overdue for a recall who hasn't been contacted in the last six weeks." "List all crowns placed in the last year where the lab was X." That doesn't need a language model at all — it needs a clean semantic layer over the practice management database so a non-technical principal can ask questions in plain English and get a reliable answer. The unstructured side — drafting the letters that follow from those queries — is where the language model earns its keep.
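One way to keep that semantic layer reliable is a catalogue of vetted, parameterised queries that plain-English questions get mapped onto — the model picks a query from the catalogue, it never writes raw SQL against patient data. A minimal sketch, with an invented schema standing in for a practice-management export (no vendor's real table names):

```python
import sqlite3
from datetime import date, timedelta

# Toy schema standing in for a practice-management export; the table
# and column names are illustrative, not any vendor's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INTEGER, name TEXT,
                           last_recall TEXT, last_contact TEXT);
    INSERT INTO patients VALUES
        (1, 'A. Byrne', '2024-01-10', '2024-02-01'),
        (2, 'C. Walsh', '2025-05-01', '2025-05-20');
""")

# The "semantic layer": fixed, reviewable SQL behind named questions.
QUERIES = {
    "overdue_recall_no_recent_contact": """
        SELECT name FROM patients
        WHERE last_recall < :recall_cutoff
          AND last_contact < :contact_cutoff
    """,
}

def run(query_name, **params):
    return [row[0] for row in conn.execute(QUERIES[query_name], params)]

today = date(2025, 7, 1)  # fixed date so the example is reproducible
overdue = run(
    "overdue_recall_no_recent_contact",
    recall_cutoff=(today - timedelta(days=180)).isoformat(),
    contact_cutoff=(today - timedelta(weeks=6)).isoformat(),
)
```

The design choice worth copying is the separation: the principal's plain-English question gets translated into a catalogue entry plus parameters, so the answer is as reliable as the SQL someone reviewed, not as reliable as whatever a model generated that morning.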
The integration point most dental software vendors don't expose cleanly is the bridge between the two. A useful brain has to be able to run "find every patient who needs a perio review, draft a personalised recall letter referencing their last hygienist visit, and queue them for the front desk to send." That's three systems talking to each other, and on-premise it's a half-day of plumbing rather than a six-month vendor procurement.
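That half-day of plumbing is ordinary glue code. A hypothetical sketch of the three systems talking — the structured query, the drafting model (stubbed here as a template), and the front-desk queue are all stand-ins for whatever the practice actually runs:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    last_hygienist_visit: str
    perio_due: bool

@dataclass
class FrontDeskQueue:
    pending: list = field(default_factory=list)
    def enqueue(self, patient, letter):
        self.pending.append((patient.name, letter))

def draft_recall(patient):
    # Stand-in for the local model call: in a real deployment the model
    # personalises this beyond a simple template.
    return (f"Dear {patient.name}, following your hygienist visit on "
            f"{patient.last_hygienist_visit}, your periodontal review is now due.")

def run_recall_pipeline(patients, queue):
    """find -> draft -> queue: the whole integration in one pass."""
    for p in patients:
        if p.perio_due:               # system 1: structured query result
            letter = draft_recall(p)  # system 2: drafting model
            queue.enqueue(p, letter)  # system 3: front-desk send queue

patients = [
    Patient("A. Byrne", "2025-03-12", perio_due=True),
    Patient("C. Walsh", "2025-06-02", perio_due=False),
]
queue = FrontDeskQueue()
run_recall_pipeline(patients, queue)
```

Note that nothing is sent automatically — the pipeline ends at a queue the front desk reviews, which is where the human sign-off belongs.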
The Irish clinic compliance picture, honestly
I'll be blunt: the regulatory anxiety around clinic AI is mostly misdirected. Practitioners worry about whether they're "allowed" to use AI. The real questions under GDPR and the Health Act are narrower and more answerable.
- Where does the patient data physically sit? If it sits on a machine in your practice, on a drive you own, processed by a model running in the same room, you are in a much stronger position than if you've pasted notes into a US-hosted chatbot. The latter is a transfer that needs a lawful basis and, in most clinic setups, doesn't have one.
- Is the processing necessary and proportionate? Drafting a letter the clinician then reviews and signs is processing for the original care purpose. Training a third party's foundation model on your patient corpus is not. Keep those two clearly separated in your data processing register.
- Can you produce an audit trail? If the Data Protection Commission asks how a particular letter was generated, you need to answer. On-premise systems can log every retrieval and every generation locally; SaaS chat tools generally cannot give you that log in a form you control.
- Have you done a DPIA? For any systematic processing of health data, yes, you need one. It's not a long document for a small clinic, but it has to exist.
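The audit-trail point in particular is cheap to get right on-premise. A sketch of a hash-chained local log — illustrative of the idea, not a certified logging product, and every field name here is invented:

```python
import json
import hashlib
from datetime import datetime, timezone

class AuditLog:
    """Append-only local log: one record per retrieval and per
    generation, hash-chained so after-the-fact edits are detectable."""
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64

    def record(self, event, **details):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "details": details,
            "prev": self._prev_hash,  # chain each record to the last
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.records.append(entry)
        return entry

    def verify(self):
        # Recompute the chain; any edited or deleted record breaks it.
        prev = "0" * 64
        for e in self.records:
            body = {k: e[k] for k in ("ts", "event", "details", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("retrieval", query="knee rehab history", doc_ids=["letter-041"])
log.record("generation", model="local-8b", letter_id="draft-2025-0712")
```

If the Data Protection Commission ever asks how a letter was produced, a log like this answers in minutes: which documents were retrieved, which model generated the draft, and when — all on a disk the practice owns.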
None of this is a reason for an Irish clinic to avoid AI. It's a reason to choose architecture deliberately. The on-premise route exists precisely because the SaaS route makes these four questions hard to answer well.
The small-clinic hardware reality
People assume on-premise means a server room. For an allied-health practice it usually means one well-specified workstation, sometimes two — one for the model, one for the index and application layer. A modern machine with a single consumer-grade GPU will run a quantised model in the 8B–13B range comfortably enough for letter drafting and retrieval-augmented Q&A at clinic-scale volumes. Step up to a 24GB or 48GB card and you're running 30B–70B class models locally with response times that feel snappy to a clinician between appointments.
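The sizing arithmetic behind those claims is back-of-envelope: a quantised model's weights take roughly parameters × bits ÷ 8 bytes, plus a margin for KV cache and runtime buffers. A rough sketch — illustrative only, since real usage varies with context length and runtime:

```python
def vram_estimate_gb(params_billion, bits=4, overhead_frac=0.2):
    """Back-of-envelope VRAM: weights at the given quantisation, plus
    a rough 20% allowance for KV cache and runtime buffers."""
    weights_gb = params_billion * bits / 8  # 1e9 params * bits/8 bytes ~= GB
    return weights_gb * (1 + overhead_frac)

for size in (8, 13, 30, 70):
    print(f"{size}B @ 4-bit ~ {vram_estimate_gb(size):.1f} GB")
```

Run the numbers and the article's hardware tiers fall out: an 8B model at 4-bit sits under 5 GB (consumer card territory), a 30B-class model fits a 24GB card, and a 70B-class model fits a 48GB card with room to spare.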
The bottleneck is rarely the model. It's the ingestion pipeline — getting the existing letters, scanned forms, PDFs of insurance correspondence, and practice management exports cleaned and indexed in a way that retrieval actually works. That's where most clinic AI projects fail quietly: someone drops a folder of 4,000 PDFs into a vector store, retrieval returns noise, the clinician loses faith, and the project dies. The unglamorous work of document classification, OCR quality control, and chunking strategy is what separates a useful clinic brain from a demo.
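The unglamorous ingestion work reduces to three gates per document: classify it, reject OCR output too degraded to index, and chunk it with overlap so retrieval doesn't lose sentences at chunk boundaries. A minimal sketch — the keyword classifier and word-count quality gate are crude placeholders for real classification and OCR QC:

```python
def classify(doc_text):
    # Crude keyword routing, standing in for a proper document classifier.
    rules = {
        "referral": ("referred", "referral"),
        "consent": ("consent",),
        "insurance": ("vhi", "laya", "claim"),
    }
    text = doc_text.lower()
    for label, keywords in rules.items():
        if any(k in text for k in keywords):
            return label
    return "other"

def chunk(text, size=60, overlap=15):
    # Fixed-size word chunks with overlap, so a sentence straddling a
    # boundary appears in two chunks; real pipelines often prefer
    # section or paragraph boundaries.
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def ingest(doc_text, min_words=5):
    # Quality gate: short or empty OCR output gets flagged for human
    # review instead of silently polluting the index.
    if len(doc_text.split()) < min_words:
        return {"status": "needs_review", "chunks": []}
    return {"status": "indexed",
            "label": classify(doc_text),
            "chunks": chunk(doc_text)}

result = ingest("Patient was referred by the GP for shoulder "
                "assessment following a work injury.")
flagged = ingest("scan 01")  # degraded OCR output, kicked to review
```

The `needs_review` path is the part most projects skip — and it's exactly the mechanism that stops the 4,000-PDF folder from quietly turning the index into noise.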
For practices wanting a sense of how this is structured for healthcare specifically, I've written more about the medical-vertical setup at the Intelligence Brain for medical practices, including how the ingestion and retrieval layers are configured for clinical documents.
Where allied-health AI actually pays back
I'm wary of productivity claims with neat percentages attached, so I'll describe the pattern instead. The hours that come back to a clinic are not the headline clinical hours — they're the evening admin tail. The principal who stays back two nights a week writing reports. The practice manager who spends Friday afternoon chasing consent forms and reconciling no-shows. The associate who batches insurance letters into a Sunday slot because they can't face them mid-week.
Those hours are the realistic target. A well-configured on-premise system pulls a draft letter together in seconds with the right history retrieved, the right tone, and the right structured fields populated. The clinician reviews, edits, signs. The practice manager runs queries against the patient base in plain English instead of building spreadsheets. The compounding effect over a month is what people notice — not a single dramatic moment, but a gradual realisation that the evening admin queue isn't there any more.
This is the same pattern across physio AI, dental AI, and broader allied-health AI deployments — the work the model does best is the work practitioners least want to do, and almost all of it is text-shaped. If you want the wider architectural picture across verticals, the overview of the Intelligence Brain walks through how the same core engine is configured differently for medical, legal, accounting and other regulated settings.
Where to start this week
If you run an allied-health practice and you're trying to decide whether this is worth taking seriously, don't start with vendor demos. Start with an audit. Pick one week. Have every clinician note the admin tasks they did outside clinical contact time — letters, reports, recalls, summaries, chasing. Categorise them. Count them. That list is your specification. Most of it will be text-generation against existing patient context, and most of it will be a strong fit for an on-premise brain.

The right question to ask any AI supplier — including me — is not "what can your system do" but "show me how your system would handle these twenty specific tasks from my list, with my data, in my building." If they can't answer that concretely, the project isn't ready yet. If they can, you're closer than you think to getting your evenings back.