Most school AI conversations start in the wrong place. They start with a tool — ChatGPT, Copilot, Gemini — and work backwards toward "how do we let staff use this safely". That ordering is what creates the GDPR headaches, the inconsistent lesson plans, the Board of Management questions nobody can answer cleanly. A working deployment runs the other way round: you start with what the school already knows, lock that down on premise, and only then expose it to teachers, SNAs, and admin in a controlled way. This article walks through how that actually looks in an Irish primary or post-primary setting — the wiring, the data boundaries, the failure modes, and what a principal needs to sign off before day one.
What an intelligence brain is in a school context
An intelligence brain, in plain terms, is a private retrieval and reasoning layer that sits on the school's own infrastructure (or a dedicated tenant) and indexes the documents the school already produces: schemes of work, Croke Park records, Code of Behaviour, SEN files, child protection records, board minutes, parent communications, inspection reports, policy documents, and the day-to-day Word and PDF sediment that builds up across a school year.
It's not a chatbot bolted onto Office 365. The distinction matters. A generic chatbot answers from its training data and whatever the user pastes in. A school intelligence brain answers from the school's own corpus, with citations back to the source document, and refuses to answer when the corpus doesn't support a claim. That refusal behaviour — "I can't find that in your policies" — is more important than any clever generation. It's what stops a teacher from getting a confidently wrong answer about, say, the school's anti-bullying procedure.
Architecturally there are three layers worth naming. A document ingestion pipeline that handles OCR, chunking, and embedding. A vector store plus metadata index that knows which documents belong to which permission scope. And a reasoning layer — typically a local or dedicated LLM — that's constrained to answer only from retrieved context, with the prompt engineered to cite and to refuse.
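To make those layers concrete, here is a minimal sketch in Python of how they fit together. The `embed()` and `generate()` calls stand in for models running on the school's own hardware, and the chunking, scoring, and prompt wording are illustrative assumptions, not a specific product's implementation.

```python
# Minimal sketch of the three layers. embed() and generate() are assumed to be
# calls to models running inside the school network; everything else is
# illustrative scaffolding, not a vendor implementation.
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str   # e.g. "Code of Behaviour 2023, section 4"
    scope: str    # permission scope assigned at ingest (see below)
    vector: list  # embedding computed locally

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def split_into_chunks(text, size=800):
    # Naive fixed-size splitter; real pipelines split on structure (headings, clauses).
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(document_text, source, scope, store):
    """Layer 1: OCR'd text comes in, chunks and local embeddings go into the store."""
    for piece in split_into_chunks(document_text):
        store.append(Chunk(piece, source, scope, embed(piece)))  # embed() = local model

REFUSAL = "I can't find that in your policies."

def answer(question, store, top_k=5):
    """Layers 2 and 3: retrieve the closest chunks, then answer only from them."""
    q = embed(question)
    context = sorted(store, key=lambda c: cosine(q, c.vector), reverse=True)[:top_k]
    if not context:
        return REFUSAL
    prompt = (
        "Answer using ONLY the excerpts below, citing the source of each claim. "
        f"If the excerpts do not support an answer, reply exactly: {REFUSAL}\n\n"
        + "\n\n".join(f"[{c.source}]\n{c.text}" for c in context)
        + f"\n\nQuestion: {question}"
    )
    return generate(prompt)  # generate() = local or ring-fenced model
```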
The data boundary problem, and why on-premise matters
An Irish school holds Article 9 special category data under GDPR: SEN assessments, medical information, child protection concerns, sometimes Tusla correspondence. The Data Protection Commission has been clear that schools are controllers and bear the responsibility for any processor arrangement, including cloud AI services. If you pipe SEN files into a public LLM endpoint to "summarise the IEP", you've made a transfer decision the Board probably hasn't sanctioned.
The on-premise (or sovereign-tenant) deployment closes that loop. The documents never leave the school's network perimeter. The embeddings are computed locally. The model that does the reasoning runs locally as well, or in a clearly ring-fenced Irish-or-EU tenant with a DPA that names the school as controller and excludes training use. Inputs and outputs don't become someone else's training corpus.
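One way to make "nothing leaves the perimeter" enforceable rather than aspirational is to hard-code the allowed endpoints. The sketch below assumes a single on-premise inference server reachable at a school-internal hostname; the hostname and port are made up for illustration.

```python
# Sketch of a perimeter guard: the pipeline refuses to send any text to a host
# that is not the school's own box. The hostname below is an assumption.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"brain.school.local"}  # the single on-premise server

def assert_inside_perimeter(url: str) -> str:
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise RuntimeError(f"Refusing to send school data to external host: {host!r}")
    return url

# Every embedding or generation call goes through the guard first, so a
# misconfigured endpoint fails loudly instead of quietly leaking documents.
EMBEDDING_URL = assert_inside_perimeter("http://brain.school.local:8000/v1/embeddings")
```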
The practical implication for a small school: you need a box. Not a big one — a single workstation-class server with a modern GPU will serve a primary school of a few hundred pupils without strain. A post-primary school with full staff usage wants something a bit beefier, but we're still talking one piece of hardware in a comms cupboard, not a rack. The hardware sits behind the school's existing firewall and authenticates against whatever identity provider the school already runs (typically Microsoft 365 / Entra ID, sometimes Google Workspace).
Permission scopes: who can ask what
This is the part most generic deployments get wrong. A teacher should not be able to retrieve another teacher's appraisal notes by phrasing a clever question. The principal should be able to retrieve board minutes; the SET teacher should retrieve SEN files for pupils in their caseload; a subject teacher should retrieve schemes of work for their subject and year group, plus general school policy.
The way to enforce this is not at the prompt level. Prompt-based access control — "you are an assistant that only shows X to Y" — is theatre. It collapses under adversarial questioning. Permission has to be enforced at retrieval: the vector store filters candidates by the user's identity claims before any chunks reach the model. If the user doesn't have access to the document, the document is never considered, never embedded into the prompt, never available to leak.
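In code, that ordering is the whole point: the filter runs before similarity scoring, not after generation. A sketch, reusing the `Chunk`, `embed`, and `cosine` helpers from the earlier example, with scope labels and identity claims that are assumptions for illustration:

```python
# Retrieval-level enforcement: chunks outside the user's scopes are never
# scored, never put into the prompt, never available to leak.
def retrieve_for_user(question, store, user_scopes, top_k=5):
    visible = [c for c in store if c.scope in user_scopes]  # filter FIRST
    q = embed(question)
    return sorted(visible, key=lambda c: cosine(q, c.vector), reverse=True)[:top_k]

# The SET teacher's identity claims include the SEN scope; a subject teacher's
# do not, so SEN chunks stay invisible no matter how the question is phrased.
set_teacher_scopes = {"all-staff", "sen-team"}
subject_teacher_scopes = {"all-staff"}
```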
In a school deployment that means mapping every document, at ingest, to a scope: all-staff, SLT-only, SEN-team, year-head-X, BoM, and so on. The mapping is usually inferable from the SharePoint or shared-drive folder the document came from, which is why the cleanest deployments mirror the school's existing folder permission structure rather than inventing a new taxonomy.
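A sketch of that inference, assuming the index mirrors the top-level folders on the shared drive; the folder names and scope labels are placeholders, not a prescribed taxonomy:

```python
# Map a document's folder to its permission scope at ingest time. A document
# whose folder isn't recognised fails closed to the most restrictive scope.
from pathlib import PurePosixPath

FOLDER_SCOPES = {
    "Policies":            "all-staff",
    "Schemes of Work":     "all-staff",
    "SEN":                 "sen-team",
    "Board of Management": "bom",
    "SLT":                 "slt-only",
}

def scope_for(path: str) -> str:
    parts = PurePosixPath(path).parts
    return FOLDER_SCOPES.get(parts[0] if parts else "", "slt-only")

scope_for("SEN/2024/assessment-report.pdf")  # -> "sen-team"
scope_for("Misc/unfiled-document.docx")      # -> "slt-only" (fails closed)
```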
What teachers actually use it for
The use cases that survive contact with reality are duller than the demo-day fantasies. The high-value ones in an Irish school setting tend to be:
- Policy lookup under pressure. A teacher dealing with a behaviour incident at break can ask "what does our Code of Behaviour say about a third stage referral" and get the actual clause with a citation, not a half-remembered version.
- Scheme-of-work alignment. "Where in our second-year English scheme do we cover unseen poetry, and what resources have we used before?" The answer comes from the school's own documents, not a generic curriculum site.
- Parent communication drafts. Drafting in the school's existing tone, referencing the school's actual policies, with the teacher editing before send. The model never sends; it drafts into a textbox.
- SEN administrative drag. Pulling together the documentation trail for an NCSE application, summarising assessment reports a parent has just submitted, cross-referencing prior IEPs. This is where teachers get hours back.
- Inspection preparation. "What did the last WSE recommend on assessment for learning, and what have we done since?" The brain finds the report, the action plan, and the staff meeting minutes that show follow-through.
- New-staff onboarding. A new teacher in September can ask the school's own documents questions instead of cornering the deputy principal twenty times a day.
Notice what's not on that list: marking, generating pupil-facing content, anything that sits between a child and a model output. Those use cases are real but they belong to a different conversation about pedagogy and pupil data, and most schools are not ready to govern them yet.
The rollout sequence that actually works
I've seen deployments fail because they tried to go school-wide on day one. The pattern that works is narrower and slower.
Phase one: SLT and admin only, policy corpus only. Index the policy folder, the BoM minutes, the staff handbook, the inspection reports. Give access to the principal, deputy, and school secretary. Two or three weeks of real use surfaces every retrieval bug, every mis-scoped document, every chunking problem. You fix those before anyone else sees the system.
Phase two: a small teacher pilot. Four or five teachers across different subjects and year groups, with their schemes of work and shared subject resources added to the index. Weekly feedback session with the deputy principal. The goal isn't enthusiasm; it's to find the questions the system answers badly so you can tune retrieval and add documents that fill the gaps.
Phase three: staff-wide for non-pupil-data tasks. Whole-staff access, but with the SEN and child protection corpora still walled off to their respective teams. Most teachers need policy, scheme, and admin support — they don't need pupil-record access through the brain.
Phase four: SEN team and pastoral. The most sensitive corpus, deployed last, with the tightest permission scopes and the clearest audit logging. Every query and every retrieval is logged with user identity and timestamp. The DPO can run a report against that log at any time.
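A sketch of what that log can look like, with the field names and the storage choice (a single SQLite file on the same box) as assumptions; the substance is that user identity, timestamp, the question, and the retrieved sources are captured for every query.

```python
# Append-only audit trail the DPO can report against: one row per query,
# recording who asked, when, what they asked, and which documents were retrieved.
import sqlite3
from datetime import datetime, timezone

def open_audit_log(path="audit_log.db"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS audit_log ("
               "ts TEXT, user_id TEXT, scopes TEXT, question TEXT, sources TEXT)")
    return db

def log_query(db, user_id, scopes, question, retrieved_sources):
    db.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?, ?)",
               (datetime.now(timezone.utc).isoformat(), user_id,
                ",".join(sorted(scopes)), question, "; ".join(retrieved_sources)))
    db.commit()
```

The DPO's periodic review is then a single query against that table, for example every retrieval that touched the sen-team scope in a given month.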
If you want a longer view of how this maps to the broader product, the education-vertical page goes into the data-handling specifics, and the overview of the intelligence brain explains the underlying architecture across sectors.
The governance the Board needs to see
A Board of Management approving a school AI deployment should be looking at five documents, not a sales deck. A Data Protection Impact Assessment that names the lawful basis for each processing purpose. An entry in the school's Record of Processing Activities. A processor agreement (if any third party is involved) that excludes training use and names the EU jurisdiction. An acceptable use policy for staff that's clear about what goes in, what doesn't, and what the human-in-the-loop expectation is. And an audit log policy specifying retention and review.
None of those are exotic. Schools already produce equivalent documentation for the MIS, for the cloud email provider, for the homework platform. The intelligence brain is a new processing activity, not a new species of governance.
Where to start this week
If you're a principal or deputy reading this and wondering how to move, do one thing this week: list the document folders your school maintains and mark each one with the smallest group of staff who legitimately need access. That single exercise — done honestly, on paper — tells you more about whether your school is ready for an intelligence brain than any vendor demo. If the folder permissions are a mess, fix that before you index anything. If they're clean, you already have the scaffold a deployment will sit on, and the next step is a scoped pilot with your SLT and your policy corpus, nothing wider, for the first month.