QAOA and grid optimisation — the honest case for Irish renewables
Every few months somebody publishes a press release saying quantum will solve grid optimisation. Then a control-room engineer at EirGrid points out that unit commitment for the all-island system already runs in MILP solvers in production, with constraints that took two decades to refine, and asks the obvious question: what exactly is the quantum bit doing? That is the right question. This piece is the honest version of the answer — what QAOA can and cannot do for renewable dispatch on the Irish grid, where superconducting hardware genuinely helps, and where it is still cheaper to run CPLEX overnight.
What QAOA actually is, in one fair paragraph
The Quantum Approximate Optimisation Algorithm, introduced by Farhi, Goldstone and Gutmann in 2014, is a variational hybrid algorithm. You take a combinatorial problem expressed as a QUBO or an Ising Hamiltonian, encode the cost function as a problem operator H_C, choose a mixer H_M (usually transverse-field), and apply p alternating layers of exp(-iγH_C) and exp(-iβH_M) to a uniform superposition. A classical optimiser tunes the 2p angles to minimise the expectation value. Measure, repeat, sample bitstrings, take the best.
That is the whole algorithm. It is not magic. At p=1 it has provable performance bounds on MaxCut on 3-regular graphs and not much else. At higher p it approaches the adiabatic limit, which means in the infinite-depth limit it is exactly as good as quantum annealing — which is to say, sometimes useful, sometimes not. The interesting regime is finite p on hardware where the circuit fidelity has not collapsed, and where the problem structure happens to map well to the qubit topology.
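For readers who want to see the moving parts, here is a minimal sketch of that loop against a plain NumPy statevector, so no quantum SDK is required. The five-node ring, p=2 and the COBYLA optimiser are arbitrary illustrative choices, and a dense statevector like this only scales to roughly 20 variables on a laptop.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Toy MaxCut instance: a 5-node ring. Each edge (i, j) contributes
# (1 - z_i * z_j) / 2 to the cut value, with z in {+1, -1}.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
p = 2  # number of QAOA layers

# Precompute the diagonal of the cost Hamiltonian over all 2^n bitstrings.
bits = np.array(list(product([0, 1], repeat=n)))           # shape (2^n, n)
spins = 1 - 2 * bits                                        # bit 0 -> +1, bit 1 -> -1
cut_values = sum((1 - spins[:, i] * spins[:, j]) / 2 for i, j in edges)

def qaoa_state(angles):
    """Statevector after p alternating layers of exp(-i*gamma*H_C) and exp(-i*beta*H_M)."""
    gammas, betas = angles[:p], angles[p:]
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)   # |+>^n
    for gamma, beta in zip(gammas, betas):
        # Cost layer: H_C is diagonal, so this is just a phase per bitstring.
        state = state * np.exp(-1j * gamma * cut_values)
        # Mixer layer: RX(2*beta) applied to every qubit.
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        psi = state.reshape((2,) * n)
        for q in range(n):
            psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
        state = psi.reshape(2 ** n)
    return state

def neg_expected_cut(angles):
    probs = np.abs(qaoa_state(angles)) ** 2
    return -np.dot(probs, cut_values)         # classical optimiser minimises this

result = minimize(neg_expected_cut, x0=np.full(2 * p, 0.1), method="COBYLA")
probs = np.abs(qaoa_state(result.x)) ** 2
best = int(np.argmax(probs))
print(f"expected cut {-result.fun:.3f}, most sampled bitstring {bits[best]}")
```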
Why grid optimisation is the natural target
Unit commitment, economic dispatch, transmission switching, and reserve allocation are all combinatorial-with-continuous-relaxation problems. The binary variables — is this generator on, is this line in service, is this storage asset charging — are the hard part. The linear power flow is the easy part. MILP handles this well today, but the branch-and-bound tree blows up as you add more wind farms, more demand-side response aggregators, and more battery sites with state-of-charge dynamics across a 24- or 48-hour horizon.
The Irish system is a good stress test. We have the highest non-synchronous penetration limit in Europe, a constrained interconnector picture, and a wind fleet whose forecast error is the dominant source of intraday redispatch cost. The dispatch problem is not "find the cheapest generator stack" — it is "find the cheapest stack that respects rate-of-change-of-frequency limits, voltage support requirements, must-run constraints for system security, and a probability distribution over wind output for the next four hours". That last clause is where classical solvers start to grind, because robust and stochastic formulations explode the variable count.
Where QAOA might actually help — and where it won't
Be specific. QAOA is plausibly useful for the following sub-problems, in roughly increasing order of difficulty:
- Transmission switching subproblems. Selecting which lines to take out of service to relieve congestion is a pure binary problem with clean constraint structure. The graph maps reasonably to a heavy-hex topology if you are willing to spend SWAPs.
- Battery dispatch over short horizons with discretised charge/discharge states. Small enough to fit, structured enough to benefit from the QAOA ansatz (a toy formulation is sketched just after this list).
- Reserve procurement across heterogeneous providers when the bid stack has combinatorial coupling (block bids, minimum activation volumes).
- Stochastic unit commitment with scenario reduction — and this is the one where, if quantum advantage ever materialises for grid problems, it will likely show up first.
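To make the battery item concrete, here is a deliberately tiny formulation sketch, not a production model: invented intraday prices over a six-period horizon, two binaries per period for charge and discharge, a penalty against doing both at once, and a penalty that forces the state of charge to end where it started. The within-horizon state-of-charge trajectory and per-period energy limits are left out; encoding those is exactly where the variable count and the penalty bookkeeping start to grow.

```python
import numpy as np
from itertools import product

# Invented intraday prices (EUR/MWh) over a 6-period horizon for a toy
# 1 MW battery. c[t] = 1 means charge in period t, d[t] = 1 means discharge.
prices = np.array([60.0, 45.0, 30.0, 55.0, 80.0, 70.0])
T = len(prices)
N = 2 * T                 # QUBO variable order: [c_0..c_5, d_0..d_5]
A = 200.0                 # penalty weight, sized above any achievable profit
Q = np.zeros((N, N))

for t in range(T):
    c, d = t, T + t
    Q[c, c] += prices[t]  # charging buys energy at the period price
    Q[d, d] -= prices[t]  # discharging sells energy at the period price
    Q[c, d] += A          # never charge and discharge in the same period

# End-of-horizon state-of-charge neutrality: sum_t c_t - sum_t d_t = 0,
# encoded as A * (sum c - sum d)^2 expanded into pairwise terms.
for i in range(T):
    for j in range(T):
        Q[i, j] += A              # c_i c_j terms
        Q[T + i, T + j] += A      # d_i d_j terms
        Q[i, T + j] -= 2 * A      # cross terms c_i d_j

def qubo_energy(x):
    return x @ Q @ x

# 12 binaries: brute force is the honest baseline at this size.
best = min(product([0, 1], repeat=N), key=lambda x: qubo_energy(np.array(x)))
best = np.array(best)
print("charge in periods:   ", best[:T])
print("discharge in periods:", best[T:])
```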
Where QAOA is not the right tool: anything dominated by continuous variables, anything where the LP relaxation is already tight, and anything where the constraint structure is so dense that the QUBO penalty terms swamp the cost function. A lot of "quantum for power systems" papers fall into the third trap. They take a textbook OPF, jam it into a QUBO with quadratic penalties on every Kirchhoff constraint, and report that the result is worse than the classical baseline. It is worse because the encoding is bad, not because the algorithm is bad.
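The trap is easy to reproduce on three binary variables. The sketch below folds a single "exactly one of three units runs" constraint into an invented cost vector with penalty weight P: too small and the minimiser simply violates the constraint, too large and the gap between feasible solutions becomes a sliver of the total energy range a noisy sampler has to resolve.

```python
import numpy as np
from itertools import product

# One toy constraint from a dispatch QUBO: exactly one of three units runs,
# x1 + x2 + x3 = 1, folded into the objective as P * (x1 + x2 + x3 - 1)^2.
costs = np.array([5.0, 6.0, 9.0])     # invented per-unit running costs

def energy(x, P):
    x = np.array(x)
    return costs @ x + P * (x.sum() - 1) ** 2

states = list(product([0, 1], repeat=3))
for P in (2.0, 20.0, 2000.0):
    minimiser = min(states, key=lambda x: energy(x, P))
    feasible = sorted(energy(x, P) for x in states if sum(x) == 1)
    full_range = max(energy(x, P) for x in states) - feasible[0]
    print(f"P={P:7.1f}  minimiser={minimiser}  "
          f"feasible gap / energy range = {(feasible[1] - feasible[0]) / full_range:.4f}")
```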
The hardware reality on superconducting transmons
Here is what happens when you run QAOA on real superconducting hardware. You compile the problem Hamiltonian into native two-qubit gates — usually CZ or ECR depending on the platform. Each ZZ term in H_C between non-adjacent qubits requires a SWAP chain. On a heavy-hex lattice, which is what Ireland Quantum 100 is built around, qubit connectivity is degree-2 or degree-3, so any non-local interaction in your problem graph costs you depth.
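A rough way to price that depth, assuming Qiskit is installed: take a small heavy-hex coupling map, scatter a hypothetical problem graph across it, and charge each non-adjacent ZZ term (distance minus one) SWAPs at three two-qubit gates each. Real routing passes such as SABRE share SWAPs between terms and do better, so read this as a crude upper bound rather than a transpiler result.

```python
from itertools import combinations
import numpy as np
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(3)   # a small heavy-hex patch
n = cmap.size()

# Hypothetical problem graph: 30 random ZZ couplings among the n variables,
# standing in for line-switching or reserve-coupling terms.
rng = np.random.default_rng(7)
all_pairs = list(combinations(range(n), 2))
problem_edges = rng.choice(all_pairs, size=30, replace=False)

two_qubit_gates = 0
for i, j in problem_edges:
    dist = cmap.distance(int(i), int(j))
    two_qubit_gates += 3 * max(dist - 1, 0)   # SWAP chain to make the pair adjacent
    two_qubit_gates += 2                      # the ZZ interaction itself (two CX)

print(f"~{two_qubit_gates} two-qubit gates for a single cost layer (p=1)")
```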
Depth is the enemy. Single-qubit gate fidelities on modern transmons are above 99.9%. Two-qubit gate fidelities are typically 99.0% to 99.7% on the better processors. Your circuit fidelity is roughly the product of all gate fidelities, so a circuit with 200 two-qubit gates at 99.5% fidelity is at 0.995^200 ≈ 0.37 — and that is before you add measurement error and decoherence during idle. The dilution refrigerator gets you to sub-15 mK, the qubits have T1 and T2 in the hundreds of microseconds on a good day, and your algorithm has to finish before the coherence runs out.
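That arithmetic is worth keeping as a habit. A back-of-envelope budget, assuming independent multiplicative errors, the fidelities quoted above, and roughly 1.5% readout error per measured qubit (idling decoherence still ignored, so this remains optimistic):

```python
# Crude circuit-fidelity budget: product of gate and readout fidelities.
f_2q, f_1q, f_read = 0.995, 0.999, 0.985

def est_fidelity(n_2q, n_1q, n_measured):
    return (f_2q ** n_2q) * (f_1q ** n_1q) * (f_read ** n_measured)

for n_2q in (50, 200, 500):
    # Assume ~4 single-qubit gates per two-qubit gate and 30 measured qubits.
    print(f"{n_2q:4d} two-qubit gates -> circuit fidelity ~{est_fidelity(n_2q, 4 * n_2q, 30):.3f}")
```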
Practical implication: for a 100-qubit machine running QAOA on a grid problem, the useful problem size is not 100 binary variables. It is whatever fits inside a circuit depth that preserves enough signal to distinguish the optimum from noise. Today, on real hardware, that is in the range of 20 to 40 well-structured variables at p=2 or p=3. Error mitigation — zero-noise extrapolation, probabilistic error cancellation, dynamical decoupling — pushes that further. Surface-code error correction will eventually push it much further, but we are years away from that being practical for 100 logical qubits.
What changes with a sovereign machine
A lot of the quantum-for-grid literature assumes cloud access to a shared device. That is fine for proof-of-concept, but operational dispatch has constraints that cloud quantum cannot meet: you need deterministic latency, data residency for grid telemetry, and the ability to schedule jobs against real-time market gates. None of that works if you are queueing behind a thousand other users on a transatlantic API.
This is the operational case for putting hardware on the island. The Ireland Quantum 100 programme is a 100-physical-qubit superconducting transmon system being commissioned in Co. Tipperary over the next twelve months, with the climate-workload cohort getting access first. Grid optimisation is in that cohort because the relevant national institutions — system operators, the regulator, university energy groups — are all within an hour's drive and can run real workloads against real telemetry under proper data agreements. That matters more than peak qubit count.
For teams that want to start now, the work begins on the algorithmic side, not the hardware side. Most of the value over the next 18 months will come from formulating grid problems properly — choosing the right encoding, exploiting symmetry, using warm-starts from classical relaxations, and benchmarking honestly against the best available classical baseline. There is more on the chemistry-side workload mix in the broader climate workloads programme, and the optimisation patterns transfer across.
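The warm-start point is the cheapest to prototype. In the spirit of warm-start QAOA (Egger, Mareček and Woerner, 2021), you replace the uniform superposition with a product state biased towards the classical relaxation's solution. The sketch below builds only that initial state, leaves out the matching change to the mixer, and uses invented relaxation values.

```python
import numpy as np

# Build a warm-started initial state: qubit i is rotated so that measuring it
# gives 1 with probability c_relaxed[i], the value the LP/QP relaxation of the
# dispatch problem assigned to that binary variable. Clipping keeps amplitude
# on both branches so the circuit can still move away from the relaxation.
def warm_start_state(c_relaxed, eps=0.1):
    c = np.clip(np.asarray(c_relaxed, dtype=float), eps, 1 - eps)
    thetas = 2 * np.arcsin(np.sqrt(c))                  # RY angle per qubit
    state = np.array([1.0])
    for th in thetas:
        qubit = np.array([np.cos(th / 2), np.sin(th / 2)])   # RY(th)|0>
        state = np.kron(state, qubit)
    return state                                         # length-2^n statevector

# Invented example: the relaxation says units 1 and 3 are probably on, unit 2 off.
psi0 = warm_start_state([0.9, 0.15, 0.8])
print(np.round(np.abs(psi0) ** 2, 3))                    # sampling probabilities
```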
Honest benchmarks and what counts as a win
Anyone claiming quantum advantage on a grid problem today is either redefining "advantage" or comparing against a classical solver they have nobbled. The right benchmarks are: Gurobi or CPLEX with proper warm-starts and tuned cuts, simulated annealing with parallel tempering, and for stochastic formulations, progressive hedging or L-shaped decomposition. If your QAOA result is within an order of magnitude of those, that is genuinely interesting at this stage of hardware. If it beats them on a structured subproblem, that is a paper. If it beats them on a real EirGrid-scale unit commitment, please show your work.
What counts as a win in the next two years is more modest and more useful: demonstrate that QAOA on a 100-qubit superconducting machine can solve a meaningful sub-problem of Irish grid dispatch — say, intraday battery scheduling across a regional cluster, or contingency-constrained transmission switching for a defined zone — at a quality comparable to classical, with a path to improvement as hardware scales. That is a real outcome, defensible, and the foundation for everything that comes after.
Where to start this week
If you work in a system operator, a university energy group, or a renewable developer's optimisation team and you want to take this seriously: pick one well-defined dispatch sub-problem, write the QUBO formulation by hand, run it on a classical Ising solver first (D-Wave's Ocean tools or a CPU-based simulated annealer), and only then port to Qiskit or PennyLane for QAOA simulation up to 25 qubits on a laptop. That exercise — done properly, with honest benchmarking against your existing MILP — will tell you within a fortnight whether quantum is interesting for your problem or whether you are better off sharpening the classical solver. Either answer is useful. The work is the same either way, and the teams that do it now will be the ones ready to use the hardware when it lights up.
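A dependency-free starting point for the classical-baseline step, assuming nothing beyond NumPy: a plain simulated annealer that minimises x^T Q x over binary x. It is deliberately naive; the point is to have something honest to beat before opening a quantum SDK, and a tuned library or your existing MILP should replace it quickly.

```python
import numpy as np

def simulated_annealing(Q, n_sweeps=2000, t_start=5.0, t_end=0.01, seed=0):
    """Minimise x^T Q x over binary x with single-bit-flip simulated annealing."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    Qs = Q + Q.T                       # symmetrised copy used for flip deltas
    x = rng.integers(0, 2, size=n)
    energy = float(x @ Q @ x)
    best_x, best_e = x.copy(), energy
    for t in np.geomspace(t_start, t_end, n_sweeps):
        for i in rng.permutation(n):
            s = 1 - 2 * x[i]           # +1 when flipping 0 -> 1, -1 when 1 -> 0
            delta = s * (Qs[i] @ x - Qs[i, i] * x[i] + Q[i, i])
            if delta <= 0 or rng.random() < np.exp(-delta / t):
                x[i] ^= 1
                energy += delta
                if energy < best_e:
                    best_x, best_e = x.copy(), energy
    return best_x, best_e

# Smoke test on a random sparse QUBO; swap in your hand-written dispatch QUBO.
rng = np.random.default_rng(1)
Q = np.triu(rng.normal(scale=2.0, size=(25, 25)) * (rng.random((25, 25)) < 0.2))
x_best, e_best = simulated_annealing(Q)
print(f"best energy found: {e_best:.2f}")
```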