The difference between 100 physical qubits and 100 logical qubits
If you read the headlines, you would think a 100-qubit machine and a 1,000-qubit machine differ only by a factor of ten. They do not. The honest unit of quantum capability is the logical qubit, and right now the global field has, depending on how generous you are with the definition, somewhere between zero and a small handful of them. Ireland Quantum 100 is a 100-physical-qubit superconducting transmon machine. That distinction matters more than almost anything else I can tell you about the system, so this article is a plain explanation of what physical and logical qubits are, why the gap between them is the central problem in the field, and what an honest 100-physical-qubit machine is actually good for on day one.
What a physical qubit actually is
A superconducting transmon qubit is a small piece of patterned aluminium on a silicon substrate, shaped into a non-linear LC oscillator with a Josephson junction acting as the non-linear inductor. Cooled below roughly 15 millikelvin in a dilution refrigerator, the lowest two energy levels of that oscillator become well-separated enough from the rest that you can address them with shaped microwave pulses and treat them as a two-level system: |0⟩ and |1⟩.
That is the qubit. Everything else — the readout resonators, the Purcell filters, the flux lines, the coaxial cabling, the attenuators at each temperature stage, the travelling-wave parametric amplifiers — exists to let you prepare, manipulate and measure that two-level system without destroying its coherence in the process.
A physical qubit is fragile in ways that take some getting used to. Coherence times T1 and T2 for state-of-the-art transmons sit around a couple of hundred microseconds on a good day. Single-qubit gate errors are in the low 10⁻⁴ range. Two-qubit gate errors — the ones that matter for entanglement, which is where quantum advantage lives — sit closer to 10⁻³ on the best published platforms, and worse on most. Readout fidelity is its own bucket of problems. Crosstalk, leakage to the |2⟩ state, TLS defects in the oxide layer, cosmic-ray-induced correlated errors — every one of those is a real engineering problem we have to solve at the chip, package, control-electronics and software-calibration layers.
So when somebody tells you their machine has 100 physical qubits, the correct first question is: at what gate fidelity, with what connectivity, and over what coherence window?
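Those numbers translate directly into a circuit budget. Here is a back-of-envelope sketch — the figures below are illustrative round numbers of the kind quoted above, not measured specifications for Ireland Quantum 100 or any other machine:

```python
# Illustrative round numbers only (not specs for any particular machine).
t2_us = 150          # dephasing time T2, microseconds
gate_ns_2q = 250     # two-qubit gate duration, nanoseconds
p1, p2 = 2e-4, 3e-3  # one- and two-qubit gate error rates

# Rough depth limit: how many two-qubit gate layers fit inside T2.
max_depth = int(t2_us * 1_000 / gate_ns_2q)

# Rough success probability of a circuit with n1 one-qubit and n2 two-qubit
# gates, ignoring crosstalk, leakage and readout error.
def circuit_fidelity(n1: int, n2: int) -> float:
    return (1 - p1) ** n1 * (1 - p2) ** n2

print(max_depth)                              # 600 gate layers inside T2
print(round(circuit_fidelity(200, 100), 3))   # roughly 0.71
```

Even with fairly optimistic error rates, a hundred two-qubit gates already costs you close to 30% of your signal, which is why every NISQ algorithm worth running keeps its circuits shallow.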
What a logical qubit actually is
A logical qubit is a single quantum bit's worth of information protected by quantum error correction spread across many physical qubits. The favoured scheme on superconducting hardware is the surface code, a topological code that lives on a 2D grid of qubits with nearest-neighbour interactions — which is exactly why heavy-hex and square-lattice topologies dominate the industry.
The surface code works by repeatedly measuring stabiliser operators — products of Pauli X or Z on small groups of neighbouring qubits — without measuring the encoded information itself. The pattern of stabiliser outcomes ("syndromes") tells a classical decoder where errors have likely occurred, and the decoder applies corrections in software. The code distance d is the minimum number of physical errors needed to cause a logical error. A distance-d surface code uses roughly 2d² − 1 physical qubits per logical qubit, and the logical error rate scales as (p/p_th)^(d/2), where p is the physical error rate and p_th is the code's threshold, around 1% for the surface code under standard noise models.
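The measure-decode-correct loop is easiest to see on the simplest code there is, the classical repetition code — a 1D toy rather than the surface code, but the syndrome logic has the same shape. The decoder below is a simple maximum-likelihood rule for independent flips, standing in for the matching decoders used on real lattices:

```python
import random

def syndrome(errors):
    # Parity checks between neighbouring qubits: a defect marks a boundary
    # between a flipped and an unflipped region, without revealing either.
    return [errors[i] ^ errors[i + 1] for i in range(len(errors) - 1)]

def decode(s):
    # Reconstruct an error pattern consistent with the syndrome by assuming
    # qubit 0 is clean, then take the lighter of it and its complement
    # (maximum likelihood for independent flips).
    guess = [0]
    for bit in s:
        guess.append(guess[-1] ^ bit)
    flipped = [1 - g for g in guess]
    return guess if sum(guess) <= sum(flipped) else flipped

def logical_error_rate(d, p, trials=20_000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        errors = [1 if rng.random() < p else 0 for _ in range(d)]
        # Decoding succeeds iff the decoded pattern matches the actual one;
        # otherwise the residual error is a full logical flip.
        if decode(syndrome(errors)) != errors:
            fails += 1
    return fails / trials

for d in (3, 5, 7):
    print(d, logical_error_rate(d, p=0.05))
```

Below threshold, each step up in distance suppresses the logical error rate by roughly another order of magnitude; above it, the same machinery makes things worse. The surface code plays this game in 2D, with the decoder pairing defects on a lattice instead of a line.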
Work that out. To get a logical error rate of 10⁻¹⁰ — the kind of number you need to run Shor's algorithm on a useful problem, or to do millions of clean gate operations on a chemistry simulation — with physical errors around 10⁻³, you need code distance somewhere in the range of 20 to 30. That is roughly 1,000 to 2,000 physical qubits per logical qubit, plus magic-state distillation factories on top, plus the classical decoder running in real time at microsecond latency.
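A quick sketch of that arithmetic, using the scaling formula above — treating it as exact and ignoring the constant prefactor, routing and distillation overhead, all of which push the real numbers up:

```python
def required_distance(p, p_th, target):
    # Smallest odd code distance d with (p / p_th) ** (d / 2) <= target.
    d = 3
    while (p / p_th) ** (d / 2) > target:
        d += 2  # surface-code distances are conventionally odd
    return d

def physical_per_logical(d):
    # One distance-d surface-code patch: d*d data qubits plus d*d - 1 ancillas.
    return 2 * d * d - 1

d = required_distance(p=1e-3, p_th=1e-2, target=1e-10)
print(d, physical_per_logical(d))  # 21 881
```

Realistic estimates land higher — closer to the 1,000-to-2,000 range — once the prefactor in the scaling law, an imperfect threshold and the control plumbing around each patch are counted in.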
That is the gap. One hundred logical qubits is roughly what you would need to fully simulate FeMoco for nitrogen fixation; factorising RSA-2048 takes thousands more. Either way, you are talking about a machine of perhaps 100,000 to several million physical qubits running a real-time error-corrected stack. Nobody on the planet has that. Nobody is close.
Why 100 physical qubits is still useful
If the logical-qubit threshold is years away, why build a 100-physical-qubit machine at all? Because the most interesting science between now and full fault tolerance happens in the NISQ-plus regime — Noisy Intermediate-Scale Quantum hardware running variational and hybrid algorithms that tolerate noise by design.
The honest list of what a well-calibrated 100-physical-qubit machine can do today:
- Variational Quantum Eigensolver (VQE) on small-to-medium molecular Hamiltonians, with active-space reductions and noise-aware ansätze. Useful for catalyst discovery in carbon-capture chemistry, where you do not need full chemical accuracy on every electron — you need to rank candidate molecules and find ones worth synthesising in a wet lab.
- Quantum Approximate Optimization Algorithm (QAOA) on combinatorial problems with structure that maps cleanly to the chip topology. Grid-balancing and renewable-dispatch problems are good candidates.
- Quantum machine learning kernels for small datasets where the feature map has provable expressivity advantages, particularly in protein-folding sub-problems relevant to climate-resilient agriculture.
- Dynamics simulation of strongly correlated electron systems and lattice models, using Trotterised circuits that are short enough to fit inside coherence.
- Benchmarking and code research — running small surface-code patches at distance 3 and 5, characterising the noise, building the decoder pipeline that will scale to the next machine.
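To make the first item on that list concrete, here is a minimal VQE loop for the textbook two-qubit H₂ Hamiltonian — written against plain NumPy rather than real hardware so the moving parts are visible. The coefficients are the standard STO-3G, parity-mapped ones that circulate in the Qiskit tutorials; the ansatz is a generic hardware-efficient sketch, not anything tuned for a particular chip:

```python
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Two-qubit H2 Hamiltonian (STO-3G basis, parity mapping) — the standard
# textbook coefficients, in hartree.
H = (-1.052373245772859 * np.kron(I2, I2)
     + 0.39793742484318045 * np.kron(I2, Z)
     - 0.39793742484318045 * np.kron(Z, I2)
     - 0.01128010425623538 * np.kron(Z, Z)
     + 0.18093119978423156 * np.kron(X, X))

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def energy(params):
    # Hardware-efficient ansatz: Ry layer, CNOT, Ry layer, applied to |00>.
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return float(psi @ H @ psi)

# A few random restarts, since variational landscapes have local minima.
rng = np.random.default_rng(7)
best = min(minimize(energy, rng.uniform(0, 2 * np.pi, 4), method="COBYLA").fun
           for _ in range(5))
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE:   {best:.5f}")
print(f"exact: {exact:.5f}")  # about -1.85728 hartree
```

On real hardware the statevector contraction is replaced by sampled expectation values over finite shots, and the optimiser has to cope with shot noise and drift — which is exactly why noise-aware ansätze and error mitigation dominate the NISQ literature.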
None of that is "useful quantum advantage" in the strong sense — beating the best classical method on a commercially relevant problem. It is the work that gets you to that point. Anybody who tells you their NISQ machine is already delivering classical-beating advantage on real problems is, with very narrow exceptions, selling you something.
The honest roadmap from physical to logical
The path from a 100-physical-qubit machine to a 100-logical-qubit machine has four engineering tracks running in parallel, and you have to make progress on all of them or none of them matter:
1. Physical-qubit fidelity
Every factor-of-two improvement in two-qubit gate error reduces the physical-qubit overhead per logical qubit by a meaningful chunk. Better materials, better junction fabrication, better pulse-shaping with optimal control, better calibration loops.
2. Scale and yield
You need to fabricate, package and wire up tens of thousands of qubits with reasonable uptime. That is a packaging and cryogenic-engineering problem as much as a quantum one. 3D integration, through-silicon vias, multiplexed readout, and modular interconnect between fridges are all active areas.
3. Real-time decoding
Surface-code syndromes have to be decoded faster than they arrive, or the backlog grows forever. That means FPGA or ASIC decoders sitting next to the fridge, with the algorithm — minimum-weight perfect matching, union-find, or neural-network decoders — implemented in hardware.
4. The software stack
OpenQASM 3, Qiskit, PennyLane, Cirq — the SDK ecosystem already exists for circuit-level work. The fault-tolerant compilation stack — lattice surgery, magic-state distillation scheduling, logical-circuit synthesis — is much less mature, and that is where a lot of the next decade's software work lives.
What this means for Ireland Quantum 100
The machine we are delivering in Co. Tipperary is honest about what it is: 100 physical transmons in a heavy-hex topology, sub-15 mK in a dilution fridge, accessible via a standard Qiskit and OpenQASM 3 interface, prioritised for climate-science workloads. It is not a fault-tolerant machine. It will not factorise anything interesting. It will not break cryptography.
What it will do is give Irish and European climate researchers — the carbon-capture chemists, the photovoltaic-materials groups, the battery-electrolyte teams, the grid-optimisation groups — sovereign access to a real superconducting machine without a transatlantic API call and without their workloads queueing behind whoever else is paying more that week. It will run small surface-code experiments. It will be the platform on which the next machine is designed. And the chemistry results that come off it feed directly into the supplier-evaluation work we do at IMPT, which is the whole reason the climate cohort got prioritised in the first place. The wider context of our quantum programme sits inside that climate-first frame.
If you want the deeper detail on the hardware build itself — the cryostat, the control electronics, the chip topology — the architecture page has the engineering specifics as they get locked down through site fit-out and cryostat install.
Where to start this week
If you are a researcher or engineer trying to get useful work done in this space and you want to be ready when sovereign Irish access comes online, do three things this week. Install Qiskit or PennyLane and run the VQE tutorial on H₂ and LiH on the simulator end-to-end — not the demo, the wh