Plastic-degrading enzymes — the quantum folding case
The plastic problem is a chemistry problem, and chemistry problems are what quantum computers are actually good at. PETase — the enzyme first isolated from Ideonella sakaiensis that breaks the ester bonds in polyethylene terephthalate — is a working proof that nature already evolved a solution. The engineering question is whether we can make a faster, more thermostable variant before another decade of bottles ends up in the Atlantic. That question has so far been gated by classical compute. It does not have to be.
Why PETase is a quantum-shaped problem
PETase is small by enzyme standards — around 290 residues — but its catalytic mechanism is anything but simple. The active site uses a Ser-His-Asp triad to perform nucleophilic attack on the ester carbonyl of PET. The transition state is a tetrahedral intermediate stabilised by an oxyanion hole, and the energetics of that stabilisation depend on electronic structure that classical molecular mechanics force fields cannot describe. You need quantum chemistry to model bond-breaking and bond-forming. You need it specifically because the rate-limiting step is the proton-transfer geometry, and protons are quantum objects that classical MD treats as classical balls.
That is the wedge. The folding of the rest of the protein — the scaffold that holds the triad in place — can be approximated classically, and AlphaFold has done remarkable work there. But the catalytic pocket, the electron-density redistribution during the transition state, and the hydrogen-bonding network that decides whether a mutation helps or hurts — that's a quantum subproblem. It is the kind of problem you solve with a hybrid pipeline: classical for the bulk fold, quantum for the active site.
What a 100-qubit transmon machine actually does here
Be careful with the qubit count. One hundred physical superconducting transmons running at the bottom of a dilution refrigerator at sub-15 mK is not enough to do fault-tolerant quantum chemistry on a full enzyme. Anyone telling you otherwise is selling something. What 100 physical qubits — heavy-hex topology, native two-qubit gates, no surface-code overhead yet — can do is the active-site fragment.
The pattern is fragmentation. You isolate the catalytic residues plus the substrate ester group plus the first solvation shell — typically 30 to 60 atoms depending on how aggressively you cut. You map that fragment onto a second-quantised Hamiltonian, reduce it with active-space selection (CASSCF-style, freezing core orbitals), and run a Variational Quantum Eigensolver or, more usefully now, a Sample-based Quantum Diagonalisation. The 100 physical qubits give you headroom for an active space in the 30-50 spin-orbital range, which is exactly the regime where classical CCSD(T) starts to choke and DFT functionals start to disagree with each other.
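The idea behind active-space selection can be sketched in a few lines. This is a toy illustration, not the CASSCF machinery itself: it picks the correlation-carrying orbitals by natural-orbital occupation, using made-up occupation numbers and illustrative thresholds. The principle — freeze anything near 0 or 2, keep the fractional middle — is what the real selection does.

```python
import numpy as np

def select_active_space(occupations, lower=0.02, upper=1.98):
    """Keep spatial orbitals whose natural-orbital occupation is fractional:
    far from 0 (empty virtuals) and far from 2 (frozen core). These carry
    the electron correlation worth putting on the QPU."""
    active = [i for i, n in enumerate(occupations) if lower < n < upper]
    n_spin_orbitals = 2 * len(active)  # one qubit per spin-orbital under Jordan-Wigner
    return active, n_spin_orbitals

# Illustrative occupations for a small fragment: core near 2.0,
# virtuals near 0.0, a correlated band in between.
occ = np.array([2.0, 1.99, 1.97, 1.6, 1.2, 0.8, 0.4, 0.03, 0.01, 0.0])
active, n_so = select_active_space(occ)
print(active, n_so)  # → [2, 3, 4, 5, 6, 7] 12
```

Twelve spin-orbitals is a toy; scale the same selection up to 30-50 and you are in the regime the text describes.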
The output you care about is a relative energy: the activation barrier of the wild-type triad versus a mutant. You don't need chemical accuracy on absolute energies. You need consistent error bars across a series of candidate mutations, because what protein engineers actually want is a ranked list.
The hybrid pipeline, end to end
Here is what a working PETase quantum pipeline looks like, in the order operations actually run:
1. Sequence generation. Classical ML — ESM-style protein language models or directed-evolution priors — proposes a library of candidate mutants. Tens of thousands of sequences. This is cheap.
2. Structure prediction. AlphaFold or a successor folds each candidate. Discard anything that destabilises the global fold. You're left with hundreds of plausible structures.
3. Classical MD pre-screen. Run short trajectories with the substrate docked. Throw out the ones where the substrate doesn't sit in the pocket properly. You're now down to tens.
4. Active-site extraction. Carve out the QM region — catalytic triad, oxyanion hole residues, substrate fragment, key waters. Cap dangling bonds with link atoms.
5. Quantum chemistry on the fragment. Run VQE or SQD on the transmon hardware to estimate the transition-state energy. Use the QPU only where it earns its keep — the electron-correlation-heavy region.
6. Ranking and synthesis. Rank the surviving candidates by predicted barrier reduction and thermostability proxy. Hand the top few to a wet lab for expression and assay.
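The pipeline above is a funnel, and its shape matters more than any one stage. A minimal sketch, with hypothetical mutation names, filter predicates, and barrier values — the real stages are AlphaFold runs and MD trajectories, not boolean flags:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    keep: Callable[[dict], bool]  # predicate deciding whether a candidate survives

def run_funnel(candidates: List[dict], stages: List[Stage]) -> List[dict]:
    """Apply each screening stage in order, narrowing the candidate pool."""
    pool = candidates
    for s in stages:
        pool = [c for c in pool if s.keep(c)]
        print(f"{s.name}: {len(pool)} candidates remain")
    return pool

# Hypothetical candidates; barriers in kcal/mol would come from step 5.
candidates = [
    {"id": "WT",    "fold_ok": True,  "docked": True,  "barrier": 18.1},
    {"id": "S238F", "fold_ok": True,  "docked": True,  "barrier": 16.4},
    {"id": "D186H", "fold_ok": True,  "docked": False, "barrier": None},
    {"id": "X99P",  "fold_ok": False, "docked": False, "barrier": None},
]
stages = [
    Stage("structure prediction", lambda c: c["fold_ok"]),
    Stage("MD pre-screen",        lambda c: c["docked"]),
]
survivors = run_funnel(candidates, stages)
ranked = sorted(survivors, key=lambda c: c["barrier"])
print([c["id"] for c in ranked])  # → ['S238F', 'WT'] — lowest predicted barrier first
```

The design point: cheap classical filters run first so the expensive QPU step only ever sees candidates that already survived everything else.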
The whole pipeline is bottlenecked at step five today. Classical multireference methods can do it but get expensive fast. A near-term QPU doesn't beat classical hardware on every active site, but on the strongly-correlated cases — and the PETase transition state is one of them, because of the proton-transfer character — it has a real shot.
Error rates, decoherence, and what we measure
The honest constraint on this work is gate fidelity. Two-qubit gate errors on superconducting transmons sit in the low-tenths-of-a-percent range on the best public hardware. Single-qubit errors are an order of magnitude better. For a chemistry circuit with a few hundred two-qubit gates — which is roughly what a moderate active space needs in a hardware-efficient ansatz — total error compounds quickly. You don't get one clean answer. You get a noisy distribution.
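The compounding is worth making concrete. Under the crude assumption that gate errors are independent, the chance a circuit runs error-free is the product of per-gate success probabilities — the numbers below are illustrative, not any vendor's spec:

```python
def circuit_success_probability(p2, n2, p1=3e-4, n1=0):
    """Crude estimate: with independent gate errors, the whole-circuit
    success probability is the product of per-gate success probabilities.
    p2/n2: two-qubit error rate and gate count; p1/n1: single-qubit."""
    return (1 - p2) ** n2 * (1 - p1) ** n1

# 300 two-qubit gates at 0.3% error each:
p = circuit_success_probability(3e-3, 300)
print(round(p, 2))  # → 0.41
```

Roughly 40% of shots carry no two-qubit gate error at all — which is exactly why the raw output is a noisy distribution rather than one clean answer.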
The mitigations are real and they work. Zero-noise extrapolation runs the same circuit at deliberately amplified noise levels and extrapolates back to zero. Probabilistic error cancellation inverts a learned noise model. Readout error mitigation handles the measurement step. None of this is fault tolerance. None of it requires the surface-code overhead that pushes you to thousands of physical qubits per logical qubit. It is what you do in the NISQ era, and it is what the Ireland machine will do in its first operational year while the surface-code roadmap matures behind it.
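Zero-noise extrapolation in particular is simple enough to show. A minimal sketch with hypothetical energies: measure the same observable at deliberately amplified noise levels, fit, and read off the fit at scale factor zero. Real implementations (Richardson, exponential fits) differ in the fit model, not the idea.

```python
import numpy as np

def zero_noise_extrapolate(scale_factors, energies, degree=1):
    """Fit energies measured at amplified noise levels and extrapolate
    the fit back to the zero-noise limit (scale factor 0)."""
    coeffs = np.polyfit(scale_factors, energies, degree)
    return np.polyval(coeffs, 0.0)

# Hypothetical VQE energies (hartree) with noise amplified 1x, 2x, 3x:
scales = [1.0, 2.0, 3.0]
energies = [-1.130, -1.110, -1.090]
e0 = zero_noise_extrapolate(scales, energies)
print(e0)  # → -1.15 (linear fit through the noisy points)
```

Note the trade: the extrapolated estimate is less biased but has a larger variance than any single measured point, so you pay in shots.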
For PETase specifically, the question we measure is not "what is the absolute energy" — error mitigation can't deliver that — but "is the energy ordering of mutant A versus mutant B robust under noise?" That's a much weaker requirement, and one current hardware can plausibly meet for fragments in the right size range.
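The "is the ordering robust?" question has a cheap statistical check. A sketch with hypothetical barrier estimates: bootstrap-resample the repeated noisy QPU estimates for each mutant and count how often A still beats B.

```python
import random

def ordering_confidence(samples_a, samples_b, n_boot=5000, seed=0):
    """Resample the noisy energy estimates with replacement and report
    the fraction of resamples in which mutant A's mean barrier is lower."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        mean_a = sum(rng.choices(samples_a, k=len(samples_a))) / len(samples_a)
        mean_b = sum(rng.choices(samples_b, k=len(samples_b))) / len(samples_b)
        if mean_a < mean_b:
            wins += 1
    return wins / n_boot

# Hypothetical barrier estimates (kcal/mol) from repeated noisy QPU runs:
a = [16.1, 16.8, 15.9, 16.4, 16.6]
b = [18.0, 17.5, 18.4, 17.9, 18.2]
conf = ordering_confidence(a, b)
print(conf)  # → 1.0 (the two distributions don't overlap, so A always wins)
```

If that fraction sits near 0.5 the hardware noise is swamping the signal and the pair needs more shots or a smaller fragment; near 1.0, the ranking is actionable.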
Why this sits in the climate cohort
Plastic degradation is climate work, even if it doesn't get the same press as carbon capture. Roughly 400 million tonnes of plastic are produced every year. PET alone is a meaningful slice. Mechanical recycling degrades the polymer with each cycle; chemical recycling via enzymatic depolymerisation gives you back monomers — terephthalic acid and ethylene glycol — that are indistinguishable from virgin feedstock. That breaks the linear extraction loop. It turns landfill and ocean plastic into raw material.
The current generation of engineered PETase variants — work done by groups in France, the UK, and South Korea, all in the public literature — has already pushed activity orders of magnitude above wild type. The frontier now is thermostability above 70°C (where PET's glass transition makes the polymer accessible to the enzyme), broader substrate range (mixed plastics, coloured PET, contaminated streams), and turnover number. Each of those targets is an active-site engineering problem. Each is exactly the kind of workload Ireland Quantum 100 is being built for.
From IMPT's perspective the integration is direct. An offset stack that includes plastic-to-monomer chemistry as a verified line item is more credible than one built only on tree planting and direct air capture. Plastic depolymerisation has a measurable mass balance: kilograms of polymer in, kilograms of monomer out. You can audit it. That matters for the next generation of carbon and circular-economy accounting.
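That mass balance is auditable with nothing more than standard molar masses. A sketch of the stoichiometry check, ignoring chain ends: each PET repeat unit hydrolyses, consuming two waters across its two ester linkages, into one terephthalic acid and one ethylene glycol.

```python
# Molar masses (g/mol) for the PET hydrolysis mass balance:
PET_REPEAT = 192.17   # one (C10H8O4) repeat unit of the polymer
WATER      = 18.02
TPA        = 166.13   # terephthalic acid
EG         = 62.07    # ethylene glycol

def audit(kg_pet):
    """Mass in (polymer + consumed water) must equal mass out (monomers)."""
    mol = kg_pet * 1000 / PET_REPEAT          # moles of repeat units
    mass_in = kg_pet + mol * 2 * WATER / 1000  # two waters per repeat unit
    mass_out = mol * (TPA + EG) / 1000
    return mass_in, mass_out

m_in, m_out = audit(1000.0)  # one tonne of PET
print(round(m_in, 1), round(m_out, 1))  # → 1187.5 1187.5
```

One tonne of polymer plus roughly 188 kg of water yields roughly 1188 kg of monomer, and the two sides balance to within rounding of the molar masses — that closure is what makes the line item verifiable.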
What's hard, honestly
Three things are hard and worth naming. First, the QM/MM boundary — where the quantum region meets the classical one — is where most errors creep in. Bad cap atoms, bad electrostatic embedding, and you've made the QPU compute a beautifully accurate answer to the wrong question. Second, sampling. Enzymes are not static. The relevant transition state is an ensemble property, and you need many configurations, which means many QPU runs, which means queue time and cost. Third, the wet-lab loop. Predictions are predictions until somebody expresses the protein and runs the assay. The pipeline is only as fast as that feedback. Quantum advantage in chemistry is not a single benchmark; it is a tighter, faster design cycle, and that cycle includes biology that doesn't care how fast your cryostat cools.
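The link-atom part of that first problem is geometrically simple, which is exactly why it's easy to get wrong. A minimal sketch of one common convention — place a hydrogen cap along the cut bond at a scaled distance; the 0.723 factor shown is illustrative (it maps a typical C-C bond of ~1.52 Å onto a C-H bond of ~1.10 Å), and real QM/MM codes tune it per bond type:

```python
import numpy as np

def place_link_atom(qm_atom, mm_atom, scale=0.723):
    """Cap a cut covalent bond with a hydrogen placed along the
    QM-atom -> MM-atom vector at a scaled fraction of the bond length."""
    qm = np.asarray(qm_atom, dtype=float)
    mm = np.asarray(mm_atom, dtype=float)
    return qm + scale * (mm - qm)

# Cut a C-C bond at the QM/MM boundary: carbons 1.52 angstrom apart along x.
c_qm = [0.0, 0.0, 0.0]
c_mm = [1.52, 0.0, 0.0]
h_cap = place_link_atom(c_qm, c_mm)
print(h_cap)  # H cap ~1.10 angstrom from the QM carbon
```

Get the cap distance or the embedding charges wrong and, as the text says, the QPU answers the wrong question with great precision.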
None of this is reason not to do the work. It is reason to be precise about what the QPU contributes and what it does not. For more on how we structure these workloads on the Tipperary system, see our climate workloads roadmap.
Where to start this week
If you're a protein engineer or computational chemist who wants to be ready when sovereign quantum capacity comes online in Ireland: pick one enzyme, build the classical pipeline now. AlphaFold for structure, GROMACS or AMBER for MD, PySCF or Psi4 for the QM region, and Qiskit Nature or PennyLane to wire the active-site fragment to a quantum backend — IBM's free tier is enough to learn the workflow. Get fluent in active-space selection. That is the skill that will matter when the hardware is in the room. The plastic in the ocean is not waiting for fault tolerance, and neither should you.