
Overview of Nu Quantum’s architecture & technology approach
Nu Quantum’s roadmap provides a unique and predictable path to Utility-Scale Quantum Computers. It fundamentally shifts the challenge from high-risk monolithic scaling to millions of physical qubits to a model of distributed compute nodes woven into a Utility-Scale Quantum Computer.
Nu Quantum has developed an architecture - based on photonic quantum networking - that scales to an arbitrarily large size and exploits near-term available qubits from a choice of modalities. The architecture delivers an Entanglement Fabric that spans a datacenter-scale system and orchestrates the availability of Logical Qubits for quantum computation. The networking stack Nu Quantum is pioneering is analogous to the stack in classical computer networking, pioneered by companies such as Cisco.


The Nu Quantum technical approach combines innovation at these three points in the stack:
- Distributed-Quantum Error Correction (D-QEC)
Our QEC theory team have developed a robust model of the impact of remote (‘distant’) entanglement when applied to a system of numerous, relatively small QC cores. This work has shown that highly efficient error-correction codes become accessible, and that error correction can be implemented at a high level in the stack - hence ‘Distributed-QEC’.
- Quantum Networking Units (QNUs)
The Dist-QEC work proves the value of constructing and maintaining a complex hypergraph of entanglement between small compute nodes. Nu Quantum is developing the hardware platforms for Quantum Networking Units (QNUs) that orchestrate the distribution of entanglement. These contain custom Photonic Integrated Circuits (PICs) that meet stringent requirements for low loss, high switching speed and near-unity photon-detection efficiency.
- Qubit-Photon Interface (QPI)
The QPI bridges the critical gap between qubits (initially optically-active Trapped Ion and Cold Atom modalities) and the network. The efficiency of this interface directly determines the performance of the distributed quantum computer. Nu Quantum deploys high-finesse, actively controlled microcavities within a small QC node to provide high-efficiency coupling from matter to light. The QPI is thus integrated within a QPU (Quantum Processing Unit).
The combination of the three technologies above allows the network to effectively abstract away the physical qubits and create a logical qubit fabric for computation.
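To give a feel for why interface and network efficiencies matter so much, here is a minimal back-of-the-envelope sketch. The figures and the factor of 1/2 (a linear-optics Bell State Analyser distinguishes only two of the four Bell states) are standard for two-photon heralding schemes, but all the numerical values below are illustrative assumptions, not Nu Quantum specifications.

```python
# Illustrative model: per-attempt success probability of heralding one
# remote entangled link via a two-photon Bell-state measurement.
# All efficiency values are placeholder assumptions for this sketch.

def link_success_probability(eta_qpi: float, eta_fibre: float, eta_det: float) -> float:
    """Per-attempt heralding probability for a two-photon scheme.

    Both photons must survive collection (QPI), transmission (fibre) and
    detection, hence the squared overall efficiency; the factor 1/2
    reflects the linear-optics Bell State Analyser limit.
    """
    eta = eta_qpi * eta_fibre * eta_det
    return 0.5 * eta ** 2

def entanglement_rate_hz(p_link: float, attempt_rate_hz: float) -> float:
    """Average rate of heralded entangled links."""
    return p_link * attempt_rate_hz

p = link_success_probability(eta_qpi=0.9, eta_fibre=0.95, eta_det=0.99)
print(f"per-attempt success probability: {p:.3f}")
print(f"link rate at 1 MHz attempt rate: {entanglement_rate_hz(p, 1e6):.0f} Hz")
```

Because the success probability goes as the square of the end-to-end photon efficiency, a modest improvement in any single stage (QPI coupling, PIC loss, detector efficiency) compounds into a significant gain in entanglement rate.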

How do we create an Entanglement Fabric?
- Inside each QPU, qubits are already entangled locally with one another. The Entanglement Fabric extends this local entanglement with non-local entanglement, created by the network between QPUs
- The QPI is a device that sits adjacent to the QPU, holding a ‘network qubit’ which emits a photon. The emitted photon is entangled with the network qubit that emitted it
- This ‘qubit-photon entanglement’ happens in two QPUs at the same time
- The photons travel and arrive simultaneously at a QNU, where they pass through a switch into a Bell State Analyser, which interferes the photons and measures them
- Result: the two network qubits are now entangled!
- If we scale this up, we can weave a fabric of entangled links across a whole Datacenter of QPUs
- This Entanglement Fabric has a topology that is designed to create logical qubits and enable the use of certain QEC codes
- QEC is applied across the entire Entanglement Fabric, hence we call it ‘Distributed QEC’
- Result: a logical qubit surface is delivered to the software & application layer, ready to be used!
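The heralding step in the list above can be verified with a few lines of linear algebra: each node holds a network qubit entangled with its emitted photon, and a Bell-state measurement on the two photons projects the remote qubits into a Bell state of their own. This is a minimal numpy sketch of that textbook entanglement-swapping step; the basis ordering (qubit A, photon A, qubit B, photon B) and the choice of measurement outcome are assumptions made for the sketch.

```python
import numpy as np

# Each QPI prepares a qubit-photon Bell pair (|00> + |11>)/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Joint state of both pairs, reshaped into a rank-4 tensor.
# Axis order: qubit A, photon A, qubit B, photon B.
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Bell State Analyser outcome: the two photons are projected onto
# the Bell state (|00> + |11>)/sqrt(2).
phi_plus = bell.reshape(2, 2)
qubits = np.einsum('ab,iajb->ij', phi_plus.conj(), psi)

p_outcome = np.linalg.norm(qubits) ** 2   # probability of this outcome
qubits = qubits / np.linalg.norm(qubits)  # post-measurement qubit state

# The two remote network qubits now form a Bell pair themselves.
print(np.allclose(qubits.flatten(), bell))  # True
print(round(p_outcome, 3))                  # 0.25
```

Each of the four Bell-measurement outcomes occurs with probability 1/4 and leaves the network qubits in a (known, correctable) Bell state, which is why the measurement result only needs to be communicated classically, not undone.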
A unique property of the architecture is that it is modular: once each system element, or Tessera (made of a particular configuration of QNU, QPI and QPU), reaches a well-identified level of performance and scale (informed by the Dist-QEC architecture), the system can be scaled to an arbitrary number of rendered logical qubits by increasing the number of interconnected Tesserae. This scaling-by-tessellating moves the overall risk of deployment away from a dependence on fundamental science breakthroughs and towards pragmatic engineering, supply-chain management and capital procurement.
To our knowledge, this is the first viable architecture for scaling: not only is it truly modular, but it is also achievable in the near term. The performance and scale targets for each Tessera are within reach - some have already been demonstrated.