Other Approaches to Scaling to Fault-Tolerance

Monolithic scaling of quantum processors is reaching its limit, with several quantum computing companies already highlighting the need for modular scaling to achieve the qubit counts required to build fault-tolerant quantum computing systems.

Nu Quantum proposes datacentre-scale quantum computing scale-out, enabled by photonic quantum networking technology. Other companies offer their own approaches to scaling hardware, which we review here, grouped by modality. There are very few public proposals for building large distributed fault-tolerant quantum computers capable of reaching thousands of logical qubits with errors low enough to carry out commercially useful operations. Here we review the industry's proposals - all in all, our opinion is that, whilst theoretically viable, they will be very hard to build in practice without networking, due to a range of physical and engineering blockers.

For a link to each company's roadmap, please see the page Commercial Strategy > The Scaling Chasm for Quantum Computing > Qubit Company Roadmaps.

Superconducting circuits: IBM, Google, Rigetti

Chiplet approach

A path to scaling superconducting qubits is via a ‘chiplet’ approach, which introduces interconnects between chips inside a single quantum processing unit, enabling smaller chips to be stitched together to form a processor with a higher qubit count. Several research proposals on chiplet strategies for superconducting qubits have been published, including this technical paper from researchers at University of California Santa Barbara and Cisco Research’s Quantum Lab.

From the hardware side, this is an active area of research for IBM, who recognise modularity as crucial for scaling. Their IBM Quantum System Two, unveiled in December 2023, runs on three interconnected chips (Heron). Their roadmap targets two further demonstrations of interconnected chips over varying distances (Crossbill and Flamingo) by the end of 2024. Over the next four years, they aim to develop couplers that enable more complex connectivity geometries within the same chip, necessary to apply more sophisticated error-correction codes by the end of the decade.

A 2021 resource estimation paper by Google used a 2D qubit layout (see Figure 10), suggesting a chiplet approach. This is also implied in Google’s quantum error correction milestone of a “tileable module”.

However, the chiplet approach is significantly hindered in its scalability because the interconnects are local and two-dimensional.

  • Local connections are only possible between adjacent, nearest-neighbour chips.
  • Chips must be housed inside the same vacuum and cryogenic unit, so they suffer from similar physical limitations to scaling as monolithic processors: the size of the fridge, the maximum number of microwave control wires or laser lines that fit, the available cooling power, and other control-plane requirements. Scalability is ultimately limited by how many chips physically fit inside a single vacuum chamber.
  • Chiplets and the connections between them are fixed and two-dimensional, which constrains which algorithms and QEC codes can ultimately be run.

To overcome the limit on qubit count imposed by the size of a single cryogenic vacuum chamber, it will be necessary either to increase vacuum system size very dramatically (room-sized or warehouse-sized vacuum systems seem practically unlikely) or to connect multiple separate superconducting QPUs into a distributed machine.

Cold bridges

Building this distributed architecture without resorting to photonic networking requires the use of so-called 'cold bridges': microwave lines connecting multiple mK stages together across individual vacuum systems/cryostats. Rigetti has published this paper discussing how a coherent link between two superconducting modules, used to enable teleportation, could support scaling up.

However, microwave interconnects for any architecture are limited to short-range connectivity due to high losses (dB/m) accrued by microwave photons travelling through wires. As such, long-range connectivity first requires transduction of microwave photons into optical photons, which are then fed into a photonic network supporting long-range connectivity. Google Quantum AI is actively funding research into transduction.
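
As a rough illustration of why dB-per-metre losses confine microwave links to short range while optical fibre supports long range, the sketch below compares the surviving signal fraction over distance. The attenuation figures (1 dB/m for a microwave cable, 0.2 dB/km for telecom fibre) are placeholder assumptions for illustration only, not measured values for any specific hardware.

    # Illustrative comparison of signal survival over distance for a microwave cable
    # versus an optical fibre; attenuation figures are placeholder assumptions only.
    MW_ATTEN_DB_PER_M = 1.0        # assumed microwave cable loss, dB per metre
    FIBRE_ATTEN_DB_PER_M = 0.0002  # assumed telecom fibre loss, 0.2 dB per km

    def transmission(atten_db_per_m: float, length_m: float) -> float:
        """Fraction of signal power surviving a link of the given length."""
        return 10 ** (-atten_db_per_m * length_m / 10)

    for length_m in (1, 10, 100):
        print(f"{length_m:>4} m   microwave: {transmission(MW_ATTEN_DB_PER_M, length_m):.1e}"
              f"   fibre: {transmission(FIBRE_ATTEN_DB_PER_M, length_m):.4f}")

Under these assumed figures, essentially no signal survives a 100 m microwave link while the fibre loses well under one percent, which is why long-range connectivity first requires transduction to optical photons.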

On the algorithms side, IBM proposes using qLDPC codes, which cannot be implemented with purely planar connectivity: every qubit needs a long-range, out-of-plane connection, which is incompatible with a single planar superconducting chip. This implies a multi-chip architecture, but the very high density of long-range connections in turn implies a very high density of communication resources, potentially greater than one per qubit. The combined requirements may exceed the cooling and hosting capacity of a single dilution fridge for even the smallest such code, requiring cold bridges or microwave-to-optical transduction and networking to enable multi-fridge connectivity before even the simplest demonstration can be performed.
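
To make the "greater than one per qubit" point concrete, the following back-of-the-envelope sketch assumes a hypothetical qLDPC layout in which each qubit touches a fixed number of check edges and some fraction of those edges cannot be routed in-plane; all parameters are illustrative assumptions and do not describe any published IBM code.

    # Back-of-the-envelope estimate of long-range communication resources per qubit
    # for a hypothetical qLDPC layout; all parameters are illustrative assumptions.
    def long_range_links_per_qubit(check_degree: int, out_of_plane_fraction: float) -> float:
        """Links per data qubit that must leave the plane (and potentially the chip)."""
        return check_degree * out_of_plane_fraction

    for fraction in (0.2, 0.5, 1.0):
        links = long_range_links_per_qubit(6, fraction)
        print(f"degree 6, {fraction:.0%} out-of-plane -> {links:.1f} long-range links per qubit")

Even modest out-of-plane fractions push the communication-resource count per qubit above one, which is what strains the wiring, cooling and hosting budget of a single fridge.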

Trapped ions: Quantinuum, Universal Q, IonQ

Chiplets

The chiplet approach is also being pursued by trapped ion companies. In this modality, the approach is to place many ion traps adjacent to each other to form a larger 2D structure within the same vacuum system. This enables ions to physically move between adjacent ion traps. Ion shuttling on a 2D grid has recently been demonstrated by Quantinuum in this paper, and shuttling of a single ion between two adjacent traps was reported in early 2023 in this paper by researchers at Universal Quantum and the University of Sussex, UK (from which Universal Quantum spun out).

In this approach, appropriate control voltage signals move a single ion back and forth between two ion traps on adjacent microchips situated 10 micrometres apart. The quantum superposition state of the ion has been shown to be resilient to this travel, although questions remain about the true overhead of shuttling across microchips: ions must be cooled back to their ground state to remain useful qubits, the cooling overhead incurred by the shuttling procedure has not been reported, and computations using this approach have not yet been demonstrated.

Ion shuttling requires alignment between adjacent chips housed within the same vacuum chamber. With this approach, N ions would be arranged on a 2D grid of size √N × √N. In such a 2D system, scaling is restricted by the time it takes quantum information to traverse from one side of the plane to the other. This means that, in order to use high-rate error-correction codes at utility scale, the clock cycle would also increase as √N. While this scaling may permit small proof-of-concept implementations of these codes, it is likely to make it very challenging to operate a chiplet-based trapped-ion quantum computer with a sub-millisecond clock cycle beyond a few hundred physical qubits.
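
A rough model of this √N scaling is sketched below: the traversal-limited cycle time is estimated as (grid side length × trap pitch) divided by the shuttling speed. The pitch and speed values are placeholder assumptions chosen only to show the trend, not measured parameters of any particular trap.

    # Illustrative estimate of how a traversal-limited clock cycle grows with qubit count
    # on a sqrt(N) x sqrt(N) ion-trap grid; pitch and speed are placeholder assumptions.
    import math

    TRAP_PITCH_M = 100e-6         # assumed spacing between trap sites (100 micrometres)
    SHUTTLE_SPEED_M_PER_S = 1.0   # assumed average ion shuttling speed (1 m/s)

    def traversal_time_s(n_qubits: int) -> float:
        """Time to cross the grid once, ignoring cooling and gate overheads."""
        side_length_m = math.sqrt(n_qubits) * TRAP_PITCH_M
        return side_length_m / SHUTTLE_SPEED_M_PER_S

    for n in (100, 1_000, 10_000, 100_000):
        print(f"N = {n:>7}: ~{traversal_time_s(n) * 1e3:.2f} ms per traversal")

Under these assumptions the traversal time alone reaches the millisecond level at around a hundred qubits and grows as √N from there, consistent with the clock-cycle concern above.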

While large, highly-connected entangled states could be created in such 2D systems, either physically by ion shuttling or virtually by entanglement swapping, this is already rate-limiting in current machines and will become a critical problem in reaching utility scale. From an algorithms perspective, this implies running the surface code across physically abutted ion surface traps, with shuttling used to move quantum information between tiles. Nu Quantum's photonic links create long-range entanglement in hardware, circumventing the 2D scaling restriction and building in the connectivity required for error correction while retaining the ability to run utility-scale algorithms in a reasonable time.

As with the superconducting modality, the chiplet approach is largely limited by the size of the ultra-high-vacuum chambers that host these chips, which are still built and controlled in a monolithic way, even if the ions themselves are able to move across chips. Without access to fast long-range gates, these systems may also be unable to use high-performance qLDPC codes, and the reported results for ion shuttling suggest very high heating rates that would limit the rate at which the error-correction cycle can be performed.

All in all, chiplet strategies are very useful for creating multi-core quantum processors with qubit counts in the hundreds to low thousands, but they present practical constraints that prevent them from scaling further. These modular computers will have to be interlinked via a photonic network to achieve the hundreds of thousands to millions of physical qubits necessary to run fault-tolerant quantum algorithms that unlock the true potential (and trillion-dollar market value applications) of quantum computing.

Photonic Interconnects and Networking

IonQ are pursuing a photonic scaling approach and are developing optical interconnects to network multiple QPUs. They have recently demonstrated interconnect entanglement between two ions in separate traps, a first step towards distributing entanglement across multiple QPUs. Scaling beyond a proof-of-principle two-point interconnect to distributing entanglement across two QPUs, and then across multiple nodes in a truly networked regime, will require further effort and is underpinned by the technical breakthroughs and commercial offerings that Nu Quantum is developing.

Neutral Atoms: Pasqal, QuEra, Atom, Infleqtion

Neutral atom-based quantum computers follow similar operational principles to trapped-ion qubits, and are also limited by the size of the ultra-high-vacuum chambers that host the qubits. For atoms, it is possible to reach high qubit counts of 1k+ in a single optical lattice, but there are significant technical challenges in scaling beyond several thousand using this monolithic approach. Scale-out using photonic networking is the preferred approach indicated by the companies in this modality, in clear alignment with Nu Quantum's technology roadmap and commercial positioning as the market category-creator for quantum networks for distributed computing.

Both Pasqal and QuEra have indicated that they plan to use photonic interconnects to scale beyond this chasm, but have not publicly revealed technological milestones to this effect. Pasqal has announced a collaboration with the quantum memory company Welinq to work towards developing networked quantum computers based on their processors (currently at 1k physical qubits per QPU). Since Welinq's work on quantum memories is complementary to Nu Quantum's work on Qubit-Photon Interfaces, Quantum Networking Units and Distributed QEC protocols, partnering with Pasqal to accelerate their roadmap with our networking solutions remains a possibility. QuEra's current roadmap does explicitly refer to photonic interconnects, and targets 10k physical qubits in 2026 (currently at >256). Infleqtion does not disclose their scaling approach; their roadmap targets 40k physical qubits by 2028 (currently at 1.6k). Atom does not at this time have a public roadmap for scaling (currently at 1k).

Photonic Qubits: PsiQ, Quandela, ORCA, Photonic Inc

PsiQuantum, Quandela, ORCA Computing and Photonic Inc each offer different approaches to building fault-tolerant quantum computers based on photonic qubits.

Purely photonic approach

PsiQuantum and ORCA Computing are pursuing purely photonic architectures, where all quantum information transmission, storage and processing is done with photons as qubits.

The main technical challenges of these approaches are keeping optical losses ultra-low when scaling up, developing fast control electronics that can react to probabilistic measurement results, and maintaining indistinguishability across many thousands of photon sources, something that has never been demonstrated before. On the other hand, these approaches can offer much higher clock speeds than schemes involving matter qubits, limited instead by the reaction time of the classical electronic hardware.

PsiQuantum seeks to construct a bank of resource-state generators and to run surface-code QEC over resource-state fusion. Their architectural approaches are outlined in this paper and this paper. Resource states are arranged into time-steps, and at each step photons from resource state N are entangled with and measured out alongside photons from resource state N-1, perpetuating a fusion network capable of performing error-correction codes. The fusion process is very loss tolerant, requiring only around 71% transmission (roughly -1.5 dB) to succeed; however, this budget includes all losses from source to detector, so it is a very stringent target.
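
For reference, the dB figure quoted above follows directly from the standard power-ratio conversion applied to the 71% transmission target:

    10 · log10(0.71) ≈ −1.49 dB

i.e. a total source-to-detector loss budget of only about 1.5 dB.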

ORCA are also pursuing a similar approach to PsiQuantum, generating Greenberger-Horne-Zeilinger (GHZ) photon states (this paper) and achieving fault tolerance through entangled resource states (this paper). They are also developing a quantum memory that can store photonic states in an atomic gas and release them on demand (this paper, this patent); in fact, the company is named after this 'Off-Resonant Cascaded Absorption' quantum memory approach.

Light-matter interaction approach

Quandela's quantum computers are built around semiconducting quantum dots, which emit single photons with high optical coherence. They are able to fabricate deterministic photon sources with high emission rates and single-photon purity (which can be used to generate photonic GHZ states in a purely photonic approach, as detailed in this paper). Their proposition towards fault tolerance (described in this paper) is based on a change of architecture to "Spin-Optical" Quantum Computing (SPOQC), which exploits the spin of the quantum dots and encodes it onto the emitted photons. This approach brings new technical challenges: the spin coherence times of the quantum dots become an important figure of merit that may need to be improved, which presents a major materials-science and solid-state semiconductor engineering challenge.

Photonic Inc proposes using a light-emitting lattice defect in silicon (the 'T-centre') as its qubit, and implementing qLDPC codes on small qubit nodes interconnected by a photonic network. In many ways, their hardware approach (proposed in this paper) is the most similar to ours, except that we propose using different codes (i.e. not qLDPC, because they require larger minimum systems and denser interconnects), and our Qubit-Photon Interfaces make our networking architecture compatible with multiple qubit modalities. The limitation here will also be getting photonic indistinguishability high enough to make the proposition work, particularly given the well-known challenges that prevent solid-state materials such as silicon from being good hosts for optical qubits. This is still a very early-stage, high-risk quantum computing proposition, but if it works it may have big advantages.

Silicon Spins: Quantum Motion, Silicon Quantum Computing, Diraq

Silicon spin qubits offer a completely different approach to computing. Lithography techniques are used to pattern silicon wafers with electrodes, which form very local electrostatic potential wells in which individual electrons are trapped, forming a ‘quantum dot’. These quantum dots are used as qubits, with the quantum information encoded in the spin property of the electron forming the dot.

These quantum circuits are fabricated using standard semiconductor lithography techniques, which presents an attractive path to scaling up fabrication, as millions of spin qubits could hypothetically be hosted on each processing chip. However, the technical challenge arises from the need to address each of these qubits individually and achieve high-fidelity gate operations. Scaling up will require not just increasing the qubit count on the chip but also significant additions to the control hardware needed to address each qubit, while maintaining extremely low crosstalk across these complex control wiring systems. Maintaining high-fidelity performance across qubits as the system grows may also present a roadblock for a technology hosted in a solid-state material: impurities in the silicon host act as sources of noise and decoherence affecting qubit performance, and achieving uniformity across the active area may be very challenging as this area increases with qubit count. Research into isotopically pure silicon substrates with no sources of spin noise is ongoing, with some encouraging results from researchers at the University of Manchester and the University of Melbourne in this paper; however, realisation of spin qubits in these substrates has not yet been demonstrated.
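
As a simple illustration of the wiring challenge, the sketch below counts the control lines needed for one-to-one addressing of a spin-qubit array and compares them against an assumed per-cryostat wiring budget; both the lines-per-qubit figure and the budget are illustrative assumptions, not figures from any company's architecture.

    # Illustrative count of control lines for one-to-one addressing of a spin-qubit array;
    # the lines-per-qubit figure and the fridge wiring budget are illustrative assumptions.
    LINES_PER_QUBIT = 3           # assumed DC/microwave control lines needed per qubit
    FRIDGE_WIRING_BUDGET = 5_000  # assumed number of lines a single cryostat can host

    for n_qubits in (100, 10_000, 1_000_000):
        lines = LINES_PER_QUBIT * n_qubits
        verdict = "within" if lines <= FRIDGE_WIRING_BUDGET else "exceeds"
        print(f"{n_qubits:>9,} qubits -> {lines:>9,} lines ({verdict} the assumed budget)")

The raw counts illustrate why per-qubit wiring alone becomes untenable well before the million-qubit regime.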

Currently the largest silicon quantum processor, made by Silicon Quantum Computing, has 4 qubits (they target 100 qubits by 2028). Further details are available in this paper. As this modality is still in its early stages, it is difficult to fully predict where and when scaling bottlenecks will arise.

This page was last updated on October 17th, 2024.
