After DiVincenzo’s 2000 paper last week, let’s go over something from this decade, also featuring DiVincenzo! This one is a particularly interesting paper because it provides the road map for the direction IBM’s quantum computing research will take in the future. It’s especially well written, providing an excellent introduction to the field and explaining the motivation behind current research methods.

### Sources of Error

The paper starts by describing the basis for quantum computing, and quickly transitions to how qubits can be realized physically. A good reference to DiVincenzo’s criteria is found at the bottom of page 2, along with a great explanation of the decoherence time, described as the characteristic time over which the phase relationships between the complex amplitudes are maintained. In other words, how long the qubit can last without randomly changing states. However, the paper argues that calculating the decoherence time alone is insufficient to actually capture decoherence errors. Instead, a few additional error measures are introduced:

- **Off-on ratio** – Essentially testing the number of **false positives** present in the system. If nothing is being done to the qubit, could something actually be happening? This seems trivial in classical computing, but since quantum gates are controlled by tuning interactions, the process can be more complex. An optimal off-on ratio would be close to 0.
- **Gate fidelity** – How often does your qubit actually **do what you intend?** This error can include a large number of different error sources, from decoherence to stray couplings, and should be close to 1.
- **Measurement fidelity** – How well can your system **read the truth?** In other words, when you perform a measurement, how often are you extracting correct information? I’m a bit confused about how this value is determined, since it seems like it would be hard to separate the previous two errors from this one. For example, suppose that I have a qubit that is initialized to 0, allowed to rest for one time period, acted on by an X gate, and then measured. If I measure 0 instead of the expected 1, is that because of decoherence during the rest, a faulty gate, or a faulty readout? Even worse, how can you measure this for mixed states, even ones that are very carefully prepared?
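To see why these errors are so hard to disentangle, here is a toy (purely classical) simulation of that exact experiment. The error rates are made-up numbers for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) error rates -- not values from the paper:
p_gate_error = 0.01     # the X gate fails to flip the qubit with this probability
p_readout_error = 0.02  # the measurement reports the wrong bit with this probability

def run_experiment(n_shots=100_000):
    """Prepare |0>, apply a noisy X gate, then perform a noisy measurement."""
    state = np.zeros(n_shots, dtype=int)             # every shot starts in |0>
    gate_ok = rng.random(n_shots) >= p_gate_error    # gate succeeds most of the time
    state = np.where(gate_ok, 1 - state, state)      # a successful X flips |0> -> |1>
    misread = rng.random(n_shots) < p_readout_error  # readout occasionally lies
    return np.where(misread, 1 - state, state)       # recorded (possibly wrong) outcomes

outcomes = run_experiment()
error_rate = np.mean(outcomes == 0)  # we expected to measure 1 every time
print(f"observed error rate: {error_rate:.4f}")  # ~ p_gate + p_readout to first order
```

A single observed error rate of about 3% here is consistent with many different splits between gate error and readout error, which is exactly the ambiguity above.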

### Qubit Architectures

After describing the various possible errors, the authors proceed to describe the large variety of qubit architectures available at that time. Just for completeness for myself in the future, these are as follows:

- Silicon-based nuclear spins
- Trapped ions
- Cavity quantum electrodynamics
- Nuclear spins
- Electron spins in quantum dots
- Superconducting loops and Josephson junctions
- Liquid state nuclear magnetic resonance
- Electrons suspended above the surface of liquid helium

As an interesting side note, the papers referenced for each of these technologies [21-29 in the report] are **all** published between 1995 and 1999. That’s surprising to me, partly because I didn’t think a few of these technologies were really mature until the early 2000s, but also because all of them were exploding around the same time. Thinking about the history of quantum computing, it makes sense that there was a boom immediately after Shor’s 1995 paper, but I didn’t expect it to be so big!

Moving on, we see some of the history of IBM’s involvement in quantum computing. Their first plan was to create a liquid-state NMR quantum computer, led by Chuang (who is also an author on this paper … so OP). Using this architecture, IBM was able to implement Shor’s factoring algorithm to factor 15 in 2001, again led by Chuang.

However, the authors note that the NMRQC began pushing “close to some natural limits” at the dawn of the 21st century, although they do not specify exactly what those limits are. In the previous DiVincenzo paper, I believe references were made to NMR being unable to initialize qubits efficiently, removing it from consideration as a scalable QC. Since that paper was written in 2000 by DiVincenzo, who was at IBM at the time, and that specific claim was backed by earlier work by Chuang in 1998, I’ll be willing to bet my hat that this is the reason.

### An Introduction to Superconducting Qubits

The remainder of this paper is mostly dedicated to describing the superconducting qubits that IBM then focused on. It begins with a review of a typical RLC circuit, which (in the lossless LC limit) has a harmonic potential Hamiltonian. The capacitor stores a charging energy $latex E_C = \frac{Q^2}{2C}$, while the inductor supplies the harmonic potential term. In this structure, charge is analogous to momentum, and capacitance to the particle mass. You can derive these equations by examining the differential equations that govern the circuit, and solving the second-order differential equation that results from basic circuit element rules.
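Concretely, the mapping looks like this (my own notation, following the standard LC-oscillator correspondence):

$latex H = \frac{Q^2}{2C} + \frac{\Phi^2}{2L} \;\longleftrightarrow\; H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2$

with charge $latex Q \leftrightarrow p$, flux $latex \Phi \leftrightarrow x$, capacitance $latex C \leftrightarrow m$, and resonance frequency $latex \omega = 1/\sqrt{LC}$.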

However, this alone is unsuitable for use as a qubit. Even though a harmonic potential creates discrete energy levels, those levels are evenly spaced. Therefore the transition between the ground state and the 1st excited state is indistinguishable in frequency from the transition between the 1st and 2nd excited states. To change the Hamiltonian, a new quantum device is introduced – the Josephson junction.
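To make the problem concrete, the harmonic spectrum (a standard result, in my notation) is

$latex E_n = \hbar\omega\left(n + \tfrac{1}{2}\right) \quad\Rightarrow\quad E_{n+1} - E_n = \hbar\omega \text{ for every } n$,

so any drive resonant with the 0→1 transition is equally resonant with 1→2, and you can never isolate just two levels to use as a qubit.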

The **Josephson junction** is a circuit element that consists of two superconductors separated by an insulator. In that regard, it is somewhat similar to a transistor, which features two conductors separated by an insulator. In the practical examples that I have seen in the past, this can be implemented by having two pieces of superconducting metal separated by a very thin insulating gap. The energy of the Josephson junction is $latex U(\delta) = -E_J \cos\delta$, where $latex \delta$ is a quantum phase that is proportional to the magnetic flux $latex \Phi$, after normalization.
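Expanding the junction energy (a standard Taylor expansion, in my notation) shows where the non-harmonic behavior comes from:

$latex U(\delta) = -E_J\cos\delta \approx -E_J + \frac{E_J}{2}\delta^2 - \frac{E_J}{24}\delta^4 + \cdots$

The quadratic term mimics an ordinary inductor, while the quartic term is what breaks the even level spacing.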

What this creates is an **anharmonic oscillator**. When the quantum phase is close to zero, as is true near the ground state, there is very little contribution by the Josephson junction. However, at higher energy levels the quantum phase is larger, and those levels have different (not sure if smaller or larger?) energy spacings. The authors claim that this frequency splitting varies between 1 and 10% of the fundamental frequency, which itself sits in the 1-10 GHz range for frequency control. The achievable gate times are also much shorter than the coherence times, meaning the architecture satisfies DiVincenzo criterion 3, the ability to perform many gate operations before decoherence. One of the additional benefits of the Superconducting Quantum Computing (SCQC) architecture is that the higher levels of the oscillator can still be exploited! For instance, Reed 2013 uses the second and third excited states of the superconducting qubit to create a Toffoli (CCNOT) gate with a much shorter gate time than a conventional two-qubit implementation, exploiting the “avoided crossings” found in this architecture.
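As a sanity check on the 1-10% claim, one can numerically diagonalize a transmon-style junction-plus-capacitor Hamiltonian in the charge basis. This is my own sketch with illustrative parameters, not the paper’s device:

```python
import numpy as np

# Illustrative (made-up) parameters, not the paper's device: EJ/EC = 50, transmon-like.
EC = 0.3   # charging energy in GHz
EJ = 15.0  # Josephson energy in GHz

N = 30                             # charge-basis cutoff: n runs from -N to N
n = np.arange(-N, N + 1)
H = np.diag(4 * EC * n**2.0)       # charging term 4*EC*n^2 on the diagonal
hop = -EJ / 2 * np.ones(2 * N)     # -EJ*cos(phi) moves one Cooper pair: n <-> n+1
H += np.diag(hop, 1) + np.diag(hop, -1)

E = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
f01 = E[1] - E[0]                  # 0 -> 1 transition frequency
f12 = E[2] - E[1]                  # 1 -> 2 transition frequency
print(f"f01 = {f01:.3f} GHz, f12 = {f12:.3f} GHz, anharmonicity = {f12 - f01:.3f} GHz")
```

With these made-up parameters the 1→2 spacing comes out smaller than the 0→1 spacing by roughly the charging energy, i.e. a few percent of the fundamental frequency, consistent with the paper’s 1-10% range (at least for this transmon-style case).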

The paper also mentions the use of low-loss cavity resonators as memory devices for the superconducting qubits. I believe these resonators are connected to the qubits by a transmission line, creating a coupled system with a coupled Hamiltonian. I think Gabby presented a paper on using the connected cavity as a QEC method, but to be fully honest, I don’t know enough yet about how that cavity state is able to be treated as a qubit!

One interesting aside – the paper mentions a necessary condition on the qubit’s energy scales. To me, this feels like a restriction tied to the superconductor itself. I believe superconductivity is explained by BCS theory, where the formation of Cooper pairs requires the material to be below its critical temperature. However, I don’t exactly understand how the qubit’s energy scale relates to this temperature requirement when it comes to initializing the qubit.

### Superconducting Qubit Challenges

From here, the paper begins exploring some of the research topics to improve the superconducting qubit for use in the future, primarily in reducing the amount of error present in the qubit.

The first issue explained is **dielectric loss**, or the noise introduced by the capacitors in the circuit. For one thing, if the insulator is not perfectly smooth, it can leak energy and limit the qubit coherence. In addition, dielectric loss can be difficult to characterize, as the authors explain that dielectric loss at low temperatures is not linear and cannot be extrapolated from high-energy measurements. In fact, a difference of a factor of 1000 has been observed by the Martinis group. One possible solution is to use the Josephson junction’s self-capacitance as the capacitor for the circuit, growing the superconducting materials out of a “**crystalline material** instead of an amorphous oxide”. I may be wrong, but I think the Schoelkopf group is pursuing this in part – I remember discussions and demonstrations of Josephson junctions created on pieces of nanofabricated sapphire crystal. I’m not sure if that’s a specialty of the Schoelkopf lab, or if it is simply now a commonly used technique in the 6(!) years since 2012.
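To put that factor of 1000 in perspective, here is a back-of-the-envelope sketch with my own made-up numbers, assuming the qubit’s electric-field energy sits entirely in the lossy dielectric so that $latex T_1 \sim Q/\omega = 1/(\omega\tan\delta)$:

```python
import numpy as np

# Back-of-the-envelope only: assumes all of the qubit's electric-field energy
# sits in the lossy dielectric (participation ratio of 1), with made-up numbers.
f01 = 5e9               # qubit frequency in Hz (illustrative)
omega = 2 * np.pi * f01

limits = []
for tan_delta in (1e-3, 1e-6):  # amorphous oxide vs. a 1000x cleaner dielectric
    T1 = 1 / (omega * tan_delta)   # T1 ~ Q/omega with Q = 1/tan(delta)
    limits.append(T1)
    print(f"tan(delta) = {tan_delta:.0e}  ->  T1 limit ~ {T1 * 1e6:.2f} us")
```

A factor of 1000 in the loss tangent translates directly into a factor of 1000 in the coherence limit, which is why the choice of dielectric matters so much.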

Next, **flux noise** is explained and characterized as noise introduced by tuning the magnetic flux, which can limit the coherence time. The authors mention that typically, there is a “sweet spot” for flux, which allows the resonance frequency to not be sensitive to changes in magnetic flux. This section is much shorter, and I don’t understand it well :(

### Current IBM SCQC Results

For a sub-architecture, IBM chose to focus on the **flux qubit design**, as shown in the below figure. This uses three Josephson junctions in series, with a shunting capacitor around the third, larger junction. While this is the circuit-level description, I think in practice it looks quite different! I would love to see an entire picture of the qubit in the future :) The authors explain that their design is similar to the flux qubit, but that the use of the shunting capacitor (C-SHUNT) also makes it similar to the transmon and phase qubits.

To use such a qubit, they place it within a dilution refrigerator and couple it to a superconducting resonator, via a transmission line I believe. They can then determine the qubit state by distinguishing between two possible resonant frequencies, which can be measured via either amplitude or phase.
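A toy picture of that readout (my own sketch with illustrative parameters): the resonator frequency is pulled up or down by a dispersive shift depending on the qubit state, and probing at the bare frequency produces two distinguishable responses:

```python
import numpy as np

# Illustrative (made-up) parameters for a dispersive-readout sketch.
f_r = 7.0e9    # bare resonator frequency in Hz
chi = 1.0e6    # dispersive shift in Hz: the resonator sits at f_r +/- chi
kappa = 2.0e6  # resonator linewidth in Hz

def response(f_probe, f_res):
    """Complex transmission of a single-pole (Lorentzian) resonator."""
    return 1 / (1 + 2j * (f_probe - f_res) / kappa)

s0 = response(f_r, f_r + chi)  # qubit in |0>: resonator pulled up
s1 = response(f_r, f_r - chi)  # qubit in |1>: resonator pulled down
print(f"|0>: amp = {abs(s0):.3f}, phase = {np.degrees(np.angle(s0)):+.1f} deg")
print(f"|1>: amp = {abs(s1):.3f}, phase = {np.degrees(np.angle(s1)):+.1f} deg")
```

Probed at the bare frequency, the two states give identical amplitudes but opposite phases, so this particular probe point is a pure phase measurement; probing off-center instead converts the shift into an amplitude difference, which matches the paper’s “amplitude or phase” remark.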

One of the research directions that the team has taken is in creating a new type of two-qubit gate by tuning the interaction between two weakly coupled qubits, simulating a CNOT gate, using microwave excitations. I think this is related to the earlier discussion of tuning the magnetic flux to the “sweet spot” for limiting flux noise, but I could be badly mistaken!

At that, the paper begins discussing the future. I’m a bit sad about one sentence in the paper:

> A little extrapolation suggests that the reliability of our quantum devices will routinely exceed the threshold for fault-tolerant computing within the next five years.

This seems especially sad with a recent quote from Jay Gambetta, a current quantum info/computation project manager from IBM Research, at this year’s QConference:

> Over the next few years, we’re not going to have fault tolerance, but we’re going to have something that should be more powerful than classical – and how we understand that is the difficulty.

While still optimistic, it isn’t quite as happy as the picture that Steffen, DiVincenzo, Chow, Theis, and Ketchen painted in 2012. Oh well – not much more to do but continue working!