Month: March 2020

QCJC: Pino 2020

To start, a quick disclaimer that all of the opinions on this website written by me (Chunyang Ding) are mine and mine alone; my views do not represent the views of my employer, IonQ, or any past educational institutions!

Alright, let’s get down to talking about Honeywell’s latest paper then, on their QCCD architecture. For context, this is the most recent paper published by Honeywell about their latest system, which they claim has a quantum volume of 16; they have also put out a press release promising a QV of 64 in the near future (compared to the 32-QV system that IBM’s Raleigh currently has). I have to admit, there does not seem to be a ton of substance in this paper, for fairly good reason; it seems reasonable that Honeywell doesn’t want to leak information about their system to their competitors in both industry and academia. However, that does force this discussion to be a bit light on the details, with more focus on the rough outlines of what their system looks like.

To start, Honeywell does use the ion Ytterbium 171+ as the qubit, but also implements sympathetic cooling through Barium 138+. Sympathetic cooling [1] is when two different species of ions are brought into close proximity, such that the Coulomb interaction couples their motion. Typical laser cooling drives an optical transition of the ion, removing motional energy at the cost of scattering photons that add noise to the qubit states. Sympathetic cooling, by contrast, never directly illuminates the qubit ion, so its internal state stays coherent while the motion is cooled through the coolant ion. While stationary ions have somewhat manageable heating rates, this technique is all but required when you want to preserve qubit states while moving the ions to physically different locations on a chip trap, a process typically referred to as shuttling.

This Honeywell system implements merge/split operations using their in-house built chip trap, which seems to have 198 DC electrodes for the task. On their chip trap, several regions are designated as “gate zones”, where quantum operations are carried out, as well as “transport zones”, where the ions are shuttled. I’m not entirely sure whether those zones are purely virtual zones for ease of programming the FPGA, or whether there is a different density of DC electrodes that can more smoothly generate the waveforms needed to transfer ions from one region to the next. Also, perhaps there are high-fidelity and low-fidelity regions of control on the chip trap, in order to balance having a reasonable number of control boards against having good electronic control. It would also be interesting to understand how their laser system works behind the scenes – whether they still have a large DOE to target all of the potential regions of ions, or whether they just have something like four targeted beams at four locations and need to move the ions to one of those locations to do the necessary gate operation.

The authors note that Ytterbium and Barium ions are always paired in these circuits, in either Ba-Yb-Yb-Ba or Yb-Ba-Ba-Yb configurations, but they do not remark on whether there is any significant difference between the two, nor do they seem to mention how they choose which configuration to load. The authors do say that they always move Yb-Ba pairs together, and are able to split a 4-ion crystal into a 2-ion crystal. I’m sure that there must be interesting work being done to confirm that one of these configurations is properly loaded, but the details seem pretty skimpy here. One interesting aspect that the paper does mention is using a physical swap of the qubits to avoid logical swaps. I’m not entirely sure what this entails, because it seems to me that all of the ions in their chains should be all-to-all connected, so it shouldn’t be necessary to pay logical overhead to swap a qubit from one ion to another. That process does seem like it would take up logical overhead, but I’m just not seeing the application, unless they are admitting that their connectivity map is weaker at the edges than at the center. They note that these transports will on average “add less than two quanta of heating axially”, although again they do not really show any figures or document any procedures for measuring this. Also, since it wasn’t clear to me on my first read-through, it appears that there are at minimum two of these four-ion chains being loaded in the system. They say that there are two gate zones that each hold one of the four-ion crystals, although they don’t specify which zones they are using (my guess is the central two). That puts a minimum of four beams for cooling Yb, four beams for cooling Barium, and four beams for Yb gates.
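As a rough illustration of the overhead a physical swap avoids (this is my own toy accounting, not anything from the paper): a logical SWAP costs three two-qubit gates, while shuttling the ion itself costs no gates at all, only transport time and a bit of motional heating.

```python
# Toy gate-count comparison (my own illustration, not from the paper):
# moving a logical qubit k positions along a limited-connectivity line via
# SWAP gates costs 3*k two-qubit gates (SWAP = 3 CNOTs), while physically
# shuttling the ion costs zero gates, only transport time and heating.

def logical_swap_cost(distance_in_qubits: int) -> int:
    """Two-qubit gate count to route a qubit with SWAP gates."""
    return 3 * distance_in_qubits

def physical_transport_cost(distance_in_qubits: int) -> int:
    """Two-qubit gate count when the ion itself is shuttled."""
    return 0

for d in range(1, 5):
    print(d, logical_swap_cost(d), physical_transport_cost(d))
```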

While it is somewhat buried in the paper, it is interesting to note that Honeywell is also doing all of these operations inside a cryostat at 12.6 K, meaning that they have probably engineered some way to damp the vibrations from the compressor on top of the cryostat, or that this steady vibration simply does not affect the performance of the system significantly. The paper notes that going to cryo does help suppress anomalous heating, although there are no plots of any measurements of that either. Another thing that is breezed through very quickly is that they seem to be using a microwave antenna/horn to do … something. While microwave horns have been proposed since the 90s for driving spin-spin interactions, here it appears that they are using one to suppress memory errors through “dynamical decoupling”. They apply paired pulses with opposite phases during cooling, possibly similar to something like a globally applied Hahn echo.
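For intuition on what a globally applied echo buys you, here is a minimal sketch (my own toy model with made-up numbers, not Honeywell’s scheme): a static, shot-to-shot random detuning dephases the qubit over a wait time, and a single pi pulse halfway through refocuses that static contribution exactly.

```python
import numpy as np

# Minimal Hahn-echo sketch (my own toy model, made-up numbers): a static,
# shot-to-shot random detuning dephases a qubit during a wait time T.
# A pi pulse at T/2 inverts the accumulated phase, so the static part of
# the detuning cancels and the ensemble coherence is recovered.

rng = np.random.default_rng(0)
T = 1e-3                                                   # wait time, s
detunings = rng.normal(0.0, 2 * np.pi * 500, size=10_000)  # rad/s spread

phase_free = detunings * T                      # no echo: full phase spread
phase_echo = detunings * T / 2 - detunings * T / 2   # echo: static part cancels

print("coherence without echo:", np.mean(np.cos(phase_free)))
print("coherence with echo:   ", np.mean(np.cos(phase_echo)))
```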

The paper makes an interesting note about state preparation and measurement errors, commonly referred to as SPAM errors. The interesting part is how quickly they brush it aside, saying that their theory model bounds state-prep errors at 10^-4, while the solid angle of their photon detectors limits the measurement errors to 10^-3. This seems… odd, suggesting either a limit on the numerical aperture of their imaging lens, or that the discrimination between their bright and dark states is not particularly strong. I’m not really sure how to interpret this.
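For a sense of scale on the solid-angle argument (numbers are mine, not the paper’s), the fraction of isotropically emitted fluorescence collected by a lens of numerical aperture NA is (1 - cos θ)/2 with sin θ = NA:

```python
import numpy as np

# Back-of-the-envelope collection efficiency (my own numbers, not the paper's):
# a lens of numerical aperture NA subtends a solid angle 2*pi*(1 - cos(theta))
# with sin(theta) = NA, so the collected fraction of isotropic fluorescence is
# (1 - cos(theta)) / 2.

def collection_fraction(na: float) -> float:
    theta = np.arcsin(na)
    return (1 - np.cos(theta)) / 2

for na in (0.2, 0.4, 0.6):
    print(f"NA = {na:.1f}: collected fraction = {collection_fraction(na):.3f}")
```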

Finally, we get to some of the conclusions that the paper presents, in terms of benchmarking the system. Right now, there is quite a proliferation of different benchmarking schemes throughout quantum computing, from Google’s linear cross-entropy benchmarking fidelity [2], to single- and two-qubit gate fidelities, to directly measuring SPAM errors, to running quantum algorithms with known results (such as Bernstein-Vazirani and Hidden Shift). This paper uses a combination of Randomized Benchmarking (RB), which executes n-1 random Clifford gates along with one final Clifford gate that effectively “undoes” all of the previous rotations, and IBM’s Quantum Volume (QV) method [3]. The QV measurement seems rather similar to RB in that it executes random gates, although I think it is not limited to Clifford gates, and the paper notes that for n=4 there were always exactly 24 two-qubit gates. To obtain a QV of 2^n, the system must run random model circuits of depth n on n qubits, and reproduce the “heavy” outputs of a classically simulated version of those circuits more than two-thirds of the time, with a 2-sigma confidence level. This raises a bunch of questions, such as: What happens when n is so large that it is no longer possible to do the classical simulation? What kinds of gates are typically run for this? How do you determine how many tests to run in total? (The Honeywell team runs 100 random circuits here, without much discussion of why that number was chosen.)
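My reading of the QV pass criterion from Cross et al. [3] is roughly the following, sketched here (my own paraphrase, not Honeywell’s analysis code): each random circuit is classically simulated to find its “heavy” outputs, the device’s heavy-output fraction is measured for each circuit, and the test passes if the 2-sigma lower bound on the mean heavy-output probability exceeds 2/3.

```python
import numpy as np

# Sketch of the quantum-volume pass criterion as I understand it from
# Cross et al. (not Honeywell's analysis code): "heavy" outputs are those
# with ideal probability above the median; the device passes if the
# 2-sigma lower bound on the mean heavy-output fraction exceeds 2/3.

def passes_qv(heavy_fractions) -> bool:
    h = np.asarray(heavy_fractions, dtype=float)
    mean = h.mean()
    # simple binomial-style error bar on the mean over the set of circuits
    sigma = np.sqrt(mean * (1 - mean) / len(h))
    return mean - 2 * sigma > 2 / 3

# e.g. 100 random circuits with measured heavy-output fractions around 0.75
rng = np.random.default_rng(1)
example = rng.normal(0.75, 0.05, size=100)
print(passes_qv(example))
```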

Similarly confusing are the plots shown in Fig. 5, where the distribution of each run is shown on the y-axis (essentially a frequency plot rotated 90 degrees, and also made symmetric for some reason). There is a paragraph discussing theoretical simulations based on the depolarizing noise measured with RB, and the two distributions look like they overlap surprisingly well. I’m not sure how reasonable this is, or whether the different distributions point towards how strongly the choice of random circuit affects the overall QV number.
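My own cartoon of how an RB-derived depolarizing model could predict those distributions (not the paper’s actual simulation): a global depolarizing channel with circuit fidelity F mixes the ideal output distribution with the uniform one, and since the heavy set is half of all bitstrings, the heavy-output probability gets pulled toward 1/2.

```python
# My own cartoon (not the paper's simulation): under a global depolarizing
# channel with circuit fidelity F, the output distribution becomes
#     F * p_ideal + (1 - F) * uniform,
# and since the heavy set is half of all bitstrings,
#     h_noisy ~= F * h_ideal + (1 - F) * 0.5.

def heavy_output_prob(h_ideal: float, circuit_fidelity: float) -> float:
    return circuit_fidelity * h_ideal + (1 - circuit_fidelity) * 0.5

# The asymptotic ideal heavy-output probability for random circuits is about
# (1 + ln 2) / 2 ~= 0.85 (Cross et al.); the fidelity here is a made-up number.
print(heavy_output_prob(0.85, 0.92))
```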

Overall, interesting paper with lots of newly developed ideas, but very little discussion about the implementation of those techniques. Definitely something to watch for in the future!

[1] Larson et al., Sympathetic cooling of trapped ions: A laser-cooled two-species nonneutral ion plasma. Phys. Rev. Lett. 57, 70-73 (1986)
[2] Arute et al., Quantum supremacy using a programmable superconducting processor. Nature 574, 505-510 (2019)
[3] Cross et al., Validating quantum computers using randomized model circuits. arXiv:1811.12926 (2019)
Source: Pino et al., Demonstration of the trapped-ion quantum CCD computer architecture. arXiv:2003.01293 (2020)

QCJC: Blinov 2004

I received my introduction to quantum computing through the two “bibles” of ion trapping – a very careful study of Leibfried, Blatt, Monroe, and Wineland’s “Quantum dynamics of single trapped ions”, and a much less careful study of Wineland et al’s “Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions”. Those are not necessarily light and easy to read – they function more as giant review articles with detailed derivations. It doesn’t seem particularly practical to do any kind of journal club writeup of those papers, just because they are so large and unwieldy. Instead, I think I’m going to use a series of articles from the “early days” of Ytterbium ion trapping to focus on individual topics, piece by piece. Also, I will point out that Jameson has already written about trapped ions here and here – I would especially recommend the Linke writeup for a wonderful comparison of trapped-ion qubits with superconducting qubits.

Blinov’s 2004 review article features many of the same people as those other two tomes, but is presented in a much more accessible way. The authors begin their discussion with hyperfine qubits, which are so named because they use the hyperfine ground states of a single trapped ion. These splittings, on the order of GHz, have incredibly narrow and stable line widths. This means that the spacing between these states is extremely well defined, to the point where the hyperfine splitting of the Cesium atom currently serves as the functional definition of the SI second. These states also have very long radiative lifetimes, which means that once the qubit is in the excited state, it is very unlikely to spontaneously decay back to the ground state, reducing decoherence. Some ions that have a non-zero nuclear spin include 9Be+, 25Mg+, 43Ca+, 87Sr+, 137Ba+, 173Yb+, and 199Hg+.

Before we can get anywhere with these trapped ions, we must first trap them. Earnshaw’s theorem prevents us from using purely static (dc) fields to confine charged particles, so we need an rf (Paul) trap. This implementation uses a radio frequency (rf) potential, a very rapidly time-varying electromagnetic field that creates a pseudopotential, along with dc electrodes acting as end caps. This type of linear trap typically provides a reasonably harmonic well, so when the ions are cooled and trapped, they are confined to a single location. When multiple ions are loaded in, the Coulomb repulsion forces the ions into a linear array.
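For concreteness, here is the textbook pseudopotential estimate for such a trap (all parameters below are made up, not from the paper): the rf drive gives a Mathieu parameter q = 2eV_rf/(m r_0^2 Ω^2), and for small q the radial secular frequency is roughly ω ≈ qΩ/(2√2).

```python
import numpy as np

# Standard pseudopotential estimate for a linear Paul trap (textbook result;
# all numbers below are made up, not parameters from either paper).
# Mathieu parameter: q = 2 e V_rf / (m r0^2 Omega^2)
# Radial secular frequency (q << 1, a ~ 0): omega ~ q * Omega / (2*sqrt(2))

e = 1.602e-19              # elementary charge, C
amu = 1.661e-27            # atomic mass unit, kg
m = 171 * amu              # 171Yb+ mass
V_rf = 300.0               # rf amplitude, V (made up)
r0 = 0.5e-3                # trap radius, m (made up)
Omega = 2 * np.pi * 20e6   # rf drive frequency, rad/s (made up)

q = 2 * e * V_rf / (m * r0**2 * Omega**2)
omega_sec = q * Omega / (2 * np.sqrt(2))
print(f"q = {q:.3f}, radial secular frequency = {omega_sec / (2*np.pi) / 1e6:.2f} MHz")
```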

At this point, the paper discusses some of the more fundamental limitations of an ion trap. For instance, it notes that each additional ion introduced to the chain also brings along three more vibrational modes. These additional modes, which must be spectrally resolved to perform gates, create a denser “forest” of vibrational modes, and Heisenberg uncertainty places a fundamental limit on the tradeoff between frequency resolution and gate time. Therefore, small separations in frequency space require longer gates to achieve the same fidelities, making it difficult to scale up ion trapping. However, this is not an impossible problem. For instance, most of the time a quantum algorithm only needs to act on a few qubits at once. Therefore, if there were a way to split off “computational” qubits from “storage” qubits, and merge them back together later, then we might be able to perform large algorithms with a smaller maximum ion chain length.
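To see how the mode forest thickens, here is a minimal numpy sketch of the standard axial normal-mode calculation for a linear chain (following the usual treatment, e.g. James 1998; this is my own illustration, not a calculation from the paper): solve for the dimensionless equilibrium positions, build the Hessian of the potential, and take the square roots of its eigenvalues in units of the single-ion axial frequency.

```python
import numpy as np
from scipy.optimize import fsolve

# Axial normal modes of a linear ion chain (standard calculation, my own
# illustration). Positions are in the characteristic Coulomb length scale
# and mode frequencies are in units of the single-ion axial frequency wz.

def equilibrium(n):
    """Dimensionless equilibrium positions of n ions in a harmonic well."""
    def force(u):
        return np.array([
            u[i]
            - sum(1.0 / (u[i] - u[j])**2 for j in range(i))
            + sum(1.0 / (u[i] - u[j])**2 for j in range(i + 1, n))
            for i in range(n)
        ])
    guess = np.linspace(-1, 1, n) * n**0.56
    return fsolve(force, guess)

def axial_mode_freqs(n):
    """Axial mode frequencies (units of wz) from the Hessian of the potential."""
    u = equilibrium(n)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                a[i, i] = 1 + 2 * sum(1 / abs(u[i] - u[k])**3
                                      for k in range(n) if k != i)
            else:
                a[i, j] = -2 / abs(u[i] - u[j])**3
    return np.sort(np.sqrt(np.linalg.eigvalsh(a)))

for n in (2, 3, 5):
    print(n, np.round(axial_mode_freqs(n), 3))  # e.g. n=2 gives 1, sqrt(3)
```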

After ions are trapped, the experimenters perform optical pumping to initialize the ions into a well-defined spin state, which then represents the 0 (or 1) of the qubit. For readout, a circularly polarized laser is applied that is resonant with one of the transitions of the Cadmium ion that they use. This transition only scatters photons when the ion is in the excited spin state; however, the camera or photomultiplier tube can still register dark counts from randomly scattered light in the chamber, or miss photons because of poor quantum efficiency of the detector. Therefore, it is useful both to have a very good imaging objective that collects as much light as possible, and to integrate the collection over some extended time (in this case, 0.2 ms). By defining a photon-count “cutoff” between the 0 and 1 states, the authors are able to achieve a detection efficiency of >99.7%, as shown in Figure 3.
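A toy Poisson model of that photon-count cutoff (my own made-up numbers, not the paper’s) gives a feel for how a detection fidelity at that level can come about:

```python
from scipy.stats import poisson

# Toy threshold-detection model (my own made-up numbers, not the paper's):
# the "bright" state scatters photons with some mean count during the
# detection window, the "dark" state only contributes background counts,
# and a photon-number cutoff discriminates the two.

mean_bright = 10.0   # mean detected photons for the bright state (made up)
mean_dark = 0.1      # mean background counts for the dark state (made up)
cutoff = 2           # call the result "bright" if counts > cutoff

err_bright = poisson.cdf(cutoff, mean_bright)    # bright mistaken for dark
err_dark = 1 - poisson.cdf(cutoff, mean_dark)    # dark mistaken for bright
print(f"average detection error ~ {(err_bright + err_dark) / 2:.4f}")
```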

One thing to note is that the wavelength of the imaging transition is around 214.5 nm, which is quite far into the ultraviolet. That might be one of the issues with using Cadmium ions: at that wavelength it seems difficult both to source high-power lasers and to find optical coatings that can handle the kind of power needed to control long ion chains.

I think I will bump the discussion of gate control to another paper, but it is useful to note that the paper demonstrates its success using the Cirac-Zoller gate, a CNOT gate that is a predecessor of the more modern Molmer-Sorensen gate commonly used today. I am interested in better understanding the difference between these two gates, and especially why the field seems to have shifted so strongly towards MS gates.

Reference: Blinov, B. B., Leibfried, D., Monroe, C., Wineland, D. J., Quantum Computing with Trapped Ion Hyperfine Qubits. Quantum Information Processing 3, 45-59 (2004)