QCJC: Pino 2020

To start, a quick disclaimer that all of the opinions on this website written by me (Chunyang Ding) are mine and mine alone; my views do not represent the views of my employer, IonQ, or any past educational institutions!

Alright, let’s get down to talking about Honeywell’s latest paper, on their QCCD architecture. For context, this is the most recent paper published by Honeywell about their latest system, which they claim has a quantum volume of 16; they have also released a press release promising a QV of 64 in the near future (compared to the QV of 32 that IBM’s Raleigh currently has). I have to admit, there does not seem to be a ton of substance in this paper, for a fairly understandable reason: Honeywell presumably doesn’t want to leak information about their system to competitors in industry or academia. However, that does force this discussion to be a bit light on the details, with more focus on the rough outlines of what their system looks like.

To start, Honeywell does use the ion 171Yb+ for its qubits, but also implements sympathetic cooling with 138Ba+. Sympathetic cooling [1] is a technique in which two different species of ions are brought into close proximity, such that the Coulomb force couples their motion. Typical laser cooling drives an optical transition of the ion, bringing down the ion’s motional state in exchange for scattered photons that add noise to the qubit states. Sympathetic cooling, by contrast, never directly illuminates the qubit ion: the coolant ion is laser cooled, and the Coulomb coupling drains motional energy out of the qubit ion, leaving it more coherent. While stationary ions have somewhat manageable heating rates, this technique is all but required when you want to preserve qubit states while moving ions to physically different locations on a chip trap, a process typically referred to as shuttling.
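To make the idea concrete, here is a toy model, entirely my own illustration and not anything from the paper: two ions treated as Coulomb-coupled harmonic oscillators, where a damping force (standing in for laser cooling) acts only on the coolant ion, yet the qubit ion still loses its motional energy through the shared modes.

```python
import numpy as np

# Toy sympathetic cooling: two coupled oscillators; damping acts on ion 1
# (the "Ba" coolant) only, but ion 0 (the "Yb" qubit) still cools down.
# All parameters are arbitrary illustrative values, not from the paper.
dt, steps = 1e-3, 200_000
omega2, kappa, gamma = 1.0, 0.3, 0.05    # trap freq^2, coupling, damping
x = np.array([1.0, 0.0])                 # ion 0 starts "hot", ion 1 cold
v = np.zeros(2)
for _ in range(steps):
    a = -omega2 * x + kappa * (x[::-1] - x)  # trap + Coulomb-like spring
    a[1] -= gamma * v[1]                     # laser cooling on ion 1 only
    v += a * dt                              # semi-implicit Euler step
    x += v * dt
print("ion 0 energy:", 0.5 * (v[0]**2 + omega2 * x[0]**2))  # ~0, was 0.5
```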

This Honeywell system implements merging and splitting of ion crystals using their in-house chip trap, which seems to have 198 DC electrodes for the task. On the chip trap, several regions are designated as “gate zones”, where quantum operations are carried out, as well as “transport zones”, through which the ions are shuttled. I’m not entirely sure if those zones are purely virtual zones for ease of programming the FPGA, or if there is a different density of DC electrodes that can more smoothly generate the waveforms that transfer ions from one region to the next. Perhaps there are also high-fidelity and low-fidelity regions of control on the chip trap, as a trade-off between keeping a reasonable number of control boards and having good electronic control. It would also be interesting to understand how their laser system works behind the scenes – whether they still have a large DOE to target all of the potential regions of ions, or if they just have something like 4 targeted beams at 4 locations, and need to move the ions to one of those locations to do the necessary gate operation.
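For a rough picture of what “generating waveforms” might look like here, the sketch below builds a set of time-dependent electrode voltages that drag a potential well between two zones. The electrode layout, well shape, and smoothstep trajectory are all my own assumptions; the paper gives no such details.

```python
import numpy as np

# Hypothetical shuttling waveform: each DC electrode samples a Gaussian
# potential well whose minimum moves smoothly between two zones. None of
# these numbers come from the paper.
electrode_x = np.linspace(0.0, 1.0, 20)   # electrode centers (arb. units)
n_steps = 200                             # voltage samples per electrode

def well_center(t):
    """Smoothstep from x=0.2 to x=0.8: zero velocity at both endpoints."""
    s = 3 * t**2 - 2 * t**3
    return 0.2 + 0.6 * s

def electrode_voltages(center, width=0.08, depth=1.0):
    """Voltage on each electrode to approximate a well at `center`."""
    return -depth * np.exp(-((electrode_x - center) / width) ** 2)

waveform = np.array([
    electrode_voltages(well_center(t)) for t in np.linspace(0, 1, n_steps)
])  # shape (n_steps, n_electrodes), ready to stream out of the DACs
```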

The authors note that Ytterbium and Barium ions are always paired in these circuits, in either Ba-Yb-Yb-Ba or Yb-Ba-Ba-Yb configurations, but they do not remark on whether there is any significant difference between the two configurations, nor do they seem to mention how they choose which configuration to load. The authors do say that they always move Yb-Ba pairs together, and are able to split a 4-ion crystal into two 2-ion crystals. I’m sure there must be interesting work being done to confirm that one of these configurations is properly loaded, but the details seem pretty skimpy here. One interesting aspect the paper does mention is using a physical swap of the qubits to avoid logical swaps. I’m not entirely sure what this entails, because it seems to me that all of the ions in their chains should be all-to-all connected, in which case there should be no logical overhead needed to swap a qubit from one ion to another. A logical swap certainly does take up overhead, but I’m just not seeing the application, unless they are admitting that their connectivity map is weaker at the edges than at the center. They note that these transports will on average “add less than two quanta of heating axially”, although again they do not show any figures or document any procedures for measuring this. Also, since it wasn’t clear to me on my first read-through: there are at minimum two of these four-ion chains loaded in the system. They say that two gate zones each hold one of the four-ion crystals, although they don’t specify which zones they are using (my guess is the central two). That puts a minimum of four beams for cooling Yb, four beams for cooling Ba, and four beams for Yb gates.
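As a back-of-the-envelope illustration of why avoiding logical swaps matters (my own numbers, not the paper’s): a logical SWAP decomposes into three two-qubit gates, each carrying some infidelity, while a physical ion swap costs only a bit of transport heating that the sympathetic cooling can remove.

```python
# Cost of a logical SWAP vs. a physical ion swap (illustrative numbers).
two_qubit_infidelity = 8e-3            # assumed typical 2Q gate error
logical_swap_error = 1 - (1 - two_qubit_infidelity) ** 3  # 3 CNOTs
print(f"logical SWAP error ~ {logical_swap_error:.1%}")   # ~2.4%
# A physical swap instead adds motional heating (quanta) that sympathetic
# cooling can remove before the next gate: it costs time, not fidelity.
```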

While it is somewhat buried in the paper, it is interesting to note that Honeywell is doing all of these operations inside a cryostat at 12.6 K, meaning that they have probably engineered something to damp the vibrations of the compressor on top of the cryostat, or that this steady vibration simply does not affect the performance of the system significantly. The paper notes that going cryogenic helps suppress anomalous heating, although there are no plots of any measurements of that either. Another thing that is breezed through very quickly is that they seem to be using a microwave antenna/horn to do … something. While microwave horns have been proposed since the 90s for driving spin-spin interactions, here it appears they are using one to suppress memory errors through dynamical decoupling. They apply paired pulses with opposite phases during cooling, possibly similar to something like a Hahn echo applied globally.
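Here is a minimal sketch of what I imagine that globally applied, phase-alternating sequence looks like; the structure (pairs of pi pulses with opposite phases, spaced by a free-evolution time) is my reading of the paper’s one-line description, and the timing parameters are made up.

```python
import math

# Sketch of a global phase-alternating dynamical decoupling sequence:
# pairs of pi pulses with opposite phases, interleaved with free evolution,
# so slow qubit-frequency drifts are echoed away (Hahn-echo style).
def dd_sequence(n_pairs, tau):
    """Return a list of (time, phase) pi pulses; tau is the free-evolution gap."""
    pulses = []
    t = tau
    for _ in range(n_pairs):
        pulses.append((t, 0.0))       # pi pulse, phase 0
        t += tau
        pulses.append((t, math.pi))   # pi pulse, opposite phase
        t += tau
    return pulses

print(dd_sequence(n_pairs=2, tau=10e-6))  # e.g. 4 pulses over ~40 us
```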

The paper makes an interesting note about state preparation and measurement errors, commonly referred to as SPAM errors. The interesting part is how quickly they brush them aside, saying that their theory model bounds state-prep errors at 10^-4, while the solid angle of their photon detectors limits the measurement errors to 10^-3. This seems… odd, implying either a bound on their imaging lens’s numerical aperture, or that the discrimination between their bright and dark states is not particularly strong. I’m not really sure how to interpret this.
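For scale, here is the trivial combination of those two quoted bounds, just to show that readout dominates the SPAM budget (my arithmetic, not the paper’s):

```python
# Combining the quoted bounds: SPAM error is dominated by measurement.
p_prep, p_meas = 1e-4, 1e-3
p_spam = 1 - (1 - p_prep) * (1 - p_meas)
print(f"combined SPAM error ~ {p_spam:.2e}")  # ~1.1e-3, mostly readout
```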

Finally, we get to some of the conclusions that the paper presents, in terms of benchmarking the system. Right now, there is quite a proliferation of different benchmarking schemes throughout quantum computing, from Google’s linear cross-entropy benchmarking fidelity [2], to single- and two-qubit gate fidelities, to directly measuring SPAM errors, to running quantum algorithms with known results (such as Bernstein-Vazirani and Hidden Shift). This paper uses a combination of Randomized Benchmarking (RB), which executes n-1 random Clifford gates followed by one final Clifford gate that effectively “undoes” all of the previous rotations, and IBM’s Quantum Volume (QV) method [3]. The QV measurement is rather similar in spirit to RB, in that it executes randomly chosen gates, although I believe it is not limited to Clifford gates, and the paper notes that for n=4 there were always exactly 24 two-qubit gates. To obtain a QV of 2^n, the system must run random circuits of depth n on n qubits, and its outputs must match the heavy outputs of a classically simulated version of those circuits more than two-thirds of the time, with 2-sigma confidence. This raises a bunch of questions, such as: What happens when n is so large that it is no longer possible to do the classical simulation? What kinds of gates are typically run for this? How do you determine how many circuits to run in total? (The Honeywell team runs 100 random circuits here, without much discussion of why that number was chosen.)
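To pin down that pass criterion, here is a sketch of the QV test as I understand it from Cross et al. [3]; the helper function and the toy data are mine, and the real protocol prescribes a specific confidence-interval construction that I am approximating with a simple 2-sigma bound on the mean.

```python
import numpy as np

def qv_passes(heavy_counts, shots_per_circuit):
    """Pass at QV 2^n if the 2-sigma lower bound on the mean heavy-output
    fraction across random circuits exceeds 2/3 (approximate criterion)."""
    fractions = np.asarray(heavy_counts) / shots_per_circuit
    mean = fractions.mean()
    sem = fractions.std(ddof=1) / np.sqrt(len(fractions))
    return mean - 2 * sem > 2 / 3

# Toy data: 100 random circuits (as Honeywell ran), 500 shots each,
# with an assumed true heavy-output probability of ~75%.
rng = np.random.default_rng(0)
fake_counts = rng.binomial(500, 0.75, size=100)
print(qv_passes(fake_counts, shots_per_circuit=500))  # True for this toy data
```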

Similarly confusing are the plots in Fig. 5, where the distribution of each run is shown on the y-axis (essentially a frequency plot rotated 90 degrees, and also made symmetric for some reason – a violin plot). There is a paragraph discussing theoretical simulations based on the depolarizing noise measured with RB, and the two distributions look like they overlap surprisingly well. I’m not sure how reasonable this is, or whether the spread of the distributions points toward how strongly the choice of random circuits affects the overall QV number.
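My guess at the kind of simple model that could underlie those simulated distributions, with every number here assumed rather than taken from the paper: under global depolarizing noise, the expected heavy-output probability interpolates between the ideal value for random circuits (around 0.85) and 0.5 for a fully mixed state.

```python
# Guess at a depolarizing model for the QV heavy-output probability.
# All numbers are assumptions for illustration, not from the paper.
two_qubit_fidelity = 0.97          # per-gate fidelity, e.g. from RB
n_two_qubit_gates = 24             # the paper's count for n=4 circuits
F_circuit = two_qubit_fidelity ** n_two_qubit_gates
h_ideal, h_mixed = 0.85, 0.5       # heavy-output prob: ideal vs fully mixed
h_model = F_circuit * h_ideal + (1 - F_circuit) * h_mixed
print(f"modeled heavy-output probability ~ {h_model:.3f}")  # ~0.67
```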

Overall, interesting paper with lots of newly developed ideas, but very little discussion about the implementation of those techniques. Definitely something to watch for in the future!

[1] Larson et al., “Sympathetic Cooling of Trapped Ions: A Laser-Cooled Two-Species Nonneutral Ion Plasma,” Phys. Rev. Lett. 57, 70–73 (1986)
[2] Arute et al., “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505–510 (2019)
[3] Cross et al., “Validating quantum computers using randomized model circuits,” arXiv:1811.12926 (2019)
Source: Pino et al., “Demonstration of the trapped-ion quantum CCD computer architecture,” arXiv:2003.01293 (2020)
