Author: seattlechunny

QCJC: Kannan 2020

I’m not sure if you can tell, but I’ve recently started moving away from reading “seminal” papers and instead working through the new, hot papers of today. I suppose that one danger of doing so is that it’s harder to tell whether the individual papers I read will pan out to be actual game changers, whereas the papers from a decade ago have already (partially) withstood the test of time. However, it is life on the cutting edge that is more exciting – more area to explore, question, and discover!

This paper from the Oliver group at MIT explores a new topological way to use superconducting qubits for quantum simulations. Although we are still squarely in the superconducting space, the title, as well as much of this paper, takes inspiration from the atomic nature of these superconducting qubits – primarily, the fact that superconducting qubits coupled to a microwave waveguide act very similarly to atomic qubits interacting with some wavelength of light. Typically, we see the atom as being much smaller than the wavelength that it interacts with. However, these “giant atoms” (which I cannot tell whether they actually exist in AMO physics or not?) are so large that they interact with multiple peaks of the wave at the same time. In this superconducting landscape, exactly where these qubits interact with the waveguide can be carefully controlled, creating the beautiful braiding pattern shown in Figure 1c, d, e. Instead of the qubit interacting with the waveguide at a single junction, there are multiple coupling points to consider, which these authors claim can be used and tuned to protect the qubits even further.

The key idea brought up in this paper is that the photons in the waveguide will experience a phase shift that depends on the resonant frequency of the qubit. Since this group is using tunable transmon qubits, this is a fairly easily adjustable parameter. The theory then shows that the relaxation rate of the qubit strongly depends on that phase, leading to the plot in Fig. 2b. The authors argue that when \phi = \frac{\pi}{2}, the qubits are maximally decoupled from relaxation into the waveguide. However, instead of the usual corresponding trade-off of having slower gates, there are still “strong physical couplings to the continuum of modes in a waveguide”.
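To get a feel for where a decoupling point like \phi = \frac{\pi}{2} can come from, here is a minimal interference sketch in Python. My reading is that each atom’s own two coupling points are separated by a phase of 2\phi (since the braided partner’s coupling point sits in between), so the emitted amplitudes can cancel – this is my simplification, not the paper’s full circuit model:

```python
import numpy as np

# Toy model: a giant atom with two coupling points of equal strength
# sqrt(gamma), separated by a propagation phase of 2*phi (my assumption,
# with the braided partner's coupling point sitting halfway between).
gamma = 1.0
phi = np.linspace(0, np.pi, 181)

amplitude = np.sqrt(gamma) * (1 + np.exp(2j * phi))  # coherent sum
Gamma = np.abs(amplitude) ** 2                       # = 4*gamma*cos(phi)**2

print(Gamma[np.searchsorted(phi, np.pi / 2)])  # ~0: decoherence-free point
```

The same interference that kills the emission into the waveguide does not have to kill the qubit–qubit exchange, since the two atoms’ coupling points sit at different phases – which is, as far as I can tell, the whole trick of the braided geometry.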

The justification for this seems to lie primarily in the braided design of these giant atoms. For typical small atoms, the interaction strength is proportional to \sin(\theta / 2), where \theta is the phase delay of a photon travelling from one qubit’s coupling point to the other qubit’s coupling point. In the braided configuration, however, there are multiple coupling points. Given that the two qubits have uniform coupling strengths to the waveguide, the expression simplifies to that of Eq. 5. At the minimal decoherence point of \phi = \frac{\pi}{2}, the interaction strength is directly proportional to \gamma.

This poses a few interesting questions – namely, what would the interaction strength look like for braided giant atoms that are not symmetric in this way? Could you engineer a different interaction strength relationship in that case, and if so, could you actually amplify the signal in some way? However, the symmetry of the current system does make it much simpler to analyze, with all of the benefits associated with that.

In fact, the authors go on to engineer two slightly detuned qubits, such that the decoherence-free frequencies of the two qubits were separated by 720 MHz, allowing the qubits to be protected not only from the environment, but also from each other. The purpose here is a little less clear to me – are they planning to tune the qubits closer to each other only during swapping gates, or just accept slower exchange rates? They do appear to achieve excitation exchange rates of 735 kHz, but I don’t really have a good intuition as to whether this is good or bad.

In general, I think the bigger picture of this idea – of using giant atoms that are braided together – sounds like it has a vast amount of territory to be explored. Right now, the two qubits are braided together in the simplest configuration. But what about three qubits? Or four? The number of knots that can be formed – especially if you use some kind of 2.5D geometry – seems incredibly vast. If a different error syndrome can be prevented by each braiding pattern, then weaving together different braided atoms might create a fairly robust structure.

I think my main remaining question at this point is how these qubits differ from normal transmon qubits, if there is any difference at all. It seemed to me from Figure 1 that they are just using normal transmons with additional 50-Ohm-terminated resonators hooked on each side. Is there some limit to the number of points that a single transmon can be connected to as a giant atom? Is there more sensitivity to asymmetric flux noise from either of the two pathways?

Source: Kannan et al., Waveguide Quantum Electrodynamics with Superconducting Artificial Giant Atoms. Nature 583, 775 (2020)

QCJC: Kono 2020

This was a fairly interesting paper that focuses on the introduction of what the authors call a Josephson Quantum Filter (JQF), which they claim reduces qubit decoherence while not suffering from the conventional trade-off of requiring a stronger Rabi drive or longer gates. While I don’t think I clearly follow their explanation of how this JQF is created and used, it seems like the basic idea of this device would be a very interesting tool for the future.

The basic working idea of the JQF looks very similar to a conventional flux qubit, where there is a small loop with Josephson junctions that can be tuned to be resonant at some frequency. In this experiment, the authors set the JQF frequency to be very close to that of the (transmon) qubit that they are using, and separate it from the qubit by half a wavelength of transmission line. Control signals are sent in such that they must pass through the JQF before they reach the qubit. The primary purpose of the JQF, however, does not seem to be to filter out noise that comes in from the signal line, but to prevent the qubit from spontaneously decaying. The mental picture that I had was that the JQF is a piece of one-way glass, where only when the “light” is turned on from the outside is there some kind of transmitted signal. To me, this almost sounds like a saturable band-stop filter, so perhaps there is some way to use conventional electrical engineering concepts here as well? In any case, Fig. 1 is quite useful for seeing some basic illustrations of how this JQF works.
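The one-way-glass picture can be made semi-quantitative with the textbook waveguide-QED result for a two-level system coupled to a 1D transmission line: at low power, a resonant single photon is almost perfectly reflected. This is only my sketch of the saturable-mirror intuition, with made-up numbers, not the paper’s own model:

```python
import numpy as np

# Weak-signal reflection off a two-level "atom mirror" in a 1D line.
Gamma_1d = 2 * np.pi * 1e6    # radiative decay into the line (assumed)
Gamma_loss = 2 * np.pi * 5e4  # non-radiative loss (assumed)
delta = 2 * np.pi * np.linspace(-10e6, 10e6, 401)  # detuning from the JQF

Gamma_tot = Gamma_1d + Gamma_loss
r = -(Gamma_1d / 2) / (1j * delta + Gamma_tot / 2)  # linear response
print(np.abs(r).max() ** 2)  # ~0.9: weak resonant signals mostly reflect
```

The key point is that this is only the linear (single-photon) response: a strong resonant control pulse saturates the two-level system, the reflection collapses, and the pulse gets through to the qubit – which is how I read the “one-way” behavior.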

The authors show quite a bit of theory explaining how the excited states of the qubit are coupled to dark states of the JQF, but to be honest, I was almost completely unable to follow the formalism in their theory section, especially when they discussed the “strongly asymmetric external coupling to the control lines”. This seemed like a really significant part of why the JQF works, but I was not able to follow how “maximizing the correlated decay” leads to this behavior.

However, taking everything at face value, it appears that the JQF is able to act as a reflecting barrier that prevents the qubit from spontaneously decaying, yet becomes transparent when a pulse of energy is sent through at its resonant frequency. This allows control signals to go through and execute faithful Rabi flops on the qubit, making it an advantageous tool to use. But what does that mean if you want to drive the qubit off resonance? This … might be an embarrassing shortcoming in my own knowledge of gate drives for transmon qubits, but I thought slightly off-resonant transitions (detuned by a few MHz?) are needed for various complex operations. Is that not true? Am I mixing trapped ion knowledge with superconducting knowledge here? Is there some bandwidth over which the JQF still allows signals to pass through? The authors do note that when the JQF is far-detuned from the qubit, the qubit acts as if the JQF does not exist, which also doesn’t quite make sense to me.

Intriguingly, one of the principal drawbacks of the JQF is that it might do its job too effectively. The authors note that using the JQF increases the thermal population of the qubit by almost a factor of 8. This is not because the JQF introduces noise, but because it prevents the cold dilution refrigerator environment from cooling the qubit down. Despite this higher thermal population, the authors report relaxation and coherence time improvements of about a factor of 4.

One of the more confusing sections of this paper is where the authors describe replacing the “transmon JQF with a two-level JQF with the same parameters”. I have no clue what they mean here. What are the same parameters? Did they swap out the JQF loop for something else? Does the JQF just not have Josephson junctions anymore? What’s the point of doing this? I am quite unclear about both the purpose of this and how it is done.

Overall, this paper does seem to introduce a new tool that might be useful in the future, but I am a bit doubtful of the scalability of this JQF. Is it then necessary to have a JQF, with its half-wavelength resonator, attached to every qubit that you want to produce? That sounds like a lot of room for error, especially when you are trying to drive multi-qubit systems.

Source: Kono et al., Breaking the Trade-Off between Fast Control and Long Lifetime of a Superconducting Qubit. Nature Comm. 11, 3683 (2020)

QCJC: Barends 2020

There is always a tradeoff between gate speed and control of qubits. At the most fundamental limit, there is Heisenberg uncertainty, \Delta E \Delta t \geq \frac{\hbar}{2}. However, before we even reach that limit, we tend to be bound by other types of leakage. When we try to move a qubit state too rapidly past the resonance of a higher, unwanted level, there is an increased likelihood of leakage into that level. Since that higher level is not controlled for, this shows up as a loss in fidelity of the qubit, and even worse, having some part of the quantum state in those higher levels can lead to cascading errors down the line. This is the motivation for many common features in superconducting qubits, such as the implementation of resonator cavities and the slow(er), adiabatic[1] gates of the past.

Here, Google’s team uses the advantages of having a frequency-tunable transmon qubit to execute faster, diabatic gates through some clever manipulation of the resonant energies of the qubit. They argue that their capacitively coupled two-qubit system typically exhibits 6 relevant states: four computational states (|00>, |01>, |10>, |11>) as well as two noncomputational (i.e., garbage) states: |02> and |20>. They don’t seem to offer a full energy diagram or two-tone spectroscopy here, so it’s hard to tell whether there are even higher states that they are just ignoring, or why states like |12>, |21>, and |22> might not also come into play. However, my assumption is that those remaining states are very different in energy, and therefore much less likely to come into resonance with the transition of the swap gates.

The gate that this team tries to implement is the SWAP gate, where the population in |01> transfers to |10> and vice versa. However, during this gate, any population in the |11> state has the chance to interact with the state |\psi_b\rangle = \frac{|20\rangle + |02\rangle}{\sqrt{2}}. The complementary Bell-like state, |\psi_d\rangle = \frac{|02\rangle - |20\rangle}{\sqrt{2}}, is decoupled and does not interact. In equation 2, the authors express the probability of a transfer of population from the |11> state to the |\psi_b\rangle state, finding that there exists a null point at which there is no transfer. That null point then implies a certain relationship between the interqubit coupling and the nonlinearity of the qubits, as shown in equation 3. Both of those parameters can be controlled – the coupling strength by the size of the capacitor, and the nonlinearity/anharmonicity of the qubit through the ratio \frac{E_J}{E_C}[2] of the qubit.

However, there remains the question of whether such a relationship holds for non-ideal SWAP gates, where instead of a perfect square pulse, there are some ramp-up and ramp-down slopes. In addition, I think there is also a concern here about frequency-dependent couplings that I do not fully follow. In particular, the authors identify in equation 3 an integer n that needs to be satisfied, and in Fig 1(c), plot curves of that integer in frequency/hold-time space. I’m not sure what it actually represents, or what the choice of n = 4 means physically.
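My best guess for where the integer comes from is the generic two-level Rabi formula: leakage from |11> into the bright state returns to zero whenever the generalized Rabi frequency completes a whole number of cycles during the hold time. Here is a toy version with made-up couplings and detuning, standing in for the paper’s Eq. 2 (not the paper’s actual numbers or expression):

```python
import numpy as np

# Rabi-style leakage from |11> into the bright state (toy numbers).
g = 2 * np.pi * 5e6        # |11> <-> bright-state coupling (assumed)
Delta = 2 * np.pi * 200e6  # detuning set by the nonlinearities (assumed)
t = np.linspace(0, 100e-9, 2001)

Omega = np.sqrt((2 * g) ** 2 + Delta ** 2)  # generalized Rabi frequency
P_leak = (2 * g) ** 2 / Omega ** 2 * np.sin(Omega * t / 2) ** 2

# Nulls occur when Omega * t = 2*pi*n for integer n; n then counts how
# many full leakage oscillations fit inside the hold time.
t_null = 2 * np.pi * 4 / Omega  # the n = 4 null, for example
print(P_leak[np.searchsorted(t, t_null)])  # ~0
```

If this reading is right, n = 4 would just label the operating point where four full leakage oscillations fit within the hold time – though I am not confident this is exactly the paper’s n.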

There is an interesting discussion here about how this SWAP gate is actually implemented. First, they drive flux pulses on the two qubits so that they meet at the interaction frequency, allowing the two states to interact. To drive that swap transition, one would rotate the state vector of the system by an angle \pi about the initial Bloch vector, m(0). However, this initial Bloch vector is not exactly aligned with the x-axis of the experiment, due to the large z component of the field that controls the frequency detuning. Therefore, a single pulse is insufficient to fully execute the SWAP gate, and some “overshoot” of the control pulse, on the order of a few MHz, is required. Several of the remaining plots – especially those in Fig. 3 – focus on using this overshoot parameter \Delta to tune the gate. The overshoot parameter, as well as a hold-time parameter, can be identified from 1D microwave scans.

There was one line that seemed a bit confusing – the authors note that, in Fig 2(c), “the minima of the leakage channel dip down to different values – a consequence of the qubits having dissimilar nonlinearities”. It’s unclear to me what the effect of the different minima is, but the “dissimilar nonlinearities” are already very close – less than a 17 MHz difference. In fact, in an earlier paragraph, the authors even note the “nearly constant nonlinearities \eta / 2\pi of 223 and 240 MHz”. Therefore, if they really need matching dips, could that be engineered?

Much of the remaining paper focuses on the benchmarking of their qubits and gates, especially using their own cross-entropy benchmarking scheme. I think I will pass on discussing that for now. Overall, it seems like this is an interesting gate scheme, even if it requires frequency-tunable qubits in order to operate properly.

[1] that is, gates that change the system slowly enough that it remains in its instantaneous eigenstate. I typically think of the picture presented in Griffiths QM – if you transport a moving pendulum sufficiently slowly, it will keep the same oscillation, but if you jerk it around, it will be affected. Similarly, if you have a particle in the ground state of the infinite square well and very slowly expand the walls, the particle will remain in the ground state.
[2] See D. I. Schuster’s thesis, Circuit Quantum Electrodynamics, 4.3.2

Source: Barends et al., Diabatic Gates for Frequency-Tunable Superconducting Qubits. Phys. Rev. Lett. 123, 210501 (2019)

QCJC: Pino 2020

To start, a quick disclaimer that all of the opinions on this website written by me (Chunyang Ding) are mine and mine alone; my views do not represent the views of my employer, IonQ, or any past educational institutions!

Alright, let’s get down to talking about Honeywell’s latest paper, on their QCCD architecture. For context, this is the most recent paper published by Honeywell about their latest system, which they claim has a quantum volume of 16; they have also put out a press release promising to achieve a QV of 64 in the near future (compared to the QV of 32 that IBM’s Raleigh currently has). I have to admit, there does not seem to be a ton of substance in this paper, for a fairly good reason: it seems reasonable that Honeywell doesn’t want to leak information about their system to their competitors in both industry and academia. However, that does force this discussion to be a bit light on details, with more focus on the rough outlines of what their system looks like.

To start, Honeywell uses the ion Ytterbium 171+ for its qubits, but also implements sympathetic cooling through Barium 138+. Sympathetic cooling[1] is when two different species of ions are brought into close proximity, such that the Coulomb force couples them. Typical laser cooling drives an optical resonance of the ion, which brings down the motional state of the ion in exchange for scrambling the qubit states. Sympathetic cooling, by contrast, does not directly illuminate the qubit ion, resulting in a more coherent qubit. While stationary ions have somewhat manageable heating rates, this technique is all but required when you want to preserve qubit states while moving the ions to physically different locations on a chip trap, a process typically referred to as shuttling.

This Honeywell system implements merging and splitting using their in-house built chip trap, which seems to have 198 DC electrodes for the task. On their chip trap, several regions are designated as “gate zones”, where quantum operations are carried out, as well as “transport zones”, where the ions are shuttled. I’m not entirely sure if those zones are purely virtual zones for ease of programming the FPGA, or if there is a different density of DC electrodes that would more smoothly generate the waveforms that transfer ions from one region to the next. Also, perhaps there are high-fidelity and low-fidelity regions of control on the chip trap, in order to balance having a reasonable number of control boards against having good electronic control. It would also be interesting to understand how their laser system works behind the scenes – whether they still have a large DOE to target all of the potential regions of ions, or if they just have something like 4 targeted beams at 4 locations, and need to move the ions to one of those locations to do the necessary gate operation.

The authors note that Ytterbium and Barium ions are always paired in these circuits, in either Ba-Yb-Yb-Ba or Yb-Ba-Ba-Yb configurations, but they do not remark on whether there is any significant difference between these two configurations, nor do they seem to mention how they choose which configuration to load. The authors do say that they always move Yb-Ba pairs together, and are able to split a 4-ion crystal into 2-ion crystals. I’m sure that there must be interesting work being done to confirm that one of these configurations is properly loaded, but the details seem pretty skimpy here. One interesting aspect that the paper does mention is using a physical swap of the qubits to avoid logical swaps. I’m not entirely sure what this entails, because it seems to me that all of the ions in their chains should be all-to-all connected, so there shouldn’t be any logical overhead in swapping a qubit from one ion to another – unless they are admitting that their connectivity map is weaker at the edges than at the center. They note that doing these transports will on average “add less than two quanta of heating axially”, although again they do not really show any figures or document any procedures for measuring this.

Also, since it wasn’t clear to me on my first read-through: it appears that there are at minimum two of these four-ion chains loaded in the system. They say that there are two gate zones that each hold one of the four-ion crystals, although they don’t specify which zones they are using (my guess is the central two zones). That puts a minimum of four beams for cooling Yb, four beams for cooling Barium, and four beams for Yb gates.

While it is somewhat buried in the paper, it is interesting to note that Honeywell is doing all of these operations inside a cryostat at 12.6 K, meaning that they have probably engineered something to damp the vibrations of the compressor on top of the cryostat, or that this steady vibration does not significantly affect the performance of the system. The paper notes that going to cryo does help in suppressing anomalous heating, although there are no plots of any measurements of that either. Another thing that is breezed through very quickly is that they seem to be using a microwave antenna/horn to do … something. While microwave horns have been proposed since the 90s for driving spin-spin interactions, it appears that they are using it to suppress memory errors through “dynamic decoupling” in this system. They apply paired pulses with opposite phases during cooling, possibly similar to something like a globally applied Hahn echo.

The paper makes an interesting note about state preparation and measurement errors, commonly referred to as SPAM errors. The interesting part is how quickly they brush them aside, saying that their theory model bounds state prep errors at 10^-4, while the solid angle of their photon detectors limits the measurement errors to 10^-3. This seems… odd, putting bounds either on their imaging lens numerical aperture, or suggesting that the discrimination between their bright and dark states is not particularly strong. I’m not really sure how to interpret this.

Finally, we get to some of the conclusions that the paper presents, in terms of benchmarking the system. Right now, there is quite a proliferation of different benchmarking schemes throughout quantum computing, from Google’s linear cross-entropy benchmarking fidelity [2], to single- and two-qubit gate fidelities, to directly measuring SPAM errors, to using quantum algorithms with known results (such as Bernstein-Vazirani and Hidden Shift). This paper uses a combination of Randomized Benchmarking (RB), which executes n-1 random Clifford gates along with one final Clifford gate that effectively “undoes” all of the previous rotations, as well as IBM’s Quantum Volume (QV) method [3]. This QV measurement seems rather similar to the RB method in that it randomly executes gates, although I think it is not limited to Clifford gates, and the paper notes that for n = 4, there were always exactly 24 two-qubit gates. To obtain a QV of 2^n, the system must run circuits of depth n on n qubits, and reproduce the “heavy” outputs of a classically simulated version of those circuits more than two-thirds of the time, with a 2-sigma confidence level. This raises a bunch of questions, such as: What happens when n is so large that it is no longer possible to do the classical simulation? What kinds of gates are typically run for this? How do you determine how many tests to run in total? (The Honeywell team runs 100 random circuits here, without much discussion of why this number was chosen.)
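For my own sanity, here is a sketch of the QV pass criterion in Python. The “heavy” outputs are the bitstrings whose ideal probability exceeds the median of the ideal distribution; a test passes if the measured heavy-output fraction beats 2/3 by two standard deviations. The ideal distribution below is just a random stand-in, not a simulated circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
ideal = rng.dirichlet(np.ones(2 ** n))  # stand-in ideal output probabilities
heavy = ideal > np.median(ideal)        # the "heavy" bitstrings

shots = 1000
counts = rng.multinomial(shots, ideal)  # a pretend noise-free device
hop = counts[heavy].sum() / shots       # heavy-output probability

sigma = np.sqrt(hop * (1 - hop) / shots)
print(hop, hop - 2 * sigma > 2 / 3)     # must hold across the random circuits
```

In the real protocol, the 2-sigma bound is taken over the whole ensemble of (here, 100) random circuits rather than per circuit, which is presumably where the choice of 100 comes in.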

Similarly confusing are the plots they show in Fig 5, where the distribution of each run is shown on the y-axis (essentially a frequency plot rotated 90 degrees, and also made symmetric for some reason). There is a paragraph discussing how they ran theoretical simulations based on the depolarizing noise measured with RB, and the two look like they overlap surprisingly well. I’m not sure how this is reasonable, or whether the different distributions point towards how strongly the choice of random circuit affects the overall QV number.

Overall, interesting paper with lots of newly developed ideas, but very little discussion about the implementation of those techniques. Definitely something to watch for in the future!

[1] Larson et al., Sympathetic Cooling of Trapped Ions: A Laser-Cooled Two-Species Nonneutral Ion Plasma. Phys. Rev. Lett. 57, 70-73 (1986)
[2] Arute et al., Quantum supremacy using a programmable superconducting processor. Nature 574, 505-510 (2019)
[3] Cross et al., Validating quantum computers using randomized model circuits. arXiv:1811.12926 (2019)
Source: Pino et al., Demonstration of the QCCD trapped-ion quantum computer architecture. arXiv:2003.01293 (2020)

QCJC: Blinov 2004

I received my introduction to quantum computing through the two “bibles” of ion trapping – a very careful study of Leibfried, Blatt, Monroe, and Wineland’s “Quantum dynamics of single trapped ions”, and a much less careful study of Wineland et al.’s “Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions”. Those are not necessarily light and easy reads – they function more as giant review articles with detailed derivations. It doesn’t seem particularly practical to do any kind of journal club writeup of those papers, just because they are so large and unwieldy. Instead, I think I’m going to use a series of articles from the “early days” of Ytterbium ion trapping to focus on individual topics, piece by piece. Also, I will point out that Jameson has already written about trapped ions here and here – I would especially recommend the Linke writeup for a wonderful comparison of trapped ion qubits with superconducting qubits.

Blinov’s 2004 review article features many of the same people as those other two tomes, but is presented in a much more accessible way. The authors begin their discussion with hyperfine qubits, which are so named because they use the hyperfine ground states of a single trapped ion. These splittings, of order GHz, have incredibly stable and narrow line widths. This means that the spacing between these states is extremely well defined, to the point where the hyperfine splitting of the Cesium atom currently serves as the functional definition of the SI second. They also have very long radiative lifetimes, which means that once the qubit is in the excited hyperfine state, it is very unlikely to spontaneously decay back to the ground state, reducing decoherence. Some ions that have a non-zero nuclear spin include 9Be+, 25Mg+, 43Ca+, 87Sr+, 137Ba+, 173Yb+, and 199Hg+.

Before we can get anywhere with these trapped ions, we must first trap them. Earnshaw’s theorem prevents us from using purely static (dc) fields to trap charged particles, so we need to use an rf (Paul) trap. This implementation uses a radio frequency (rf) potential – a very rapidly time-varying electromagnetic field that creates a pseudopotential – along with dc electrodes that act as end caps. This type of linear trap typically provides a reasonably harmonic well, so when the ions are cooled and trapped, they are confined to a single location. When multiple ions are loaded, Coulomb repulsion forces the ions into a linear array.
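The standard way to see why this works is the Mathieu equation for the ion’s radial motion, x'' + (a - 2q cos(2\tau))x = 0, where a and q encode the dc and rf voltages. A quick numerical integration with illustrative parameters (not any particular trap’s) shows the bounded motion – slow secular oscillation with fast micromotion riding on top:

```python
import numpy as np

# Toy integration of the radial Mathieu equation for an rf trap,
# x'' + (a - 2*q*cos(2*tau)) * x = 0, with a, q in the stable region.
a, q = 0.0, 0.2          # illustrative values, not a real trap's
tau = np.linspace(0, 200, 200001)
dt = tau[1] - tau[0]

x, v = 1.0, 0.0
xs = np.empty_like(tau)
for i, t in enumerate(tau):   # semi-implicit Euler, stable for oscillators
    xs[i] = x
    v += -(a - 2 * q * np.cos(2 * t)) * x * dt
    x += v * dt

print(np.abs(xs).max())  # stays bounded: secular motion plus micromotion
```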

At this point, the paper discusses some of the more fundamental limitations of an ion trap. For instance, it notes that each additional ion introduced to the chain also brings along three more vibrational modes. These additional vibrational modes, which must be spectrally resolved to perform gates, create a denser “forest” of modes in frequency space. Heisenberg uncertainty provides a fundamental limit on the tradeoff between frequency and time, so small separations in frequency space require longer gates to achieve the same fidelities, making it difficult to scale up ion trapping. However, this is not an impossible problem. For instance, most of the time a quantum algorithm only needs to act on a few qubits at once. Therefore, if there were a way to split off “computational” qubits from “storage” qubits, and merge them back together later, then we might be able to perform large algorithms with a smaller maximum ion chain length.

After the ions are trapped, the experimenters perform optical pumping to initialize the ions in a definite spin state, representing the 1 or 0 of the qubit. Afterwards, a circularly polarized laser is tuned to be resonant with one of the transitions of the Cadmium ion that they use. This transition only scatters photons when the ion is in the excited spin state; however, it is possible for the camera or photomultiplier tube to register dark counts from randomly scattered light in the chamber, or to miss photons because of the poor quantum efficiency of the detector. Therefore, it is useful both to have a very good imaging objective to collect the maximum amount of light, and to integrate the collected photons over some extended time (in this case, 0.2 ms). By defining some photon-count “cutoff” between the 0 and 1 states, the authors are able to achieve a detection efficiency of >99.7%, as shown in Figure 3.
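The cutoff logic is easy to play with numerically: model the bright state as Poissonian counts around some mean, the dark state as mostly background, and pick the threshold that minimizes the total error. The mean counts below are made up, chosen only to land in the same >99% fidelity ballpark as the paper:

```python
import numpy as np
from scipy.stats import poisson

mean_bright, mean_dark = 10.0, 0.1  # assumed mean counts per 0.2 ms window

# Error at cutoff c: bright misread as dark (k <= c) plus dark misread
# as bright (k > c), weighting the two states equally.
cutoffs = np.arange(0, 10)
err = 0.5 * poisson.cdf(cutoffs, mean_bright) + 0.5 * poisson.sf(cutoffs, mean_dark)

best = cutoffs[np.argmin(err)]
print(best, 1 - err.min())  # best cutoff and its detection fidelity (~99.8%)
```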

One thing to note is that the wavelength of the imaging transition is around 214.5 nm, which is incredibly far into the ultraviolet. That might be one of the issues with using Cadmium ions: at that wavelength it seems like it would be difficult both to source high-power lasers and to find optical coatings that could handle the kind of power needed to control long ion chains.

I think I will bump the discussion of gate control to another paper, but it is useful to note that the paper demonstrates the Cirac-Zoller gate, a CNOT gate that is a predecessor of the Molmer-Sorensen gate more commonly used today. I am interested in better understanding the difference between these two gates, and especially why the field seems to have shifted so strongly towards MS gates.

Reference: Blinov, B. B., Leibfried, D., Monroe, C., Wineland, D. J., Quantum Computing with Trapped Ion Hyperfine Qubits. Quantum Information Processing 3, 45-59 (2004)

QCJC: Knoernschild 2009

Trapped ion systems require lots of individual laser beams to address the ions, which is fairly costly. For instance, consider addressing a chain of 200 qubits with just 20 milliwatts of power per ion. When factoring in losses throughout the path (let’s say a factor of 2), we would already require an 8 watt laser. For even larger systems, it seems like it would be both challenging and expensive to obtain ever more powerful lasers. In addition, that much optical power can damage optical surfaces, such as mirrors and lenses, causing them to burn and lose efficiency. It might be possible to have multiple smaller lasers and recombine their individual beams to address the ions, but then you enter a new realm of challenges regarding the alignment and stability of the different beam fans with respect to each other as well as to the ions.

However, is it really necessary to split a beam to that extent, such that every ion is individually addressed at all times? In fact, most algorithms do not require simultaneous gates on all available qubits. Currently, to achieve individual addressing, most trapped ion groups use a diffractive optical element, or DOE, to create the split. You can think of the DOE as a special diffraction grating, where the pattern printed on the surface is curved to create telecentric beams at the output. There might also be limitations on how many beams a single DOE can output, but I’m not super clear on those numbers. If, instead of a DOE, we addressed ions only when they were needed in an algorithm, we could possibly get away with far fewer splittings of the initial laser beam, as well as a more intrinsically stable system.

One possible solution for this is explored by Kim’s group at Duke, using microelectromechanical systems (MEMS) to control the steering of a beam with very very small, very very fast mirrors. For example, consider initially splitting the input beam into 2 individual beams, so that you could operate 2-qubit gates. Now, when you want to entangle ions 2 and 9 together, you want it such that the two initial beams can be steered to point at those locations. You would twist and turn those MEMS mirrors so that only those two targets are hit, and then enable your gate.

The system described by Knoernschild seeks to achieve this kind of adaptability across any location on a 5 by 5 grid, where the grid locations are 10 um apart from each other. The initial lasers they use have waists of 5 um at the ion plane, and they seek to achieve a mirror settling time within 4 microseconds. Why does the beam need to settle so quickly? The speed at which the beam can move sets the minimum time between running gates. You would have to wait that amount of time between individual quantum gates for the system to settle, so that you don’t accidentally sweep a laser across all of your ions and mess up their coherence.

In order to create such devices, the authors give very careful thought to the micromechanical properties of the mirrors, constructing them to be just large enough not to ‘clip’ any of the beams and introduce intensity aberrations, while being small enough to keep their mechanical resonances manageable. They engineer the system to be critically damped for the relevant kinds of motion, and note that there are only a few spots that the mirrors need to hit, thus reducing the complexity of the system.

[Knoernschild 2009, Fig 1 – notice the double ray bounce in 1(b)]

One interesting solution that this paper proposes is to use something that looks like a double-pass configuration with the MEMS mirrors. That is, the light doesn’t bounce off of each MEMS mirror just once, but twice. See Fig 1b, c for the way they implement this. Their reason for this additional fold is to get a factor of two magnification for each change in MEMS mirror angle: with a double bounce, a small angle shift of the MEMS corresponds to a greater movement of the actual beam targeting the ion. However, this process does lead to larger aberrations, especially when the beam is imaged through the relay telescope. The authors also note that, since the mirror coatings are not 100% reflective, each extra bounce adds some uniform intensity loss as well. This is their stated reason for not going to even more elaborate folding schemes (although imo it would also be incredibly difficult to engineer even more folds into such a small space).
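The magnification argument is just geometry: tilting a mirror by \theta deflects the reflected beam by 2\theta, and a second bounce off the same mirror doubles that again. With a relay that converts angle to position at the ion plane (the focal length and tilt below are assumed numbers, not the paper’s), the payoff looks like this:

```python
import numpy as np

theta = np.deg2rad(0.1)    # mirror tilt (assumed)
f = 20e-3                  # effective focal length of the relay (assumed)

dx_single = f * 2 * theta  # one bounce: beam deflects by 2*theta
dx_double = f * 4 * theta  # two bounces: same deflection for half the tilt
print(dx_single * 1e6, dx_double * 1e6)  # spot displacements in micrometers
```

Halving the required mirror tilt matters because smaller excursions settle faster, which is presumably the real win here.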

[Knoernschild 2009, Fig 3]

Furthermore, to accomplish individual steering of the beams, the authors create two pairs of MEMS mirrors – one pair to address each input beam. This allows for completely decoupled motion, but the vertical separation between the pairs of mirrors introduces new kinds of aberrations. To analyze this, the authors perform a Zemax analysis in which they study how the peak intensity varies for different vertical separations. The authors reference generic “Seidel aberrations” as the cause, which seems to refer to the five classic aberration modes – spherical, coma, astigmatism, field curvature, and distortion – and I’m not entirely clear which kind of aberration is actually dominant here (or where it even arises from). The result of this analysis in section 4 is that with an offset of 2 mm between the MEMS mirrors, there is a peak intensity variation of 12%, while an offset of 0.25 mm results in a variation of 3.5%. One possible solution to these aberrations, as the authors show in Fig 3 with an array of lovely spot diagrams, is to put custom compensation lenses directly before the MEMS mirrors. This seems to largely resolve the issue, which might be quite promising for future work. The authors do not actually implement this lens, as it seems to be an automatically optimized Zemax design that probably would have cost a ridiculous amount to actually machine, but the point still stands.

[Knoernschild 2009, Fig 5]

They then implement this device and study the intensity plots of the resulting beam. The camera images of the beam seem quite curious – Fig 5a and b show very odd aberration patterns that look somewhat like speckle, although my eyes are not nearly trained enough to tell. There also seems to be some kind of streaking associated with the “D”-shaped electrode, which is not really explained. The authors also test the settling speed of the mirrors by pointing the laser beam at a fast PSD and observing the transient responses. I would be quite interested to take a more detailed look at how stable these MEMS mirrors are, and whether they introduce any fast noise even during the “stable” period.

All in all, a very interesting technique for the future of ion trapping!

Reference: Knoernschild, C. et al. Multiplexed broadband beam steering system utilizing high speed MEMS mirrors. Optics Express 17, 7233-7244 (2009)

QCJC: Hacker 2018

It has been quite some time since your regularly scheduled QCJC posts, but they shall persist! Now, instead of having chosen this article myself, I wanted to present an article that we discussed in the RSL journal club presentation, in the vain hope that having gone to an actual journal club with actual graduate students, my understanding of the paper would have increased slightly. Not convinced? Neither am I.

A bit of background – Jameson mentioned the trapped ion architecture in this post, and there are indeed a number of different architectures beyond the superconducting ones that I tend to be fond of. One of these uses trapped neutral atoms for quantum computation. The primary difference between trapped ions and trapped atoms is, obviously, that trapped atoms have no charge. Therefore, it is easier to put trapped atoms in a cavity without spurious electromagnetic effects. But why would you want to put atoms in a cavity in the first place?

The crux of this paper is that the authors were able to achieve entanglement of a stationary qubit (the trapped atom-in-a-cavity qubit, with different energy states) with a flying qubit (a photonic qubit with spin), creating a cat state in the process. The process is governed by a photon passing through a cavity at a frequency that may or may not be resonant with the cavity. If the photon is on resonance with the cavity, its spin picks up a phase shift, and it emerges with an extra \pi of phase. The cavity’s resonance can be affected by the energy state of the trapped atom inside it – when the atom is in an excited energy state, the cavity resonant frequency will be different.

The goal of these researchers was to create a “cat state”, a state that is a superposition of a |1> and a |0> state. These states have been studied extensively and well characterized, and have been shown to be especially interesting for creating good phase-error-tolerant codes. That means that as we look towards scalable computers, the fundamental logical qubit may one day have lots of these cat states living inside it. For this trapped atom architecture, the cat state was achieved by placing the trapped atom in a very high quality-factor cavity (Q = 6E5) and allowing the two to interact. Only when the Rubidium atom was in its ground state would the cavity be off-resonant from the incoming photon. Therefore, if the atom was in a superposition of its ground and excited states, the incoming photon would also be kicked into a cat state. These states can be referred to as even or odd cat states, which makes more sense in the microwave regime (where even and odd refer to the parity of the photon number in the cavity).

Afterwards, all one needs to do is make a measurement of the photon. As this is primarily a trapped atom lab, they use some interesting acousto-optic modulators/deflectors to make sure that the signal input lines are clean. The photon eventually ends up in a homodyne detector setup, as a Fabry-Perot detector. Interestingly, the amount of noise contributed by this detection mechanism seems very low, at only 6% total for the detection circuit. The total loss, which primarily comes from loss in the cavity and coherence-reducing effects, comes out to 46%, which is below the threshold of 50% needed to see some interesting effects.

I thought it was interesting how these different quantum properties – energy levels for the atom, and spin states for the photon – were able to be mixed together. Of course, the interaction that governs this is difficult to engineer, but on paper, the Hamiltonian looks straightforward, like any other Jaynes-Cummings Hamiltonian or mixing Hamiltonian in general.

Afterwards, the paper goes on to measure the Wigner functions of the photon, in order to demonstrate the cat states of the quantum system. I really need to carefully analyze exactly what Wigner functions are, as they are used quite often for this purpose in this field! One of the interesting side-discussions that the graduate students had was on the presence of a clearly identifiable negative region in the Wigner functions in Fig 1.a, which seemed to be contrary to their expectation. What this actually means is a mystery to me!
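Since the Wigner function keeps coming up, here is a quick QuTiP sketch of one for an even cat state (the amplitude alpha is arbitrary, chosen by me). The negative interference fringes between the two coherent blobs are exactly what make the state nonclassical, so as far as I understand, a clearly negative region like the one in Fig 1.a is generic for cat states:

```python
import numpy as np
from qutip import coherent, wigner

# Even cat state: (|alpha> + |-alpha>), normalized.
N, alpha = 30, 2.0
cat = (coherent(N, alpha) + coherent(N, -alpha)).unit()

xvec = np.linspace(-5, 5, 201)
W = wigner(cat, xvec, xvec)   # Wigner quasi-probability on a phase-space grid
print(W.min())                # < 0: interference fringes dip negative
```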

Finally, the paper goes on to propose how this could be used as some kind of quantum gate, where the state of the atom would affect the state of the photon. However, the actual effect of a quantum gate does not seem to be demonstrated clearly in the body of this paper, as instead of controlling the atom, they are passively making measurements to show that the logic table of the two systems is analogous to what would happen in a two-qubit gate.

It’s an interesting look at a different quantum computing architecture, one that many groups are still pursuing! Still not as many as are pursuing circuit QED – which I hope to have the chance to do myself soon…

Reference: Hacker, B. et al. Deterministic creation of entangled atom-light Schrodinger-cat states. Nature Photonics 13, 110-115 (2019)

QCJC: Chou 2018

A key component of quantum computation is modularity in the construction of data and communication qubits. That is, it’s rather useless to have to reinvent the wheel for every single new qubit that is introduced to the system; it would require an obscene amount of engineering to program new types of quantum gates for each different architecture. That is why the authors of this article so strongly tout the idea of using quantum teleportation to enable communication between multiple qubits.

To begin with, we need to understand how the quantum system in question operates. As with most systems in RSL, we have a transmon qubit that is strongly coupled to a seamless microwave cavity. As per the paper, the cavity operates as a “data qubit” that can store information for a long period of time. Part of the rationale for using a circuit QED architecture is that cavities can be designed with very high quality factors, in the millions. That means that once photons have entered the cavity, they do not easily leak out. In addition, there are “communication qubits”, which are transmon circuits. These circuits can be controlled by microwave pulses and are coupled to the microwave cavities, such that an action on one will be felt by the other, and vice versa. These transmons are also coupled to a low-Q stripline resonator, which acts as a “quantum bus” for information to travel over. Physically, this bus is a cavity mode within the resonator, but we can conceptualize it as some means by which different “communication” qubits talk to each other, or transfer data from one place to the other.

A diagram of the different qubits discussed above. From Chou 2018, Fig 1(d).

The primary accomplishment of this paper is demonstrating how the two communication qubits, along with the quantum bus, were able to enact a CNOT gate between the two data qubits. In addition, they remark that this was conducted on “logical” states in the qubits – the “data” qubits were placed in a “first-level bosonic binomial quantum code” that allows for basic quantum error correction. This is characterized by the Wigner function, a quasi-probability distribution that describes quantum states. As shown in Fig 2(b), it is really quite obvious when a qubit goes from one state to another!

In order to argue that this would satisfy the requirements of some future modular architecture, with qubits that are far away from each other, the paper makes the following points. First, they argue that the two logical qubits are not able to interfere with each other – they have an “immeasurably small direct coupling” that is smaller than the smallest decay rate in the system. Therefore, we can be confident that the results truly come from actions through the quantum bus, and not from some accidental effect of proximity. Next, the paper justifies that their CNOT gate works through the generation of an entangled Bell state, allowing the two communication qubits to be spatially separated at large distances. Finally, the paper argues that the implementation of feed-forward operations – where measurements of the communication qubits and the use of classical information are incorporated into the CNOT gate – maintains a deterministic operation that always works, rather than the kind of probabilistic operation that had been demonstrated in the past.
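The measurement-plus-feed-forward structure is the textbook Gottesman-Chuang teleported-CNOT protocol, and it is easy to check with a small statevector simulation. This is just my numpy sketch of that protocol (qubit ordering, labels, and helpers are mine, not the paper’s hardware): D1 and D2 are the data qubits, C1 and C2 the communication qubits sharing a Bell pair.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def embed(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def single(gate, target, n=4):
    return embed([gate if i == target else I2 for i in range(n)])

def cnot(control, target, n=4):
    p0 = [I2] * n; p0[control] = np.diag([1.0, 0.0])
    p1 = [I2] * n; p1[control] = np.diag([0.0, 1.0]); p1[target] = X
    return embed(p0) + embed(p1)

def measure(state, qubit, n=4):
    """Project `qubit` onto |0> or |1>; return the normalized branches."""
    out = []
    for m in (0, 1):
        branch = single(np.diag([1.0 - m, float(m)]), qubit, n) @ state
        p = np.vdot(branch, branch).real
        if p > 1e-12:
            out.append((m, branch / np.sqrt(p)))
    return out

# Qubit order: D1 (control), C1, C2, D2 (target). Try D1=|1>, D2=|0>.
d1, d2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # C1,C2 Bell pair
state = np.kron(np.kron(d1, bell), d2)

state = cnot(0, 1) @ state    # local CNOT: D1 onto C1
state = cnot(2, 3) @ state    # local CNOT: C2 onto D2
state = single(H, 2) @ state  # rotate C2 for an X-basis measurement

for m1, s1 in measure(state, 1):        # Z measurement on C1
    for m2, s2 in measure(s1, 2):       # (rotated) X measurement on C2
        if m1: s2 = single(X, 3) @ s2   # feed-forward X on D2
        if m2: s2 = single(Z, 0) @ s2   # feed-forward Z on D1
        probs = [abs(s2[(a << 3) | (m1 << 2) | (m2 << 1) | b]) ** 2
                 for a in (0, 1) for b in (0, 1)]
        print(m1, m2, np.round(probs, 3))  # every branch: [0, 0, 0, 1] = |11>
```

All four measurement branches give D1 D2 = |11>, i.e. the CNOT acted on the data qubits even though they never directly interacted – the classical corrections are what make the operation deterministic rather than probabilistic.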

And the results are really pretty! Here it is – just a standard truth table, but made to be much more fancy :) You can see that the gate has an effect – the pictures from the output state look decidedly less “sharp” than the input state. What this actually means, I’m hesitant to say, but it does seem to say something…

The logic table for the CNOT gate using the quantum bus. From Chou 2018 Fig 2(b).

The rest of the paper seems primarily focused on characterizing the CNOT gate. Which it seems to do well!

The main thing that I want to still learn more about here is how this differs from other uses of the quantum bus. I mean, the technology has been around since… 2014, right? So what were the uses of it prior to this? Wasn’t the purpose of the quantum bus to do exactly this? I want to chat with some of the authors and figure this out more soon!

Reference: Chou, K. S. et al. Deterministic Teleportation of a Quantum Gate between Two Logical Qubits Nature (2018)

QCJC: Devoret and Schoelkopf 2000

Here is an interesting article from the early days of the lab!

This was a review article published by Devoret and Schoelkopf in 2000, just as charge qubits were becoming more mainstream. The charge qubit, created from a small superconducting island connected through Josephson junctions, was first beginning to be realized as a possible two-state qubit system. However, one challenge was that such charge qubits involve very small amounts of charge – on the order of single electrons. It is very difficult to detect single electrons, especially in noisy channels. To overcome this, a lot of investigation was put into Single Electron Transistors (SETs).

This review article begins with an explanation of the commonly used Field-Effect Transistors (FETs), which are often realized as MOSFETs. Such devices have two conducting sites, a source and a drain, and a semiconducting region in the center. That semiconducting region acts as a potential barrier that can be controlled by an external voltage, changing the number of charge carriers in the channel. When there is no applied voltage, the potential barrier blocks the flow of electrons and prevents current from flowing, while applying a voltage lowers the barrier.

While the FET is primarily classical in nature, the SET explicitly uses the quantum tunneling of single electrons across an island with barriers on either side. These barriers are thin, such that electrons are able to tunnel across. However, at low temperatures, tunneling is forbidden for most charge configurations: when the gate-induced charge on the island is an integer number of electrons, no current will flow, while at a half-integer number of electrons, two charge states are degenerate and tunneling events can take place. As a result, the SET acts as a charge amplifier – a tiny change in the nearby charge produces a large, measurable change in the current.
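A quick way to see the blockade is to plot the island’s electrostatic energy, E(n) = E_C (n - n_g)^2, for different electron numbers n as the gate charge n_g is swept. This is just the standard Coulomb-blockade picture in arbitrary units, not a calculation from the paper:

```python
import numpy as np

E_C = 1.0                        # charging energy (arbitrary units)
n_g = np.linspace(0, 2, 401)     # gate-induced charge, in electrons
E = np.array([E_C * (n - n_g) ** 2 for n in range(-1, 4)])

# Current can only flow where two charge states are degenerate, i.e.
# where the two lowest parabolas cross - at half-integer gate charge.
E_sorted = np.sort(E, axis=0)
gap = E_sorted[1] - E_sorted[0]
print(n_g[gap < 1e-2])  # ~0.5 and ~1.5: the conductance peaks
```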

The majority of the remainder of the paper deals with characterizing noise in quantum amplifiers. The authors calculate how quantum noise is introduced through different mechanisms, including the back-action of the amplifier and its noise impedance. Through the remainder of the paper, they argue that the SET can reach the quantum limit of noise, as determined by the Heisenberg uncertainty principle. There is always some amount of noise introduced, but it can be brought close to the quantum limit.

Finally, the authors theorize that the SET can therefore be used as a measurement device for the charge qubit, otherwise known as the Cooper-pair box. However, this requires a continuous measurement, which decoheres the qubit. One possible improvement was the radio-frequency SET, or rf-SET, which had been invented by Schoelkopf a few years earlier. At that time, the authors were trying to determine whether the rf-SET would be practical as a true quantum computer read-out mechanism. Yet just 4 years later, strong coupling between a superconducting charge qubit and a cavity would be demonstrated. While we no longer rely on rf-SETs in our experiments (to my knowledge), the same language of quantum amplifiers is still used today for the Josephson Parametric Amplifier and the SNAIL Parametric Amplifier.

Citation: Devoret, M. H. and Schoelkopf, R. J. Amplifying Quantum Signals with the Single-Electron Transistor. Nature 406, 1039-1046 (2000)

QCJC: Rosenblum 2018

Alright, fault tolerant detection of a quantum error! Let’s dive right into it :)

At the core of this paper is a simple concept. All error correction schemes rely on ancilla qubits to detect errors on the qubits containing quantum information. Those ancilla qubits need to interact with those info qubits, making quantum nondemolition measurements to detect error syndromes – results that indicate the existence of errors. This is very different from trying to make a direct measurement on the info qubits, or otherwise directly gaining information about the errors, as that would collapse their state.

This paper uses the circuit quantum electrodynamics model that is commonly found in RSL and Qlabs, where a microwave photon cavity is coupled to a transmon qubit. This coupling, as described in the Wallraff 2004 paper, is significant because the transmon qubit can be easily controlled and read out, while the microwave photon cavity has a very long lifetime. Therefore, the qubit can be “stored” in the cavity, while being read and controlled through the circuit. Note that the cavity is primarily described by its photon number, while the transmon qubit is described by its energy levels. The three lowest energy levels of the transmon are |g>, |e>, and |f>.

In this kind of system, a basic form of error correction uses a “cat” state, where the cavity holds a superposition of two different coherent states. The main significance of this is that the two logical states, 0 and 1, both correspond to an even number of photons in the cavity. Since the dominant kind of error in the cavity is single photon loss, if an odd number of photons is detected in the cavity, the experimenters can correct for the error. The measured photon-number parity of the cavity is therefore the error syndrome for the loss of a photon. Conveniently, there is a coupling between the cavity and the transmon such that if there is an odd number of photons in the cavity, the transmon will experience a rotation.

To prepare this, start with a qubit in a ground state of |g> and rotate it to |g> + |e>. Then, if there is an odd number of photons in the cavity, the transmon will experience a pi rotation to go from |g> + |e> to |g> – |e>. Otherwise, if there is an even number of photons in the cavity, the transmon will remain in the |g> + |e>. After the period of error syndrome measurement is complete, the transmon qubit is rotated, such that |g> + |e> is rotated to |e>, and |g> – |e> is rotated to |g>. Finally, a measurement of the transmon qubit will be able to determine the parity of the cavity, and then be used to reset the transmon for the next iteration.
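This parity-mapping sequence is easy to sketch in QuTiP: in the dispersive regime the interaction is H = \chi a^\dagger a \sigma_z / 2, so waiting t = \pi/\chi advances the ancilla phase by \pi per photon. The numbers below are illustrative, not the paper’s:

```python
import numpy as np
from qutip import basis, destroy, qeye, sigmax, sigmaz, tensor, expect

N, chi = 10, 2 * np.pi * 1e6            # cavity truncation, dispersive shift
a = tensor(destroy(N), qeye(2))
sz = tensor(qeye(N), sigmaz())
Hd = 0.5 * chi * a.dag() * a * sz       # dispersive interaction

plus = (basis(2, 0) + basis(2, 1)).unit()   # ancilla in |g> + |e>
U = (-1j * Hd * (np.pi / chi)).expm()       # wait for t = pi / chi

for n in (2, 3):                            # even and odd photon numbers
    psi = U * tensor(basis(N, n), plus)
    # ancilla <sigma_x>: +1 if parity is even (|g>+|e>), -1 if odd (|g>-|e>)
    print(n, expect(tensor(qeye(N), sigmax()), psi))
```

The final ancilla rotation and readout described above then just convert this phase information into a |g>/|e> population measurement.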

The process sounds pretty great, except for one small problem: the cavity isn’t the only place where a quantum error can occur! While single photon losses occur when a photon “leaks” out of the cavity, errors can also happen in the excited states of the transmon qubit. And when errors occur on the transmon, bad things can happen in the cavity. See, the interaction that leads to the detection of cavity errors can flow the opposite way: a different state in the transmon qubit can also affect the cavity. In fact, the photons in the cavity will oscillate at different frequencies depending on the state of the transmon. We can refer to these frequencies of oscillation as f(|g>), f(|e>), and f(|f>).

Typically, qubit errors can occur in two forms: A dephasing error, where the sign on the qubit gets turned around, from |g> + |e> to |g> – |e>, or a relaxation error, where the excited state falls down into the ground state. One can tell that the dephasing error will actually not affect the cavity state. That is, the frequency in the cavity will not change between the |g> + |e> state and the |g> – |e> states, which is great! It means that this dephasing error would essentially be “transparent”, and the cavity does not change because of a dephasing error.

However, the relaxation error is a greater concern. When that happens, you can see that the |g> + |e> state will fall into the |g> state. This induces a frequency change in the cavity. Furthermore, since this decay can happen at any point of time during the error correction process, it introduces a “random” period of time where this error has occurred, completely scrambling the information contained within the cavity! How might we solve this?

The crucial idea introduced in this paper is to exploit the higher states of the transmon qubit. See, the transmon qubit is often used because it acts as a two-level system, with different energy transitions between each of the energy states. However, just because we commonly use two levels doesn’t mean we don’t have access to the higher levels. In fact, if we use the second excited level, the |f> level, we can do some interesting things…

In a transmon, direct relaxation from |f> to |g> is strongly suppressed – a (nearly) forbidden transition. The only kind of relaxation of real concern here is from |f> to |e>.

But wait, wouldn’t that be just as bad? After all, the cavity would view the transmon state |f> as different from |e>, right? And herein lies the second crucial insight of the paper: you can change how the cavity is affected by the transmon state! In other words, there is a way to introduce an additional drive such that f(|f>) = f(|e>). Then, such a relaxation error in the transmon is once again “transparent” to the cavity!

How does such a change occur? The primary idea is that an additional off-resonant sideband drive is introduced that causes an additional phase to be picked up by the cavity. This phase/frequency that is picked up is like a geometric phase that is dependent on two main factors: the detuning strength of the interaction and the dephasing frequency.

Given that the detuning strength is fixed, the experimenters were able to choose a dephasing frequency that introduced the exact amount of additional phase such that f(|e>) = f(|f>)!

Alright. So what does that mean?

The key is that with the new system, no longer would relaxation errors in the ancilla directly affect the cavity. Is this perfect? Not exactly – those relaxation and dephasing errors still mess up the cavity state, and could cause incorrect error correction to occur. However, it does mean that the cavity will be protected from simple errors in the transmon. This directly translates to a 5 times increase in lifetime of the transmon!

Eventually, given enough error correction improvements, we would hope to see fault-tolerant quantum computing, so that we can implement a full quantum computer! My words are getting more and more fuzzy, so I’m going to just go ahead and post this up :)

Reference: Rosenblum, S. et al. Fault-tolerant Detection of a Quantum Error. Science 361, 266-270 (2018)