Month: May 2017

QCJC: Bennett 1992

Alright, I’m back from vacation! And it’s time to dive into a paper on yet another completely different field! (I swear I’ll stop doing this soon :) )

This was actually one of my favorite concepts from my Physics 344 course: Quantum Key Distribution (QKD). It’s a way of distributing a secret key such that, thanks to the fundamental physics the distribution is based on, you can theoretically verify that nobody eavesdropped on it. So, let’s dive in!

The most difficult part of QKD for me, initially, was the idea that you aren’t actually distributing any message through quantum technologies. Instead, QKD is focused on distributing a single key that you can use to encrypt and decrypt a message. Rather than trying to keep the message itself as hidden as you can, you make the encrypted file easily accessible to anyone, because only the person with the right key can decrypt it. That makes having a secure key very important. For instance, if you have a person physically carry a paper with a long string of random numbers and letters on it from one computer to another, then you can be fairly certain that no one is reading that key. If someone did manage to mug your messenger, then your messenger could at least tell you that she was mugged, and that the key is no longer safe to use! (Of course, if your messenger was actually a secret double agent and betrayed you… but let’s not go into that scenario!)

In any case, distributing a key can be very difficult because you never know who might be trying to steal it. And while the internet makes key distribution easy, it also means someone could copy your key without you ever knowing. Then, even though you believe that your file is properly locked away, someone else is actually snooping on all of your information!

QKD is one way to combat that kind of snooping. Instead of relying on very difficult problems in computing and mathematics, it relies on physical properties of photons to prevent an intruder from snooping without giving themselves away. That is, an eavesdropper can certainly attempt to spy on the key as it is being distributed, but in doing so, they disturb it in a way the intended recipients can detect. If eavesdropping is detected, the key is discarded and a new one is created.

One early method that was introduced uses an EPR source, or an Einstein-Podolsky-Rwhatever source. (jk, it’s Rosen.) These three scientists published a paper in 1935 regarding two entangled particles that could be sent in different directions. The entangled pair is prepared in a state that requires both particles to be found in the same state, either both up or both down: 1/√2 (|00> + |11>). Then, when the participants Albert and Boris measure the particles in different locations along the same axis, they should expect to get the same result. If anyone tries to eavesdrop in the middle, they disrupt the state, and that disruption can be detected.

To implement this scheme, start with some source that produces entangled pairs. Then, allow both Albert and Boris to make measurements along randomly chosen axes, without sharing those choices in advance. After they make their measurements, they publicly announce to each other which axes they used – either the X axis or the Z axis. For convenience, let’s suppose that they make a total of 1000 measurements, and that in about 500 of those both Albert and Boris happened to measure along the same axis – either both X or both Z. Let’s call this set of measurements S. For those measurements, they are certain that, if there was no eavesdropper, their results are exactly the same. So, looking at S in the no-eavesdropper case, Albert and Boris hold exactly the same copy of the key.
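To make the bookkeeping concrete, here is a toy simulation of the sifting step (my own sketch, not from the paper), assuming a perfect source and no eavesdropper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1000  # total entangled pairs measured

# Albert and Boris each pick a random axis for every pair.
albert_axes = rng.choice(["X", "Z"], size=n)
boris_axes = rng.choice(["X", "Z"], size=n)

# With the state 1/sqrt(2) (|00> + |11>), their outcomes are identical
# whenever the axes match; mismatched rounds are discarded (sifted out).
outcomes = rng.integers(0, 2, size=n)

matching = albert_axes == boris_axes
key_albert = outcomes[matching]
key_boris = outcomes[matching]  # identical when nobody eavesdropped

print(f"pairs measured: {n}, sifted key length: {matching.sum()}")
print("keys agree:", np.array_equal(key_albert, key_boris))
```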

However, they have not yet checked whether an eavesdropper was actually there. Therefore, to validate the key, they take a 250-bit subset S’ for testing. They broadcast their measurement results within S’ for the world to see, and check that they are exactly the same. If they all match up, then the remaining, unbroadcast bits of S should be secure and not spied upon.

There is a good reason for making the test subset a large portion of S. Suppose that we have a fairly clever eavesdropper, who doesn’t try to listen in on the entire signal but just on 10% of it. If Albert and Boris only test a very small portion of their data set, then the eavesdropper might still slip through. However, by testing a large fraction of their total signal, Albert and Boris can be reasonably certain that no eavesdropper was listening in, and that the remaining ~250 bits form an exact shared key.
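As a back-of-the-envelope check (my own numbers, assuming the simplest intercept-resend attack): each intercepted bit that lands in the tested set causes a disagreement with probability 1/4, since the eavesdropper guesses the wrong basis half the time and then flips the outcome half of that time. So the chance of escaping a test of m bits falls off exponentially:

```python
f = 0.10   # fraction of the signal the eavesdropper intercepts
m = 250    # number of sifted bits sacrificed for testing

# Each intercepted, tested bit disagrees with probability 1/4, so the
# eavesdropper escapes detection with probability (1 - f/4)**m.
p_escape = (1 - f / 4) ** m
print(f"probability the eavesdropper goes undetected: {p_escape:.2e}")
# ~1.8e-03 -- even a light touch on the channel is very likely caught
```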

This specific 1992 paper is a follow-up to Bennett and Brassard’s 1984 paper, which proposed a similar but simpler method of performing QKD (hence the name “BB84”). The paper shows that a slightly more advanced way of eavesdropping – manually creating a false EPR source to try to fool the participants – works against neither the EPR scheme nor BB84, and that BB84 is equivalent to EPR encryption as implemented by Ekert.

First, the paper shows that an adversary, Nathan, cannot create a false source to spy on Albert and Boris, who are using the EPR scheme as implemented by Ekert. The fear here is that perhaps Nathan can create a triply entangled source, send two particles out to Albert and Boris, and then listen in on channel three. Something that might look like 1/√2 (|000> + |111>), where by listening on channel 3, Nathan would get the same information as Albert and Boris.

The most general state Nathan could send is the following, where |A>, |B>, |C>, and |D> are arbitrary (unnormalized) states of Nathan’s third channel:

|Φ> = |00>|A> + |01>|B> + |10>|C> + |11>|D>

But now, in order for Nathan to fool Albert and Boris in the Z basis, where their results must agree, Nathan is limited in what |Φ> could be:

|Φ> = |00>|A> + |11>|D>

But then! To also fool Albert and Boris in the X basis, Nathan is further limited to make his state become

|Φ> = (|00> + |11>)|A>

meaning that |A> is completely decoupled from the qubits that are sent to Albert and Boris. Regardless of how tricky Nathan tries to be, he gains no information from creating and sending out |Φ>, since his channel-3 state |A> is independent of the bits that Albert and Boris measure. (The original paper works with the singlet state 1/√2 (|01> – |10>), for which the results are anti-correlated rather than correlated, but the argument is exactly the same.)
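To convince myself of that last step, here is a small numpy check (my own sketch, not from the paper). Given a forged state |00>|A> + |11>|D> that already fools the Z basis, the probability that Albert and Boris disagree in the X basis vanishes only when Nathan’s probe states are identical, i.e. decoupled:

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

def x_disagreement_prob(A, D):
    """P(Albert and Boris disagree in the X basis) for the forged state
    |00>|A> + |11>|D>, which already fools them in the Z basis."""
    psi = np.kron(np.kron(zero, zero), A) + np.kron(np.kron(one, one), D)
    psi = psi / np.linalg.norm(psi)
    p = 0.0
    for a, b in [(plus, minus), (minus, plus)]:   # mismatched X outcomes
        proj = np.kron(np.kron(a, b), np.eye(2))  # leaves the probe alone
        amp = proj @ psi                          # leftover probe component
        p += np.vdot(amp, amp).real
    return p

A = np.array([1.0, 0.0])            # probe state attached to |00>
D = np.array([0.0, 1.0])            # orthogonal probe state attached to |11>
print(x_disagreement_prob(A, D))    # 0.5 -- X results uncorrelated, Nathan is caught
print(x_disagreement_prob(A, A))    # 0.0 -- undetectable, but the probe is decoupled
```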

Therefore, this completes the proof that Nathan gains nothing by forging the source!

Next, we move on to the BB84 scheme, and how it is equivalent to the EPR/Ekert scheme. The primary difference between BB84 and Ekert is that in BB84, Alice prepares the states beforehand and then sends them to Bob, instead of Albert and Boris simultaneously observing a pair of entangled particles. Alice prepares each state in a randomly chosen basis, and Bob likewise measures in a randomly chosen basis, along either the X or Z axis. Afterwards, they carry out the same scheme as above: they publicly release the axes used for each measurement, and then compare some subset to validate against eavesdropping.

This is shown to be equivalent to the EPR scheme because any eavesdropper needs to make a measurement that leaves the particle headed to Bob undisturbed. But, “any measurement which fails to disturb nonorthogonal states also yields no information about them.” Therefore, Eve cannot construct any kind of unitary operator U that eavesdrops without detection.
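To spell out the quoted statement (this little derivation is mine, not copied from the paper): suppose Eve attaches a probe in state |e> and applies a unitary U that leaves both computational basis states undisturbed,

U |0>|e> = |0>|e0>,   U |1>|e> = |1>|e1>

Then, by linearity, acting on the nonorthogonal state |+> = 1/√2 (|0> + |1>) gives

U |+>|e> = 1/√2 (|0>|e0> + |1>|e1>)

For |+> to also pass through undisturbed, this must equal |+>|f> = 1/√2 (|0>|f> + |1>|f>), which forces |e0> = |e1> = |f>. Eve’s probe ends up in the same state no matter which bit was sent, so she learns nothing – exactly the quoted statement.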

Through this paper, BB84 is demonstrated to be as strong as the EPR scheme, and both are shown to be secure against eavesdroppers. Quite amazing stuff!

Reference: Bennett, C. H., et al. Quantum Cryptography without Bell’s Theorem Phys. Rev. Lett. 68 557-559 (1992)

QCJC: DiCarlo 2009

Alright, as you might have noticed, my updates are … sporadic … at best right now. To that I say:

¯\_(ツ)_/¯

I am currently on vacation in Oregon! We’re spending 10 days in this state, and so far it has been beautiful. But being the primary driver on an 1800-mile trip is a bit tiring, so my updates are a bit slower than usual :)

But anyways, while I have time, let’s try to digest DiCarlo’s 2009 paper! This is an early paper from the Schoelkopf lab, published only two years after the transmon qubit was first implemented. The overall idea is to use a cavity bus to perform two-qubit interactions, as a proof-of-concept demonstration running the Grover search algorithm and the Deutsch-Jozsa algorithm, the latter of which determines whether a function is constant or balanced.

The paper begins by defining the quantum bus architecture, which uses a transmission line cavity to couple, control, and measure the qubits. The authors perform tomography on the two-transmon states to determine the final results. In order to entangle the qubits, the qubit transition frequencies themselves are tuned and adjusted.

I was not entirely sure what it means to tune the qubit frequencies themselves, since our quantum class primarily discussed using an external magnetic field to interact with the resonances of spins. However, from the Reed 2012 paper, it appears that transmon qubits, and other superconducting qubits, have characteristic frequencies for their transitions, and that these frequencies can be adjusted and tuned. In this bus architecture, microwave pulses at the resonant frequencies perform the standard x and y rotations, while another pulsed measurement performs the readout. There are then two additional ports that create “local magnetic fields that tune the qubit transition frequency.” As far as I can tell, these are flux bias lines: currents through them thread magnetic flux through each qubit’s SQUID loop, which changes the effective Josephson energy and hence the transition frequency.
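As a cartoon of that tuning knob, here is the textbook transmon relation, f01 ≈ √(8 E_C E_J(Φ)) − E_C (with energies expressed as frequencies), and a flux-tunable Josephson energy E_J(Φ) = E_J,max |cos(π Φ/Φ0)|. The numbers below are made up but typical; they are not the device parameters from the paper:

```python
import numpy as np

E_C = 0.30       # charging energy / h, in GHz (hypothetical)
E_J_max = 30.0   # maximum Josephson energy / h, in GHz (hypothetical)

def f01(flux_ratio):
    """Qubit transition frequency (GHz) vs applied flux, in units of Phi_0."""
    E_J = E_J_max * abs(np.cos(np.pi * flux_ratio))
    return np.sqrt(8 * E_C * E_J) - E_C

for phi in [0.0, 0.1, 0.2, 0.3]:
    print(f"flux = {phi:.1f} Phi_0  ->  f01 = {f01(phi):.2f} GHz")
# Sweeping the flux bias sweeps f01 -- this is how qubits are tuned into
# and out of resonance with each other and with the cavity bus.
```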

The paper spends some time discussing the different regimes that the qubits can be tuned into, where they become more or less coupled to the cavity. For instance, at one operating point the qubits are detuned from the cavity and from each other, so that each qubit does not strongly interact with any part of its environment. The paper remarks that this regime is especially good for state preparation, which makes sense. If there is little interaction, you have more control in staying at a stable state.

The paper then implements the Controlled-Phase, or CPhase, gate using an avoided crossing. As I previously discussed with the Reed 2012 paper, avoided crossings are basically magic. But the basic idea is that as the two qubits approach a degenerate frequency, they are prevented from actually crossing, and instead “swap” states, which allows a special entangling procedure to occur. The CPhase gate is very important because, using it, we can implement a CNOT gate, one of the gates needed (along with X/Z rotations) to complete a universal set of gates for quantum computing.
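Here is a quick numpy sanity check of that claim (mine, not from the paper): a CNOT is just a CPhase (CZ) gate sandwiched between Hadamards on the target qubit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)
CZ = np.diag([1, 1, 1, -1])                   # CPhase gate

CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)   # Hadamards on the target
print(np.round(CNOT).astype(int))
# [[1 0 0 0]
#  [0 1 0 0]
#  [0 0 0 1]
#  [0 0 1 0]]
```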

This paper is mostly experimental, so after implementing the CPhase and single-qubit gates, they perform quantum state tomography to demonstrate that the qubits are in the proper states. And they are! The group then implements both the two-qubit Grover search algorithm and the two-qubit Deutsch-Jozsa algorithm, with fidelities between 80% and 90%. I think I will need another blog post (perhaps on the original Grover and/or Deutsch-Jozsa papers?!) to explain these two algorithms in depth.

I think the most interesting part of this paper is the ability to demonstrate several quantum algorithms with experiments, not just through theory. I need to read more papers, but it feels like this is a fairly early demonstration of quantum algorithms with superconducting qubits, opening the way to further algorithm implementations today. By demonstrating the results step by step through use of quantum tomography, and showing that the results are much better than random chance, it shows that the quantum algorithms are working!

Reference: DiCarlo, L., Chow, J. M., et al. Demonstration of Two-Qubit Algorithms with a Superconducting Quantum Processor Nature 460 240-244 (2009)

QCJC: Bloch 2012

… in which we take a wild left turn into the mysterious world of quantum simulations!

This article is a lengthy review on a topic that I have no clue about, so I’m going to do my best to digest it bit by bit. The topic is using ultracold quantum gases to precisely simulate assemblies of quantum particles. The reason there is interest in performing quantum simulations, rather than full computations, is that the cost of classically describing a quantum system increases exponentially with the number of particles. Therefore, quantum physics that is primarily interesting at large scales, such as high-temperature superconductivity, is very difficult to compute in full. It is somewhat like numerical integration, where rather than deriving the full analytical result, you compute small steps of the function and add them together.
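To put a number on that exponential cost, consider just storing the state vector of n two-level particles on a classical machine:

```python
# Each additional particle doubles the state vector: 2**n complex
# amplitudes at 16 bytes apiece.
for n in [10, 30, 50, 300]:
    num_bytes = 16 * 2 ** n
    print(f"{n:4d} particles -> {num_bytes:.3e} bytes")
# 10 -> ~16 KB, 30 -> ~17 GB, 50 -> ~18 PB, and 300 particles would need
# more bytes than there are atoms in the observable universe.
```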

This paper discusses how quantum gases can be finely controlled to simulate complex quantum states. There are three basic requirements for quantum simulations to work properly:

  1. The quantum simulator should map to the proper Hamiltonian of the gas in question
  2. The simulator should be able to be set into a well-identified state
  3. The simulator needs to be able to carry out measurements with high precision

If all three of these requirements are met, then the quantum simulation can be trusted. After laying out the requirements, the paper analyzes why quantum gases meet them, under three different regimes of control: Feshbach resonances, control of the energy landscape, and control of the topology in which the quantum fluid evolves. The first of these corresponds to ultracold Fermi gases, the second to optical lattices, and the last to artificial gauge fields. I realize that I understand none of these, so let’s try to at least grasp the first topic, especially since it will be relevant to my research this summer!

Feshbach resonances are linked to ultracold Fermi gases, or clouds of fermions near 0 K. Fermions are particles with half-integer spin that obey Fermi-Dirac statistics, as compared to integer-spin bosons that obey Bose-Einstein statistics. One familiar principle that applies to fermions is the Pauli exclusion principle, which prevents identical fermions from occupying the same quantum state, and which leads to the degeneracy pressure in the cores of neutron stars. Ultracold Fermi gases cannot directly form Bose-Einstein condensates, because BECs require bosons that can collapse into the same quantum state, which fermions cannot do.

The paper discusses the use of Feshbach resonances to “tune the strength of interactions between atoms over several orders of magnitude by means of an external magnetic field.” I’m not entirely sure how this mechanism works, but it appears that ultracold Fermi gases can be characterized by just a few parameters, one of which is the scattering length. When the strength of interactions is adjusted through the Feshbach resonance, it is this scattering length that is being tuned.
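The textbook parametrization of a Feshbach resonance makes this knob explicit: a(B) = a_bg (1 − Δ/(B − B0)), where the scattering length a diverges and changes sign as the field B crosses the resonance at B0. A sketch with entirely hypothetical numbers (these are my illustration, not values from the review):

```python
a_bg = 100.0   # background scattering length, in Bohr radii (hypothetical)
B0 = 800.0     # resonance position, in Gauss (hypothetical)
Delta = 10.0   # resonance width, in Gauss (hypothetical)

def a(B):
    """Scattering length near a Feshbach resonance."""
    return a_bg * (1 - Delta / (B - B0))

for B in [780.0, 795.0, 799.0, 801.0, 805.0, 820.0]:
    print(f"B = {B:6.1f} G  ->  a = {a(B):9.1f} a0")
# Near B0 the interactions can be dialed from strongly attractive through
# the unitary limit to strongly repulsive -- the knob behind the BEC-BCS
# crossover discussed below.
```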

For weakly attractive interactions, the gas can be understood through Bardeen-Cooper-Schrieffer theory, or BCS. BCS applies because the particles weakly pair into Cooper pairs, a quantum phenomenon that is central to understanding superconductivity. The partners in a Cooper pair have opposite spin and momentum, but they are not constrained to be close together: they can have some distance between them and are only weakly bound. On the other hand, for very strongly attractive interactions, pairs of fermions bind into pseudo-bosonic particles, which can then form a BEC. Since there are different behaviors at either extreme, there is also a crossover region in the middle, called the BEC-BCS crossover. In this region, it is very difficult to get analytical results. Therefore, it is easier to create an ultracold Fermi gas, tune the magnetic field to set the interaction strength through the Feshbach resonance, and then prepare a state that can be measured directly.

One such measurement is of the proportionality constant that relates the chemical potential to the Fermi energy (sometimes called the Bertsch parameter). Because this ratio applies in many contexts beyond any specific Fermi gas, it is important to determine, but it is rather difficult to compute analytically. Therefore, it is better to simply measure the constant in the gas itself, instead of predicting it from something else. Especially close to the unitary limit, where the BEC-BCS crossover occurs, it can be much better to use a quantum simulator than a digital quantum computer.

That concludes this portion on ultracold Fermi gases, and I don’t think I can really move on to the optical lattices and gauge fields! I might return to this review article on another date, and get more into it :)

Reference: Bloch, I. et al. Quantum Simulations with Ultracold Quantum Gases Nature Phys. 8 267-276 (2012)


QCJC: Shor 1995

To be perfectly honest, I was a bit surprised to find that Peter Shor is still alive and researching when I came upon the 1995 paper and probed further! I think that really speaks both to how young the field of quantum computing is – the factoring algorithm was only discovered in 1994 – and also to my blatant ignorance of the field :)

The start of this article explores the basic idea of quantum computing, with a humble note that “at least theoretically, quantum computing may be much faster than classical computation for solving certain problems…”, citing his own 1994 factoring algorithm as proof. But Shor quickly transitions to the importance of decoherence as a limiting factor for future quantum computers. The field previously seemed to assume that decoherence would corrupt an entire computation. However, Shor suggests that there is a way to reduce the effect of decoherence, through the use of quantum error correction.

The paper looks specifically at a nine-qubit error correction code, where if one of the qubits decoheres, the state can still be recovered. Quantum error correction, at its core, is similar to classical error correction over noisy channels. However, the key difference is the quantum no-cloning theorem, which states that an unknown qubit cannot be duplicated in its exact state. While a classical bit can be duplicated many times, with majority rule then reducing the effect of noise, a quantum bit cannot rely on such a simple system. Instead, quantum error correction must use entanglement across additional qubits to guard against noise.

Shor starts by defining the prerequisites for QEC: one must be able to measure qubits, entangle qubits, and initialize the computer to some known state. Then, the encoding begins. Shor suggests a method of encoding where |0> = 1/(2√2) (|000>+|111>)(|000>+|111>)(|000>+|111>), and |1> = 1/(2√2) (|000>-|111>)(|000>-|111>)(|000>-|111>). The paper first suggests that measuring all three triplets of qubits in a Bell-like basis containing |000>+|111> and |000>-|111> would definitely restore the qubit to its state prior to the decoherence effects. This is because of a crucial assumption: only one independent error will occur. Therefore, even if one of the triplets is wrong, the other two would still carry the correct sign, and a majority vote would restore the proper answer.
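To make the encoding concrete, here is a small numpy construction of the two nine-qubit codewords (my own sketch):

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(*states):
    """Tensor together a list of states."""
    out = np.array([1.0])
    for s in states:
        out = np.kron(out, s)
    return out

# The three-qubit building blocks (|000> +/- |111>)/sqrt(2); three copies
# of the 1/sqrt(2) give the overall 1/(2*sqrt(2)) normalization above.
ghz_plus = (kron_all(zero, zero, zero) + kron_all(one, one, one)) / np.sqrt(2)
ghz_minus = (kron_all(zero, zero, zero) - kron_all(one, one, one)) / np.sqrt(2)

logical_0 = kron_all(ghz_plus, ghz_plus, ghz_plus)
logical_1 = kron_all(ghz_minus, ghz_minus, ghz_minus)

print(logical_0.shape)               # (512,) -- a nine-qubit state
print(np.linalg.norm(logical_0))     # 1.0 -- properly normalized
print(np.dot(logical_0, logical_1))  # 0.0 -- the codewords are orthogonal
```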

However, the problem with that method is that you have now measured all of the qubits, meaning that you have collapsed all of the wavefunctions. We want an error correcting code that avoids that. Here, Shor begins by explaining what decoherence does to a single qubit: supposing the first qubit decoheres, it becomes some random superposition of |0> and |1>, entangled with the environment. When you express a three-qubit group in terms of this Bell-like basis after the decoherence occurs, you are able to identify the original state.

This part seems confusing to me, and I can’t really tell why the difference is detectable. Shor notes that the important thing is that “the state of the environment is the same for corresponding vectors from the decoherence of the two quantum states encoding 0 and encoding 1.” But what exactly does that mean, and how can it be detected? Shor uses several additional ancillary qubits to track the changes in each of the three-qubit groupings, where an ancillary qubit can detect whether a three-qubit group has flipped, and where the error occurred. His explanation for how this is possible lies in the duplication, from 1 qubit to 9, such that the ancillary qubits are able to make the measurement without destroying the encoded qubit.

However, Shor admits that the main limitation of this work is that it only allows a single qubit’s decoherence to be caught. If multiple qubits decohered at the same time, this QEC would fail. Fortunately, the probability of that happening is much smaller: if the probability that a single qubit decoheres is p, then the probability that 2 or more qubits decohere in a nine-qubit block is 1 – (1+8p)(1-p)^8, or roughly 36p^2 for small p. Therefore, the probability that a string of 9k qubits is recovered correctly is approximately (1-36p^2)^k, establishing a bound on how large the single-qubit decoherence can be.
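A quick numerical check of that bound (mine):

```python
# Probability that 2 or more of the nine qubits decohere, versus the
# small-p approximation 36*p**2 (there are C(9,2) = 36 pairs of qubits).
for p in [0.01, 0.001, 0.0001]:
    exact = 1 - (1 + 8 * p) * (1 - p) ** 8
    print(f"p = {p:.4f}: exact = {exact:.3e}, 36p^2 = {36 * p**2:.3e}")
```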

From what I can understand, it seems this paper does more in establishing the basis for QEC than really describing a good method itself. It seems rather vague in describing how to correct the error states, or even how the ancillary qubits are connected to the data qubits in detecting the error. However, it does propose a very early form of quantum error correction that mimics the form of classical repetition codes.

Reference: Shor, P. W. Scheme for Reducing Decoherence in Quantum Computer Memory Phys. Rev. A 52 R2493-R2496 (1995)

QCJC: Kelly 2015

Alright, this is the first official blog post of this type, where I will break down a quantum computing article in terms that I can remember and understand. This may be a bit all over the place, but that sounds okay to me!

This is the article by J. Kelly, “State Preservation by Repetitive Error Detection in a Superconducting Quantum Circuit”, from the John Martinis group at UCSB. The overall idea of this paper is to demonstrate quantum error correction repeated over multiple cycles, rather than considering just a single cycle of error correction.

It’s obvious that quantum error correction is a fundamentally needed aspect of practical quantum computing, because of the inherent decoherence present in quantum systems. The number of errors that can happen at one time is captured by the nth-order fault tolerance level, which allows up to n errors to occur before the code breaks down. Furthermore, quantum error correction needs to be repeated over many cycles, not just run in a single round. This kind of robust system requires quantum non-demolition (QND) parity measurements, where certain measurement qubits detect errors in the actual data qubits without disturbing the encoded state. Observing the pattern of measurement-qubit outcomes leads to determining the error and then correcting it.

This paper looks into a simple one-dimensional chain of qubits, a predecessor to the more complex two-dimensional surface code. For a 2D code, a checkerboard with (4n+1)^2 qubits is nth-order fault tolerant, while a 1D chain of 9 qubits can go up to second-order fault tolerance. The Martinis group focuses on superconducting qubits, here a series of Xmon transmon qubits on a sapphire substrate. In their nine-qubit code, five data qubits and four measurement qubits alternate, such that each measurement qubit is sandwiched between data qubits. Instead of using the actual state of each measurement qubit to flag errors, they use the relative flip of each measurement qubit from one cycle to the next as a detection event. In other words, the researchers do not assume that the measurement qubits themselves are error-free.

In Figure 2 of the paper, there is a very complex-looking graph that shows the many different places where errors can occur. The measurement qubits are the only qubits actually measured, but regardless of where the error strikes, whether on a measurement qubit or on a data qubit, it is caught within at most two cycles. By observing whether a measurement qubit has flipped compared to its previous state, the original error can be deduced.
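Here is a toy simulation of that detection-event picture (my sketch, treating everything as classical bit flips and ignoring measurement errors):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_data, n_cycles, p_flip = 5, 8, 0.05

# Four measurement qubits each report the parity of two neighboring data
# qubits. A detection event is a parity outcome that CHANGED since the
# previous cycle -- the relative-flip signal described above.
data = np.zeros(n_data, dtype=int)
prev_parity = np.zeros(n_data - 1, dtype=int)

for cycle in range(n_cycles):
    flips = rng.random(n_data) < p_flip       # independent bit-flip errors
    data ^= flips.astype(int)
    parity = data[:-1] ^ data[1:]             # neighboring-pair parities
    events = parity ^ prev_parity             # changed since last cycle?
    prev_parity = parity
    if events.any():
        print(f"cycle {cycle}: detection events at {np.flatnonzero(events)}")
```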

The paper makes several remarks on this note that I don’t fully understand. First, the paper refers frequently to minimum-weight perfect matching to decode the pattern of detection events, yet the errors seem fairly straightforward to decode. Furthermore, there doesn’t actually appear to be an error correcting stage in this QEC path. For instance, in the standard five-qubit QEC, there is a stage where the output from the measurement qubits is used to determine whether an X flip is needed. In a three-qubit QEC, a CCNOT gate is used to correct arbitrary phase rotations. However, there is none of that in this system. The paper alludes to using post-processing as a way to correct the errors, but that seems to imply that the errors are corrected long after the fact. How is the system able to identify the error and then carry the entire state through with all the needed amplitudes afterwards? It seems that any arbitrary rotation would mess this up beyond belief!

Alright, getting back to the paper. I thought that the most interesting part of this paper was the experimental studies that these researchers conducted. They begin with either a five-qubit or a nine-qubit code, and then iterate it over up to 8 cycles, running the code 90,000 times for each number of cycles. After each cycle, they compare the error-corrected result to the actual state of the system, determined by quantum state tomography. They show these results in Figure 4, where the five-qubit code preserves the state 2.7 times better than the non-error-corrected fidelity, and the nine-qubit code does 8.5 times better. They also see that the protected fidelity decays non-exponentially, which is explained by an error rate that increases with cycle number.

I think that the interesting aspect of this paper is their ability to do experiments and validate their QEC code over repeated cycles, up to 8 of them. But I’m still unclear about how the correction itself is actually done, and how the group implements their post-processing methodology.

Reference: Kelly, J. et al. State Preservation by Repetitive Error Detection in a Superconducting Quantum Circuit Nature 519 66-69 (2015)

QCJC: Reed 2012

This post is a little bit different from the future posts… it was written as part of a final presentation for a course on quantum and nanoscale physics. Meaning, this will be the only post typeset in LaTeX for the foreseeable future :) If I can figure out a way to get WordPress to accept LaTeX, well, that would change things…

See the writeup here!

Reference: Reed, M. D. et al. Realization of Three-Qubit Quantum Error Correction with Superconducting Circuits Nature 482 382-385 (2012)

A Digital Quantum Computing Journal Club

May 13th, 2017: Reed, M. D., et al. Realization of Three-Qubit Quantum Error Correction with Superconducting Circuits Nature 482 382-385 (2012)

May 14th, 2017: Kelly, J., et al. State Preservation by Repetitive Error Detection in a Superconducting Quantum Circuit Nature 519 66-69 (2015)

May 15th, 2017: Shor, P. W. Scheme for Reducing Decoherence in Quantum Computer Memory Phys. Rev. A 52 R2493-R2496 (1995)

May 17th, 2017: Bloch, I. et al. Quantum Simulations with Ultracold Quantum Gases Nature Phys. 8 267-276 (2012)

May 21st, 2017: DiCarlo, L., Chow, J. M., et al. Demonstration of Two-Qubit Algorithms with a Superconducting Quantum Processor Nature 460 240-244 (2009)

May 30, 2017: Bennett, C. H., et al. Quantum Cryptography without Bell’s Theorem Phys. Rev. Lett. 68 557-559 (1992)

June 01 2017: Aspect, A., et al. Experimental Tests of Realistic Local Theories via Bell’s Theorem Phys. Rev. Lett. 47 460-463 (1981)

June 16 2017: Yin, J. et al. Satellite-based Entanglement Distribution over 1200 Kilometers Science 356 1140-1144 (2017)

December 31, 2017: Harrow, A. et al. Quantum Algorithm for Linear Systems of Equations Phys. Rev. Lett. 103 150502 (2009)

January 5th, 2018: Cai, X. D., et al. Experimental Quantum Computing to Solve Systems of Linear Equations Phys. Rev. Lett. 110 230501 (2013)

January 6th, 2018: DiVincenzo, D. P. The Physical Implementation of Quantum Computation (2000)

January 7th, 2018: Steffen, M et al. Quantum Computing: An IBM Perspective IBM J. Res. & Dev. 55 No. 5 (2011)

January 11th, 2018: Meyer, D. A. Quantum Strategies Phys. Rev. Lett. 82 1052-1055 (1999)

January 13th, 2018: Martinis, J. M., et al. Decoherence in Josephson Qubits from Dielectric Loss Phys. Rev. Lett. 95 210503 (2005)

June XXth, 2018: Wallraff, A. et al. Strong Coupling of a Single Photon to a Superconducting Qubit using Circuit Quantum Electrodynamics Nature 431 162-167 (2004)

June XXth, 2018: Devoret, M. H. et al. Circuit-QED: How strong can the coupling between a Josephson junction atom and a transmission line resonator be? Annalen der Physik 16 769-779 (2007)

June XX, 2018: Vion, D., et al. Manipulating the Quantum State of an Electrical Circuit Science 296 886-889 (2002)

June XX, 2018: Devoret, M. H. and Schoelkopf, R. J. Amplifying Quantum Signals with the Single-Electron Transistor Nature 406 1039-1046 (2000)

June XX, 2018: DeMille, D. Quantum Computation with Trapped Polar Molecules Phys. Rev. Lett. 88 067901 (2002)

Do you have any suggestions for any papers that I should read and discuss? Please comment below, or feel free to reach out to me via email (chunyang.ding (at) yale.edu) or via twitter @seattlechunny. Hope to see you soon!