QCJC: Shor 1995

To be perfectly honest, I was a bit surprised to find that Peter Shor is still alive and researching when I came upon the 1995 paper and probed further! I think that really speaks both to how young the field of quantum computing is – the factoring algorithm was only discovered in 1994 – and also to my own blatant ignorance :)

The start of this article explores the basic idea of quantum computing, with a humble note that “at least theoretically, quantum computing may be much faster than classical computation for solving certain problems…”, citing his own 1994 factoring work as evidence. But Shor quickly transitions to the importance of decoherence as a limiting factor for future quantum computers. The field previously seemed to assume that decoherence would inevitably corrupt an entire computation. However, Shor suggests that there is a way to guard against that decoherence, through the use of quantum error correction.

The paper looks specifically at a nine-qubit error correction code in which, if one of the qubits decoheres, the encoded state can still be recovered. Quantum error correction, at its core, is similar to classical error correction over noisy channels. The key difference, however, is the quantum no-cloning theorem, which states that an arbitrary, unknown quantum state cannot be copied exactly. While a classical bit can be duplicated many times, with a majority vote then reducing the effect of noise, a qubit cannot rely on such a simple scheme. Instead, quantum error correction must spread the information across entangled qubits to guard against noise.
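To make the classical side of that comparison concrete, here is a minimal sketch of a repetition code with majority-vote decoding (my own illustration, not anything from the paper). It is exactly this copy-and-vote step that no-cloning forbids for qubits:

    from collections import Counter

    def encode(bit, n=3):
        """Classical repetition code: just copy the bit n times."""
        return [bit] * n

    def majority_decode(copies):
        """Recover the bit by majority vote over the (possibly noisy) copies."""
        return Counter(copies).most_common(1)[0][0]

    noisy = encode(1)   # [1, 1, 1]
    noisy[0] ^= 1       # the channel flips the first copy -> [0, 1, 1]
    assert majority_decode(noisy) == 1   # the two good copies outvote the bad one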

Shor starts by defining the prerequisites for QEC: one must be able to perform measurements on qubits, entangle qubits, and initialize the computer to some known state. Then the encoding begins. Shor suggests encoding |0> = 1/(2√2) (|000>+|111>)(|000>+|111>)(|000>+|111>), and |1> = 1/(2√2) (|000>-|111>)(|000>-|111>)(|000>-|111>). The paper first suggests that measuring all three sets of qubits in the Bell basis would be able to restore the qubit to its state prior to the decoherence effects. This rests on a crucial assumption: only one independent error will occur. Therefore, even if one of the qubits is wrong, the other two would still agree and could restore the proper answer.
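As a sanity check on that normalization (the 1/(2√2) is just one over the norm of (|000>+|111>), which is √2, cubed), here is a short numpy construction of the two encoded basis states. This is purely my own illustration of the state vectors, not code from the paper:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    def kron_all(vectors):
        """Tensor a list of state vectors together."""
        out = np.array([1.0])
        for v in vectors:
            out = np.kron(out, v)
        return out

    # GHZ-like three-qubit blocks |000> +/- |111> (unnormalized, norm sqrt(2) each)
    plus  = kron_all([ket0, ket0, ket0]) + kron_all([ket1, ket1, ket1])
    minus = kron_all([ket0, ket0, ket0]) - kron_all([ket1, ket1, ket1])

    # Three blocks tensored together; dividing by sqrt(2)**3 = 2*sqrt(2) normalizes
    logical0 = kron_all([plus, plus, plus]) / (2 * np.sqrt(2))
    logical1 = kron_all([minus, minus, minus]) / (2 * np.sqrt(2))

    assert logical0.shape == (512,)               # 2^9 amplitudes
    assert np.isclose(logical0 @ logical0, 1.0)   # properly normalized
    assert np.isclose(logical1 @ logical1, 1.0)
    assert np.isclose(logical0 @ logical1, 0.0)   # orthogonal codewords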

However, the problem with that method is that you have now measured all of the qubits, collapsing the encoded superposition. We want an error correcting code that avoids that. Here, Shor begins by explaining what decoherence would do to a single qubit. The paper explains that, supposing the first qubit decoheres, it afterwards becomes some random superposition of |0> and |1>. When you express the three-qubit group in terms of the Bell basis after decoherence occurs, you are able to identify which error occurred, and hence the original state.

This part seems confusing to me, and I can’t really tell why the difference is able to be detected. Shor notes that the important thing is that “the state of the environment is the same for corresponding vectors from the decoherence of the two quantum states encoding 0 and encoding 1.” But what exactly does that mean, and how can that be detected? Shor uses several additional ancillary qubits to track the changes in each of these three-qubit groupings, where an ancillary qubit can detect whether a single three-qubit group has flipped, and where the error occurred. His explanation of how this is possible lies in the redundancy, from 1 qubit to 9: the ancillary qubits are able to measure the error without destroying the encoded state.
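Since the paper leaves the measurement procedure vague, here is how I picture the syndrome idea, scaled down to the three-qubit bit-flip code rather than Shor's full nine-qubit circuit. Everything here is my own toy construction; the parities stand in for what ancilla measurements would report on real hardware:

    import numpy as np

    # Three-qubit bit-flip code: alpha|000> + beta|111>, stored as a
    # length-8 vector indexed by the bitstring (qubit 2, qubit 1, qubit 0).
    alpha, beta = 0.6, 0.8            # an arbitrary superposition to protect
    state = np.zeros(8)
    state[0b000], state[0b111] = alpha, beta

    def flip(psi, qubit):
        """Apply a bit-flip (X) on one qubit by permuting basis indices."""
        out = np.zeros_like(psi)
        for idx, amp in enumerate(psi):
            out[idx ^ (1 << qubit)] = amp
        return out

    def syndrome(psi):
        """Parities of qubit pairs (0,1) and (1,2) -- what two ancillas would
        report. After at most one flip, every nonzero amplitude sits in the
        same parity sector, so the parities identify the error without
        revealing (or disturbing) alpha and beta."""
        idx = int(np.flatnonzero(psi)[0])
        b = [(idx >> q) & 1 for q in range(3)]
        return (b[0] ^ b[1], b[1] ^ b[2])

    corrupted = flip(state, 1)                       # error on the middle qubit
    where = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(corrupted)]
    recovered = corrupted if where is None else flip(corrupted, where)
    assert np.allclose(recovered, state)             # superposition intact

The nine-qubit code plays the same game with more parity checks within each block, plus sign comparisons between blocks to catch phase errors.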

However, Shor admits that the main limitation of this work is that it only allows a single qubit’s decoherence to be caught. If multiple qubits decohered at the same time, this QEC would fail. The probability of that, however, is much smaller (second order in p): if the probability that a single qubit decoheres is p, then the probability that 2 or more qubits decohere in a nine-qubit system is 1 – (1+8p)(1-p)^8, or roughly 36p^2 for small p. Therefore, the probability that all k encoded qubits in a grouping of 9k qubits can be recovered is approximately (1-36p^2)^k, establishing a bound on how large the single-qubit decoherence probability can be.
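A quick numerical check of that expansion (my own, just to see how fast the approximation kicks in as p shrinks):

    # P(2 or more of 9 qubits decohere) = 1 - (1-p)^9 - 9p(1-p)^8
    #                                   = 1 - (1+8p)(1-p)^8 ~= 36p^2 for small p
    for p in (1e-1, 1e-2, 1e-3):
        exact = 1 - (1 + 8 * p) * (1 - p) ** 8
        print(f"p = {p:.0e}   exact = {exact:.3e}   36p^2 = {36 * p * p:.3e}")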

From what I can understand, this paper does more to establish the basis for QEC than to describe a polished method itself. It is rather vague in describing how to correct the error states, or even how the ancillary qubits are connected to the data qubits in detecting the error. However, it does propose a very early form of quantum error correction, one that mimics the form of classical repetition codes.

Reference: Shor, P. W. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A 52, R2493–R2496 (1995).

One thought on “QCJC: Shor 1995”

  1. This was very interesting to read and think about! I’m not exactly sure, but here is why it might be nice that the state of the environment is the same after error and recovery, regardless of what our encoded qubit was. Shor considers the case where the first qubit in the Shor code decoheres. Regardless of whether the encoded qubit was |0> or |1>, the state of the environment would be the same for a given error on the first qubit.

    As you said, measurement isn’t explained explicitly in the paper, but let’s say that measurements on ancillary qubits detect which of the four error scenarios the first qubit fell into: none, phase flip, bit flip, or both. After decoherence on the first qubit, the post-decoherence state can be expressed as a linear combination of these four parts, each paired with an environment state. Once we know which error occurred on the first qubit, we can look at the state of the environment for the corresponding error and ignore the rest. The state of the environment does not depend on the initially encoded information, so we gain no information about the encoded state even after we learn the error. It seems that after the recovery, the state of the environment and the qubits in use will be separable. This is because both the encoded 1 and the encoded 0 are matched with the same environment state, so we can factor the environment out of the overall wavefunction, leaving only our qubits. The part of the wavefunction corresponding to our qubits is then back to its initial superposition.

    Had the state of the environment not been symmetric across encoded 0 and 1, the overall wavefunction would not be separable into the environment and our qubits, since the 0 would carry a different environment factor from the 1 in a superposition. If the wavefunction is not separable, then we do not get back the initially encoded information. (A short symbolic sketch of this argument appears at the end of this comment.)

    This also seems to suggest that if we were not interested in correcting superpositions of 1 and 0, it might be okay even if the state of the environment were different. If the correction were still possible in this case, the environment could still be separated from the resulting wavefunction, since there would only be a 1 or a 0 and no matching is needed. We definitely do need to correct superpositions, though, so it’s nice that the environment states do not differ!

    Nielsen and Chuang’s Quantum Computation and Quantum Information approaches the Shor code a bit differently, without referring to the state of the environment when characterizing decoherence and errors. I found that very helpful, and you might also find it interesting!
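    Here is the symbolic sketch promised above, in the standard error-operator picture (my own notation, not the paper’s). Decoherence of the first qubit entangles it with the environment as

        |\psi\rangle \otimes |e\rangle \;\longrightarrow\; \sum_i \big( E_i |\psi\rangle \big) \otimes |e_i\rangle, \qquad E_i \in \{ I,\; X_1,\; Z_1,\; X_1 Z_1 \}

    where the crucial point is that the environment states |e_i> depend only on which error E_i occurred, not on the encoded state |psi>. Measuring the syndrome collapses the sum to a single term (E_i |psi>) ⊗ |e_i>, and applying the inverse of E_i leaves |psi> ⊗ |e_i>, a product state, so the encoded superposition comes back exactly. Had the |e_i> depended on whether |psi> encoded 0 or 1, this last factorization would fail, which is the non-separability described above.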
