To be perfectly honest, I was a bit surprised to find that Peter Shor is still alive and researching when I came upon the 1995 paper and probed further! I think that really speaks both to how young the field of quantum computing is – the factoring algorithm was only discovered in 1994 – and also to my blatant ignorance of the field :)
The start of this article explores the basic idea of quantum computing, with a humble note that "at least theoretically, quantum computing may be much faster than classical computation for solving certain problems…", citing his own 1994 factoring work as evidence. But Shor quickly transitions to decoherence as the key limiting factor for future quantum computers. The field previously seemed to assume that decoherence would inevitably corrupt an entire computation. Shor, however, suggests that there is a way to reduce its impact: quantum error correction.
The paper looks specifically at a nine-qubit error correction code, where if one of the qubits decoheres, the state can still be recovered. Quantum error correction, at its core, is similar to classical error correction over noisy channels. The key difference, however, comes from the quantum no-cloning theorem, which states that an arbitrary qubit cannot be duplicated in its exact state. While a classical bit can be copied many times so that majority rule washes out noise in any single copy, a quantum bit cannot rely on such a simple scheme. Instead, quantum error correction must spread the information across entangled qubits to guard against noise.
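To make the classical analogy concrete, here is a minimal Python sketch (my own illustration, not from the paper) of the repetition-plus-majority-vote scheme that quantum bits cannot naively copy:

```python
# Classical analogue of error correction: triplicate a bit, flip each copy
# independently with probability p, then decode by majority vote.
import random

def send_with_repetition(bit, p, rng):
    """Encode one bit as three copies, corrupt each with probability p,
    and decode by majority rule."""
    copies = [bit ^ (rng.random() < p) for _ in range(3)]
    return int(sum(copies) >= 2)

rng = random.Random(0)
p = 0.1
trials = 100_000
errors = sum(send_with_repetition(0, p, rng) for _ in range(trials))
# Uncoded, the error rate would be p = 0.1; majority vote fails only when
# 2+ copies flip, with probability 3p^2(1-p) + p^3 = 0.028.
print(errors / trials)
```

The no-cloning theorem is exactly what forbids the first line of this strategy, `copies = [bit, bit, bit]`, for an unknown quantum state.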
Shor starts by defining the prerequisites for QEC: one must be able to measure qubits, entangle qubits, and initialize the computer to some known state. Then the encoding begins. Shor proposes the encoding |0> = (1/(2√2)) (|000>+|111>)(|000>+|111>)(|000>+|111>) and |1> = (1/(2√2)) (|000>-|111>)(|000>-|111>)(|000>-|111>). The paper first suggests that measuring all three sets of qubits in the Bell basis would be able to restore the qubit to its state prior to the decoherence effects. This rests on a crucial assumption: only one independent error will occur. Therefore, even if one of the qubits is wrong, the other two in its group would share the same state and could restore the proper answer.
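As a sanity check on the encoding (my own sketch, not code from the paper), we can build the two nine-qubit logical states as plain numpy vectors and confirm they are normalized and orthogonal:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron_all(states):
    """Tensor a list of state vectors together."""
    out = np.array([1.0])
    for s in states:
        out = np.kron(out, s)
    return out

# One three-qubit block: (|000> ± |111>) / sqrt(2).  Three blocks together
# carry the overall factor 1/(2*sqrt(2)) from Shor's encoding.
plus_block  = (kron_all([ket0] * 3) + kron_all([ket1] * 3)) / np.sqrt(2)
minus_block = (kron_all([ket0] * 3) - kron_all([ket1] * 3)) / np.sqrt(2)

logical0 = kron_all([plus_block] * 3)   # 512-dimensional vector for |0>
logical1 = kron_all([minus_block] * 3)  # 512-dimensional vector for |1>

print(np.dot(logical0, logical0))  # ≈ 1 (normalized)
print(np.dot(logical0, logical1))  # ≈ 0 (orthogonal)
```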
However, the problem with that method is that you have now measured all of the qubits, collapsing their wavefunctions. We want an error correcting code that avoids that step. Here, Shor begins by explaining what decoherence does to a single qubit. The paper explains that, supposing the first qubit decoheres, it ends up in some arbitrary superposition of |0> and |1>, entangled with the environment. When you then express a three-qubit group in the Bell basis after the decoherence has occurred, you are able to identify the original state.
This part seems confusing to me, and I can't really tell why the difference can be detected. Shor notes that the important thing is that "the state of the environment is the same for corresponding vectors from the decoherence of the two quantum states encoding 0 and encoding 1." But what exactly does that mean, and how can it be detected? Shor uses several additional ancillary qubits to track changes in each of the three-qubit groupings; the ancillary qubits can detect whether a single three-qubit group has flipped, and where the error occurred. His explanation for why this is possible lies in the redundancy, from 1 qubit to 9, which lets the ancillary qubits make their measurement without destroying the encoded qubit.
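Shor's paper doesn't spell out the detection circuit, but the underlying trick — extracting parities that are definite for every codeword, so "measuring" them reveals the error location without disturbing the encoded amplitudes — can be sketched on the simpler three-qubit bit-flip code. Everything here (the state layout, function names, qubit ordering) is my own toy construction, assuming perfect gates and a single X error:

```python
import numpy as np

# Encoded state alpha|000> + beta|111>, then an X error on one qubit.
# The parities of qubits (0,1) and (1,2) identify which qubit flipped
# without ever revealing alpha or beta.
alpha, beta = 0.6, 0.8
state = np.zeros(8)
state[0b000] = alpha
state[0b111] = beta

def apply_x(state, qubit):
    """Bit-flip (Pauli X) on one qubit; qubit 0 is the leftmost bit."""
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        out[idx ^ (1 << (2 - qubit))] = amp
    return out

def syndrome(state):
    """Return (parity of qubits 0,1; parity of qubits 1,2).  Both parities
    are the same for every nonzero basis state of a codeword plus a single
    X error, so reading them does not collapse the alpha/beta amplitudes."""
    for idx, amp in enumerate(state):
        if amp != 0:
            b = [(idx >> 2) & 1, (idx >> 1) & 1, idx & 1]
            return (b[0] ^ b[1], b[1] ^ b[2])

corrupted = apply_x(state, 1)            # flip the middle qubit
s = syndrome(corrupted)                  # (1, 1): both parity checks trip
fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
recovered = apply_x(corrupted, fix)
print(np.allclose(recovered, state))     # True: amplitudes fully restored
```

In the real circuit these parities would be copied onto ancillary qubits with CNOT gates and only the ancillas measured, which is the non-destructive measurement the blog paragraph above is gesturing at.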
However, Shor admits that the main limitation of this work is that it only catches a single qubit decohering. If multiple qubits decohered at the same time, this QEC would fail. The probability of that happening, though, is only second order in the error rate: if the probability that a single qubit decoheres is p, then the probability that 2 or more qubits decohere in a nine-qubit block is 1 − (1+8p)(1−p)^8, or roughly 36p^2 for small p. Therefore, the probability that a memory of 9k qubits (k encoded qubits) can be fully recovered is approximately (1−36p^2)^k, establishing a bound on how large the single-qubit decoherence rate can be.
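Shor's estimate is easy to check numerically; this snippet just evaluates the exact expression against the 36p^2 approximation (the sample value p = 0.001 is my own choice):

```python
# P(0 errors) = (1-p)^9 and P(1 error) = 9p(1-p)^8 sum to (1+8p)(1-p)^8,
# so the failure probability of one nine-qubit block is the complement.
p = 0.001
exact = 1 - (1 + 8 * p) * (1 - p) ** 8
approx = 36 * p ** 2   # 36 = C(9,2) two-error combinations, leading order
print(exact, approx)   # the two agree to within a fraction of a percent
```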
From what I can understand, this paper does more to establish the basis for QEC than to describe a polished method itself. It is rather vague about how to correct the error states, or even how the ancillary qubits are connected to the data qubits when detecting the error. However, it does propose a very early form of quantum error correction that mimics classical repetition codes.
Reference: Shor, P. W. "Scheme for reducing decoherence in quantum computer memory." Phys. Rev. A 52, R2493–R2496 (1995).