QCJC: DiVincenzo 2000

A bit of a breather from systems of linear equations – let’s go back to the basics! We’ll be looking at DiVincenzo’s 2000 paper, titled The Physical Implementation of Quantum Computation, though I think this paper is more commonly known simply as DiVincenzo’s criteria. In it, he outlines 5 + 2 criteria that a quantum computing architecture would need to satisfy in order to be considered mature, or at the very least, usable. I think this is one of the most widely cited articles in the field of quantum computing, with 2118 citations, since most hardware architecture papers tend to put in a citation to argue why their new research satisfies multiple criteria. (Other notable papers: Girvin and Schoelkopf’s 2004 paper introducing superconducting circuits as a quantum computing architecture has 2915 citations, and Shor’s 1995 factoring paper has 7059 (!) citations. In contrast, my combined published work has something south of 3 citations, all of which are self-citations.)

The paper is written remarkably conversationally, but packs in a great deal of examples from pre-2000 work in the field. To start, in Section II, DiVincenzo does an excellent job arguing for the need for quantum computing without making it sound like a magic bullet. Indeed, quantum computers do not necessarily improve all possible computations or algorithms. Instead, there are limited classes of problems that can be solved exponentially faster with quantum computing. However, a strong argument for the use of quantum computing in the future is that nature itself is fundamentally quantum, not classical. The classical effects that we see are typically a large-scale approximation, or a high-energy limit, of intrinsically quantum effects. Therefore, if we truly want to understand how the universe works, it might make sense to use fundamentally quantum computers.

I find this argument to be particularly elegant, although I’m not as convinced by the next line, that quantum machines can only have greater computational power than classical ones. Just because something uses a language that is more fundamental doesn’t always mean that it will be faster. I think a reasonable analogy might be to compare the classical programming languages C and Python. Python is often considered a high-level scripting language, while C is a more machine-level language that can execute functions faster. Both have their purposes, and there is a trade-off in the choice of language, just as there is a choice between quantum and classical computing. For problems that do not need quantum computing, the requirements of quantum computing are too stringent right now to warrant the cost, but that’s not to say that the same costs will apply in the future.

Two amusing points: the first is that DiVincenzo’s opening response to the question of “Why QC?” is simply “Why not?”, which I absolutely love :)

The second is that one of the applications of QC that DiVincenzo discusses is somewhat obliquely put as

… And for some games, winning strategies become possible with the use of quantum resources that are not available otherwise.

Which sounded crazy to me! I tracked down the two references that this sentence cites, and my initial suspicion was confirmed – one of the citations was merely referring to a manner of quantum communication that beats classical communication errors. Sure, it’s really great stuff, but it wasn’t completely new – after all, my QM teacher has been bringing up that example since last year :) But the other citation looked interesting – Meyer 1999 on Quantum Strategies, considering game theory (!) from the viewpoint of quantum computing. I think I found my article for tomorrow!

Alright, so what are these criteria anyways? Let’s write them out first, and then revisit each one of them.

  1. A scalable physical system with well characterized qubits
  2. The ability to initialize the state of the qubits to a simple fiducial state, such as |000…>
  3. Long relevant decoherence times, much longer than the gate operation time
  4. A universal set of quantum gates
  5. A qubit-specific measurement capability
  6. The ability to interconvert stationary and flying qubits
  7. The ability to faithfully transmit flying qubits between specified locations.

Note that criteria 6 and 7 are what DiVincenzo calls “Desiderata for Quantum Communication”, as flying qubits (or, more usually, a form of a photonic qubit) are typically used in quantum communications over long distances. While these are fascinating, we will skip these two for now.

The other 5 criteria seem remarkably reasonable for computational standards, but they do have some interesting caveats to consider.

The first criterion is simply that you need qubits in order to have a quantum computer, just as you need bits for a classical computer. The additional requirement that these qubits be well characterized is interesting. For DiVincenzo, this means that the following should be true:

  • The qubits are permitted to be entangled; that is, for more than one qubit, the overall state cannot always be decomposed into a simple product of individual states,
  • The physical parameters of the qubit, including its internal Hamiltonian, are known,
  • The presence of states beyond the quantum two-level system is characterized, and interactions with those states are understood,
  • Interactions with other qubits are understood, and
  • Couplings to external fields are understood.

The reason for so many requirements is perhaps that it is fairly easy to find a two-level system in quantum mechanics, but it can be difficult to characterize such a system as only a two-level system satisfying these criteria. DiVincenzo raises an interesting aside about how a single electron shared between two quantum dots cannot be called a two-qubit system, despite its surface similarity to one. Despite the apparent superposition of a single electron across multiple positions, the selection rules prevent additional entanglement from occurring.
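To make the “internal Hamiltonian is known” requirement a little more concrete, here is a minimal numerical sketch (my own toy example with made-up numbers, not from the paper): for an idealized qubit with H = (ħω/2)σz, knowing H lets you predict exactly how the relative phase between |0> and |1> evolves during free evolution.

    # Toy sketch: free evolution of a qubit under a known internal Hamiltonian.
    # All numbers are invented for illustration; hbar is set to 1 for simplicity.
    import numpy as np
    from scipy.linalg import expm

    hbar = 1.0
    omega = 2 * np.pi * 5.0e9                            # hypothetical 5 GHz qubit frequency
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = 0.5 * hbar * omega * sigma_z                     # the qubit's internal Hamiltonian

    t = 50e-12                                           # evolve freely for 50 ps
    U = expm(-1j * H * t / hbar)                         # U = exp(-i H t / hbar)

    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)
    psi_t = U @ psi0

    print(np.abs(psi_t) ** 2)                            # populations unchanged: [0.5, 0.5]
    print(np.angle(psi_t[1] / psi_t[0]))                 # relative phase grows as omega * t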

At this point DiVincenzo briefly goes over the existing systems at the time of publication, including ion traps, NMR, neutral atoms, impurities or quantum dots, and superconducting devices. Since that time, I think superconducting devices have evolved into superconducting circuits, impurities have evolved into diamond NV centers, and topological qubits have emerged onto the scene as well.

The second criterion is to have an easily accessible initial state. My QM professor would often joke about going to the Quantum Mechanical hardware store and picking up a whole bunch of states that were all initialized to the same 0 position. However, in an actual implementation, those states need to be created efficiently. This is because a 0 state qubit is especially useful for quantum error correcting codes. Not only is a 0 state needed, but a continuous supply of 0 states is needed, meaning that one of two things must happen: either there is a fast way (fast compared to the decoherence time) of generating new 0 states, or there is a way to “store” 0 state qubits and then “move” them into place on a conveyor belt when they are needed.

For the first option, DiVincenzo proposes two ways to reset qubits. One is to allow the qubit to naturally fall to its ground state, which is most typically used as the 0 state of a qubit anyway. While this is a natural approach, the time it takes for a qubit to return to its ground state can be quite long, and may not be shorter than the decoherence time of the entire system. Therefore, if you need to wait for a single qubit to return to 0, then within that same amount of time, all of the qubits will have been given a chance to fall back to 0. Another option is to measure the qubit, projecting it into a known state that can be further manipulated by rotations if necessary. As for how that measurement process is actually handled, that is a question for criterion five.
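Here is a minimal sketch of that second, measurement-based reset strategy (my own toy simulation, not from the paper): projectively measure the qubit, and if it comes out as 1, apply a bit-flip rotation to bring it back to 0.

    # Toy simulation of measurement-based reset: measure, then conditionally flip.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 1], [1, 0]], dtype=complex)        # bit-flip (pi rotation about x)
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    def measure_and_reset(psi):
        """Projective measurement in the computational basis, then flip to |0> if needed."""
        p1 = abs(psi[1]) ** 2                            # probability of finding |1>
        outcome = 1 if rng.random() < p1 else 0
        post = ket1 if outcome == 1 else ket0            # post-measurement state
        if outcome == 1:
            post = X @ post                              # conditional rotation back to |0>
        return outcome, post

    psi = np.array([0.6, 0.8j])                          # some leftover "dirty" state
    outcome, fresh = measure_and_reset(psi)
    print(outcome, fresh)                                # fresh is always [1, 0], i.e. |0>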

Interestingly, DiVincenzo uses this criterion to rule out the possibility of NMR as a QC architecture. I’m not sure how the field has changed in the last 17 years to prove him right or wrong!

The third criterion is about extending the relevant decoherence times of the qubits. In general, the decoherence time corresponds to the lifetime of the qubit, or how long it takes before outside noise interferes with the delicate state of the qubit. Note that not every single aspect of the qubit has to have a long decoherence time. Instead, only the properties of the qubit that represent the basis states of the logical qubit need to have a long decoherence time. Unfortunately, there is usually a correspondence between the raw decoherence time of a qubit and its gate time, as both quantities are related to the strength of the interaction between the qubit and external factors.
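A quick back-of-envelope version of that trade-off, with invented numbers (none of these figures come from the paper): what actually matters is roughly how many gates fit inside one decoherence time.

    # Hypothetical numbers, purely to illustrate the ratio that matters.
    t_gate = 50e-9            # assumed gate time: 50 ns
    T_coherence = 100e-6      # assumed relevant decoherence time: 100 microseconds

    gates_per_coherence_time = T_coherence / t_gate
    error_per_gate = t_gate / T_coherence     # crude estimate of decoherence error per gate

    print(f"~{gates_per_coherence_time:.0f} gates fit inside one coherence time")
    print(f"~{error_per_gate:.1e} decoherence-induced error per gate (very roughly)")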

The main motivation for this criterion comes from quantum error correction (QEC). A typical quantum algorithm will likely take longer than the lifetime of its individual qubits to complete, which seems rather bad. If you were trying to send a message to someone, but the delivery would take four centuries, by which time the original receiver would be dead, why bother sending the message? However, QEC is a solution – it frequently “revitalizes” the qubits by checking that they are still the same as before, and then correcting them (using ancillary qubits that are initialized to 0) whenever needed. Therefore, as long as you are able to run many of these QEC subroutines during the algorithm, your entire code should still work.
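As a deliberately classical caricature of this “revitalizing” idea (my own toy, not from the paper, with made-up noise parameters): three noisy copies of a bit, periodically corrected by majority vote, retain their value far longer than a single noisy bit does.

    # Classical caricature of a bit-flip repetition code with periodic correction.
    import numpy as np

    rng = np.random.default_rng(1)
    p_flip = 0.01          # assumed probability that a bit flips during one QEC cycle
    n_cycles = 200
    n_trials = 5000

    def logical_zero_survives(protected: bool) -> bool:
        bits = np.zeros(3 if protected else 1, dtype=int)          # encode logical 0
        for _ in range(n_cycles):
            bits ^= (rng.random(bits.size) < p_flip).astype(int)   # independent flip noise
            if protected:
                bits[:] = int(bits.sum() >= 2)                     # majority-vote correction
        return int(bits.sum() >= (2 if protected else 1)) == 0     # decode and check

    for protected in (False, True):
        rate = sum(logical_zero_survives(protected) for _ in range(n_trials)) / n_trials
        print(f"protected={protected}: P(logical 0 survives {n_cycles} cycles) ~ {rate:.3f}")

Of course, a real quantum code cannot just measure its data qubits directly the way this classical toy does; the syndrome checks are done via those ancillary qubits, which is exactly why the continuous supply of fresh 0 states from criterion two matters so much.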

The fourth criterion is to have a universal set of gates that is able to implement every possible algorithm. This isn’t as bad as it sounds (for now) – for instance, classical computing only requires the NAND gate to build any other gate. Similarly, QC only needs the CNOT gate plus arbitrary single-qubit rotations (as well as measurement, but that’s a whooole ‘nother criterion). The typical way to implement a gate is to find some Hamiltonian that corresponds to the unitary transformation you want to execute, and then “turn on” that Hamiltonian for the desired amount of time:

U = e^{-iHt/\hbar}
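As a small worked example of this recipe (toy numbers of my own, not from the paper): turning on a driving Hamiltonian H = (ħΩ/2)σx for a time t implements a rotation by angle Ωt about the x axis, so choosing t = π/Ω gives a NOT gate up to a global phase.

    # Build a gate by exponentiating a Hamiltonian for a chosen pulse length.
    import numpy as np
    from scipy.linalg import expm

    hbar = 1.0
    Omega = 2 * np.pi * 10e6                       # assumed 10 MHz Rabi rate
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    H = 0.5 * hbar * Omega * sigma_x               # the "turned on" driving Hamiltonian

    def U(t):
        return expm(-1j * H * t / hbar)            # U = exp(-i H t / hbar)

    t_pi = np.pi / Omega                           # pulse length for a pi rotation
    print(np.round(U(t_pi), 3))                    # X (NOT) gate, up to a global phase of -i
    print(np.round(U(t_pi / 2), 3))                # half the pulse: a sqrt(X) gate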

But that approach actually has a ton of problems in its implementation within different architectures. DiVincenzo brings up that some of these desired Hamiltonians in NMR systems could only be turned on and not off, posing quite a challenge. Others only couple two qubits at a time, which makes larger gates (like the Toffoli CCNOT gate) much more challenging to implement. Even for a good Hamiltonian, controlling the duration of the pulse during which the gate is active can be a concern. The gate needs to evolve unitarily, meaning that it needs to be slow enough to satisfy adiabatic requirements (which I don’t super understand at the quantum level?). Furthermore, if the timing signal is controlled by classical components, there needs to be a good separation between the quantum system and the classical control system.

I feel like DiVincenzo gives up a bit of ground here at the end. He admits that the unitary gates cannot be implemented perfectly, and recommends that such inaccuracies simply be characterized and taken into account. Instead of trying to make perfect unitary gates, we should aim to understand how much noise the gates introduce into the system and correct for it using QEC. Oh, QEC, how you bring life to our gates!

The fifth criterion is to have good measuring devices. That is, if the qubit is blue, don’t let your measuring device say that it is red! It might sound simple, but this criterion is really about the fidelity of the readout. Your measuring device will likely not report the correct state of the qubit every time, which seems really confusing to me!

Aside: I can understand fidelity well enough when it comes to definite qubit states. That is, if a qubit is |0> and it is measured as |1> every now and then, I understand that the measuring device has a fidelity of less than 100%. But if the qubit is part of a Bell state, how do you even estimate the fidelity of the measuring device? Do you simply characterize the measuring device using a known qubit state and then assume that the fidelity does not change when measuring an entangled qubit in the future?

In any case, DiVincenzo remarks that a low-fidelity system (a system that has poor measuring devices) can still be overcome if you repeat the same experiment over and over again using a large ensemble of particles. That would satisfy the threshold level of reliability for the experiment and, hopefully, for the computation. However, such a process again requires rapid gate operations, especially if the final state is not easily reproducible and the entire algorithm needs to be run again.
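Here is a rough sketch of that repetition argument with made-up numbers (and with the simplifying assumption that the whole experiment can just be re-run for every shot): if a single readout reports the right answer with probability f, a majority vote over n independent shots fails only when more than half of them err.

    # Majority-vote error rate for n independent shots with single-shot fidelity f.
    from math import comb

    def majority_vote_error(f: float, n: int) -> float:
        p = 1 - f                               # single-shot error probability
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

    f = 0.90                                    # assumed single-shot readout fidelity
    for n in (1, 5, 21, 101):                   # odd n, so there are no ties
        print(f"n = {n:>3}: majority-vote error ~ {majority_vote_error(f, n):.1e}")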

By looking at the five criteria, you start to get a sense of how daunting the task is, but also of how powerful QC can be. Let’s play with this idea on a sample problem.

Suppose that you have a problem that typically takes exponential time on a classical computer but would take linear time on a QC. However, to carry out the computation, you need a large number of qubits – let’s say 100. Each of those computational qubits needs to be backed up by a series of QEC ancillary qubits, each of which runs constant subroutines to ensure that the computational qubits do not decohere in the process. With that many qubits rubbing against each other, perhaps the interactions between qubits become less stable, and qubits can get excited out of their two-level subspace. Furthermore, if the final measurement device has a low fidelity, the entire process may need to be repeated several times to reach a certain threshold level of certainty. And even then, if the algorithm is non-deterministic (as most QC algorithms are), it may take a number of tries over different sample values before a final solution is produced!
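To put some (entirely invented) numbers on this pile-up of overheads, here is the kind of back-of-envelope tally I have in mind; none of these figures come from the paper.

    # Back-of-envelope tally of how the overheads multiply (all numbers invented).
    logical_qubits    = 100       # computational qubits the algorithm needs
    qec_overhead      = 1000      # assumed physical qubits per logical qubit
    logical_depth     = 10**6     # assumed number of logical gate steps
    gate_time         = 100e-9    # assumed 100 ns per gate step
    readout_repeats   = 10        # repeats to beat imperfect measurement fidelity
    algorithm_repeats = 20        # repeats because the algorithm is probabilistic

    physical_qubits = logical_qubits * qec_overhead
    total_runtime = gate_time * logical_depth * readout_repeats * algorithm_repeats

    print(f"physical qubits needed : {physical_qubits:,}")
    print(f"total runtime          : {total_runtime:.0f} seconds")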

All that’s to say, QC has a lot of challenges to face before it can solve large problems reliably. Just the sheer amount of error correction that would be needed strongly motivates the creation of a ton of stable qubits, and a good conveyor belt system to pipe in fresh qubits initialized in the proper state. However, the rewards can still be great. If it were as easy to manufacture qubits as it is to manufacture silicon bits, the world might never be the same.

Reference: January 6th, 2017: DiVincenzo, D. P. The Physical Implementation of Quantum Computation (2000)
