# QCJC: Cai 2013

Alright, as we promised yesterday, a look at the guts of the experimental side of the papers! I’m going to be looking into Cai 2013, which is one of the earlier implementations of the QC solution to linear systems of equations, as was very roughly described in the previous post. The main challenge is to see if I can pull some concrete meaning out of the abstract theory in the last paper by working through some real examples.

The premise remains the same: Cai et al. aim to implement the algorithm described by Harrow et al. using photonic qubits. The paper treats the problem much more practically by aiming to solve a specific instance:

$A x = b, \quad A = \begin{bmatrix} 1.5 & 0.5 \\ 0.5 & 1.5 \end{bmatrix}$

Like the original theory paper, Cai completes this process in three steps. The first step determines the eigenvalues of the matrix $A$, the second step performs rotations conditioned on those eigenvalues to find their inverses (for the inverse matrix), and the third step extracts the solution vector $x$ from the system. The algorithm is initialized with three different types of qubits – one extra qubit to assist in the inversion step, labeled the ancilla; two qubits that act as “memory” to store information about the matrix $A$, labeled the register; and one qubit that acts as the input.

One of the parts of this algorithm that most confuses me is in the setup of the matrix $A$ in step 1. This algorithm appears to have hard coded in the eigenvalues of the matrix. Just for sanity’s sake, let’s do this brief eigenvalue calculation here.

First, we know that the eigenvalue problem we need to solve is $\det(A - \lambda I) = 0$, and so

$\det \begin{bmatrix} 1.5 - \lambda & 0.5 \\ 0.5 & 1.5 - \lambda \end{bmatrix} = 0$

$( 1.5 - \lambda )^2 - 0.25 = 0$

$\lambda = 1, 2$
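Just as a sanity check on the arithmetic, here is the same characteristic-polynomial computation in a few lines of Python (the helper name is mine, not anything from the paper):

```python
import math

# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] with real eigenvalues,
# from the characteristic polynomial
#   lambda^2 - (a + d) * lambda + (a*d - b*c) = 0.
def eigvals_2x2(a, b, c, d):
    trace, det = a + d, a * d - b * c
    disc = math.sqrt(trace ** 2 - 4 * det)
    return (trace - disc) / 2, (trace + disc) / 2

print(eigvals_2x2(1.5, 0.5, 0.5, 1.5))  # (1.0, 2.0)
```

Here the trace is 3 and the determinant is 2, so the discriminant is exactly 1 and the two roots come out to 1 and 2, as above.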

which nicely matches what the paper claims. The paper states that to optimize the circuit, they encode the eigenvalues of the matrix in binary and store them within the register qubits. To my understanding, this is a bit of a simplification that is not part of the original algorithm. That is, instead of letting the algorithm solve for the eigenvalues of the matrix, they hard-code the eigenvalues into the circuit and therefore simulate the desired matrix. This makes sense as a simplification – the original paper seems to rely on some more complex form of phase estimation.

Disclaimer – I’m not sure if this implementation actually is a form of phase estimation itself, mainly because I am unfamiliar with that process. However, the way the algorithm is implemented here seemed a bit fishy to me. What if the eigenvalues of the matrix were not so nice and easy, but were actually very far apart? For instance, if the two eigenvalues were 1 and 9000, you would need at least 14 register qubits just to encode the larger eigenvalue, not to mention the number of operations needed to instantiate such values. It is possible that this is one of the drawbacks the original paper discusses in terms of the condition number $\kappa$, which the paper defines as the “ratio between A’s largest and smallest eigenvalues”. The 2009 theory paper claims a less stable solution when $\kappa$ is large, with the complexity growing as $\kappa^2$.
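A back-of-the-envelope sketch of that register-size worry, assuming the eigenvalues really are stored as plain binary integers in the register (my reading of the paper is that $\lambda = 1, 2$ become $|01\rangle$ and $|10\rangle$, but treat that as an assumption):

```python
import math

# Register qubits needed to hold an integer eigenvalue in binary.
# E.g. eigenvalues up to 2 fit in two qubits (|01>, |10>), while an
# eigenvalue of 9000 already needs 14 qubits, since 2^13 = 8192 < 9001.
def qubits_needed(max_eigenvalue):
    return max(1, math.ceil(math.log2(max_eigenvalue + 1)))

print(qubits_needed(2))     # 2
print(qubits_needed(9000))  # 14
```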

After that first step, the next two steps entangle the qubits together and invert the eigenvalues. Now, I’m not sure if my math is right here. However, it seems to me that in step two, you are doing the equivalent of inverting the matrix $A$ by finding the inverses of its eigenvalues. Because those eigenvalues are entangled with the original input qubits, that process also transforms the input qubits. Step three then disentangles the two again.
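If I have that picture right, step two is classically equivalent to the spectral identity $A^{-1} = \sum_i \lambda_i^{-1} |u_i\rangle\langle u_i|$. Here is a small Python sketch of that identity for this particular $A$, with the eigenvectors $(1, \pm 1)/\sqrt{2}$ worked out by hand:

```python
import math

# Inverting A by inverting its eigenvalues: for a symmetric matrix,
# A^{-1} = sum_i (1/lambda_i) |u_i><u_i|. For A = [[1.5, 0.5], [0.5, 1.5]],
# lambda = 1 pairs with (1, -1)/sqrt(2) and lambda = 2 with (1, 1)/sqrt(2).
s = 1 / math.sqrt(2)
eigs = [(1.0, (s, -s)), (2.0, (s, s))]  # (lambda_i, u_i) pairs

# Accumulate the outer products (1/lambda_i) * u_i u_i^T entry by entry.
A_inv = [[0.0, 0.0], [0.0, 0.0]]
for lam, u in eigs:
    for i in range(2):
        for j in range(2):
            A_inv[i][j] += u[i] * u[j] / lam

print(A_inv)  # ~ [[0.75, -0.25], [-0.25, 0.75]], i.e. (1/4)[[3, -1], [-1, 3]]
```

This reproduces the inverse matrix used further down, which is reassuring even if it says nothing about how the quantum circuit actually performs the rotation.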

At the end of step three, there is a measurement stage. To the best of my understanding, this measurement stage does not always work. That is, it is possible to measure outcomes other than $|1\rangle$ for the ancilla qubit and $|00\rangle$ for the register qubits. In those cases, I think the calculation is moot and the whole process is repeated from step 1 until both conditions are met. Only then is the input qubit known to carry the solution.

I wasn’t yet able to follow along with a computation of all of the register qubits. However, I did try out some of the results in Figure 3, simply as sanity checks to ensure that my calculations were correct. For this paper, the experimenters evaluate the following three vectors:

$b = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \end{pmatrix}$

Using these values, we can find the solution either by manually solving the system of linear equations or by applying the inverse of the matrix. The inverse of the matrix is

$A^{-1} = \frac{1}{4} \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix}$

and by left multiplying the inverse matrix with the input vectors, we find the following normalized solutions for $x$:

$x_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad x_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad x_3 = \frac{1}{\sqrt{10}} \begin{pmatrix} 3 \\ -1 \end{pmatrix}$
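Those solutions can also be checked mechanically. A small Python sketch that applies $A^{-1}$ to each input vector and normalizes the result:

```python
import math

# Apply A^{-1} = (1/4)[[3, -1], [-1, 3]] to each input |b> and normalize,
# reproducing the three solution vectors quoted above.
A_inv = [[0.75, -0.25], [-0.25, 0.75]]
s = 1 / math.sqrt(2)
bs = [(s, s), (s, -s), (1.0, 0.0)]

solutions = []
for b0, b1 in bs:
    x0 = A_inv[0][0] * b0 + A_inv[0][1] * b1
    x1 = A_inv[1][0] * b0 + A_inv[1][1] * b1
    norm = math.hypot(x0, x1)  # Euclidean length, for normalization
    solutions.append((x0 / norm, x1 / norm))

for x in solutions:
    print(x)  # ~ (1,1)/sqrt(2), (1,-1)/sqrt(2), (3,-1)/sqrt(10)
```

Note that the first two inputs are eigenvectors of $A$, which is why their solutions come out proportional to the inputs themselves.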

Afterwards, recall that the overall purpose is not to recover the actual vector $x$, as that would require at least $n$ operations to read out. Instead, the purpose is to find expectation values. For this experiment, the observables used are the Pauli matrices, measuring in the X, Y, and Z directions. Since we know the values of $x$, we can compute the predicted expectation values and compare them to the experimental results. (Note that this is already done for us in Figure 3, in the grey bars. This calculation is more or less an exercise!)

For the Pauli matrix $\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$, the expectation values for each of the three vectors are

$\langle x_1|\sigma_x|x_1 \rangle = 1, \quad \langle x_2|\sigma_x|x_2 \rangle = -1, \quad \langle x_3|\sigma_x|x_3 \rangle = -0.6$
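These numbers are easy to reproduce by hand: since $\sigma_x$ just swaps the two components of a real vector, the expectation value collapses to $2 x_0 x_1$. A quick sketch (the helper name is mine):

```python
import math

# Expectation value <x| sigma_x |x> for a real two-component state:
# sigma_x swaps the components, so <x|sigma_x|x> = x0*x1 + x1*x0 = 2*x0*x1.
def expect_sigma_x(x0, x1):
    return 2 * x0 * x1

s = 1 / math.sqrt(2)
r = 1 / math.sqrt(10)
print(expect_sigma_x(s, s))        # ~ 1
print(expect_sigma_x(s, -s))       # ~ -1
print(expect_sigma_x(3 * r, -r))   # ~ -0.6
```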

which agree with the values shown in Figure 3.

It’s a bit odd not to be discussing the actual experiment that this paper carries out, but that’s more or less because of my ignorance. I’m still not as familiar with these rotation transformations as I would like, and I need a better understanding of the physical implementations of these logical quantum gates. Hopefully I’ll gain more understanding and discuss this part in the future! One part that did catch my attention was the apparent worsening of the results for vectors $b_2$ and $b_3$. The paper attributes this to the specific optical setup used in the experiment, caused by “high-order photon emission events and post-selection in CNOT gates”. This seems troubling – even when using an algorithm optimized for a specific matrix, you still get such large discrepancies? But I’m certain that these fidelity values will improve in the future!

In summary, this paper gives experimental evidence that it is possible to calculate expectation values of solutions to systems of linear equations in an efficient manner. We followed the example equation provided and computed some of the same values, while leaving the harder experimental items for another day :)

References: Cai, X. D., et al. “Experimental Quantum Computing to Solve Systems of Linear Equations.” Phys. Rev. Lett. 110, 230501 (2013).