Ok, today we’ll tackle something that I’ve never talked about on this blog before: quantum computing. There might be a few more quantum computing posts in the future if I’m in the mood for it.
The problem at hand is simple: there is an unordered list of $N$ items, of which $K$ are marked; approximate the value of $K$. Throughout, we’ll denote the total number of items by $N$ and the number of marked items by $K$.
This is actually a rather old problem which has been tackled before, and it is deeply connected to amplitude estimation (see [1]), which uses the Quantum Fourier transform. Recently though, a simpler proof which uses only Grover-like techniques was published by Scott Aaronson, a professor at UT Austin (Hook ’em Horns!). Note that the algorithm is probabilistic, so it succeeds with high probability rather than with certainty.
The heart of the technique is Grover’s algorithm, so perhaps I should spend a few words on that. Consider an unordered list of size $N$ in which you want to find a particular element that can be checked to be a solution in constant time; Grover’s algorithm finds it in $O(\sqrt{N})$ queries.
In the classical case, one is forced to do this (both the above problem and QAC) in $O(N)$ time. The full strength of Grover’s algorithm is that one can actually find a pre-image $x \in f^{-1}(1)$ for a function $f:\{0,1\}^{n}\to\{0,1\}$, where $f$ maps $x$ to $1$ exactly when $x$ is a solution to the given problem. This is all encoded as a quantum black-box unitary operator $U_f$ defined by $U_f|x\rangle = -|x\rangle$ if $f(x)=1$ and $U_f|x\rangle = |x\rangle$ if $f(x)=0$.
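To make the black box concrete, here is a minimal sketch in Python/NumPy (my own illustration, not anything from the paper) of the phase-flip oracle acting directly on a vector of amplitudes; the set `marked` of marked indices is a made-up example.

```python
import numpy as np

def phase_oracle(state, marked):
    """Apply U_f: negate the amplitude of every marked basis state, leave the rest alone."""
    out = state.copy()
    out[list(marked)] *= -1
    return out

# Toy example: N = 8 items, item 5 is marked.
N = 8
psi = np.full(N, 1 / np.sqrt(N))     # uniform superposition over the 8 items
print(phase_oracle(psi, {5}))        # only the amplitude of |5> has its sign flipped
```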
The idea of the algorithm is that we first prepare the uniform superposition of all states, $|\psi\rangle = \frac{1}{\sqrt{N}}\sum_{x}|x\rangle$, then apply the operator $U_f$ above, which selects the marked item by flipping the sign of its amplitude, and then apply the diffusion operator $D = 2|\psi\rangle\langle\psi| - I$. This, in effect, reflects every amplitude about the mean amplitude, selectively increasing the relative amplitude of the marked item.

Explicitly, the mapping of the diffusion gate is $\sum_{x}\alpha_{x}|x\rangle \mapsto \sum_{x}(2\mu - \alpha_{x})|x\rangle$, where $\mu = \frac{1}{N}\sum_{x}\alpha_{x}$ is the mean of the amplitudes $\alpha_{x}$.
One repeats this pair of steps a certain number of times (roughly $\frac{\pi}{4}\sqrt{N}$ for a single marked item) and then measures, obtaining the marked item with high probability.
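Here is a small state-vector simulation of the whole loop (again my own sketch, plain NumPy arithmetic rather than a quantum library), showing the probability of the marked item being driven close to 1 after about $\frac{\pi}{4}\sqrt{N}$ rounds.

```python
import numpy as np

def grover_search(N, marked, iterations=None):
    """Simulate Grover's algorithm on an N-dimensional amplitude vector with one marked index."""
    if iterations is None:
        iterations = int(np.pi / 4 * np.sqrt(N))   # ~pi/4 * sqrt(N) rounds for a single marked item
    state = np.full(N, 1 / np.sqrt(N))             # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1                        # oracle: flip the sign of the marked amplitude
        state = 2 * state.mean() - state           # diffusion: reflect every amplitude about the mean
    return state

N, marked = 64, 17
final = grover_search(N, marked)
print("probability of measuring the marked item:", final[marked] ** 2)
```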
A Random Aside?
In the case of a non-local hidden-variable quantum computer, one can actually do this search in at most $O(N^{1/3})$ time. I’m not really going to talk about this. However, it spawned a simple question: what if you could perform the search in $O(\log N)$ time? What would that actually imply?
Consider the SAT problem: there are $n$ variables $x_{1},\cdots,x_{n}$ and $k$ expressions in these variables, $l_{1},\cdots,l_{k}$. Find an assignment of either true or false to each $x_{i}$ such that all the expressions $l_{j}$ evaluate to true, i.e. $l_{1}\wedge\cdots\wedge l_{k}$ is true.
Anyway, we could store all $2^{n}$ possible assignments and search through them in $O(\log 2^{n}) = O(n)$ time in our hypothetical case, which is linear in the number of variables. This would mean that NP-complete decision problems like SAT and the Hamiltonian path problem could be solved in polynomial time, which would collapse complexity classes that are widely believed to be distinct. So, it is unlikely that this is the case.
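For concreteness, here is the brute-force sweep over all $2^{n}$ assignments that we were just imagining being done in logarithmic time, written out classically; the clause encoding as (variable, polarity) pairs is simply my own choice for this toy.

```python
from itertools import product

def brute_force_sat(n, clauses):
    """Try all 2^n true/false assignments; each clause is a list of (variable index, polarity) pairs."""
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[v] == polarity for v, polarity in clause) for clause in clauses):
            return assignment        # a satisfying assignment
    return None                      # unsatisfiable

# (x0 or not x1) and (x1 or x2)
print(brute_force_sat(3, [[(0, True), (1, False)], [(1, True), (2, True)]]))
```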
I’ll get back to these things at the end of this post.
Quantum Approximate Counting
Consider an unordered collection of $N$ items of which $K$ are marked. Estimate the value of $K$. This is the problem at hand.
Firstly, let’s clarify the exact problem statement. We wish to find an approximate solution, which means we select a multiplicative error $\epsilon$, i.e. find $\tilde{K}$ such that $(1-\epsilon)K \le \tilde{K} \le (1+\epsilon)K$. Moreover, the algorithm is supposed to be probabilistic, so we also select a small probability of failure $\delta$.
Let’s first review some obvious solutions before getting to the actual algorithm. One must note that in the case of a quantum solution, we only care about how many query calls we make, i.e. the query call complexity. The real difference is that on a quantum computer we can measure observables and get a probability distribution over outcomes, so we want to make clever choices for the observables we measure in order to extract useful information; this is the same philosophy behind Grover’s algorithm. As such, it seems reasonable that in a good solution, the query call complexity should depend somehow on the value $K$ itself.
Classical solution
To find $K$ exactly, the only choice is a linear search, which takes $O(N)$ queries. The approximate case can be done more cheaply by random sampling: on the order of $N/K$ samples just to see a single marked item, and roughly $N/(\epsilon^{2}K)$ samples for an $\epsilon$-multiplicative estimate.
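A quick sketch of that sampling estimator (my own toy, with an arbitrary choice of which items are marked): sample indices uniformly at random, count how many are marked, and scale up by $N$.

```python
import random

def classical_estimate(N, is_marked, samples):
    """Estimate K as N times the fraction of uniformly sampled items that turn out to be marked."""
    hits = sum(is_marked(random.randrange(N)) for _ in range(samples))
    return N * hits / samples

# Toy instance: N = 10_000 items, the first K = 250 of them marked.
N, K = 10_000, 250
print(classical_estimate(N, lambda i: i < K, samples=20_000))
```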
‘Easy’ quantum solution
Use Grover’s algorithm, where $f$ maps an input to $1$ if and only if it is marked; this would cost on the order of $\sqrt{N}$ queries per search.
Query call complexity of the ideal solution
Perhaps it is best if I reveal the complexity before moving on. The ideal query call complexity to obtain $\tilde{K}$ with $(1-\epsilon)K \le \tilde{K} \le (1+\epsilon)K$, with high probability of success (at least $1-\delta$), is $O\!\left(\frac{1}{\epsilon}\sqrt{\frac{N}{K}}\log\frac{1}{\delta}\right)$.
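To see what that buys us, here is a rough order-of-magnitude comparison on a made-up instance, ignoring constants and the $\log\frac{1}{\delta}$ factor; the classical baseline $N/(\epsilon^{2}K)$ is the standard sampling bound, not a number from the paper.

```python
import math

# Made-up instance: N = 2**20 items, K = 37 marked, 10% multiplicative error.
N, K, eps = 2 ** 20, 37, 0.1
print("classical sampling ~ N/(eps^2 K):", round(N / (eps ** 2 * K)))        # ~2.8 million queries
print("quantum counting  ~ sqrt(N/K)/eps:", round(math.sqrt(N / K) / eps))   # ~1.7 thousand queries
```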
Initial steps
Instead of approximating $K$, we equivalently approximate $\theta = \arcsin\sqrt{K/N}$, so that $\sin^{2}\theta = K/N$.
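Since $\sin^{2}\theta = K/N$, a small multiplicative error in $\theta$ translates (for small $\theta$) into about twice that multiplicative error in $K$, which is why switching targets is harmless. A tiny helper, purely for illustration, makes the change of variables explicit:

```python
import numpy as np

def theta_from_K(K, N):
    """theta = arcsin(sqrt(K/N)), i.e. sin^2(theta) = K/N."""
    return np.arcsin(np.sqrt(K / N))

def K_from_theta(theta, N):
    """Invert the change of variables: K = N * sin^2(theta)."""
    return N * np.sin(theta) ** 2

N, K = 2 ** 20, 37
theta = theta_from_K(K, N)
print(theta, K_from_theta(theta, N))   # the round trip recovers K
```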
There are really two steps: first, we find a good constant-factor approximation to $\theta$, and then we cleverly shave off the bounds to get an $\epsilon$-multiplicative estimate. So, first we obtain some interval $[\theta_{\min}, \theta_{\max}]$ containing $\theta$.
This is explained in part 1 of Aaronson’s overview of the algorithm (see the figure in the paper). Here, $|\psi\rangle$ is the uniform superposition of all states and $D$ is the diffusion operator.
Note that $(DU_{f})^{t}|\psi\rangle = \sin((2t+1)\theta)\,|\mathrm{marked}\rangle + \cos((2t+1)\theta)\,|\mathrm{unmarked}\rangle$, where $|\mathrm{marked}\rangle$ and $|\mathrm{unmarked}\rangle$ denote the normalized uniform superpositions over the marked and unmarked items. In the computational basis, the probability of measuring a marked item is therefore $\sin^{2}((2t+1)\theta)$. Step 1 essentially returns a constant-factor approximation to $\theta$ with high probability. This can be proven using Chernoff bounds, which I’ll omit from the discussion.

It is also proven on page 5 that, with high probability, one will not see enough marked items to finish step 1 before the iteration count reaches the order of $1/\theta$; and, also with high probability, we will see enough marked items once it does. The interesting part is how to get an $\epsilon$-multiplicative estimate.
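To get a feel for step 1, here is a crude simulation of my own reading of it (the doubling schedule matches the spirit of the paper, but the flip count, threshold, and constant-factor bookkeeping are purely illustrative): after $t$ Grover iterations a measurement comes up marked with probability $\sin^{2}((2t+1)\theta)$, so we keep doubling $t$ and stop once marked outcomes show up often enough, at which point $2t+1$ is of order $1/\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)

def step1_constant_factor(theta, flips_per_round=20, threshold=6):
    """Double the Grover iteration count t until marked outcomes appear often enough.

    After t iterations a measurement is marked with probability sin^2((2t+1)*theta), so the
    first t at which that probability is bounded away from 0 has (2t+1) of order 1/theta.
    The flip count and threshold are illustrative, not the paper's constants.
    """
    t = 1
    while True:
        p_marked = np.sin((2 * t + 1) * theta) ** 2
        if rng.binomial(flips_per_round, p_marked) >= threshold:
            return 1.0 / (2 * t + 1)     # a constant-factor proxy for theta
        t *= 2

N, K = 2 ** 20, 37
theta = np.arcsin(np.sqrt(K / N))
print("true theta:", theta, "  step-1 estimate:", step1_constant_factor(theta))
```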
Final solution
Now that we have $[\theta_{\min}, \theta_{\max}]$, we use the following clever result on page 9:

Let $\theta \in [\theta_{\min}, \theta_{\max}]$ and $\theta_{\max}=\theta_{\min}(1+\gamma)$, where $\gamma$ is at most a small constant. There exists an integer $r$ such that the following is true: consider tossing a biased coin which is heads with probability $\sin^{2}(r\theta)$, at least a fixed number of times (of order $\log\frac{1}{\delta}$). If more heads are observed, set $\theta_{\min}$ to a larger value; else set $\theta_{\max}$ to a smaller value, so that in either case the ratio $\theta_{\max}/\theta_{\min}$ shrinks. This process fails to maintain $\theta \in [\theta_{\min}, \theta_{\max}]$ only with low probability. The value $r$ also has some non-trivial bounds.

The probability of getting heads above is exactly the probability of getting a marked item when measuring the Grover-iterated state above. The upshot is that we can distinguish the two endpoints by measurement by considering instead the amplified angles $r\theta_{\min}$ and $r\theta_{\max}$, whose corresponding states are nearly orthogonal. Based on the results of the measurements, we shave off either $\theta_{\min}$ or $\theta_{\max}$ by a fraction of the gap, getting closer and closer (with high probability) to our required $\epsilon$-multiplicative estimate of $\theta$.
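Here is a toy numerical version of that shaving loop (my own simplification: the way I pick $r$, the vote threshold, and the 0.9 shrink factor are illustrative stand-ins for the paper’s carefully chosen constants). The mechanism it demonstrates is the one above: pick $r$ so that $\sin^{2}(r\theta)$ is monotone across the bracket and as close as possible to 0 at $\theta_{\min}$ and to 1 at $\theta_{\max}$, flip that coin a few times, and cut a slice off whichever end the vote rules out.

```python
import numpy as np

rng = np.random.default_rng(1)

def find_r(theta_min, theta_max, r_max=200_000):
    """Scan for an r such that sin^2(r*theta) is increasing across [theta_min, theta_max]
    (the whole bracket sits in one rising branch) with the endpoint biases as far apart
    as possible.  The paper proves a suitable r exists and bounds its size; here we just search."""
    r = np.arange(1, r_max)
    k = np.floor(r * theta_min / np.pi)
    monotone = r * theta_max <= (k + 0.5) * np.pi
    gap = np.sin(r * theta_max) ** 2 - np.sin(r * theta_min) ** 2
    gap[~monotone] = -1.0
    return int(r[np.argmax(gap)])

def shave(theta, theta_min, theta_max, flips=31, shrink=0.9):
    """One round of step 2: flip the sin^2(r*theta) coin and move one endpoint inward,
    so the excess gamma = theta_max/theta_min - 1 shrinks by the factor `shrink`."""
    gamma = theta_max / theta_min - 1
    r = find_r(theta_min, theta_max)
    heads = rng.binomial(flips, np.sin(r * theta) ** 2)
    if heads > flips / 2:
        theta_min = theta_max / (1 + shrink * gamma)   # vote says theta is near the top
    else:
        theta_max = theta_min * (1 + shrink * gamma)   # vote says theta is near the bottom
    return theta_min, theta_max

theta = np.arcsin(np.sqrt(37 / 2 ** 20))               # same toy instance as before
theta_min, theta_max = 0.9 * theta, 1.1 * theta        # pretend step 1 handed us this bracket
for _ in range(20):
    theta_min, theta_max = shave(theta, theta_min, theta_max)
print("still bracketed:", theta_min <= theta <= theta_max,
      "  remaining excess:", theta_max / theta_min - 1)
```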
Philosophical musings on Energy and Algorithms
I beg the reader to ignore everything here and close this tab immediately, as I have no idea what I’m saying (not that I ever do).
Recently, I came across a few interesting things:
One was an experimental approach to classically difficult computer science problems: using chemical reactions with DNA molecules to solve the Hamiltonian path problem, which is NP-hard. The problem is that it requires an amount of material factorial in the number of vertices to work. Another solution uses an optical contraption of cables and beam splitters; again, the drawback is that the amount of required energy is exponential in the number of vertices. There is also the example of a bizarre slime mold which rearranges itself to give an approximate solution to the Travelling Salesman Problem.
In fact, there is an entire field of DNA computing. The aim is to use parallel computing to optimize our solutions. The drawback is that the processing speed is slow.
Again, even for Grover’s algorithm, note that we must first ‘store’/prepare the entire quantum state itself, unlike a classical linear search where the auxiliary space needed is constant. Across all these systems of computing, it seems that we can, to SOME degree, substitute time for space/energy. An obvious thing to note is that you can’t decide anything about some new data if you haven’t even stored it. More often than not, we are willing to make this sacrifice because we care more about efficiency.
Parallel computing speedups require increasing the number of components. Of course, there is Amdahl’s law, which puts some kind of restriction on this.
Aaronson talked about this in one of his informal talks (I think an interview or TED talk).
The study of complexity theory really may reveal things about physics, nature, and energy, it seems. In classical computing, there is a limit to how much we can trade space for time, so we consider other, unconventional types of computing where the threshold for this tradeoff is higher. This is truly an interesting direction to think in.