Quantum Complexity

February 4, 2019


Before we can undertake an analysis of quantum complexity theory, it is useful to first discuss classical complexity theory. Algorithms provide precise instructions for how to solve computable problems; they are the fundamental concept of computer science. The model we will use to study algorithms, and hence computation, is called a Turing machine: it is rather similar to an everyday personal computer, but rather than being a physical object, it is an abstract notion.

Definition. A Turing Machine consists of four elements:

  1. A program, in the same sense as an ordinary computer's program.
  2. A finite-state control, which acts as a processor.
  3. A tape, which acts as memory.
  4. A read-write tape head, which points to the position on the tape that is currently readable or writable.

The formal definition of a Turing machine is slightly more subtle than the one given here, but this definition will be sufficient for our needs.
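To make these four elements concrete, here is a minimal sketch of a Turing machine simulator in Python. It is only an illustration of the informal definition above: the blank symbol, the dictionary-based tape, the halting convention, and the example bit-flipping program are all choices made for this sketch rather than part of the formal definition.

```python
# A minimal Turing machine simulator: an illustrative sketch, not the formal definition.
# The "program" is a transition table: (state, symbol read) -> (next state, symbol to write, head move).

BLANK = "_"

def run_turing_machine(program, tape_input, start_state="q0", halt_state="halt", max_steps=10_000):
    tape = {i: s for i, s in enumerate(tape_input)}    # the tape acts as memory
    head = 0                                           # the read-write tape head
    state = start_state                                # the finite-state control
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, BLANK)
        state, write, move = program[(state, symbol)]  # consult the program
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1)).strip(BLANK)

# Example program: walk right, flipping every bit, and halt upon reaching the first blank cell.
flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", BLANK): ("halt", BLANK, "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001
```

Here the transition table plays the role of the program, the state variable is the finite-state control, the dictionary is the tape, and head is the read-write tape head.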

Thesis. Church-Turing:
The class of functions computable by a Turing machine corresponds exactly to the class of functions which we would normally regard as being computable by an algorithm.

By choosing that particular definition for an abstract model of a computer, Alan Turing completely captured the idea of computability. This revelation was a tipping point that ultimately led to the development of computer science as an active field of research. Quantum computers also obey the Church-Turing thesis: they can only solve problems that are computable on a Turing machine. Their advantage lies in efficiency, which we will soon discuss in detail.

To discuss computational complexity, it is also imperative to introduce the idea of asymptotic behavior. The key idea is that as the input size n to a function f(n) grows, slowly growing terms inside f(n) become negligible. In addition, constant factors are ignored, in part because they depend on the particular hardware (which has historically doubled in speed roughly every two years) rather than on the algorithm itself. Big-O notation is used to denote upper bounds on the behavior of a function: f(n) = O(g(n)) if there exist a constant c > 0 and an input size n0 such that f(n) ≤ cg(n) for all n > n0. A concrete example: n^2 + n + 2 = O(n^2), since for n ≥ 2 we have n^2 + n + 2 ≤ 2n^2. Conversely, big-Ω notation is used to denote lower bounds: f(n) = Ω(g(n)) if there exist a constant c > 0 and an input size n0 such that f(n) ≥ cg(n) for all n > n0. For example, 10n = Ω(9n), since 9n ≤ 10n for all n > 0. Since f(n) = O(g(n)) and f(n) = Ω(g(n)) are not mutually exclusive (equality is allowed in both cases), when both hold we say f(n) = Θ(g(n)). The earlier big-O example also serves here: n^2 + n + 2 = Θ(n^2).
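As a quick sanity check of the definition (a toy numerical illustration, not a proof; the real argument is the algebraic inequality above), the following snippet verifies that the witnesses c = 2 and n0 = 2 work for the example n^2 + n + 2 = O(n^2) over a finite range of n.

```python
# Spot-check the big-O witnesses c = 2 and n0 = 2 for f(n) = n^2 + n + 2 and g(n) = n^2.
# This checks finitely many n; the general claim follows from (n - 2)(n + 1) >= 0 for n >= 2.

def f(n): return n**2 + n + 2
def g(n): return n**2

c, n0 = 2, 2
print(all(f(n) <= c * g(n) for n in range(n0, 10_000)))  # True
```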

Computational complexity is a tricky subject. Different machines don't necessarily require the same resources to run a given algorithm, so defining the resources required by an algorithm is not a straightforward task. To get around this difficulty, computer scientists classify problems based on whether or not they can be solved using resources bounded by some polynomial in the size of the input n. More succinctly, if the resources required to solve a problem scale polynomially with the size of the input n, then the problem is said to be polynomial. Problems whose resource requirements scale faster than any polynomial in n are generally considered exponential. Polynomial problems are considered computationally feasible; exponential problems are not. The primary reason for this particular choice of classification lies in the strong Church-Turing thesis:

Thesis. Strong Church-Turing:
Any model of computation can be simulated on a probabilistic Turing machine with at most a polynomial increase in the number of elementary operations required.

This effectively means that if a problem is polynomial in some model of computation, it is also polynomial for a probabilistic Turing machine. Hence, using our classification scheme, we can limit our attention to the probabilistic Turing machine model of computation. This makes our job a whole lot easier, since we now have a single formal model for classifying the resource requirements of any algorithm (the "any" being due to the Church-Turing thesis).

Quantum computing is an important new paradigm precisely because it casts doubt on the veracity of the strong Church-Turing thesis: certain problems whose best known algorithms on a probabilistic Turing machine take super-polynomial time become polynomial given the correct choice of quantum algorithm. In fact, the computational power of quantum computers in general can be tied to some major open problems in classical complexity theory. Until these problems are solved, it is impossible to precisely define the computational power of a quantum computer.

Decision problems are problems that have a yes or no answer. Most computational problems can be reduced to a decision problem, allowing a clean analysis of the problem's complexity. For example, the common question of whether a number is prime is a decision problem; the naive trial-division algorithm answers it in time exponential in the length (number of digits) of the input. Another useful definition is that of a language:

Definition. Language:
A language L over the alphabet Σ is a subset of the set of all possible strings of symbols from Σ.

For example, if Σ = {0, 1} then the set L = {1, 11, 111, 1111, ...} of strings made up of one or more 1s is a language over Σ; a quick membership check for this language is sketched below.
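Deciding whether a given string belongs to this particular L is trivial, and the snippet below (a toy illustration, not part of any standard treatment) does exactly that: a string over {0, 1} is in L exactly when it is non-empty and contains only 1s.

```python
# Decide membership in L = {1, 11, 111, ...} over the alphabet {0, 1}:
# a string is in L exactly when it is non-empty and consists only of 1s.

def in_L(x: str) -> bool:
    return len(x) > 0 and all(symbol == "1" for symbol in x)

print(in_L("111"))  # True
print(in_L("101"))  # False
print(in_L(""))     # False
```

We now have enough background to finally introduce the most important complexity class in computer science: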

Definition. Polynomial time (P):
A decision problem (whether x ∈ L) is solvable in polynomial time if there exists a deterministic Turing machine M that decides whether or not an input x is in L in time O(n^k) for some constant k, where n is the length of x.

Let's break this down a little. Any decision problem can be recast as determining whether or not an input x is in a language L (simply take L to be the set of all inputs for which the answer is "yes"). If such a problem can be solved on a deterministic Turing machine in O(n^k) time for some k, then that problem is in P. P is a complexity class: complexity classes are sets of languages, where each language collects all the "yes" instances of some problem in that class. In a diagram:

Complexity class ∋ language pertaining to a single problem ∋ "yes" instance of that problem
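For a small worked example of a problem in P (a standard textbook case, sketched here under the assumption that comparing two symbols takes constant time), consider the language of binary palindromes: strings over {0, 1} that read the same forwards and backwards. The decider below uses O(n) comparisons for an input of length n, so this language is in P.

```python
# A polynomial-time decider for binary palindromes: compare symbols from both ends,
# using O(n) comparisons for an input of length n.

def is_palindrome(x: str) -> bool:
    i, j = 0, len(x) - 1
    while i < j:
        if x[i] != x[j]:
            return False          # reject: x is not in the language
        i, j = i + 1, j - 1
    return True                   # accept: x is in the language

print(is_palindrome("0110110"))   # True
print(is_palindrome("0101"))      # False
```

Any language that admits such a polynomial-time decider is, by definition, in P.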

Another common complexity class is nondeterministic polynomial time:

Definition. Nondeterministic polynomial time (NP):
A decision problem (whether x ∈ L) is solvable in nondeterministic polynomial time if there exists a Turing machine M and two polynomials p and q such that

  1. If x ∈ L, then there exists a "witness" string y such that M returns yes in time O(p(|x|)) when the machine is started with x-blank-y on its tape (the input x, a blank cell, then the witness y).
  2. If x ∉ L, then for all strings w that attempt to play the role of a witness, M returns no in time O(q(|x|)) when the machine is started with x-blank-w on its tape.

Put slightly more simply, if a problem admits an algorithm that can check a proposed solution (the witness) in polynomial time, but does not necessarily provide a method of finding a solution, that problem is said to be in NP. Clearly, P ⊆ NP. An extremely famous open problem in computer science is whether or not P = NP. Most computer scientists expect that the equality does not hold, since there are problems in NP which seem very difficult to solve, but no proof has been discovered as of yet.
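To see what a witness looks like in practice, consider SUBSET-SUM, a classic problem in NP: given a list of integers and a target value, is there a subset that sums to the target? Finding such a subset may be hard, but checking a proposed one is easy. The sketch below verifies a witness in polynomial time; encoding the witness as a list of indices is just one convenient convention chosen for this illustration.

```python
# A polynomial-time verifier for SUBSET-SUM. The witness is a list of indices into `numbers`;
# verifying it takes time linear in the input, even though finding a witness may be hard.

def verify_subset_sum(numbers: list[int], target: int, witness: list[int]) -> bool:
    if len(set(witness)) != len(witness):                  # indices must be distinct
        return False
    if any(i < 0 or i >= len(numbers) for i in witness):   # indices must be in range
        return False
    return sum(numbers[i] for i in witness) == target

numbers, target = [3, 34, 4, 12, 5, 2], 9
print(verify_subset_sum(numbers, target, [2, 4]))   # True:  4 + 5 == 9
print(verify_subset_sum(numbers, target, [0, 1]))   # False: 3 + 34 != 9
```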

Definition. Bounded-error quantum polynomial time (BQP):
A decision problem (whether x ∈ L) is solvable in BQP if there exists a uniform family of polynomial-size quantum circuits such that for each circuit Qn in the family,

  1. Qn takes as input a string x of length n (encoded into n qubits) and outputs a single bit,
  2. ∀ x ∈ L, Pr( Q|x|(x) = 1 ) ≥ ⅔,
  3. ∀ x ∉ L, Pr( Q|x|(x) = 0 ) ≥ ⅔.

Essentially, BQP is the class of all problems solvable in polynomial time, with bounded error, on a quantum computer. It is known that P ⊆ BQP, meaning that any efficient algorithm on a classical computer is also efficient on a quantum computer. The exact relation between BQP and NP is currently unknown; BQP does contain problems, such as integer factoring, that lie in NP and have no known polynomial-time classical algorithm, though it has not been proven that such problems lie outside P. This is what makes quantum computing so exciting: it may efficiently solve problems that are classically extremely hard.
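To give a small, self-contained flavor of a quantum circuit deciding a problem, the sketch below simulates Deutsch's algorithm with plain linear algebra. Given oracle access to a one-bit function f, a single use of the oracle unitary Uf|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩ suffices to decide whether f is constant or balanced, whereas a classical algorithm needs two queries. This is a toy NumPy simulation of the textbook circuit, not an implementation for quantum hardware, and it illustrates the flavor of quantum speedups rather than a problem separating BQP from P.

```python
# Simulate Deutsch's algorithm: decide whether f: {0,1} -> {0,1} is constant or balanced
# with a single use of the oracle unitary U_f |x>|y> = |x>|y XOR f(x)>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """Build the 4x4 unitary U_f acting on basis states |x>|y> (x is the top qubit)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])            # start in |0>|1>
    state = np.kron(H, H) @ state              # Hadamard both qubits
    state = oracle(f) @ state                  # one oracle query
    state = np.kron(H, I) @ state              # Hadamard the top qubit
    # Probability that measuring the top qubit yields 1 (amplitudes of |10> and |11>):
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```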