Complexity Classes: Are we on the cusp of change?
My background is in engineering physics rather than computer science, and over the years my interest in complexity theory has waxed and waned. Yet it seems to me that quantum computing (and quantum annealing in particular) might open up some new perspectives.
For example, the New York Times reported recently on a new “tree of life” devised by biologists, where the diversity of bacteria was more clearly represented. The large number of new branches gave the tree a much different shape. So…
- What might happen to the current hierarchy of complexity classes if a LARGE number of quantum classes were added?
- Is the classical model of computing simply a special case of something larger, and if so, what might that be?
Diversity is something very different, and very much needed in biologically inspired systems ( such as GP and other forms of self-evolving evolutionary programming ).
Problem computability / decidability offer another classification viewpoint on the landscape of advanced challenges at the frontiers of our civilisation’s knowledge, and QC may help with them.
( Well, I hope we have much more interesting community lives than bacteria do ( at least I have never had a phone call with one, so far … ), so as the classic says … only time will tell … )
The hierarchy of complexity classes is sometimes referred to as the “Complexity Zoo.” The first image below, while not the most readable, demonstrates the staggering scope of the complexity theory landscape. A new complexity class can be relatively easy to define. Many of them are of the form “the class of problems solvable on some model of computation subject to some constraint.” As an example, the well-known class P is the class of decision problems solvable on a Turing Machine (TM) in polynomial time. But since the zoo is already so large, I don’t think merely defining a bunch of new classes will have an impact on the landscape, per se. What is critical is how those new complexity classes relate to the ones we already know. Proving that some complexity class contains another can collapse part of the landscape, which could change how we understand what makes computation “hard.”
Let’s examine BQP, the class of decision problems solvable by a QC in polynomial time with an error probability of at most 1/3. The second image below shows some known and unknown relationships between BQP and other complexity classes. It is known that BQP is a subset of PSPACE, since one can simulate the state vector of a QC on a TM with at most polynomial space. However, the relationships between BQP and other classes are not yet understood. A proof that P=NP would collapse several of the classes in this diagram, but would not necessarily collapse BQP with it, as there may be problems in PSPACE outside of NP but still within BQP.
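To make the space issue concrete, here is a toy state-vector simulation of a few qubits in Python. It is only an illustration of why the *naive* classical simulation of a quantum computer needs exponential space (a 2^n-entry amplitude vector for n qubits); the actual BQP ⊆ PSPACE argument avoids storing that vector by recomputing each amplitude as a sum over computational paths, trading time for space. The gate and state layout below are my own choices for the sketch, not taken from any particular paper.

```python
# Toy state-vector simulation: n qubits need a 2**n amplitude vector,
# which is why naive classical simulation is exponential in SPACE.
# (The BQP-in-PSPACE proof avoids storing the vector, recomputing
# amplitudes path by path instead.)
import math

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector."""
    h = 1 / math.sqrt(2)
    new = state[:]
    for i in range(2 ** n):
        if (i >> target) & 1 == 0:
            j = i | (1 << target)     # partner basis state with target bit set
            a, b = state[i], state[j]
            new[i] = h * (a + b)
            new[j] = h * (a - b)
    return new

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                        # start in |000>
for q in range(n):                    # H on every qubit -> uniform superposition
    state = apply_hadamard(state, q, n)

print(len(state))                     # 8 amplitudes for only 3 qubits
print(round(state[0], 4))             # each amplitude is 1/sqrt(8) ~ 0.3536
```

Doubling the qubit count squares the vector length, which is the whole point: the memory cost of this direct approach is what the polynomial-space simulation is engineered to avoid.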
Your second question is interesting, but it is probably outside the scope of our current understanding of complexity theory.
(cit.:) “Section 4.3 ( translator’s note: is there any chance for a giant leap? ) BEYOND THE CLASS-2 COMPUTER BOUNDARIES
The main practical, though negative, implication of the preceding considerations is that within Class-2 computing, no efficient solution is to be expected for sequentially intractable problems.
Nevertheless, a question arises here: could some other sort of parallel computer be imagined that would be computationally more efficient than Class-2 computers?
Indications from many known, conceptually different Class-2 computer models suggest that without adding some other fundamental computing capability, parallelism per se does not suffice to overcome the Class-2 boundaries, no matter how we modify the architectures of such computers within all thinkable possibilities.
As a matter of fact, it turns out that the Class-2 boundaries can be crossed if non-determinism is added to MIMD-type parallelism ( ref. Section 3.5 ).
A non-deterministic PRAM (*), for example, can solve ( intractable ) problems from the class NPTIME in polylogarithmic time, and problems of demonstrably exponential sequential complexity in polynomial time.
However, because non-determinism is, in the context of computers, about as technically feasible to implement as clairvoyance, the Class-2 computer class seems to represent, from the efficiency point of view, the ultimate class of parallel computers, whose borders will never be crossed.”
*) PRAM: a Parallel RAM, not a SIMD-only processor; introduced by Savitch and Stimson in 1979.
 SAVITCH, W. J. – STIMSON, M. J.: Time bounded random access machines with parallel processing. J. ACM 26, 1979, pp. 103–118.
 WIEDERMANN, J.: Efficiency boundaries of parallel computing systems ( Medze efektivnosti paralelných výpočtových systémov ). Advances in Mathematics, Physics and Astronomy ( Pokroky matematiky, fyziky a astronomie ), Vol. 33 (1988), No. 2, pp. 81–94.
I’m wondering at this point if we can find some practical examples. For example, the D-Wave Quantum Annealer finds the minimum energy state out of an exponentially large number of possible states, and it does this in essentially constant time. As I understand it, this is different from the traditional concept of parallelism, where independent calculations are run on separate processors.
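For readers who want to see what “finding the minimum energy state” means in software, here is a classical simulated-annealing sketch on a tiny Ising model. This is only a conventional algorithmic analogue of the search a quantum annealer performs physically, not a model of the D-Wave device itself, and the fields `h` and couplings `J` below are made-up illustration values.

```python
# Classical simulated annealing on a tiny Ising model -- a software
# analogue of the minimisation a quantum annealer performs physically.
# The fields h and couplings J are made-up values for illustration.
import math
import random

random.seed(1)
n = 6
h = [0.5, -1.0, 0.2, 0.0, -0.3, 0.8]                 # local fields (assumed)
J = {(0, 1): -1.0, (1, 2): 0.5, (2, 3): -0.7,
     (3, 4): 1.0, (4, 5): -0.5, (0, 5): 0.3}         # couplings (assumed)

def energy(s):
    """Ising energy E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j."""
    e = sum(h[i] * s[i] for i in range(n))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

s = [random.choice([-1, 1]) for _ in range(n)]       # random spin start
T = 2.0
while T > 0.01:
    i = random.randrange(n)
    flipped = s[:]
    flipped[i] = -flipped[i]                          # propose one spin flip
    dE = energy(flipped) - energy(s)
    if dE < 0 or random.random() < math.exp(-dE / T): # Metropolis rule
        s = flipped
    T *= 0.99                                         # geometric cooling

print(s, energy(s))
```

Note the contrast with the thread’s point about parallelism: this classical search visits one candidate state at a time, and with 2^6 = 64 states it is easy here, but the state space doubles with each added spin.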
As I said at the outset, I’m not a complexity theorist. I’m basically an observer with the belief that anything involving so much intellectual effort has to be telling us SOMETHING. So I know from Jonathan that there’s at least one “official” member of the complexity zoo, i.e. BQP, “the class of decision problems solvable by a QC in polynomial time with an error probability of at most 1/3”. But I also know from Michal that the zoo is based on the assumption of sequential processing as the ultimate measure, i.e. the Turing machine with its tapes and state tables and so on.
And this is where I think we need some examples to help us. For example, Quantum Mechanics had its first real success in explaining the spectral lines of atoms. Yet according to Wikipedia, Fraunhofer made his first diffraction grating in 1821, and others (such as David Rittenhouse) had experimented with them as early as 1785. It took a lot of intermediate steps before people came to see transitions between energy states as the “natural” explanation. However, along the way there were plenty of “anomalous observations” that pushed the theory forward.
So I’d like to revise my original question: do we have examples of problems that can be solved by the annealer (or other systems, real or imagined) at an anomalous speed? And if so, can we expand this set of examples in a way that tells us something about how they can be classified?
( Nota bene: after some admin magic, it may come as the very same surprise to you as it did to me, but somehow it happened that today it appears as if I were the author of this initial question ( which, beyond any doubt, I am not; Andrew MILNE is ), so some contexts might look strange until the admins fix this. ) (FIXED 🙂 the ADMIN)
With all due respect, Andrew, the so-called complexity classes ( ref. below + the whole Complexity Zoo taxonomy ) relate to nothing but a sequential mode of processing. The Turing Machine model was intentionally used for this classification, as it provides common ground for both theoretical proofs and practical model evaluation in both the TIME and SPACE dimensions of how the problem under review scales ( a.k.a. computational complexity ).
While there were some early efforts, in the late 1970s/1980s, to extend this SEQ-mode taxonomy in the direction of PAR-mode companions ( with proposals like ||PTIME, denoting polynomial-time PTIME-style complexity even for a PARALLEL computing architecture undertaking non-SEQ processing ), the QUANTUM computing architecture stands on much different ground, and both its SPACE and TIME dimensions seem to me, from the research published so far, to be rather constant-bound a priori ( fixed by manufacturing constraints at the moment ).
One might be delighted by the research cited below on the general possibility of overcoming the boundaries of C2-class computer systems, where the last paragraph is my beloved one.
Anyway, looking forward to reading more news on this innovative research and its applied-science impacts.
Let me pose this question a different way: Is there a form of computation that does not require counting? I’m inclined to think that counting is fundamental to human knowledge, even in the implicit sense that we learn things by repetition. And when our traditional concepts fail, as they do with Gettier-type problems, we typically respond that the observer needed more data, another step, “one more” of something.
But what if counting is not enough? For example, WordPress has counted more than 1000 views of this discussion, but we seem no closer to an answer. Maybe what we need is not complexity theory, but “simplicity theory”, as in Einstein’s famous “make things as simple as possible, but no simpler”.
The page has recently passed 3000 views, and I decided to respond to Andrew’s appeal and the question he raised at the beginning of August:
Is there a form of computation that does not require counting?
- fluidic computers do not use counting ( analog computing systems originally developed for missile controls )
- randomness sources for cryptography ( where any form of “counting” would intrinsically mean predictable non-randomness )
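The cryptography point can be made concrete in a few lines of Python. A linear congruential generator is essentially counting in disguise: the entire future stream is a deterministic function of the seed, so knowing the recurrence makes it fully predictable, whereas `os.urandom` draws on operating-system entropy that cannot be replayed by re-running a counter. The LCG constants below are the classic Numerical Recipes parameters, used purely for illustration.

```python
# "Counting" randomness vs. non-counting randomness: an LCG is a
# counter in disguise (same seed -> same stream, fully predictable),
# while os.urandom draws on physical/OS entropy and is not replayable.
import os

def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Classic LCG: the whole future stream is a function of the seed."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

g1, g2 = lcg(42), lcg(42)
predictable = [next(g1) for _ in range(5)]
replayed = [next(g2) for _ in range(5)]
print(predictable == replayed)   # True: same seed reproduces the "random" stream

print(len(os.urandom(16)))       # 16 bytes of OS entropy, not reproducible
```

This is exactly why such generators are forbidden for key material: anything produced by counting forward from a known state is, by construction, predictable.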
I hope you will forgive me, Andrew, for using a narrow meaning of “form of computation”, where you were trying to evoke new ideas that could release our minds from “just counting”.
Looking forward to further inspiration and the kind surprises your views and new questions are sure to open.