Quantum Computing


By Alastair A. Abbott and Cristian S. Calude

The development of a scientific field

The origins of artificial intelligence (AI) as a scientific field can be traced back to the discussions that took place during the Dartmouth Summer Research Project on Artificial Intelligence in 1956. Amongst those involved were John McCarthy, Marvin Minsky and Herbert Simon, who all became leaders and promoters of the field. The long-term problem of simulating and creating intelligence was broken down into sub-problems including problem solving, natural language processing and cognitive simulation. Overly optimistic predictions prevailed over more pessimistic ones for decades, before a tangible lack of progress led to the onset of the “AI winter”.

Spring has sprung once more for AI with several recent breakthroughs including AlphaGo’s 4–1 victory over Go grandmaster Lee Sedol and, more importantly, the rise and invasiveness of “soft” AI with the development of self-driving cars, smart homes and smartphones. With it, a degree of the euphoria from the dawn of AI research has been re-created.

In this second wave of AI, commercial IT companies have jumped into the mix, bringing with them a new tide of unsupported and unsupportable claims. While discussing the potential for AI to create not only intelligence, but also consciousness, David Gelernter, author of the book The Tides of Mind: Uncovering the Spectrum of Consciousness (Liveright, 2016), was recently quoted in Time Magazine (March 2016) as saying that “[t]here has never been more arrogance and smugness” than in today’s self-congratulatory culture.

A familiar cycle of seasons…

A similar history is starting to play out in the much younger, but equally promising, field of quantum computing. Following the development of Shor’s algorithm in 1994, quantum computing was seen as a bright beacon in the future of computing, which led to a surge of theoretical and experimental interest. But as experimental groups floundered while attempting to control more than a handful of qubits, a degree of pessimism began to infiltrate the initial optimism.

However, a new wave of ambitious industry-led research programmes in quantum computing – first by D-Wave Systems and Lockheed Martin, followed more recently by Google, IBM, Microsoft and others – has emerged, and, with it, a new sense of optimism has spread within both industry and academia.  Unprecedented funding has been pledged: the UK government will commit £270 million over the next five years, and the European Commission has announced the launch of a €1 billion initiative in Quantum Technology.

…but will the progress flourish into a quantum summer?

With relatively steady progress over this period (D-Wave, for example, have gone from 28 qubits in 2007 to more than 1000 in their latest D-Wave 2X machine released last year), bold claims about a future revolutionised by quantum computing are resurfacing. For example, Andy Rubin, once a key figure at Google, recently claimed that quantum computers will be so revolutionary that “there’d only be need for one in the world”.

The European flagship quantum programme, which explicitly strives to stimulate a “second quantum revolution”, has also set its sights high. It is aiming, amongst other goals, to “[b]uild a universal quantum computer able to demonstrate the resolution of a problem that, with current techniques on a supercomputer, would take longer than the age of the universe” by 2035. Although this programme will be a welcome boost to the scientific community and be extremely beneficial to the development of various quantum technologies, how realistic are such claims about the future of quantum computing? Predicting the future is dangerous, but we can at least critically consider the obstacles that might prevent current incremental progress from culminating in such a goal and fulfilling its bold promises.

Justified excitement, or overblown promise?

First and foremost, it’s important to clarify the theoretical claims themselves, since these can all too easily be misconstrued to give the impression that quantum computing will revolutionise each and every corner of computing. Claims such as those appearing in the Quantum Manifesto driving the European Commission’s flagship programme reinforce such a view, and it is indeed true that certain algorithms could revolutionise specific fields. The well-known implications of Shor’s algorithm for common cryptographic techniques are certainly the chief such case.

But while several algorithms are conjectured (like Shor’s) or proved (as with the Deutsch–Jozsa algorithm and various others in the “black-box” paradigm) to provide an exponential speedup over classical computers, this is far from the case in general. In fact, since the introduction of Shor’s and Grover’s algorithms some twenty years ago, development within the field of quantum algorithmics has been rather slow. In reality, many of the known algorithms can be seen as novel uses of a handful of core quantum algorithms.

Quantum (non) superiority

Perhaps the most important “difficult computational problems” are the NP-complete problems, which have applications in almost every area of science and beyond (from planning and logistics to microchip manufacturing). One such example is the well-known travelling-salesman decision problem.

Within the quantum complexity community it is, however, generally not thought that quantum computers can offer more than a polynomial advantage for such problems. Moreover, for many problems the best speedup known is quadratic or less. Such a speedup would even struggle to compete with the heuristic approaches commonly used to solve them in practice. In cases where exact solutions are needed, or for other problems that can classically be solved in sub-exponential time, a polynomial-order speedup could, of course, still be of significant practical benefit.
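
To put a quadratic speedup in perspective, the following rough comparison (a sketch with made-up numbers, ignoring constant factors and all hardware and error-correction overheads) contrasts brute-force search with a Grover-style √N scaling.

```python
# Rough illustration only: what a quadratic (Grover-style) speedup buys over
# brute force, ignoring constants and all hardware overheads.
import math

N = 10**12                       # size of an unstructured search space (made-up figure)
classical_queries = N            # brute force examines roughly N candidates
quantum_queries = math.isqrt(N)  # a quadratic speedup needs roughly sqrt(N) queries

print(f"classical brute force : ~{classical_queries:.0e} queries")
print(f"quadratic speedup     : ~{quantum_queries:.0e} queries")
```

Even a gain of this size, however, must compete with classical heuristics that exploit problem structure an unstructured quadratic speedup does not, which is why the advantage can evaporate in practice.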

Nonetheless, the impression often conveyed publicly is that quantum computing will open the floodgates to widespread exponential computational speedups. Such radical claims only represent the more extreme possibilities of quantum computing rather than their likely advantage for most problems.

Where’s my black-box?

Outside of the black-box paradigm mentioned above, none of the conjectured quantum speedups, exponential or not, have been mathematically proved. This is perhaps largely because proving lower bounds on the complexity of classical problems is notoriously difficult. Black-box problems, where one is required to compute a function or property of a classical input by querying a quantum box, are easier to prove speedups for because of their extra formal structure. But this also means that they often have a somewhat artificial flavour and their practical relevance is questionable.
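
As a concrete illustration of the black-box setting, consider the Deutsch–Jozsa problem: an unknown function f from n-bit strings to {0,1} is promised to be either constant or balanced, and the task is to decide which. A single quantum query suffices, whereas a deterministic classical algorithm may need 2^(n-1)+1 queries. The sketch below is a minimal statevector simulation with an arbitrarily chosen balanced function; it is illustrative only.

```python
# Minimal statevector sketch of the Deutsch-Jozsa black-box problem
# (illustrative only). The oracle is used in "phase" form: |x> -> (-1)^f(x) |x>.
import numpy as np
from itertools import product

n = 3

def f(x):
    # An arbitrary balanced example function (an assumption for this sketch).
    return x[0] ^ x[1]

N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))                 # uniform superposition over all inputs
for i, x in enumerate(product([0, 1], repeat=n)):  # a single query to the oracle
    state[i] *= (-1) ** f(x)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # single-qubit Hadamard
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                            # n-qubit Hadamard transform

amp0 = (Hn @ state)[0]  # amplitude of |0...0>: +/-1 if f is constant, 0 if balanced
print("constant" if abs(amp0) > 0.5 else "balanced")
```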

One such well-known algorithm is Grover’s algorithm, in which one is given access to an unsorted quantum database that can be queried with a quantum input, and asked to determine whether it contains a specific entry. However, Grover’s algorithm only permits a polynomial speedup and, more importantly, the problem it solves is far from being realistic. The cost of constructing the quantum database could negate any advantage of the algorithm, and in many classical scenarios one could do much better by simply creating (and maintaining) an ordered database.
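
For concreteness, here is a minimal statevector simulation of Grover’s search over N = 2^n items (a sketch only, with a made-up marked entry). Note that it simply assumes the quantum “database” already exists in superposition, which is precisely the preparation cost at issue above.

```python
# Illustrative statevector simulation of Grover's algorithm: the marked entry
# is found with high probability after about (pi/4)*sqrt(N) iterations.
import numpy as np

n = 10            # number of qubits
N = 2 ** n        # "database" size
marked = 123      # index of the entry being searched for (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))                 # uniform superposition over all entries
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                            # oracle: flip the marked amplitude
    state = 2 * state.mean() - state               # diffusion: inversion about the mean

print(f"{iterations} iterations, success probability about {state[marked] ** 2:.3f}")
```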

The future is in the (noisy) details

Key to the continual growth and widespread success of classical computing over the last 50 years has been our ability to scale our devices. The exponential growth of computing power, famously described by Moore’s law, has been matched by an exponential decrease in the cost of adding more memory.

Scalability is an equally important factor in determining the success of quantum computing. Until now, with the exception of D-Wave (which we will return to below), researchers have been limited to “computers” with but a handful of qubits. In order for a quantum computer to come into its own and start to outperform current supercomputers, it would need to operate on several hundred or, more likely, several thousand qubits. Hence, achieving the goal of universal quantum computing is going to require real scalability. For example, to factor a 1024-bit RSA key using Shor’s algorithm (a problem that currently sits on the edge of tractability with classical techniques), one would require, even in principle, at least 1024 qubits, and the best-known optimisations require a further half as many again.

Scaling the computer, but not the noise

The issue of scalability, however, is fundamentally more complicated for quantum computers. It isn’t sufficient to simply put more qubits on a chip; one must, moreover, be able to create, control and maintain quantum coherences (including entanglement) between all the qubits.

The principal difficulty in achieving such control arises because quantum systems and processes are inherently noisy: one must constantly fight against the environment which acts to degrade the coherence of the system. The design of a quantum computer must thus counter this corrupting action of noise as best as possible: the cost in doing this increases with the size of the system, whereas the error in the computation needs to be held relatively constant as it scales.

It has long been realised that error-correction techniques will thus be essential. However, there is a price to pay for this: a significant increase in the number of qubits required for any computation. What is not known, however, is how the effort required to implement such error-correction and obtain a constant error rate will scale with the size of the physical system. It is a distinct possibility that this cost will scale too fast – possibly exponentially – hence scaling could become intractable beyond a certain point in a kind of reverse Moore’s law.
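
To see how quickly such overheads can grow, consider a commonly used textbook-style approximation for surface-code error correction (every parameter below, including the threshold, prefactor and target error rate, is an illustrative assumption rather than a measured value): the logical error rate falls roughly as (p/p_th)^((d+1)/2) with code distance d, while the number of physical qubits per logical qubit grows roughly as d².

```python
# Toy surface-code overhead model (textbook-style approximation with assumed
# parameters): logical error rate ~ A * (p / p_th)**((d + 1) / 2), and roughly
# 2 * d**2 physical qubits per logical qubit at code distance d.

def distance_needed(p, p_th=1e-2, target=1e-15, A=0.1):
    """Smallest odd code distance d at which the modelled logical error rate
    drops below `target`; returns None at or above the threshold."""
    if p >= p_th:
        return None
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

for p in [1e-3, 3e-3, 5e-3, 8e-3, 9.5e-3]:
    d = distance_needed(p)
    print(f"physical error rate {p:.1e}: distance {d}, "
          f"~{2 * d * d:,} physical qubits per logical qubit")
```

In this simple model the overhead stays manageable well below the threshold, but the cost per logical qubit diverges as the physical error rate approaches it, which is one concrete way the “reverse Moore’s law” scenario could materialise.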

Thus, although the lure of practical, universal quantum computing is bright, it is far from clear that the light will not fade into the distance as we attempt to scale our devices and fight to suppress noise. There may be a danger that time, patience and money will run short before such issues are resolved – if they even can be – and results of practical and public interest can be demonstrated.

Setting the D-Wave in motion

With this perspective in mind, the approach of D-Wave – although often much maligned by some members of the scientific community – is not only quite sensible, but probably the most practical to date, even if it is still far from assured. In essence, D-Wave have opted to trade universality for practicality: rather than attempting to produce a device capable of performing any computation, D-Wave have focused on a general quantum approach to solving NP-complete problems. Physically, this is achieved via a technique called quantum annealing.

Instead of allowing each qubit in the device to directly couple to every other qubit, D-Wave allows only restricted, local connections. These qubits can then be used to mediate and control longer-range coherences. This simplification has allowed D-Wave’s engineers to rapidly scale the number of physical qubits on their chips. The flip side is that making an arbitrary problem “fit” within this restricted structure generally entails a quadratic increase in the number of qubits required, thus reducing the effective size of the D-Wave devices.
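
As a back-of-the-envelope illustration of this trade-off (a sketch using the quadratic figure just mentioned; the chip sizes are round example numbers rather than specifications of any particular device), the number of fully-connected problem variables that fit on a chip grows only with the square root of the physical qubit count.

```python
# Back-of-the-envelope sketch: if embedding a fully-connected problem costs
# roughly a quadratic number of physical qubits, the effective problem size
# scales only as the square root of the chip size. Example figures only.
import math

for physical_qubits in [128, 512, 1024, 2048]:
    effective_variables = math.isqrt(physical_qubits)
    print(f"{physical_qubits:5d} physical qubits -> roughly "
          f"{effective_variables} fully-connected variables")
```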

This approach addresses some of the problems of scalability, but analogue errors can introduce noise that scales with the size of the system. Stochasticity due to noise can be beneficial in some algorithms and applications, but this must be carefully controlled. It will be a significant challenge to successfully suppress and control these forms of noise as the process of scaling continues. If this is not achieved, the probability of obtaining a correct solution to a problem will tend quickly to zero as the size of the device is increased.

D-Wave’s restricted form of quantum annealing seems unlikely to offer the drastic speedups that might potentially be possible on some problems with a universal quantum computer. However, it does open up the possibility of tapping into more modest – yet potentially very useful – practical speedups in the immediate future. D-Wave’s approach could also help temper unrealistic expectations of quantum computing while, at the same time, showing that it can provide tangible benefits, thus helping to forge a more sustainable future for quantum computing. Of course, many potential obstacles remain to be overcome before such a point is reached.

Quantum benchmarking

In the years since D-Wave released their first generation chip, debate has raged over its ability to outperform classical computers. The key message to be drawn comes not from the results themselves, but from the inherent difficulties associated with quantum benchmarking.

It has become clear that, in order for a meaningful assessment to be made, a holistic approach must be taken: quantum computers are not, in the foreseeable future, going to be embodied as stand-alone devices, but rather as machines operating in conjunction with classical computers and computations. Thus, the costs of this symbiosis must also be taken into account if one wants to really understand the gains available.

For D-Wave, the very feature allowing the scalability and control of qubits in their devices (specifically, the limited connectivity between qubits their model imposes) forces the need to perform costly “embedding” computations prior to running the quantum computation. Such costs must be included in any fair assessment, and, moreover, the performance must be compared against the best classical approaches available.

A quassical vision for a sustainable future

Despite such concerns, one should not presume the outlook is entirely gloomy. Instead, the recognition of such issues should motivate us to think outside the box to exploit any potential speedup that may exist. One such approach could be to mitigate the effect of the costly classical computations associated with the quantum ones by using quantum computations as subroutines in a hybrid computational approach. This approach, which has been tentatively termed quassical computing, thus spreads the costly embedding out over many runs of the quantum subroutines.

For example, Grover’s algorithm can be used as a subroutine for solving problems in computer graphics, where maintaining a sorted database would be impractical. By doing so, the cost of preparing the quantum “database” can be spread out over several calls.
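
A toy cost model makes the amortisation argument explicit (all of the costs below are invented purely for illustration): a large one-off setup cost, such as preparing the quantum “database” or computing an embedding, is paid once and then spread across many cheap calls to the quantum subroutine, so the hybrid approach only pays off once enough calls are made.

```python
# Toy cost model for the hybrid ("quassical") idea: a one-off setup cost is
# amortised over repeated calls to a cheap quantum subroutine. All numbers
# are invented for illustration.
def costs(calls, setup=1_000_000, per_call_quantum=1_000, per_call_classical=30_000):
    hybrid = setup + calls * per_call_quantum          # pay setup once, then cheap calls
    purely_classical = calls * per_call_classical      # no setup, but each call is dearer
    return hybrid, purely_classical

for calls in [1, 10, 100, 1_000]:
    hybrid, classical = costs(calls)
    winner = "hybrid" if hybrid < classical else "classical"
    print(f"{calls:5d} calls: hybrid {hybrid:>10,} vs classical {classical:>10,} -> {winner}")
```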

This approach may put the long-term goals of universal quantum computing on the back-burner while we focus on approaches that can reap a less drastic, but nonetheless practical, more immediate benefit from quantum computing. Coupled with the potential peripheral applications of quantum information theory – not only in quantum communication but, more generally, for “quantum-enhanced” devices like quantum radar – this seems a potentially more sustainable route to a future where quantum computing has a place, a future that can be achieved without creating too much false hope and disillusionment along the way.

AI’s second coming has succeeded in infiltrating our lives by making small, yet tangible, steps in the direction of the original goals of the field. In doing so, it has forged a more secure future for itself as a respectable scientific field, despite ongoing overzealous claims. It would be prudent for the current, second wave of quantum computing to follow a similar, sustainable path.
