Quantum supremacy

In quantum computing, quantum supremacy is the goal of demonstrating that a programmable quantum device can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem.[1][2] In other words, 'quantum supremacy' refers to a quantum computer's ability to exceed the capabilities of any classical computer. The term was coined by John Preskill,[1] but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's (1980)[5] and Richard Feynman's (1981) proposals of quantum computing.[6]

Conceptually, quantum supremacy involves two tasks: the engineering task of building a powerful quantum computer, and the computational-complexity-theoretic task of finding a problem that can be solved by that quantum computer. The computational-complexity problem must have a superpolynomial speedup over the best known or possible classical algorithm for that task.[3][4]

By comparison, the weaker quantum advantage is the demonstration that a quantum device can solve a problem merely faster than classical computers.


Quantum Supremacy in the 20th century
In 1936, Alan Turing published his paper, “On Computable Numbers”, in response to Hilbert's 1900 problems. Turing’s paper described what he called a “universal computing machine”, which later became known as a Turing machine. In 1980, Paul Benioff drew on Turing’s paper to propose the theoretical feasibility of quantum computing. His paper, “The Computer as a Physical System: A Microscopic Quantum Mechanical Hamiltonian Model of Computers as Represented by Turing Machines“, was the first to show that a computer could operate reversibly under quantum mechanics, with the energy dissipated made arbitrarily small. In 1981, Richard Feynman argued that quantum mechanics could not be efficiently simulated on classical devices. During a lecture, he delivered the famous quote, “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy.” Soon after, David Deutsch produced a description of a quantum Turing machine and designed an algorithm intended to run on a quantum computer.

In the next decade, progress toward quantum supremacy was made when Peter Shor formulated Shor's algorithm, a method for factoring integers in polynomial time. Around the same time, Christopher Monroe and colleagues published their paper, “Demonstration of a Fundamental Quantum Logic Gate”, marking the first demonstration of a quantum logic gate, the two-qubit controlled-NOT. In 1996, Lov Grover spurred interest in building a quantum computer by publishing Grover's algorithm in his paper, “A fast quantum mechanical algorithm for database search”. Soon after, “Implementation of a Quantum Algorithm to Solve Deutsch's Problem on a Nuclear Magnetic Resonance Quantum Computer” was published, marking the first demonstration of a quantum algorithm.

Progress in the 21st century
Vast progress was made in the 2000s toward quantum supremacy, from the first 5-qubit nuclear magnetic resonance computer (2000), to the first demonstration of Shor’s algorithm (2001), to the implementation of Deutsch’s algorithm on a cluster-state quantum computer (2007). In 2011, D-Wave Systems of Burnaby, British Columbia became the first company to sell a quantum computer commercially. In 2012, physicist Nanyang Xu reached a milestone by using an improved adiabatic factoring algorithm to factor 143, although the methods used were met with objections. Not long after, Google purchased its first quantum computer.

Computational complexity
Complexity arguments concern how the amount of some resource needed to solve a problem (generally time or memory) scales with the size of the input. In this setting, a problem consists of an inputted problem instance (a binary string) and a returned solution (the corresponding output string), while resources refer to designated elementary operations, memory usage, or communication. A collection of local operations allows the computer to generate the output string. A circuit model and its corresponding operations are useful in describing both classical and quantum problems. The classical circuit model consists of basic operations such as AND gates, OR gates, and NOT gates. The quantum circuit model includes the gates of classical circuits and, in particular, makes use of unitary operations. Unlike the finite set of classical gates, there is an infinite number of quantum gates due to the continuous nature of unitary operations. In both classical and quantum cases, complexity mounts with increasing problem size, such as computing the parity of n bits. As an extension of classical computational complexity theory, quantum complexity theory considers what a theoretical universal quantum computer could accomplish without necessarily accounting for the difficulty of building a physical quantum computer or dealing with decoherence and noise. Since quantum information is a generalization of classical information, quantum computers can simulate any classical algorithm.
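The parity example can be made concrete. The sketch below (an illustration of our own, not taken from the cited sources) computes the parity of n bits with a chain of XOR gates, so the resource cost in the classical circuit model, the gate count, grows linearly with the input size:

```python
from functools import reduce
from operator import xor

def parity(bits):
    """Compute the parity of n bits with a chain of n-1 XOR gates.

    In the classical circuit model the resource being counted is the
    number of elementary gates, which here scales linearly with n.
    """
    return reduce(xor, bits, 0)
```

For 4 input bits the chain uses 3 XOR gates; doubling the input size doubles the gate count.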

Quantum complexity classes are sets of problems that share a common quantum computational model, with each model containing specified resource constraints. Circuit models are useful in describing quantum complexity classes. The most important quantum complexity class is BQP (bounded-error quantum polynomial time), the class of decision problems that can be solved in polynomial time, with bounded error probability, by a universal quantum computer. Questions about BQP still remain, such as the connection between BQP and the polynomial-time hierarchy, whether or not BQP contains NP-complete problems, and the exact lower and upper bounds of the BQP class. Answers to these questions would not only reveal the nature of BQP but also resolve difficult classical complexity theory questions. One strategy for better understanding BQP is to define related classes, order them into a conventional class hierarchy, and then look for properties that are revealed by their relation to BQP. There are several other quantum complexity classes, such as QMA (Quantum Merlin Arthur) and QIP (quantum interactive polynomial time), all varying in difficulty.

The difficulty of proving what cannot be done with classical computing is a common problem in definitively demonstrating quantum supremacy. In contrast to decision problems, which require yes or no answers, sampling problems ask for samples from probability distributions. If there were a classical algorithm that could efficiently sample from the output of an arbitrary quantum circuit, the polynomial hierarchy would collapse to the third level, which is considered very unlikely. Boson sampling is a more specific proposal, the classical hardness of which depends upon the intractability of calculating the permanent of a large matrix with complex entries, a #P-hard problem. The arguments used to reach this conclusion have also been extended to IQP (instantaneous quantum polynomial-time) sampling, where only the conjecture that the average- and worst-case complexities of the problem are the same is needed.
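To illustrate why the permanent is believed to be classically intractable, the sketch below (an illustrative implementation with names of our choosing) computes the permanent via Ryser's inclusion-exclusion formula. Even this, the best known general approach, takes time exponential in the matrix dimension:

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula.

    Runs in O(2**n * n**2) time, versus O(n! * n) for the naive sum
    over permutations; no polynomial-time classical algorithm is
    known, which is what boson sampling hardness arguments rely on.
    """
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total
```

Unlike the determinant, which Gaussian elimination computes in polynomial time, the permanent's lack of sign cancellations defeats such shortcuts.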

Proposed experiments

Travelling Salesman Problem
Main article: Travelling Salesman Problem
The Travelling Salesman Problem (TSP) is one problem that could display the advantage of quantum technology over classical technology. The problem is as follows: find the shortest possible path through a set of $$N$$ cities such that each city is visited exactly once and the path begins and ends in the same location. In its most general form, the TSP is classified as an NP-complete problem. The problem is set up so that there is a cost associated with each pair of cities, representing the distance the salesman must travel to get from one city to the other, and the goal is to minimize the total cost of the tour. Big O notation describes how the running time of a computational problem grows with the input size. The brute-force approach to the general TSP has time complexity $$O(N!)$$, meaning the time to solve the problem grows factorially with the number of cities; even exact dynamic-programming methods such as the Held–Karp algorithm only improve this to $$O(N^2 2^N)$$, which is still exponential. This makes the problem practically impossible to solve classically for large values of $$N$$. For TSPs that meet specific requirements, branch and bound algorithms have been shown to solve the problem efficiently, but they have only been proven to work on a small subset of exceptional instances. Algorithms that use heuristics to find TSP solutions can also be effective, but because heuristics inherently rely on approximations, a heuristic-based algorithm is not guaranteed to find an optimal solution to the general TSP.
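The factorial blow-up is easy to see in code. The sketch below (a hypothetical illustration with names of our choosing) brute-forces the TSP by enumerating all $$(N-1)!$$ tours that start and end at city 0, given a distance matrix:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Return the cost of the shortest tour visiting every city once.

    Enumerates every (N-1)! ordering of the remaining cities, so the
    running time grows factorially with N -- the O(N!) behaviour
    described above.
    """
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best
```

For three cities with pairwise distances d(0,1)=1, d(1,2)=2, d(0,2)=4, both possible tours cost 7; at twenty cities the search space already exceeds 10^17 tours.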

Although this problem is classically NP-hard, quantum algorithms offer speedups for the TSP, though no known quantum algorithm solves it in polynomial time. Algorithms utilizing adiabatic quantum computation have been proposed for the TSP, and quantum frameworks have been created to solve the approximate TSP with a proven quadratic speed-up over classical search. Additionally, an efficient solution to the TSP would go further than just solving this particular problem: an efficient classical algorithm would prove P=NP, while an efficient quantum algorithm would show that NP-complete problems lie within the reach of quantum computers.
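The quadratic speed-up mentioned above can be seen in a toy statevector simulation of Grover search (an illustrative sketch of our own, not any specific published framework): after roughly $$(\pi/4)\sqrt{N}$$ iterations the marked item is measured with high probability, versus the roughly $$N/2$$ queries an unstructured classical search needs on average:

```python
import math

def grover_success_probability(n_items, marked):
    """Simulate Grover's algorithm on a length-N real amplitude vector.

    Each iteration applies the oracle (flip the sign of the marked
    amplitude) and the diffusion operator (reflect every amplitude
    about the mean). About (pi/4) * sqrt(N) iterations suffice.
    """
    amps = [1.0 / math.sqrt(n_items)] * n_items
    iterations = int(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        amps[marked] = -amps[marked]          # oracle query
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]   # diffusion step
    return amps[marked] ** 2
```

With N = 64 a classical search expects around 32 queries, while six Grover iterations already find the marked item with probability above 0.99.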

P=NP
Efficiently solving any problem in the NP-complete set by quantum means would display the supremacy of quantum technology over classical technology, because a solution for one problem in this set can be extended to every other problem in the set. The Travelling Salesman Problem described above is one discrete example: its brute-force time complexity of $$O(N!)$$ grows factorially with the input size, making it practically unsolvable classically for large values of $$N$$, and this situation is analogous to the other problems in the NP-complete set. Algorithms to solve such problems have been constructed in quantum frameworks, but more rigorous analysis of their time complexity is needed before they can be said to be efficient. An efficient quantum algorithm for an NP-complete problem would demonstrate quantum supremacy, though it would not by itself prove P=NP (a problem that has plagued computer scientists for years), since P=NP concerns classical algorithms.

Susceptibility to Error
Quantum computers are much more susceptible to errors than classical computers due to decoherence and noise. The threshold theorem states that a noisy quantum computer can use quantum error-correcting codes to simulate a noiseless quantum computer, assuming the error introduced in each computation cycle is below some threshold. Numerical simulations suggest that the threshold may be as high as 3%. However, it is not yet known how the resources needed for error correction will scale with the number of qubits. Skeptics point to the unknown behavior of noise in scaled-up quantum systems as a potential roadblock to successfully implementing quantum computing and demonstrating quantum supremacy.
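The need for a threshold can be seen with a back-of-the-envelope model (our own illustration, not drawn from the threshold-theorem literature): if each gate fails independently with probability p, an uncorrected circuit of G gates succeeds only with probability $$(1-p)^G$$, which collapses rapidly at useful circuit sizes:

```python
def survival_probability(p, gates):
    """Probability that an uncorrected circuit of `gates` gates runs
    without any error, assuming each gate fails independently with
    probability p."""
    return (1.0 - p) ** gates
```

Even at a 0.1% per-gate error rate, a thousand-gate circuit fails more often than it succeeds, which is why per-cycle error rates below the threshold, combined with error correction, are essential for scaling.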

Proving a negative
There have also been algorithmic breakthroughs in classical computing as a result of quantum computing research, some of which have brought classical performance back to parity with proposed quantum demonstrations. This suggests that more research into classical algorithms is needed before a suitable test for quantum supremacy can be devised: until a classical algorithm can be shown to be definitively as efficient as possible, a quantum computer cannot be said to be determinately better. In this sense, demonstrating quantum supremacy amounts, at some level, to proving a negative: that no classical algorithm exists that performs equally well.

Criticism of the name
Some researchers have suggested that the term ‘quantum supremacy’ should not be used, arguing that the word ‘supremacy’ evokes distasteful comparisons to the racist belief of white supremacy. A controversial Nature commentary signed by thirteen researchers asserts that the alternative phrase ‘quantum advantage’ should be used instead. John Preskill, the professor of theoretical physics at the California Institute of Technology who coined the term, has since clarified that he proposed it to describe the moment when a quantum computer gains the ability to perform a task that a classical computer never could. He further explained that he specifically rejected the term ‘quantum advantage’ because it did not fully capture his meaning: ‘advantage’ would imply only a slight edge over a classical computer, while ‘supremacy’ better conveys complete ascendancy over any classical computer.