User:Wadams3/sandbox

Link to editing article: Quantum machine learning

Quantum annealing
Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate solutions. It discretizes a function with many local minima or maxima in order to determine its observables. The process can be distinguished from simulated annealing by the quantum tunneling process, through which particles tunnel through kinetic or potential barriers from a high-energy state to a low-energy state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. The time-dependent Schrödinger equation then guides the evolution of the system, changing the amplitude of each state as time increases. Eventually, the ground state of the instantaneous Hamiltonian can be reached, which encodes the solution to the optimization problem.
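As a rough numerical illustration of this adiabatic evolution, the sketch below (plain NumPy; the two-qubit Ising problem, anneal time, and schedule are illustrative assumptions, not a description of any particular hardware) interpolates between a transverse-field driver Hamiltonian and a problem Hamiltonian and checks how much of the final state lies in the problem's ground state.

<syntaxhighlight lang="python">
# Minimal sketch of quantum annealing on an assumed toy 2-qubit problem (NumPy only).
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Driver Hamiltonian H0 = -(X1 + X2): its ground state is the uniform superposition.
H0 = -(kron(X, I2) + kron(I2, X))

# Problem Hamiltonian HP: an assumed toy Ising cost function -Z1*Z2 + 0.5*Z1.
HP = -kron(Z, Z) + 0.5 * kron(Z, I2)

# Anneal H(s) = (1 - s) H0 + s HP, with s swept slowly from 0 to 1.
steps, T = 2000, 50.0                     # number of time slices and total anneal time (assumed)
dt = T / steps
psi = np.ones(4, dtype=complex) / 2.0     # start in the ground state of H0

for k in range(steps):
    s = (k + 0.5) / steps
    H = (1 - s) * H0 + s * HP
    # Exact propagator exp(-i H dt) for this small system via eigendecomposition.
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T
    psi = U @ psi

# Overlap of the final state with the true ground state of the problem Hamiltonian.
evals, evecs = np.linalg.eigh(HP)
overlap = abs(evecs[:, 0].conj() @ psi) ** 2
print("Probability of ending in the problem ground state:", round(overlap, 3))
</syntaxhighlight>

For a sufficiently slow sweep (large T), the overlap printed at the end approaches 1, which is the adiabatic behaviour the paragraph above describes.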

HHL Algorithm
The HHL algorithm is a quantum algorithm devised to solve sets of linear equations. It was developed in 2009 by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, and has found its way into the structure of several quantum computing operations. The algorithm addresses the equation Ax = b: given a Hermitian matrix A and a unit vector b, it finds the solution vector x. Under suitable conditions its runtime is significantly shorter than that of its classical counterparts, providing an exponential speed-up, which makes it useful for building more complex algorithms once an operational quantum computer is available. The algorithm relies on subroutines such as quantum phase estimation and amplitude amplification, and its practical use is constrained by the difficult process of loading large matrices of data into a quantum computer.
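For orientation, the following sketch is purely classical: it only sets up and solves, with NumPy, the kind of linear system that HHL targets (Hermitian A, normalized b). The matrix and vector shown are illustrative assumptions, not part of the quantum algorithm itself.

<syntaxhighlight lang="python">
# Classical reference for the problem HHL addresses: solve A x = b for Hermitian A, normalized b.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # Hermitian (here real symmetric) system matrix (assumed example)
b = np.array([1.0, 1.0])
b = b / np.linalg.norm(b)           # HHL assumes |b> is prepared as a normalized quantum state

x = np.linalg.solve(A, b)           # classical solution; HHL outputs a state proportional to x
print("x =", x, " residual =", np.linalg.norm(A @ x - b))
</syntaxhighlight>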

The term quantum machine learning is also used for approaches that apply classical methods of machine learning to the study of quantum systems. A prime example is the use of classical learning techniques to process large amounts of experimental data in order to characterize an unknown quantum system (for instance in the context of quantum information theory and for the development of quantum technologies), but there are also more exotic applications.

The ability to experimentally control and prepare increasingly complex quantum systems brings with it a growing need to turn large and noisy data sets into meaningful information. This is a problem that has already been studied extensively in the classical setting, and consequently, many existing machine learning techniques can be naturally adapted to more efficiently address experimentally relevant problems. For example, Bayesian methods and concepts of algorithmic learning can be fruitfully applied to tackle quantum state classification, Hamiltonian learning, and the characterization of an unknown unitary transformation. Other problems that have been addressed with this approach are given in the following list:


 * Identifying an accurate model for the dynamics of a quantum system, through the reconstruction of the Hamiltonian (a minimal Bayesian sketch of this idea follows this list);
 * Extracting information on unknown states;
 * Learning unknown unitary transformations and measurements;
 * Engineering of quantum gates from qubit networks with pairwise interactions, using time dependent or independent Hamiltonians.
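To make the first item more concrete, the sketch below shows a grid-based Bayesian update of the kind mentioned above applied to Hamiltonian learning. The model is an assumed toy example: a single unknown precession frequency ω in H = (ω/2)σ_z, estimated from simulated single-qubit measurements; the likelihood, prior, and measurement schedule are illustrative assumptions rather than any specific published protocol.

<syntaxhighlight lang="python">
# Toy Bayesian Hamiltonian learning: estimate an unknown frequency omega from simulated data.
import numpy as np

rng = np.random.default_rng(0)
true_omega = 1.7                          # the parameter we pretend nature chose (assumed)

def excitation_probability(omega, t):
    # Probability of measuring |1> after evolving for time t (toy likelihood model).
    return np.sin(omega * t / 2.0) ** 2

# Discretized prior over candidate frequencies.
omegas = np.linspace(0.0, 3.0, 601)
prior = np.ones_like(omegas) / omegas.size

for _ in range(100):                      # 100 simulated experiments
    t = rng.uniform(0.1, 5.0)             # randomly chosen evolution time
    p_true = excitation_probability(true_omega, t)
    outcome = rng.random() < p_true       # simulated measurement (True means |1> was observed)

    likelihood = excitation_probability(omegas, t)
    if not outcome:
        likelihood = 1.0 - likelihood
    prior = prior * likelihood            # Bayes' rule (unnormalized)
    prior /= prior.sum()

estimate = omegas[np.argmax(prior)]
print("true omega:", true_omega, " estimated omega:", round(estimate, 3))
</syntaxhighlight>

After a modest number of simulated measurements the posterior concentrates near the true frequency, which is the sense in which Bayesian methods "learn" a Hamiltonian from experimental data.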

However, the characterization of quantum states and processes is not the only application of classical machine learning techniques. Some additional applications include:


 * Inferring molecular energies;
 * Automatic generation of new quantum experiments;
 * Solving the many-body, static and time-dependent Schrödinger equation;
 * Identifying phase transitions from entanglement spectra;
 * Generating adaptive feedback schemes for quantum metrology.

Variational circuits
Variational circuits are a family of algorithms that are trained by adjusting circuit parameters against an objective function. They generally consist of a classical device that feeds input parameters (random or pre-trained) into a quantum device, together with a classical mathematical optimization routine. These circuits depend heavily on the architecture of the proposed quantum device, because the parameters are adjusted solely by the classical components of the setup. Though the approach is still in its infancy within quantum machine learning, it holds considerable promise for constructing more efficient optimization routines.
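A minimal sketch of this hybrid loop is given below, simulated entirely in NumPy under assumed choices: a single-qubit circuit with one rotation parameter, an expectation-value objective, and gradient descent via the parameter-shift rule as the classical optimizer. It is an illustration of the classical-quantum feedback structure, not a description of any particular hardware implementation.

<syntaxhighlight lang="python">
# Toy variational circuit: one R_y(theta) rotation on |0>, trained to minimize <Z>.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def circuit_state(theta):
    # "Quantum device": a parameterized R_y(theta) rotation applied to |0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def objective(theta):
    # Objective function: expectation value <Z> measured on the circuit output.
    psi = circuit_state(theta)
    return float(np.real(psi.conj() @ Z @ psi))

# "Classical device": gradient descent on the circuit parameter using the parameter-shift rule.
theta, lr = 0.1, 0.2
for step in range(100):
    grad = 0.5 * (objective(theta + np.pi / 2) - objective(theta - np.pi / 2))
    theta -= lr * grad

print("optimized theta:", round(theta, 3), " <Z> =", round(objective(theta), 3))
# The optimum drives <Z> toward -1, i.e. the circuit learns to prepare the |1> state.
</syntaxhighlight>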

Implementations and experiments
The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009. Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations. In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer. A more recent example trained a probabilistic generative model with arbitrary pairwise connectivity, showing that the model is capable of generating handwritten digits as well as reconstructing noisy images of bars and stripes and of handwritten digits.

Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation. NMR technology also enables universal quantum computing, and it was used for the first experimental implementation of a quantum support vector machine to distinguish the handwritten digits '6' and '9' on a liquid-state quantum computer in 2015. The training data involved pre-processing of the images, which maps them to normalized 2-dimensional vectors so that each image is represented as the state of a qubit. The two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors are defined on the feature space, the quantum support vector machine was implemented to classify the unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal.
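The following sketch illustrates this kind of feature encoding in plain NumPy. The exact way the vertical and horizontal intensity ratios were computed in the experiment is not specified above, so the split into image halves below is an assumed convention chosen purely for illustration.

<syntaxhighlight lang="python">
# Map a small grayscale image to a normalized 2-D vector (vertical and horizontal
# intensity ratios), which can then serve as the amplitudes of a single-qubit state.
import numpy as np

def image_to_qubit_amplitudes(image):
    # image: 2-D array of pixel intensities (toy stand-in for a scanned digit).
    upper, lower = image[: image.shape[0] // 2], image[image.shape[0] // 2 :]
    left, right = image[:, : image.shape[1] // 2], image[:, image.shape[1] // 2 :]
    vertical_ratio = upper.sum() / (lower.sum() + 1e-12)
    horizontal_ratio = left.sum() / (right.sum() + 1e-12)
    v = np.array([vertical_ratio, horizontal_ratio])
    return v / np.linalg.norm(v)          # normalized amplitudes (alpha, beta) of a qubit state

# Toy 4x4 "images": a bright-top pattern versus a bright-bottom pattern.
img_a = np.array([[9, 9, 9, 9], [9, 9, 9, 9], [1, 1, 1, 1], [1, 1, 1, 1]], dtype=float)
img_b = img_a[::-1]

print("image A as qubit amplitudes:", image_to_qubit_amplitudes(img_a))
print("image B as qubit amplitudes:", image_to_qubit_amplitudes(img_b))
</syntaxhighlight>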

Photonic implementations are attracting more attention, not least because they do not require extensive cooling. Simultaneous spoken-digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013. Using non-linear photonics to implement an all-optical linear classifier, a perceptron model was able to learn the classification boundary iteratively from training data through a feedback rule. A core building block in many learning algorithms is calculating the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015.
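As a minimal classical illustration of this building block, the sketch below amplitude-encodes two real vectors as normalized states and recovers their Euclidean distance from the state overlap. The use of a swap-test-style overlap estimate is an assumption made for the sake of the example, not a description of the photonic experiment itself.

<syntaxhighlight lang="python">
# Relate the Euclidean distance between two amplitude-encoded vectors to their state overlap.
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 1.0, 2.0])

a_state = a / np.linalg.norm(a)             # amplitude-encoded |a>
b_state = b / np.linalg.norm(b)             # amplitude-encoded |b>

overlap = abs(a_state @ b_state) ** 2       # what a swap test estimates: |<a|b>|^2
# For real, non-negative amplitudes (as here), <a|b> = sqrt(overlap), so:
distance_sq = 2.0 - 2.0 * np.sqrt(overlap)  # squared Euclidean distance between the unit vectors
print("overlap |<a|b>|^2 =", round(overlap, 4), " distance^2 =", round(distance_sq, 4))
</syntaxhighlight>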

Recently, based on a neuromimetic approach, a novel ingredient has been added to the field of quantum machine learning in the form of a so-called quantum memristor, a quantized model of the standard classical memristor. This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed, and an experiment with quantum dots has been performed. A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network.

Over the past several years, IBM has been developing an online cloud-based platform for quantum software developers, called the IBM Q Experience. This platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, using both trapped-ion and superconducting quantum computing methods.