Quantum neural network

Most QNNs are developed as feed-forward networks. As in their classical counterparts, this structure takes input in at one layer of qubits and passes it on to the next layer of qubits, which evaluates the information and passes its output on to the layer after it. Eventually the path leads to the final layer of qubits. The layers do not have to be of equal width: a layer need not have the same number of qubits as the layer before or after it. This structure is trained on which path to take, similarly to classical artificial neural networks; this is discussed in a later section. Quantum neural networks refer to three different categories: quantum computer with classical data, classical computer with quantum data, and quantum computer with quantum data.
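The layered layout above can be illustrated with a minimal classical-analogue sketch (the layer widths here are purely illustrative, not taken from any particular QNN): each layer hands its output to the next, and adjacent layers need not contain the same number of qubits.

```python
# Classical-analogue sketch of the feed-forward layout (illustrative widths):
# information flows layer to layer, and widths may differ between layers.
layer_widths = [4, 6, 3]        # qubits per layer (input, hidden, output)

output_width = layer_widths[0]
for width in layer_widths[1:]:
    # each layer evaluates what it received and re-encodes it into its
    # own register, whose width may differ from the previous layer's
    output_width = width

print(output_width)             # the final layer here carries 3 qubits
```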

Training
Quantum neural networks can, in theory, be trained similarly to classical artificial neural networks. A key difference lies in communication between the layers of the network. In a classical neural network, at the end of a given operation, the current perceptron copies its output to the next layer of perceptrons in the network. In a quantum neural network, however, where each perceptron is a qubit, this would violate the no-cloning theorem. A proposed generalized solution is to replace the classical fan-out method with an arbitrary unitary that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary ($$U_f$$) with a dummy qubit in a known state (for example, $$|0\rangle$$ in the computational basis), also known as an ancilla qubit, the information from the qubit can be transferred to the next layer of qubits. This process satisfies the quantum-operation requirement of reversibility.
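The simplest concrete instance of such a fan-out is a CNOT gate acting on the source qubit and an ancilla prepared in $$|0\rangle$$: a superposition $$a|0\rangle+b|1\rangle$$ becomes the entangled state $$a|00\rangle+b|11\rangle$$, which spreads the amplitudes without producing two independent copies, so the no-cloning theorem is respected. A pure-Python statevector sketch (the CNOT here is one possible choice of fan-out unitary, not the article's specific construction):

```python
# Fan-out sketch: CNOT spreads a qubit's amplitudes onto an ancilla in |0>.
# Basis ordering of the two-qubit statevector: |00>, |01>, |10>, |11>.
import math

def cnot(state):
    # CNOT with qubit 0 as control and qubit 1 (the ancilla) as target:
    # swaps the |10> and |11> amplitudes, leaves |00> and |01> alone
    return [state[0], state[1], state[3], state[2]]

a = 1 / math.sqrt(2)
state = [a, 0.0, a, 0.0]   # (|0> + |1>)/sqrt(2) on qubit 0, ancilla in |0>
out = cnot(state)          # a|00> + a|11>: entangled, not two copies
print(out)
```

Because CNOT is its own inverse, applying it again recovers the input state, which is the reversibility requirement mentioned above.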

Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers, as seen in the sample model neural network above. Since the QNN being discussed uses fan-out unitary operators, and each operator acts only on its respective input, only two layers are involved at any given time. In other words, no unitary operator acts on the entire network at once, so the number of qubits required at a given step depends on the number of inputs in that layer. Since quantum computers can run many iterations in a short period of time, the efficiency of a QNN depends only on the number of qubits in a given layer, and not on the depth of the network.
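The resource claim above can be made concrete with a small sketch: if only two adjacent layers are "live" at any step, the peak register size is set by the widest adjacent pair of layers, not by how many layers the network has (the widths below are illustrative).

```python
# Sketch of the layer-by-layer resource count: with only two adjacent
# layers active at a time, the qubits needed at the busiest step equal
# the largest sum of two neighbouring layer widths, regardless of depth.
layer_widths = [3, 5, 4, 2]   # illustrative widths; layers need not match

peak = max(layer_widths[i] + layer_widths[i + 1]
           for i in range(len(layer_widths) - 1))
print(peak)   # busiest step uses 5 + 4 = 9 qubits
```

Appending more layers of the same widths would leave `peak` unchanged, which is the sense in which cost tracks layer width rather than network depth.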

Cost Functions
To determine the effectiveness of a neural network, a cost function is used, which measures how close the network's output is to the expected or desired output. In a classical neural network, the weights ($$w$$) and biases ($$b$$) at each step determine the outcome of the cost function $$C(w, b)$$. When training a classical neural network, the weights and biases are adjusted after each iteration; given Equation 1 below, where $$y(x)$$ is the desired output and $$a^{out}(x)$$ is the actual output, the cost function is optimized when $$C(w, b) = 0$$. For a QNN, the cost function is determined by measuring the fidelity of the outcome state ($$\rho^{out}$$) against the desired outcome state ($$\phi^{out}$$), as seen in Equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when $$C = 1$$.

Equation 1 $$C(w,b)={1 \over 2N}\sum_{x}{||y(x)-a^{out}(x)||^2}$$

Equation 2 $$C ={1 \over N}\sum_{x=1}^{N}{\langle\phi^{out}|\rho^{out}|\phi^{out}\rangle}$$
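Both cost functions can be evaluated directly for small examples. The sketch below assumes real-valued amplitudes and a pure target state for brevity (the function names are illustrative, not from any library): the classical quadratic cost of Equation 1 is minimal at 0, while the fidelity cost of Equation 2 is maximal at 1.

```python
# Sketch of the two cost functions, assuming real amplitudes for brevity.
def classical_cost(targets, outputs):
    # Equation 1: C(w,b) = (1/2N) * sum over x of ||y(x) - a_out(x)||^2
    N = len(targets)
    return sum((y - a) ** 2 for y, a in zip(targets, outputs)) / (2 * N)

def fidelity_cost(phis, rhos):
    # Equation 2: C = (1/N) * sum over x of <phi|rho|phi>,
    # where rho is the output density matrix and phi the desired state
    def fid(phi, rho):
        d = len(phi)
        return sum(phi[i] * rho[i][j] * phi[j]
                   for i in range(d) for j in range(d))
    return sum(fid(p, r) for p, r in zip(phis, rhos)) / len(phis)

print(classical_cost([1.0, 0.0], [1.0, 0.0]))                  # perfect: 0.0
print(fidelity_cost([[1.0, 0.0]], [[[1.0, 0.0], [0.0, 0.0]]]))  # perfect: 1.0
```

Training drives the first quantity down toward 0 (adjusting weights and biases) and the second up toward 1 (adjusting the unitary operators), as stated above.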