Deep Feedforward Networks

Deep feedforward networks, also called feedforward neural networks or multilayer perceptrons (MLPs), are the quintessential deep learning models. A feedforward network extends linear models to represent nonlinear functions of x, which is achieved by applying a nonlinear transformation φ(x) to the input or by using the kernel trick.
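To make the role of φ(x) concrete, the sketch below fits a model that is linear in its parameters but nonlinear in x, using a hand-chosen feature map. The specific map φ(x) = (x, x²) and the quadratic target are illustrative assumptions; a deep network would instead learn the transformation rather than fixing it in advance.

```python
import numpy as np

# Fixed nonlinear feature map phi(x): a hand-chosen basis (an assumption
# for illustration; a feedforward network would learn this mapping).
def phi(x):
    return np.stack([x, x**2], axis=1)  # map scalar x to (x, x^2)

# Fit a linear model w^T phi(x) by least squares. The model is linear in
# the weights w but nonlinear in x, so it can represent a quadratic target.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = 3.0 * x**2 - x                     # noise-free nonlinear target
features = phi(x)
w, *_ = np.linalg.lstsq(features, y, rcond=None)
pred = features @ w
print(np.allclose(pred, y, atol=1e-8))
```

Because the target lies exactly in the span of the chosen basis, the linear fit recovers it perfectly; the limitation is that the basis must be chosen well in advance, which is precisely what deep networks avoid.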

The goal of a feedforward network is to approximate some function $$f^{*}$$.

For example, for a classifier, $$y=f^{*}(x)$$ maps an input $$x$$ to a category $$y$$. A feedforward network defines a mapping $$y=f(x;\theta)$$ and learns the value of the parameters $$\theta$$ that results in the best function approximation, as determined by minimizing a cost function.
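The idea of learning θ by minimizing a cost can be sketched in the simplest setting: gradient descent on a mean-squared-error cost for a model with parameters θ = (w, b). The linear form of f, the target function, and the learning rate are all assumptions chosen for illustration; the same principle drives the training of deep feedforward networks.

```python
import numpy as np

# Model f(x; theta) = w*x + b with parameters theta = (w, b),
# trained by gradient descent on the cost J = mean((f(x) - y)^2).
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0            # target function f*(x) to approximate

w, b = 0.0, 0.0              # initial parameters
lr = 0.1                     # learning rate (assumed for illustration)
for _ in range(2000):
    err = (w * x + b) - y
    # Gradients of J with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(w, b)  # converges toward the true parameters 2 and 1
```

Each step moves θ in the direction that locally decreases the cost, so the learned parameters approach the values that make f(x; θ) best approximate f*.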