User:Mbuset14/sandbox

One topic Rumelhart was interested in was unsupervised learning, which he referred to as "competitive learning". He examined competitive learning through computer simulation and formal analysis. Rumelhart and Zipser found that when competitive learning was applied to parallel networks of neuron-like elements, many learning tasks could be achieved.
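The winner-take-all scheme at the heart of competitive learning can be sketched as follows. This is a minimal illustration, not Rumelhart and Zipser's implementation; the unit count, learning rate, and sample data are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(data, n_units=2, lr=0.1, epochs=20):
    """Winner-take-all competitive learning on unit-length inputs (illustrative)."""
    # Start each unit with a random, unit-norm weight vector.
    w = rng.random((n_units, data.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in data:
            # The unit whose weights best match the input "wins" the competition.
            winner = np.argmax(w @ x)
            # Only the winning unit moves its weights toward the input,
            # then renormalizes, so units specialize on input clusters.
            w[winner] += lr * (x - w[winner])
            w[winner] /= np.linalg.norm(w[winner])
    return w

# Two clusters of inputs, normalized to unit length (assumed example data).
data = np.vstack([
    rng.normal([1.0, 0.0], 0.05, (20, 2)),
    rng.normal([0.0, 1.0], 0.05, (20, 2)),
])
data /= np.linalg.norm(data, axis=1, keepdims=True)
w = competitive_learning(data)
```

After training, each unit's weight vector tends to point toward one of the input clusters, so the network discovers the cluster structure without any teacher.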

Jordan & Rumelhart (1992) explained how certain algorithms made two assumptions about neural networks. The perceptron and LMS algorithms assumed that the only adaptive units in the network were the output units, and that a teacher provided the desired states of all of those output units. Later research found that internal units adaptively recode the initial input representation of the environment, eliminating the early assumption that output units were the only adaptive units in a network. Algorithms such as Boltzmann learning and backpropagation train networks that use nonlinear internal units. Backpropagation seeks the minimum of an error function, which is then considered the solution of the learning problem. It differs from the perceptron in its use of a differentiable activation function rather than a step function.

In addition to the first assumption being disproven, further research challenged the second as well. Unsupervised learning algorithms require no "teacher": they work by clustering the input data and extracting features based on its statistical or topological properties.
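The role of the differentiable activation function can be sketched with a small backpropagation example. This is an illustrative one-hidden-layer network trained by gradient descent on the XOR task; the network size, learning rate, and task are assumptions for the sketch, not details from Jordan and Rumelhart.

```python
import numpy as np

def train_xor(epochs=5000, lr=0.5, seed=1):
    """Train a tiny one-hidden-layer network on XOR via backpropagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)  # XOR targets
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    # A smooth, differentiable activation: gradients can flow through it,
    # unlike the perceptron's step function.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        # Forward pass through the nonlinear internal (hidden) units.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((out - y) ** 2)))
        # Backward pass: propagate the error gradient layer by layer
        # using the chain rule (sigmoid'(z) = s * (1 - s)).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)
    return losses

losses = train_xor()
```

Because the internal units are adaptive, the hidden layer learns its own recoding of the inputs, which is what lets the network solve a problem like XOR that a single perceptron cannot.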