Introduction
Backpropagation neural networks are commonly trained with gradient descent procedures. Gradient descent does not allow direct minimization of the number of misclassified patterns (Duda, 1973). Therefore, an objective function must be derived whose minimization also yields increased classification accuracy.

Classification into N classes is often viewed as a regression problem with an N-valued response, with a value of 1 in the nth position if the observation falls in class n and 0 otherwise (as cited in Rimer, 2002). The values zero and one can be regarded as hard target values. In practice, however, there is no reason why class targets must take the values zero and one (Rimer, 2002). To generalize well, a network must be trained with a suitable objective function.
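As an illustration, here is a minimal Python sketch of this 0/1 (one-hot) target encoding; the helper name one_hot_targets is hypothetical and not from the source:

```python
import numpy as np

def one_hot_targets(labels, n_classes):
    """Encode integer class labels as 0/1 regression targets:
    a 1 in the nth position if the observation falls in class n,
    and 0 otherwise."""
    targets = np.zeros((len(labels), n_classes))
    targets[np.arange(len(labels)), labels] = 1.0
    return targets

# Three observations over N = 4 classes
print(one_hot_targets([2, 0, 3], 4))
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 0. 1.]]
```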

Backpropagation training often uses an objective function that encourages the weights to grow large in an attempt to drive outputs toward the hard targets 0 and 1 (±1 for tanh). Training against hard targets is a somewhat naïve approach and generalizes poorly. Different fractions of the data are learned at different times during training, and hard targets not only lead to weight saturation, making the samples that have yet to be learned harder and slower to learn, but also force the learner to overfit the samples it has already learned. Using softened targets of 0.1 and 0.9 is a less severe alternative (Rimer, 2002).
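A small sketch of why hard targets push weights toward saturation, assuming sigmoid output units (an assumption consistent with the 0/1 targets above): the net input a sigmoid needs in order to emit a value near 1 grows without bound, while the softened target 0.9 is reached at a modest, finite net input.

```python
import numpy as np

def logit(p):
    """Inverse sigmoid: the net input a sigmoid unit needs to output p."""
    return np.log(p / (1.0 - p))

# Net input required as the sigmoid output approaches the hard target 1.0
for target in [0.9, 0.99, 0.999, 0.9999]:
    print(f"target {target}: required net input {logit(target):.2f}")
# target 0.9:    required net input 2.20
# target 0.99:   required net input 4.60
# target 0.999:  required net input 6.91
# target 0.9999: required net input 9.21
```

Since the net input is a weighted sum of activations, driving outputs ever closer to 0 or 1 requires ever larger weights, which is the saturation effect described above.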

The dynamic target is a new enhancement supplied as an input to the objective function to improve classification. Instead of using the hard targets 0 and 1, it uses targets that change over time according to a predefined function, converging to the hard targets as training proceeds.
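The source does not specify the predefined function, so the linear schedule and the starting value 0.6 below are assumptions made purely for illustration; any monotone schedule that converges to the hard target would fit the description.

```python
def dynamic_target(hard_target, epoch, total_epochs, start=0.6):
    """Illustrative dynamic target: interpolate linearly from a soft
    starting value toward the hard target (0 or 1) as training proceeds.
    The linear schedule and start=0.6 are assumptions, not the source's
    method; the technique only requires some predefined function of time."""
    frac = min(epoch / total_epochs, 1.0)
    # Soft starting value mirrored around 0.5 for the two hard targets
    soft = start if hard_target == 1 else 1.0 - start
    return soft + frac * (hard_target - soft)

# Targets tighten toward the hard values 1 and 0 over 100 epochs
for epoch in [0, 25, 50, 75, 100]:
    print(epoch, dynamic_target(1, epoch, 100), dynamic_target(0, epoch, 100))
# 0   0.6  0.4
# 25  0.7  0.3
# 50  0.8  0.2
# 75  0.9  0.1
# 100 1.0  0.0
```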