The neuron is the basic element of a neural network. The key feature of a single neuron is that it has many inputs but only one output. From a mathematical point of view, a neuron is an element that realizes the function:
y = f( Σ wi xi )
where f() is the activation function, wi are the input weights, and xi are the neuron input values. The neuron sums all elements of the input vector multiplied by their weights, and the result is used as the argument of the activation function; this produces the neuron output value. In most applications the neuron inputs and weights are normalized. Geometrically, this corresponds to moving the input vector points onto the surface of an N-dimensional sphere of unit radius, where N is the size of the input vector. In the simplest case, a two-dimensional vector, normalization moves all input points onto the unit circle. Normalization can be written as:
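The weighted sum and activation described above can be sketched as follows. The sigmoid used here is only an illustrative, assumed choice of f(), since the text leaves the activation function unspecified:

```python
import math

def activation(s):
    # Sigmoid, an assumed example of a nonlinear activation function f().
    return 1.0 / (1.0 + math.exp(-s))

def neuron_output(weights, inputs):
    # Sum all input values multiplied by their weights, then pass the
    # result through the activation function.
    s = sum(w * x for w, x in zip(weights, inputs))
    return activation(s)

# Weighted sum is 0.5*1.0 + (-0.25)*2.0 = 0, and sigmoid(0) = 0.5.
print(neuron_output([0.5, -0.25], [1.0, 2.0]))  # → 0.5
```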
xi' = xi / sqrt( Σ xj^2 )
where xi is the coordinate to normalize and the xj are all coordinates of the vector. Applying normalization to either the input vectors or the input weights of the neuron improves its learning properties. Either a linear or a nonlinear function can be used as the activation function. For a linear neuron, the mathematical equation can be written as follows:
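A minimal sketch of this normalization (division by the Euclidean norm, as in the formula above):

```python
import math

def normalize(vector):
    # Divide each coordinate by the Euclidean norm, which moves the
    # point onto the unit sphere (the unit circle in two dimensions).
    norm = math.sqrt(sum(x * x for x in vector))
    return [x / norm for x in vector]

# The point (3, 4) has norm 5, so it maps to (0.6, 0.8) on the unit circle.
print(normalize([3.0, 4.0]))  # → [0.6, 0.8]
```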
y = Σ wi xi
This is one of the simplest neuron models, and it is only occasionally used in practice, because most phenomena in the surrounding world have nonlinear characteristics; biological neurons are one example. A neuron can also be biased, which means it has an additional input with a constant value. The weight of that input is modified during the learning process like the other neuron weights. Generally we assume the bias input is equal to one; in this case the neuron's mathematical equation can be written as follows:
y = f( w0 + Σ wi xi )
where f() is the activation function, wi are the input weights, xi are the neuron input values, and w0 is the weight of the bias. If we assume the bias input value is zero, we obtain the equation for a non-biased neuron. Now let us explain what this "bias" is for.
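A sketch of the biased neuron, again with a sigmoid as an assumed stand-in for f(). The bias is modeled as an extra input fixed at one with its own trainable weight w0; setting that input (or w0) to zero reproduces the non-biased neuron:

```python
import math

def biased_neuron(w0, weights, inputs, bias_input=1.0):
    # The bias contributes w0 * bias_input to the weighted sum;
    # bias_input is normally fixed at 1, and w0 is learned like any
    # other weight.
    s = w0 * bias_input + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid assumed as f()

# With w0 = 0 this reduces to the non-biased neuron; the weighted sum
# here is again 0, so the output is sigmoid(0) = 0.5.
print(biased_neuron(0.0, [0.5, -0.25], [1.0, 2.0]))  # → 0.5
# A positive bias shifts the weighted sum, raising the output above 0.5.
print(biased_neuron(1.0, [0.5, -0.25], [1.0, 2.0]))
```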