Neural networks
Artificial intelligence - what is it all about?
A neuron, or neural cell, is specialized in transmitting and modifying electric signals. Such cells are the building material of the human brain. For a computer scientist, a neuron is simply a data processing unit equipped with multiple inputs, called dendrites, and a single output, called the axon (or neurite). The functional scheme of a neuron is shown in the figure below.
The functioning of a neuron may be described as follows. The input signals xi are multiplied by the corresponding numbers wi, called synaptic weights, and then summed up. If the result reaches a given level, a signal appears on the output. As the weights differ from one another, the input signals are not treated equally – some carry a higher rank, others a lower one, and thus contribute more or less to the total signal.
x = x1 w1 + x2 w2 + x3 w3 + ... + xn wn
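The weighted sum and threshold described above can be sketched in a few lines of Python (the weights, inputs, and threshold value here are illustrative assumptions, not taken from the text):

```python
# Minimal sketch of the neuron described above: inputs x_i are multiplied
# by synaptic weights w_i and summed; the output fires only if the sum
# reaches a chosen threshold level.

def neuron(xs, ws, threshold=1.0):
    """Return 1 if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(xs, ws))
    return 1 if total >= threshold else 0

# The second input carries a higher weight, so it matters more:
print(neuron([1, 1, 0], [0.3, 0.8, 0.5]))  # sum = 1.1 >= 1.0, prints 1
print(neuron([1, 0, 1], [0.3, 0.8, 0.5]))  # sum = 0.8 <  1.0, prints 0
```

Note how the same neuron answers differently to two configurations with the same number of active inputs, purely because of the weights.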
A neuron possesses the ability to adapt, stemming from the fact that its synaptic weights can change. In this way the rank of the information arriving on a particular input can change as well. However, the appearance of the output signal depends not on the signal on any particular dendrite, but rather on the entire configuration of the input signals. Thus, some configurations stimulate the neuron while others do not.
A neuron equipped with an additional control input can be trained by an external teacher. The mechanism is quite simple. We put on the dendrites signals constituting a particular configuration. Suppose we want this configuration to stimulate the neuron. If it already does, we do nothing; if not, we put a signal on the control input in order to modify the synaptic weights, as schematically shown in the picture below.
Then we repeat the experiment, and again, if the neuron is not stimulated, we modify the weights. By means of an appropriate, convergent procedure, we continue until we obtain the desired result. We can then say that our neuron is trained to recognize that particular configuration, and we can try another one.
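The repeat-and-correct procedure above can be sketched using the classic perceptron learning rule as the "convergent procedure" (an assumption on our part; the text does not name a specific rule). Whenever the neuron gives a wrong answer, the correction step nudges the synaptic weights toward the desired response:

```python
# Sketch of teacher-driven training: present each input configuration,
# and whenever the neuron's answer is wrong, modify the weights
# (the perceptron learning rule, chosen here for illustration).

def train(patterns, n_inputs, rate=0.1, epochs=100):
    ws = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        errors = 0
        for xs, target in patterns:
            out = 1 if sum(x * w for x, w in zip(xs, ws)) + bias >= 0 else 0
            if out != target:
                errors += 1
                # The "control signal": shift each weight toward the target.
                for i, x in enumerate(xs):
                    ws[i] += rate * (target - out) * x
                bias += rate * (target - out)
        if errors == 0:  # the neuron now recognizes every pattern
            break
    return ws, bias

# Train the neuron to be stimulated only by the configuration (1, 1):
patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
ws, bias = train(patterns, 2)
```

For patterns that a single neuron can separate, this procedure is guaranteed to converge after finitely many corrections.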
Of course, the information capacity of a single neuron is not particularly high, so it cannot memorize too many patterns. In order to increase this capacity one has to join neurons together, thus creating a network resembling a part of the human brain.
In principle, one can imagine networks of different, even very complicated architectures, consisting of large numbers of neurons; a human brain, for example, contains about 10 billion neurons. However, it turns out that even small neural structures reveal surprising features: there is a theorem stating that a three-layer network can perform any logical operation, which makes such a network a good candidate for a neurocomputer.
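A small example of that claim: a single threshold neuron cannot compute the exclusive-or (XOR) operation, but a layered arrangement of the same neurons can. The weights below are hand-picked for illustration, not learned:

```python
# Sketch of a layered network of threshold neurons computing XOR,
# an operation no single such neuron can perform on its own.

def neuron(xs, ws, threshold):
    return 1 if sum(x * w for x, w in zip(xs, ws)) >= threshold else 0

def xor(a, b):
    # Hidden layer: one neuron fires on OR, another on AND.
    h_or = neuron([a, b], [1, 1], threshold=1)
    h_and = neuron([a, b], [1, 1], threshold=2)
    # Output layer: fires when OR is on but AND is off.
    return neuron([h_or, h_and], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # prints 0 only for (0,0) and (1,1)
```

By composing such layers, any logical operation can be built up in the same way.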
It should be emphasized that neurocomputers differ essentially from traditional digital machines; in the way they process information, they actually resemble living creatures. They do not follow given algorithms, but use the knowledge and experience gained during the training process. They do not give precise answers and they make mistakes, but they have intuition and predictive ability. Instead of solving problems numerically, they rather build models of reality by adjusting the synaptic weights of their neurons. Therefore they can spot hidden correlations in the data supplied.
See our product: N-Expert.