Artificial Neural Network
An artificial neural network is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase.

For example, a vision-guided car can be given no driving instructions in its software at the start. It "learns" how to drive and stay in the left-hand lane simply by watching and observing the driving habits of a human driver (or operator), who must train it to drive that way.

Background

The original inspiration for the term "artificial neural network" came from the examination of central nervous systems.
These networks are similar to biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned (see also connectionism).
Neural networks, or parts of them (such as artificial neurons), are used as components in larger systems that combine adaptive and non-adaptive elements. While this more general adaptive-systems approach is better suited to real-world problem solving, it has far less to do with the traditional connectionist models of artificial intelligence.

Models

The spikes travelling along the axon of the pre-synaptic neuron trigger the release of neurotransmitter substances at the synapse.
The neurotransmitters cause excitation or inhibition in the dendrite of the post-synaptic neuron.
The integration of the excitatory and inhibitory signals may produce spikes in the post-synaptic neuron.
The contribution of each signal depends on the strength of the synaptic connection.

The McCulloch-Pitts model

In the McCulloch-Pitts model, a neuron receives inputs x1, ..., xn through connections with synaptic weights w1, ..., wn and produces an output y: the neuron fires (y = 1) if the weighted sum w1*x1 + ... + wn*xn reaches a threshold, and stays silent (y = 0) otherwise. The biological quantities are translated as follows:
- spikes are interpreted as spike rates;
- synaptic strengths are translated into synaptic weights;
- excitation means a positive product between the incoming spike rate and the corresponding synaptic weight;
- inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight.

Learning

Learning in biological systems

Learning = learning by adaptation.
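The McCulloch-Pitts unit described above, combined with adaptation of the synaptic weights, can be sketched in a few lines of Python. The perceptron learning rule and the logical-AND example are illustrative assumptions; the original text does not specify a training procedure.

```python
# A McCulloch-Pitts-style threshold unit with a simple weight-adaptation
# rule (the perceptron rule -- an illustrative choice, not named in the text).

def neuron_output(weights, inputs, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def train(samples, n_inputs, rate=1, epochs=20):
    """Adapt the synaptic weights from labelled examples."""
    weights = [0] * n_inputs
    threshold = 0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron_output(weights, inputs, threshold)
            # Strengthen or weaken each synapse in proportion to its input.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            threshold -= rate * error
    return weights, threshold

# Learn the logical AND function from its truth table.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = train(samples, n_inputs=2)
print([neuron_output(weights, x, threshold) for x, _ in samples])  # [0, 0, 0, 1]
```

After a few passes over the examples the weights stop changing, which mirrors the idea of learning as adapting synaptic strengths until behaviour is correct.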
The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behavior.
At the neural level, learning happens by changing the synaptic strengths, eliminating some synapses, and building new ones.

Learning as optimisation

The objective of adapting the responses on the basis of the information received from the environment is to achieve a better state. For example, the animal wants to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy.
In other words, the objective of learning in biological organisms is to optimise the amount of available resources, or happiness, or, in general, to reach a state closer to the optimum.

Neural network tasks

- control
- approximation

These can be reformulated in general as function approximation tasks. Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.

Applications

Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
Data processing, including filtering, clustering, blind source separation and compression.
Robotics, including directing manipulators and computer numerical control.

Thanks a lot!
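Returning to the approximation task defined earlier: a minimal sketch of how a single adaptive unit can learn to reproduce a target function g(x). The target g(x) = 2x + 1, the learning rate, and the training points are all illustrative assumptions, not taken from the original.

```python
# Approximation task sketch: train a single linear unit y = w*x + b
# to reproduce values of a target function g(x) by gradient descent.

def g(x):
    return 2 * x + 1  # the function to approximate (assumed example)

# Training points sampled from g.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
w, b = 0.0, 0.0
rate = 0.1

for _ in range(2000):
    for x in xs:
        y = w * x + b          # the unit's output
        error = y - g(x)       # deviation from the target value
        w -= rate * error * x  # gradient step on the squared error
        b -= rate * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
print(round(w * 0.6 + b, 2))     # close to g(0.6) = 2.2
```

Because the unit ends up close to w = 2, b = 1, it approximates g for inputs it never saw during training, which is exactly the point of the approximation task.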