
Artificial Neural Networks (ANN) as a Classification Method

Arwa Alsaiari

Department of Mathematics and Statistics

April 22, 2020

OUTLINE

  • Introduction
  • The History of the ANN
  • Components of the ANN
  • How the ANN works
  • Conclusion


Introduction

Classification

Classification is the task of placing objects into categories based on their attributes.

Types of Classifiers

  • Support Vector Machine
  • Naïve Bayes
  • Logistic Regression
  • Artificial Neural Networks

  • ANNs are computing systems inspired by the biological neural networks that constitute the human brain.

  • The main goal of the ANN was to solve problems in the same way that a human brain does. However, over time, attention moved to performing specific tasks.

  • ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis, and even in activities traditionally considered reserved for humans, such as painting.

  • Neural networks can also be used for classification, prediction, clustering, association, pattern recognition, and other machine learning tasks.

History

Timeline

1943

In 1943, Warren McCulloch and Walter Pitts opened the subject with a paper on how neurons might work. To describe how neurons in the brain might function, they modeled a simple neural network using electrical circuits.[1]

1949

In 1949, Donald Hebb pointed out that neural pathways are strengthened each time they are used, which is similar to the way humans learn.[2]

1958

In 1958, Frank Rosenblatt created the "Perceptron" model, the first of its kind to perform pattern recognition; it consisted of only a single layer.[3]

1969

In 1969, Marvin Minsky and Seymour Papert identified fundamental limitations of the Perceptron model [4], which were later overcome by Paul Werbos using back propagation.[5]

1982

In 1982, John Hopfield presented a paper to the National Academy of Sciences. Previously, the connections between neurons ran in only one direction; his approach was to create more useful machines by using bidirectional links.[6]

In the same year, Reilly and Cooper used a "hybrid network" with multiple layers, each layer applying a different problem-solving strategy.[7]

Also in 1982, a joint US-Japan conference on Cooperative/Competitive Neural Networks was held. Japan announced a new Fifth Generation effort on neural networks, which left the US worried about being left behind. As a result, more funding became available, and with it more research in the field.[8]

1997

In 1997, Long Short-Term Memory (LSTM), a recurrent neural network framework, was proposed by Hochreiter and Schmidhuber.[9]

1998

In 1998, Yann LeCun published "Gradient-Based Learning Applied to Document Recognition".[10]

Present & Future

Today, ANN discussions are occurring everywhere, and the importance of ANNs is widely recognized. ANNs have taken their place as important mathematical and engineering tools. However, the most important advances in ANNs are almost certainly still to come; the large number and wide variety of applications of this technology are very encouraging.

MECHANICS

Components of the ANN

1. In general, an ANN is a collection of connected nodes called artificial neurons. The nodes are organized into multiple layers (input, hidden, and output), and each node in one layer is connected to all the nodes in the next layer by links.

2. The connections (links) are used to transmit a signal (a piece of information) from one node to others. Each link has a weight, which determines the strength of one node's influence on another.
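The components above can be sketched in code. The following is a minimal, illustrative Python sketch of signals passing along weighted links from one layer to the next; the layer sizes, weights, biases, and input values are made-up numbers, not values from the talk:

```python
import math

def sigmoid(x):
    """A common activation function; squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each node sums its weighted inputs plus a bias,
    then applies the activation function."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 input nodes -> 2 hidden nodes -> 1 output node
hidden_w = [[0.5, -0.4], [0.3, 0.8]]   # one weight per link
hidden_b = [0.1, -0.2]                 # one bias per hidden node
out_w    = [[1.0, -1.0]]
out_b    = [0.05]

x = [0.9, 0.1]                          # example input signal
h = layer(x, hidden_w, hidden_b)        # input layer -> hidden layer
y = layer(h, out_w, out_b)              # hidden layer -> output layer
print(y)
```

Each nested list in `hidden_w` holds the weights of the links feeding one hidden node, so the strength of every node-to-node influence is a single number.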

How the ANN works:
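The conclusion below refers to training a simple multilayer perceptron with forward and back propagation. As an illustrative sketch of that process (the 2-2-1 layout, XOR data, learning rate, and epoch count are assumptions chosen for this example, not values from the talk):

```python
import math, random

random.seed(0)
sig  = lambda x: 1.0 / (1.0 + math.exp(-x))
dsig = lambda y: y * (1.0 - y)          # sigmoid derivative, given its output

# One weight per link, plus one bias per node: 2 inputs -> 2 hidden -> 1 output.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1), random.uniform(-1, 1)]
b_o = 0.0
lr  = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

def forward(x):
    """Forward propagation: input layer -> hidden layer -> output node."""
    h = [sig(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_h, b_h)]
    y = sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = error()
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # Back propagation: push the output error back along the links.
        d_y = (y - t) * dsig(y)
        d_h = [d_y * w_o[j] * dsig(h[j]) for j in range(2)]
        # Gradient-descent updates for every weight and bias.
        for j in range(2):
            w_o[j] -= lr * d_y * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_y

print(f"squared error before: {err_before:.3f}, after: {error():.3f}")
```

With this setup the squared error typically falls close to zero, though convergence depends on the random initialization; the point is only the shape of the loop: forward pass, error, backward pass, weight update.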

Conclusion

  • Neural networks are a very broad topic: there are many types of neural networks and many methods for training them.

  • We have covered only a small part of this large body of material.

  • We have covered the basic structure of a neural network and explained how a simple multilayer perceptron is trained using forward and back propagation.

References

1. McCulloch, Warren S., and Walter Pitts. "A logical calculus of the ideas immanent in nervous activity." The bulletin of mathematical biophysics 5.4 (1943): 115-133.

2. Hebb, Donald Olding. The organization of behavior: a neuropsychological theory. J. Wiley; Chapman & Hall, 1949.

3. Rosenblatt, Frank. "The perceptron: a probabilistic model for information storage and organization in the brain." Psychological review 65.6 (1958): 386.

4. Minsky, Marvin, and Seymour A. Papert. Perceptrons: An introduction to computational geometry. MIT press, 2017.

5. Widrow, Bernard, and Michael A. Lehr. "Perceptrons, Adalines, and backpropagation." Arbib 4 (1995): 719-724.


6. Hopfield, John J. "Learning algorithms and probability distributions in feed-forward and feed-back networks." Proceedings of the national academy of sciences 84.23 (1987): 8429-8433.

7. Cooper, Leon N., Charles Elbaum, and Douglas L. Reilly. "Self organizing general pattern class separator and identifier." U.S. Patent No. 4,326,259. 20 Apr. 1982.

8. Amari, Shun-ichi, and Michael A. Arbib. "Competition and cooperation in neural nets." Springer Lecture Notes in Biomathematics 45 (1982).

9. Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.

10. LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.

Thank you!

Questions?
