
10

10 * 0.8

8

Single-layer Neural Network

Feed Forward Network

1

Hidden Layer(s)

input I(x)

output o(x)

Hidden Layer 1

Hidden Layer 2

Input(i)

Output(o)

Input(i)

Output(o)

Hidden Layer 1

Hidden Layer 2

Multi-layer Neural Network or Feed Forward Network

Feed Forward Network

Single-layer Neural Network

Hidden Layer 2

Hidden Layer 1

Input(i)

Output(o)

Hidden Layer 1

Hidden Layer 2

Input(i)

Output(o)

Pixels (pre-processed)

Activation Function

Feed Forward Neural Network

Multi-layer Neural Network or Feed Forward Neural Network

Single-layer Neural Network

Weights (w) * input (x) + bias (b), then Activation function

Hidden Layer 1

Hidden Layer 2

Input(i)

Output(o)

Hidden Layer 1

Hidden Layer 2

Input(i)

Output(o)
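The "weights * input + bias, then activation function" rule above can be sketched as a single feed-forward layer. The step activation, layer sizes, and all weight values here are illustrative assumptions, not numbers from the slides:

```python
def step(s):
    # step activation: fire (1) once the weighted sum reaches the threshold 1
    return 1 if s >= 1 else 0

def layer(inputs, weights, biases, activation=step):
    # one feed-forward layer: each neuron computes (w . x) + b, then the activation
    return [activation(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# two inputs -> hidden layer of two neurons -> one output neuron
hidden = layer([1, 0], [[0.6, 0.4], [0.2, 0.9]], [0.5, 0.1])
output = layer(hidden, [[0.7, 0.3]], [0.2])
```

Stacking more `layer` calls in this way gives the multi-layer (deep) network of the later slides.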

Error = Desired Output - Guess = 1 - 8 = -7

Image Classifier

1,1  1,2  1,3  1,4

2,1  2,2  2,3  2,4

3,1  3,2  3,3  3,4

4,1  4,2  4,3  4,4

Each grayscale pixel takes a score from 0-255

completely black is 0

completely white is 255

Neural Networks - Perceptron

Weather

Somewhat important = 30% or 0.3

Cold and Rainy = 1 else 0

Step function

Bias

Weighted Sum

Situation

For example, let's say it is a rainy Monday after a rather relaxed weekend

Bias is used to force an outcome, influencing the activation function and bringing more flexibility

{

1 if >= 1

0 if < 1

0.8 + 0.3 = 1.1

1.1 >= 1

Coffee it is !!

Very important = 50% or 0.5

Lot of work to do = 1 else 0

0.2

1 * 0.5 = 0.5

1 * 0.3 = 0.3

0 * 0.2 = 0

Also means

1 = coffee

0 = Tea

Can be a positive or a negative number

(0.5 + 0.3 + 0) = 0.8

Sleep Deprivation

matters = 20% or 0.2

High = 1 else 0
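The coffee-or-tea decision above can be written as a perceptron. The weights (work 0.5, weather 0.3, sleep 0.2), the bias 0.3, and the threshold of 1 are taken from the example itself:

```python
def perceptron(inputs, weights, bias):
    # weighted sum of the inputs, plus the bias
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    # step function: 1 (coffee) if the sum reaches the threshold 1, else 0 (tea)
    return 1 if s >= 1 else 0

# inputs: lot of work = 1, cold and rainy = 1, sleep deprivation = 0
decision = perceptron([1, 1, 0], [0.5, 0.3, 0.2], 0.3)
# weighted sum (0.5 + 0.3 + 0) = 0.8, plus bias 0.3, gives 1.1 -> coffee
```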

Output(o)

Weighted Sum

Weights(w)

Bias

Activation Function

Input(i)

99 %

Multi-layer Neural Network or Deep Neural Network

Single-layer Neural Network

Change the parameters/coefficients (weights + bias)

61 %

Input(i)

Output(o)

Hidden Layer 1

Hidden Layer 2

33 %

39 %

35 %

Reduced approximate model

output o(x)

4*4 Pixel

input i(x)

67 %

Backpropagate the information in a way that reduces the error

75 %

43 %

Hidden Layer(s)

55 %

65 %

10

New Error = 1 - 7 = -6

Learning Rate / Step size / Delta = (0.8 - 0.7) / 0.8 = 0.125 or 12.5 %

Trained Model

Change the parameters in a way to reduce the error

I(x) * 0.1

1

New Error = 1 - 1 = 0

Training Day

New Parameter (Weights + Bias + more) = 0.1

New Parameter = 0.1 = Model
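The step-size arithmetic on this slide can be checked directly, assuming the weight moved from 0.8 to 0.7 (so the guess 10 * 0.8 = 8 dropped to 10 * 0.7 = 7). Treating the rate as the relative change in the weight is the slide's convention here, not a general definition:

```python
old_weight, new_weight = 0.8, 0.7

# step size as the relative change in the weight
delta = (old_weight - new_weight) / old_weight
print(f"{delta:.3f}")  # 0.125, i.e. 12.5 %
```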

Important terms so far

output o(x)

4*4 Pixels

input i(x)

I(x) = Input

o = Output

w = Weights

b = Bias

Σ = Weighted Sum

F(A) = Activation Function

E = Error w/ Error Function

Bp = Backpropagation

Gradient Descent

Local Minimum

Cost function

others

Hidden Layer(s)

10

Reduced approximate model

Parameters

output

Input

Training Day

output o(x)

4*4 Pixels

input i(x)

Hidden Layer(s)

10

Weights (w) * input (x) + bias (b), then Activation function

Learning Begins

output o(x)

4*4 Pixels

input i(x)

Hidden Layer(s)

10

Error = Desired Output (ground truth) - Network's guess

= 1 - 7 = -6

Evaluate

output o(x)

4*4 Pixels

input i(x)

Hidden Layer(s)

10

Adjust Weights and Biases proportional to their contribution

Backpropagation

output o(x)

4*4 Pixel

input i(x)

Gradient Descent (cost function)

i(x) * (w(x) + bias (b))

Hidden Layer(s)

Last Layer

First Layer

10

output O(x)

input I(x)
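Gradient descent on a cost function can be sketched for the one-weight toy model of these slides (guess = input * weight, with input 10 and desired output 1). The squared-error cost and the learning rate 0.001 are illustrative assumptions:

```python
x, target = 10, 1   # input and desired output (ground truth)
w = 0.8             # initial weight: first guess is 10 * 0.8 = 8
lr = 0.001          # learning rate (step size)

for _ in range(100):
    guess = x * w
    # cost = (guess - target)^2; its gradient w.r.t. w is 2 * (guess - target) * x
    gradient = 2 * (guess - target) * x
    w -= lr * gradient  # step downhill toward the local minimum

# w converges to 0.1, where guess = 10 * 0.1 = 1 and the error is 0
```

Each step moves the weight against the gradient of the cost, which is exactly the "adjust weights proportional to their contribution" rule above.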

Hidden Layer(s)

Error = Desired Output (ground truth) - Network's guess

Backpropagation

i(x) * (w(x) + bias (b))

1 * (6)

output O(x)

Hidden Layer(s)

input I(x)

Error = Desired Output (ground truth) - Network's guess

Iteration with Backpropagation on error and learning step 1

output o(x)

4*4 Pixel

input i(x)

Hidden Layer(s)

10

Error = Desired Output (ground truth) - Network's guess

= 1 - 6 = -5

Iteration 2 or epoch 2 with learning step 1

i(x) * (w(x) + bias (b))

1 * (5)

output o(x)

4*4 Pixel

input i(x)

Hidden Layer(s)

10

Error = Desired Output (ground truth) - Network's guess

= 1 - 5 = -4

Iteration 6 or epoch 6 with learning step 1

i(x) * (w(x) + bias (b))

1 * (1)

output o(x)

4*4 Pixel

input i(x)

Hidden Layer(s)

10

Error = Desired Output (ground truth) - Network's guess

= 1 - 1 = 0 !
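The epoch-by-epoch walk above (guess 8, then 7, then 6, down to 1) can be sketched as a loop. The fixed 0.1 weight decrement per epoch mirrors the "learning step 1" of the slides; counting every decrement from the starting weight 0.8 gives seven steps, whereas the slides number the final pass "epoch 6":

```python
x, target = 10, 1   # input and desired output (ground truth)
w = 0.8             # initial weight: first guess is 10 * 0.8 = 8

epoch = 0
while True:
    guess = x * w
    error = target - guess      # error = ground truth - network's guess
    if error == 0:
        break                   # trained: the guess matches the ground truth
    epoch += 1
    w = round(w - 0.1, 1)       # backpropagation step: nudge the weight down

# ends with w = 0.1, guess = 1, error = 0 -- the trained model
```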

10 * 0.8

8

10

1

input i(x)

output o(x)

Hidden Layer(s)


Hidden Layer

Output(o)

Input(i)

input i(x)

output o(x)

Hidden Layer(s)

10 * 0.1

1

10

1

10 * 0.7

7

10

1

255 200 150 255

255 225 150 255

255 245 150 255

255 160 130 255
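The 4×4 grid of grayscale scores above becomes the network's 16 input values. Scaling each 0-255 score down into [0, 1] before feeding the network is a common pre-processing convention, assumed here rather than stated in the slides:

```python
# the 4x4 grayscale scores from the slide (0 = black, 255 = white)
pixels = [
    [255, 200, 150, 255],
    [255, 225, 150, 255],
    [255, 245, 150, 255],
    [255, 160, 130, 255],
]

# flatten row by row and normalize each score into [0, 1]
inputs = [p / 255 for row in pixels for p in row]
```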

input i(x)

output o(x)

Hidden Layer(s)

Hidden Layer

Output(o)

Input(i)
