Feed Forward Network
Single layer Neural Network: Input(i) -> Output(o)
Multi layer Neural Network, or Feed Forward Network: Input(i) -> Hidden Layer 1 -> Hidden Layer 2 -> Output(o)
Example: input 10 * weight 0.8 = output 8; desired output = 1
Feed Forward Neural Network
Input(i): pixels (pre-processed)
Each neuron computes weights (w) * input (x) + bias (b), followed by an Activation Function.
Single layer Neural Network: Input(i) -> Output(o)
Multi layer Neural Network, or Feed Forward Neural Network: Input(i) -> Hidden Layer 1 -> Hidden Layer 2 -> Output(o)
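A minimal sketch of that per-neuron computation (weights * input + bias, then an activation function). The input 10 and weight 0.8 are the deck's example numbers; the zero bias and the sigmoid are assumed illustrative choices, not something the slides specify.

# One feed-forward neuron: weighted sum of the input plus bias, then an activation.
# Input 10 and weight 0.8 come from the deck's example; bias 0 and sigmoid are assumed.
import math

def sigmoid(z):
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b, activation=sigmoid):
    """weights (w) * input (x) + bias (b), passed through an activation function."""
    weighted_sum = w * x + b
    return activation(weighted_sum)

print(neuron(10, 0.8, 0.0, activation=lambda z: z))  # identity activation: 10 * 0.8 + 0 = 8.0
print(neuron(10, 0.8, 0.0))                          # about 0.9997 once the sigmoid squashes the sum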
Error = Desired output - Guess = 1 - 8 = -7
Image Classifier
4*4 pixel grid, positions indexed (row, column):
(1,1) (1,2) (1,3) (1,4)
(2,1) (2,2) (2,3) (2,4)
(3,1) (3,2) (3,3) (3,4)
(4,1) (4,2) (4,3) (4,4)
Each grayscale pixel takes a value from 0 to 255: complete black is 0 and complete white is 255.
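As a small sketch of that 0-255 scale, the snippet below flattens the deck's 4*4 grayscale grid into a 16-element input vector and rescales it to 0-1. Dividing by 255 is an assumed normalization choice, not something the slides specify.

# Flatten a 4*4 grayscale image into a 16-element input vector and rescale each
# pixel from the 0-255 range to 0-1. The pixel values are the deck's own 4*4 grid.
image = [
    [255, 200, 150, 255],
    [255, 225, 150, 255],
    [255, 245, 150, 255],
    [255, 160, 130, 255],
]

# Row-major flattening: pixel (row, col) becomes index (row - 1) * 4 + (col - 1).
flat = [pixel for row in image for pixel in row]

# After dividing by 255, complete black stays 0.0 and complete white becomes 1.0.
inputs = [pixel / 255 for pixel in flat]

print(len(inputs))   # 16
print(inputs[:4])    # first row, rescaled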
Neural Networks - Perceptron
Inputs (i), each coded as 1 or 0:
Situation: Cold and Rainy = 1, else 0
Situation: Lot of work to do = 1, else 0
Sleep Deprivation: High = 1, else 0
Weights (w): how important each input is, e.g. somewhat important = 30 % or 0.3
The perceptron computes Input(i) -> Weights(w) -> Weighted Sum -> Activation Function -> Output(o).
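A minimal perceptron sketch of this slide: three binary inputs, one weight per input, a weighted sum, and a step activation. Only the 0.3 weight ("somewhat important") comes from the slide; the other two weights, the bias, and the 0.5 threshold are hypothetical values chosen for illustration.

# Perceptron sketch: binary inputs, per-input weights, weighted sum, step activation.
# Only the 0.3 weight is from the slide; the rest are hypothetical illustration values.
def perceptron(inputs, weights, bias, threshold=0.5):
    """Fire (return 1) if the weighted sum of inputs plus bias clears the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if weighted_sum >= threshold else 0

# Inputs, each coded as 1 or 0: cold_and_rainy, lot_of_work_to_do, sleep_deprivation_high
inputs = [1, 0, 1]

# Weight 0.3 = "somewhat important" (from the slide); 0.6 and 0.4 are assumed.
weights = [0.3, 0.6, 0.4]
bias = 0.0

print(perceptron(inputs, weights, bias))  # 1: weighted sum 0.7 clears the 0.5 threshold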
Multi layer Neural Network, or Deep Neural Network
Back propagate the information in a way that reduces the error: change the parameters/coefficients (weights + bias) to arrive at a reduced approximate model.
Diagram: input i(x) (4*4 Pixel) -> Hidden Layer 1 -> Hidden Layer 2 -> output o(x), with example neuron values of 33 %, 35 %, 39 %, 43 %, 55 %, 61 %, 65 %, 67 %, 75 %, and 99 %.
Change the parameters in a way that reduces the error.
With the weight changed to 0.7: guess = 10 * 0.7 = 7, so New Error = 1 - 7 = -6.
Learning Rate / Step size / Delta = (0.8 - 0.7) / 0.8 = 0.125, or 12.5 %.
With the New Parameter (Weights + Bias + more) = 0.1: guess = I(x) * 0.1 = 10 * 0.1 = 1, so New Error = 1 - 1 = 0.
New Parameter = 0.1 = Trained Model.
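The same worked example as a short sketch, using only the deck's numbers (input 10, desired output 1, candidate weights 0.8, 0.7, 0.1): compute the guess and the error for each weight, plus the 12.5 % delta from the slide.

# Reproduce the worked example: guess = weight * input, error = desired output - guess.
x = 10          # input I(x)
desired = 1     # desired output

for weight in (0.8, 0.7, 0.1):
    guess = weight * x
    error = desired - guess
    print(f"weight={weight}: guess={guess:g}, error={error:g}")

# weight=0.8: guess=8, error=-7
# weight=0.7: guess=7, error=-6   (New Error = 1 - 7 = -6)
# weight=0.1: guess=1, error=0    (New Error = 1 - 1 = 0 -> trained model)

# Learning Rate / Step size / Delta from the slide: (0.8 - 0.7) / 0.8 = 0.125, i.e. 12.5 %.
delta = (0.8 - 0.7) / 0.8
print(round(delta, 3))  # 0.125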
Important terms so far
Ix = Input
o = Output
w = Weights
b = Bias
Weights (w) * input (x) + bias (b) = Weighted Sum
F(A) = Activation Function
E = Error, with an Error Function
Bp = Back Propagation
Gradient Descent, Local Minimum, Cost Function, and others
The trained Parameters give a reduced approximate model that maps Input to output.
Each layer computes weights (w) * input (x) + bias (b), followed by an Activation Function.
Gradient Descent (cost function): the error signal is carried from the Last Layer back to the First Layer.
Diagram: input i(x) (4*4 Pixels) -> Hidden Layer(s) -> output o(x) (10)
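A minimal gradient descent sketch for the single-weight model, assuming a squared-error cost (desired - w * x) ** 2, which the slides do not spell out: the gradient with respect to the weight is computed analytically and the weight is nudged downhill. The 0.004 learning rate and the step count are assumed values.

# Gradient descent on the single-weight model guess = w * x with cost C(w) = (desired - w*x)**2.
# The gradient is dC/dw = -2 * x * (desired - w * x). Input 10, desired output 1, and the
# starting weight 0.8 come from the slides; learning rate and iteration count are assumed.
x = 10
desired = 1
w = 0.8
learning_rate = 0.004

for step in range(10):
    guess = w * x
    error = desired - guess              # E = desired output - guess
    gradient = -2 * x * error            # slope of the cost with respect to w
    w = w - learning_rate * gradient     # move downhill on the cost curve
    print(f"step {step + 1}: w={w:.4f}, error={error:.4f}")

# w settles near 0.1, the trained parameter from the worked example.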
Each epoch repeats the same pass: input i(x) (4*4 Pixel) -> Hidden Layer(s) -> output o(x) (10).
Iteration 1 or epoch 1 with learning step 1: error = 1 - 7 = -6
Iteration 2 or epoch 2 with learning step 1: error = 1 - 6 = -5
Iteration 6 or epoch 6 with learning step 1: error = 1 - 2 = -1
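A sketch of these epochs, assuming the learning step simply moves the guess one unit toward the desired output per epoch, which reproduces the error values on the iteration slides; the stop-at-zero rule is an assumption for illustration.

# Epoch-by-epoch sketch with learning step 1: the guess starts at 8 (10 * 0.8) and moves
# toward the desired output 1 by one unit per epoch, so the error goes -7, -6, -5, ...
desired = 1
guess = 8          # initial guess: 10 * 0.8
learning_step = 1

epoch = 0
error = desired - guess
while error != 0:
    epoch += 1
    guess -= learning_step          # move the guess one step toward the target
    error = desired - guess
    print(f"epoch {epoch} (learning step {learning_step}): guess={guess}, error={error}")

# epoch 1: guess=7, error=-6
# epoch 2: guess=6, error=-5
# ...
# epoch 7: guess=1, error=0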
Diagram (repeated): Input(i) -> Hidden Layer(s) -> Output(o)
With input 10 and desired output 1:
10 * 0.8 = 8
10 * 0.7 = 7
10 * 0.1 = 1
Example 4*4 grayscale pixel values:
255 200 150 255
255 225 150 255
255 245 150 255
255 160 130 255