Face Recognition using linear methods (PCA & LDA) and SVM

by Elena Abril, on 23 January 2014


Transcript of Face Recognition using linear methods (PCA & LDA) and SVM

Baseline Results
PCA + LDA + SVM
Conclusions
Image datasets
ORL
400 images of 40 subjects
4 illumination conditions and 6 poses

Yale
165 images of 15 individuals
3 different illumination conditions and 8 poses

YaleB
16128 images of 38 individuals
64 different illumination conditions and 9 poses

Motivation
Manual feature extraction using graphic tablets
Face Recognition using linear methods (PCA, LDA) and SVM
Yale Face database B
PCA and LDA are dimensionality-reduction methods that achieve good results at a low computational cost.

SVMs define separating hyperplanes between classes, aiming at the minimum possible classification error.

Applying these methods to face recognition, a good system can be built without excessive computational requirements.
Elena Abril Medina
Miguel Ángel Fernández Torres

Applications of Signal Processing - 15th January 2014
2014
PCA + SVM
1964
Face recognition system to cut down on crime and car accidents
System Proposed
PCA (Principal Component Analysis)
LDA (Linear Discriminant Analysis)
SVMs (Support Vector Machines)
Cross-validation for selecting optimal parameters.
Linear and RBF kernel functions.
LibSVM
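The cross-validation step over linear and RBF kernels can be sketched with scikit-learn's grid search in place of the LibSVM command-line tools (the toy data and the parameter grid below are assumptions for illustration, not the project's actual settings):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for projected face features: 2 well-separated classes
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
y = np.array([0] * 20 + [1] * 20)

# Cross-validate over both kernel families mentioned in the slides
param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
]
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
best_kernel = search.best_params_["kernel"]
```

The winning kernel and its cross-validated accuracy (`search.best_score_`) would then decide which model is retrained on the full training split.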
Unsupervised method. Maximizes the variance of the projected data.
Eigenvectors - Eigenfaces
PhD tool (Vitomir Struc)
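The slides use the PhD tool for this step; a minimal equivalent sketch with scikit-learn's PCA, on hypothetical vectorized images, looks like this (image size and component count are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy stand-in for vectorized face images: 50 "images" of 32x32 = 1024 pixels
X = rng.normal(size=(50, 1024))

# Keep the leading principal components; each component, reshaped back
# to 32x32, is one "eigenface"
pca = PCA(n_components=20)
Z = pca.fit_transform(X)                     # projected features, (50, 20)
eigenfaces = pca.components_.reshape(20, 32, 32)
```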
Supervised method. Data labels in the training set are known.
Maximizes the distance between classes and minimizes the intraclass distance.
PhD tool (Vitomir Struc)
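Again the slides rely on the PhD tool; a hedged scikit-learn sketch of the same idea, on synthetic labeled features, could be (class count and dimensions are assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy labeled features: 3 subjects, 20 samples each, 50 dimensions
X = np.vstack([rng.normal(c, 1, (20, 50)) for c in range(3)])
y = np.repeat(np.arange(3), 20)

# LDA projects onto at most (n_classes - 1) discriminant directions,
# using the labels to separate the classes
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)
```

In the pipeline of the title, PCA would run first to reduce dimensionality, and LDA would then operate on the PCA-projected features.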
ORL

Yale Face database
Model 1: 30% train, 70% test - Model 2: 50% train, 50% test
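The two evaluation models can be reproduced with a stratified split, so every subject appears in both partitions (the data shapes below are illustrative, not the real datasets):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))        # toy features
y = np.repeat(np.arange(10), 10)      # 10 subjects, 10 images each

# Model 1: 30% train, 70% test; stratify keeps per-subject proportions
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.3, stratify=y, random_state=0)
# Model 2 would use train_size=0.5 instead
```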
[Eigenface visualizations for the ORL, Yale B, and Yale datasets]
Baseline:
Good performance.
High computational cost.
RBF kernel works better than the linear one.
Using a large training set may overfit the model.

PCA + SVM:
Similar or worse performance than the baseline.
Lower computational cost.
RBF kernel works better than the linear one.

PCA + LDA + SVM:
Best performance of the three methods.
Lower computational cost.
Linear kernel works better than the RBF one.
Applications
Unlock devices.
Security check for payments, online forms...
Safety and health applications
Smart home
Facial feature detection.
Tracking.
Body recognition.
Future work
References
PhD Tool (2012, Vitomir Struc)

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

2000 Mexican Elections: http://www.thefreelibrary.com/Mexican+Government+Adopts+FaceIt+Face+Recognition+Technology+to...-a062019954

2014 NYPD: http://www.elconfidencial.com/tecnologia/2014-01-03/la-policia-de-nueva-york-ensaya-con-un-coche-patrulla-capaz-de-reconocer-caras_71983/

Davis, M.; T. Ellis (August 1964). "The RAND Tablet: A Man-Machine Graphical Communication Device". ARPA. Retrieved 24 March 2011.

Datasets: http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html
What happens if a new face is passed to the SVM?
LibSVM may produce 3 outputs:
Predicted label
Accuracy
Probability estimates (-b 1)
Each row contains k values indicating the probability that the sample belongs to each of the k classes.
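LibSVM's `-b 1` flag corresponds to `probability=True` in scikit-learn's SVC, which fits Platt-scaled class probabilities on top of the decision function. A minimal sketch on toy two-class data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)

# probability=True mirrors LibSVM's "-b 1": per-class probability estimates
clf = SVC(kernel="linear", probability=True).fit(X, y)
proba = clf.predict_proba(X[:3])   # one row per sample, k columns (k classes)
```

A genuinely unseen face would still be assigned to one of the known classes, but low, evenly spread probabilities across the k columns can be used as a rejection signal.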
How does it work?
LDA
PCA