Transcript of FYP
Hafiz Saad Masood 2009089
Majid Parvez Qureshi 2009130
M. Bilal Shah 2009162
Salman Ashraf 2009242
Advisor: Dr. Zahid Haleem

GESTURE RECOGNITION SYSTEM FOR IMPAIRED PERSONS

LAYOUT OF PRESENTATION
Problem Statement, Goal, Literature Overview, Project Methodology, Resources Required, Task Breakup, Project Timeline, Bibliography

GOAL
To design and implement a system capable of recognizing the gestures of sign language through Kinect. The recognized gestures will be translated into text, and finally the text will be converted into spoken language.

LITERATURE OVERVIEW
It has been proposed in the literature that covariance matching can serve as a robust and computationally feasible approach to action recognition. This approach, which involves computing the covariance matrices of feature vectors that represent an action, can potentially be useful in our 2-D hand gesture recognition problem as well.
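To make covariance matching concrete, the sketch below builds a covariance descriptor from per-frame feature vectors and matches a query gesture against a small dictionary by nearest neighbour. This is a minimal illustration, not the cited authors' exact pipeline: the feature streams are synthetic, and the log-Euclidean distance is one common choice for comparing covariance matrices.

```python
import numpy as np

def covariance_descriptor(features: np.ndarray) -> np.ndarray:
    """Covariance matrix of a (frames x dims) array of per-frame features."""
    return np.cov(features, rowvar=False)

def covariance_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Log-Euclidean distance between two symmetric positive-definite matrices."""
    def logm_spd(c):
        w, v = np.linalg.eigh(c)  # eigendecomposition of an SPD matrix
        return (v * np.log(np.maximum(w, 1e-12))) @ v.T
    return float(np.linalg.norm(logm_spd(c1) - logm_spd(c2), ord="fro"))

def classify(query_features: np.ndarray, dictionary: dict) -> str:
    """Nearest-neighbour label from a dictionary {label: covariance descriptor}."""
    q = covariance_descriptor(query_features)
    return min(dictionary, key=lambda label: covariance_distance(q, dictionary[label]))
```

A gesture dictionary would hold one descriptor per sign; recognition then reduces to a nearest-descriptor lookup.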
In addition to covariance matching, another approach has been proposed in the literature by Jmaa and Mahdi. Instead of building a dictionary of covariance matrices, this approach is based on analyzing three primary features extracted from an image: the locations of the fingers, the heights of the fingers, and the distances between each pair of fingers.
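As a concrete sketch of those three features, assume an earlier image-processing stage has already located the fingertip pixels and the palm base line (the detection itself is not shown; all names here are illustrative, not the authors' code):

```python
import math
from itertools import combinations

def finger_features(fingertips, palm_base_y):
    """Sketch of Jmaa-and-Mahdi-style features for one hand image.

    fingertips  -- list of (x, y) fingertip pixels, image origin at top-left
    palm_base_y -- y coordinate of the palm base line
    """
    locations = sorted(x for x, _ in fingertips)        # finger positions, left to right
    heights = [palm_base_y - y for _, y in fingertips]  # taller finger => larger height
    distances = [math.dist(a, b) for a, b in combinations(fingertips, 2)]
    return {"locations": locations, "heights": heights, "distances": distances}
```

The resulting dictionary of numbers can then be compared against stored templates to identify the gesture.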
In addition to the Jmaa and Mahdi method, there are numerous applications that use polygon approximation to detect convexity points for hand-digit recognition.

PROJECT METHODOLOGY
- Analysis of gestures in sign language: this analysis will provide guidelines for developing an algorithm.
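The polygon approximation behind such convexity-point methods is typically the classic Ramer-Douglas-Peucker simplification (OpenCV's `approxPolyDP`, for example, implements it). A pure-Python sketch, with the contour points and tolerance chosen only for illustration:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0.0:  # degenerate segment: fall back to point distance
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / norm

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: simplify a polyline, keeping dominant vertices."""
    if len(points) < 3:
        return list(points)
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:  # farthest point is significant: split and recurse
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]  # whole span is nearly straight
```

Run on a noisy hand contour, this keeps only the dominant vertices, from which convexity points (e.g. fingertips) can then be detected.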
- Capture images through Kinect and extract the useful information, e.g. hand gestures.
- Train the computer to recognize the gesture present in the image obtained from Kinect.
- Translate the extracted pattern to text.
- Convert the text to speech using an open-source framework.

RESOURCES REQUIRED
- Microsoft Kinect device
- PC with good processing power
- 3D graphics card
- Internet connection

TASK BREAKUP
- Extraction of images from the MS Kinect's video stream.
- Extraction of useful information from the image, ignoring unnecessary details.
- Recognition of the gesture present in the selected frame using the extracted information.
- Comprehension of the gesture, i.e. understanding its meaning.
- Translation of its meaning into text: a word or a sentence.
- Generation of speech from the text.

PROJECT TIMELINE

BIBLIOGRAPHY
- K. Guo, P. Ishwar, and J. Konrad, "Action Recognition in Video by Covariance Matching of Silhouette Tunnels", Proc. of the 22nd Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2009), 2009.
- O. Tuzel, F. Porikli, and P. Meer, "Region Covariance: A Fast Descriptor for Detection and Classification", Proc. ECCV (2), 2006, pp. 589-600.