Demo
Helping deaf and hard-of-hearing people.
What is our approach?
Ema'a organization
Baby Phone
MyEarDroid
Shazam
Shazam-style fingerprinting (diagram):
Original audio signal → fingerprint extraction → extracted fingerprints + metadata (database)
Captured signal → fingerprint extraction → fingerprint matching → matched metadata
Our workflow (diagram): collect data → prepare data → train model → model selection → evaluation
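A minimal sketch of the spectrogram-peak fingerprinting idea behind the diagram above; the peak-picking scheme, hash layout, and parameter values are assumptions for illustration, not taken from the slides:

```python
import numpy as np
from scipy import signal

def fingerprint(audio, sr, fan_out=5):
    """Turn an audio clip into a set of (hash, time-offset) fingerprints.

    Assumption: Shazam-style hashes built from pairs of spectrogram peaks;
    FFT size and fan_out are illustrative choices.
    """
    f, t, spec = signal.spectrogram(audio, fs=sr, nperseg=1024, noverlap=512)
    log_spec = 10 * np.log10(spec + 1e-10)

    # Crude peak picking: keep the strongest frequency bin in each time frame.
    peak_bins = log_spec.argmax(axis=0)
    peaks = list(zip(peak_bins, range(len(t))))

    # Pair each peak with the next few peaks: hash = (f1, f2, dt).
    prints = []
    for i, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[i + 1:i + 1 + fan_out]:
            prints.append(((int(f1), int(f2), int(t2 - t1)), t1))
    return prints

def match(db, captured_prints):
    """Count aligned hash hits against a database {hash: [(track_id, t), ...]}."""
    votes = {}
    for h, t_cap in captured_prints:
        for track_id, t_db in db.get(h, []):
            key = (track_id, t_db - t_cap)          # consistent time offset
            votes[key] = votes.get(key, 0) + 1
    if not votes:
        return None
    (track_id, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return track_id
```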
Sound-direction pipeline (diagram):
FFT → N envelope time-domain segments → covariance matrix → coherence → number of sound sources → distance from the audio source → calculate time difference → sound source direction (azimuth angle)
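The slides do not spell out the direction-finding math, so the sketch below shows one common way to get the azimuth angle from the time difference between two microphones (GCC-PHAT cross-correlation); the microphone spacing `mic_distance` is an assumed hardware parameter:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time delay of `sig` relative to `ref` via GCC-PHAT (FFT-based)."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)

def azimuth_angle(mic_a, mic_b, fs, mic_distance=0.1):
    """Azimuth (degrees) of the source from the time difference of arrival.

    `mic_distance` (meters) is an assumed parameter, not taken from the slides.
    """
    max_tau = mic_distance / SPEED_OF_SOUND
    tau = gcc_phat(mic_a, mic_b, fs, max_tau=max_tau)
    # tau = d * sin(theta) / c  =>  theta = arcsin(c * tau / d)
    sin_theta = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```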
System overview (diagram):
Captured sound → feature extraction → features / fingerprints → model → match / not match → sound type result + sound direction
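A minimal sketch of how one captured clip could flow through this overview at inference time, assuming a classifier already trained and saved as `sound_model.joblib` (a hypothetical filename) and mean MFCCs as a stand-in for the full feature set:

```python
import numpy as np
import librosa
import joblib

def classify_clip(wav_path, model_path="sound_model.joblib"):
    """Predict the sound type of one captured clip (match / not match + class)."""
    audio, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40)
    features = mfcc.mean(axis=1).reshape(1, -1)    # one feature row per clip

    model = joblib.load(model_path)                # hypothetical saved model
    probs = model.predict_proba(features)[0]
    best = int(np.argmax(probs))
    if probs[best] < 0.5:                          # assumed confidence threshold
        return "not match", None
    return "match", model.classes_[best]
```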
• Languages: Java + Python + PHP
• Libraries: SciPy + Librosa + NumPy + Scikit-learn
• Frameworks: Laravel + Android Studio + Jupyter
• Using: GitHub + Heroku online server
UrbanSound8K dataset
8,732 audio clips with the ".wav" extension
We added new audio clips to the urban dataset
Divided all sounds into 10 folds
Our final classes (a fold-loading sketch follows this list):
Air-conditioner
Jackhammer
Baby Cry
Car-horn
Siren
Door Squeak
Children-playing
Street-music
Water Tap
Dog-bark
Glass-broken
Footsteps
Engine-idling
Tones
Knock Knock
Gun-shot
Scream
People Talk
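A minimal sketch of loading the dataset metadata and splitting it by fold, assuming the standard UrbanSound8K layout (`UrbanSound8K.csv` with `slice_file_name`, `fold`, and `class` columns); the directory path is illustrative:

```python
import pandas as pd
from pathlib import Path

DATA_DIR = Path("UrbanSound8K")                    # illustrative path
meta = pd.read_csv(DATA_DIR / "metadata" / "UrbanSound8K.csv")

def fold_split(meta, test_fold):
    """Use one of the 10 predefined folds as the test set, the rest for training."""
    train = meta[meta["fold"] != test_fold]
    test = meta[meta["fold"] == test_fold]
    train_files = [DATA_DIR / "audio" / f"fold{r.fold}" / r.slice_file_name
                   for r in train.itertuples()]
    test_files = [DATA_DIR / "audio" / f"fold{r.fold}" / r.slice_file_name
                  for r in test.itertuples()]
    return (train_files, train["class"].tolist(),
            test_files, test["class"].tolist())

# Example: hold out fold 10, train on folds 1-9.
train_files, train_labels, test_files, test_labels = fold_split(meta, test_fold=10)
```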
A. Signal pre-processing
e.g., MFCC
B. Signal projection
e.g., Fourier Transform
C. Feature extraction
e.g., Mel-frequency cepstral coefficients (MFCC), Chroma, Mel spectrogram, spectral contrast, Tonnetz (see the sketch below)
D. Feature optimization
e.g., normalization
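A minimal sketch of steps C and D with Librosa, computing the five listed feature families per clip and then normalizing; averaging each feature over time and the choice of scaler are assumptions, not specified in the slides:

```python
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler

def extract_features(wav_path):
    """One feature vector per clip: MFCC, chroma, Mel spectrogram,
    spectral contrast and Tonnetz, each averaged over time (an assumption)."""
    audio, sr = librosa.load(wav_path, sr=None)
    stft = np.abs(librosa.stft(audio))

    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40).mean(axis=1)
    chroma = librosa.feature.chroma_stft(S=stft, sr=sr).mean(axis=1)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr).mean(axis=1)
    contrast = librosa.feature.spectral_contrast(S=stft, sr=sr).mean(axis=1)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(audio),
                                      sr=sr).mean(axis=1)

    return np.concatenate([mfcc, chroma, mel, contrast, tonnetz])

# Step D (feature optimization): normalize the stacked feature matrix, e.g.
#   X = np.vstack([extract_features(p) for p in train_files])
#   X = StandardScaler().fit_transform(X)
```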
Algorithms we used to create our training model (a comparison sketch follows this list):
1- Multi-Layer Perceptron (MLP)
2- Logistic Regression (LR)
3- K-Nearest Neighbors (KNN)
4- Decision Tree (DT)
5- Support Vector Machine (SVM)
6- Random Forest (RF)
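A minimal sketch of training and comparing the six listed algorithms with scikit-learn; the hyperparameters are defaults or illustrative choices, and `X`, `y` are assumed to be the normalized feature matrix and class labels from the previous step:

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

models = {
    "MLP": MLPClassifier(max_iter=500),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=200),
}

# Model selection + evaluation: mean cross-validated accuracy per algorithm.
def compare_models(X, y, cv=10):
    scores = {name: cross_val_score(model, X, y, cv=cv).mean()
              for name, model in models.items()}
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```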
Use Microsoft HoloLens with augmented reality
Add speech-to-text
Use wrist bracelets that give a slight shake (vibration) to guide the user toward the sound direction