ESUM
Brief presentation on the ESUM project by ETH Zurich
by Ashris
17 August 2015


Transcript of ESUM

ESUM
Each iteration of a shoot generates 14 synced videos (7 stereoscopic pairs), which are then stitched into a 360-degree video viewable in any VR player.
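As a sanity check before stitching, a minimal Python/OpenCV sketch of grouping the 14 clips into stereoscopic pairs and verifying that frame counts and frame rates match; the file-naming pattern is a hypothetical assumption, not the actual rig output.

```python
import cv2

# Hypothetical naming: cam01_left.mp4 / cam01_right.mp4 ... cam07_right.mp4
PAIRS = [(f"cam{i:02d}_left.mp4", f"cam{i:02d}_right.mp4") for i in range(1, 8)]

def clip_info(path):
    """Return (frame_count, fps) for a video file."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Cannot open {path}")
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    return frames, fps

for left, right in PAIRS:
    lf, lfps = clip_info(left)
    rf, rfps = clip_info(right)
    status = "OK" if lf == rf and abs(lfps - rfps) < 0.01 else "MISMATCH"
    print(f"{left} / {right}: {lf} vs {rf} frames, {lfps:.2f} vs {rfps:.2f} fps -> {status}")
```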
Auto Face Blurring Algorithm
Two approaches were tried for implementing face blurring: the MATLAB Computer Vision Toolbox and OpenCV Haar cascade training for face detection.

While MATLAB can only detect frontal faces, OpenCV also provides cascades for profile faces.
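A minimal sketch of the OpenCV approach, assuming the stock frontal-face Haar cascade bundled with OpenCV and a hypothetical input clip; profile-face cascades can be added in the same way.

```python
import cv2

# Stock frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("walkthrough.mp4")  # hypothetical input clip
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("walkthrough_blurred.mp4", fourcc, fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
    for (x, y, fw, fh) in faces:
        roi = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(roi, (51, 51), 0)  # blur detected face region
    out.write(frame)

cap.release()
out.release()
```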
Capturing the Raw Videos
We use an assembly of 14 GoPros and a 360HerosPro rig as the video capturing device.
DepthMap Extraction
Depth maps are monochromatic images used to represent depth in a real-life image. Theoretically, a depth map can be generated from a stereoscopic pair of images.
Currently, an open-source tool called DMAG5 is used, but it does not give the best results. We may require assistance from the Computer Vision Lab to automate the process and improve its precision.
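For comparison with DMAG5, a minimal sketch of stereo depth-map extraction using OpenCV's semi-global block matcher; the image filenames are hypothetical and the pair is assumed to be rectified.

```python
import cv2

# Hypothetical rectified stereoscopic pair extracted from one camera pair.
left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# Normalise the disparity map to an 8-bit monochromatic depth map image.
depth_map = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", depth_map)
```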
Oculus Rift Simulation
We use the Oculus Rift DK2 to simulate the 360-degree video in 'Kolor Eyes', a free VR player. Given the graphics requirements, a new machine with high-end capabilities is needed.
VR Simulation and Statistical Analysis
Stitching the Generated Videos
The Sensor Backpack collects an array of data describing the environmental parameters of the experiment. It records GPS position, humidity, illuminance, temperature, sound pressure, dust levels, and carbon dioxide and nitrogen dioxide levels.
Sensor Backpack Data Visualization
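A minimal sketch of how the backpack's readings could be visualised, assuming the log has been exported to a hypothetical CSV with a timestamp column and one column per sensor channel.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the backpack log: one row per timestamp, one column per sensor.
log = pd.read_csv("backpack_log.csv", parse_dates=["timestamp"], index_col="timestamp")

# Plot a few environmental channels over the course of one walk.
channels = ["temperature", "humidity", "illuminance", "sound_pressure"]
fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(10, 8))
for ax, channel in zip(axes, channels):
    log[channel].plot(ax=ax)
    ax.set_ylabel(channel)
axes[-1].set_xlabel("time")
plt.tight_layout()
plt.savefig("backpack_overview.png")
```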
We need to extract physiological data from the E4 wristband and neurological data from the Emotiv EEG headset, use them to predict the user's emotional state, and correlate that state with the spatial properties of the space.
Psychological Data Analysis (In Progress)
Reinhard König | Matthias Standfest | Ashris Choudhury
Training on manually labelled time series via wavelet signal processing to generate a PAD model for predicting new, unclassified time series.
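A minimal sketch of this idea, assuming PyWavelets and scikit-learn, a hypothetical set of labelled signal windows, and discrete PAD-state labels; the actual feature set and model are still being worked out.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(window, wavelet="db4", level=3):
    """Summarise one signal window by the energy of each wavelet sub-band."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical training data: fixed-length windows of a physiological signal
# with manually assigned PAD-state labels (placeholder random data here).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 256))   # 200 windows of 256 samples each
labels = rng.integers(0, 3, size=200)   # e.g. three coarse PAD states

X = np.vstack([wavelet_features(w) for w in windows])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Predict the state of a new, unclassified window.
new_window = rng.normal(size=256)
print(model.predict(wavelet_features(new_window).reshape(1, -1)))
```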
With YouTube supporting 360-degree videos and Facebook investing in Oculus, VR technology is very promising as an affordable and accessible tool for the near future.
Steps Ahead
Sensor Backpack Presentation Tomorrow

Data Collection
Spatial (DepthMap, Isovist Analysis, Space Syntax)
Physiological (E4)
Neurological (Emotiv)
Manual Feedback (Buttons pressed by User)
Environmental (Sensor Backpack)

Data Analysis

All of the collected data will be analysed for correlations with high confidence. The large dataset gathered will also be useful for mining results for other research projects.
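A minimal sketch of such a correlation analysis, assuming the different streams have been resampled onto a common timeline in a hypothetical merged CSV; column names are placeholders.

```python
import pandas as pd

# Hypothetical merged table: one row per time step, one column per measured quantity
# (spatial metrics, physiological signals, environmental readings, ...).
data = pd.read_csv("esum_merged.csv", index_col="timestamp")

# Pairwise Pearson correlations between all recorded quantities.
corr = data.corr()

# Report only the strong correlations as candidate relationships to examine further.
strong = corr.where(corr.abs() > 0.7).stack()
strong = strong[strong.index.get_level_values(0) < strong.index.get_level_values(1)]
print(strong.sort_values(ascending=False))
```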