Kinect Workshop 2014
Transcript of Kinect Workshop 2014
What is Inside
From Servos to Depth Image
Inside the Kinect
Using the Kinect
Lecturer, Interactive Digital Media, Digital Arts Division, Wits School of the Arts.
I've used the Kinect for
1. Theater Production & Installation
2. Large Scale Interactive Art Exhibition
3. Teaching Interactive Arts, Sound & Video Interaction
Why & Why Not
Use the Kinect
1. Excellent Alternative to Camera Tracking - requires no special lighting.
2. A Depth Camera.
3. Can do multiple tasks beyond motion and position tracking: like Skeletal Tracking and Facial Detection
4. Relatively affordable - more so than any tech of its kind before.
5. No controller or extra props and devices required.
1. Depth and width range is limited, as it was designed for a living room.
2. Does not work very close up.
3. Picks up interference from any other infrared emitter - i.e. security beams / sunlight.
From Microsoft to Open Source
A peripheral for the XBox 360 game system.
A very robust, non light specific computer vision system.
The development was led by Richard Szeliski, who also taught computer vision at the University of Washington.
Hardware developed by Israeli company - PrimeSense.
In collaboration with Microsoft software development.
Software licensed to Microsoft.
Huge uptake around the world because of the open-source drivers developed soon after its release, which allowed anyone to access the data from the primary hardware.
Works I Like
Unnamed Sound Sculpture
Picture of You, Pixel Project and Digital Fabric - http://www.pixelproject.com/2012/08/pictures-of-yo/
BlahBlah Lab, Be Your Own Souvenir
In the base are motors and gears that allow the Kinect to tilt up / down by about 30 degrees.
These are super delicate - the Kinect should not be forced into position.
From left to right 1. IR Projector 2. RGB Camera 3. IR Camera.
A 3rd eye = an IR projector = depth capabilities
Cameras & Infrared T & R
Projected IR Speckle
Captured on an IR camera.
An IR laser projects through a speckled plastic sheet.
The spacing between the speckles allows the IR camera to understand how far away a surface is.
The displacement between the dots indicates the distance between objects - the information is turned into depth data by the Kinect's processors.
The only issue is the creation of "shadow", as the Kinect can only be used straight on.
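The displacement-to-depth step described above can be sketched with the standard triangulation formula. This is only an illustration of the principle; the baseline and focal length below are assumptions, not the Kinect's real calibration values (PrimeSense's actual pipeline is proprietary):

```python
# Sketch of structured-light triangulation: depth from speckle displacement.
# Constants are illustrative assumptions, not the Kinect's real calibration.

BASELINE_CM = 7.5   # assumed distance between IR projector and IR camera
FOCAL_PX = 580.0    # assumed focal length of the IR camera, in pixels

def depth_from_disparity(disparity_px):
    """A larger speckle displacement (disparity) means a closer surface."""
    if disparity_px <= 0:
        return None  # no measurable shift: out of range or in "shadow"
    return (BASELINE_CM * FOCAL_PX) / disparity_px

print(depth_from_disparity(87.0))  # large shift -> close surface: 50.0 cm
print(depth_from_disparity(5.4))   # small shift -> far surface, around 8 m
```

Note how the same pixel of displacement corresponds to a much bigger jump in distance far from the sensor than near it, which is why depth precision degrades with range.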
PrimeSense diagram explaining how their reference platform works. The Kinect is the first (and only) implementation of this platform.
One camera (and one IR transmitter) provides input for the depth map (rumored to be just 320x240), while the third camera detects the human visual spectrum at 640x480 resolution.
Depth Image Conversion from IR Projector & IR Camera
Depth Camera & Colour Camera
The colour camera is a standard webcam and not a great resolution, but lining up the depth information with the colour information allows you to alter the colour image based on the depth image.
Benefits of the Depth Camera:
+ Track a user not only in x & y, but in z - depth in space.
+ Hide certain things beyond a certain depth.
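Hiding things beyond a certain depth comes down to a per-pixel mask over the aligned images. A minimal sketch in Python, assuming a hypothetical pair of flat pixel lists that are already aligned (real library output would be image buffers, not lists):

```python
# Sketch: mask out colour pixels whose aligned depth is beyond a cutoff.
# Depth values follow the greyscale convention used below: 255 = close, 0 = far.

def mask_background(colour, depth, cutoff):
    """Keep a colour pixel only if its depth value is at least `cutoff`
    (i.e. close enough); otherwise replace it with black."""
    return [c if d >= cutoff else (0, 0, 0)
            for c, d in zip(colour, depth)]

colour = [(200, 10, 10), (10, 200, 10), (10, 10, 200)]
depth  = [250, 120, 30]  # close, mid, far
print(mask_background(colour, depth, 100))
# -> [(200, 10, 10), (10, 200, 10), (0, 0, 0)]
```

The same loop, run over every pixel of a frame, is how a live figure can be cut out of its background with no chroma key or special lighting.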
Built into the "depth sensor" not just for sound capture: a bit like ears on a head, it can locate sound in space by using 4 mics. Designed for multi-player commands and positioning.
Not available with the open-source USB drivers, so not in the software I will show today.
& the SDK
openKinect & SimpleOpenNI
The Pixel Depth Image
Distance through Pixels?
Real World vs Kinect
RGBA, 8bit - color values: 0-255
The depth image in Processing gives you a range based on the colour range in greyscale (alpha): 0 = far, 255 = close.
Real-world range - from 50 cm to 8 meters.
Reflection in a mirror causes distortion - good & bad.
Greyscale allows for a cleaner analysis image, excluding the excessive colour information that the colour camera might bring (R, G, B - extra memory). It therefore allows for quicker comparisons and faster interaction.
Source, Kyle McDonald http://www.flickr.com/photos/kylemcdonald/5641883004/in/photostream/
Uses one Kinect to do the work of 5 - all sides of a single object.
It is not a straight conversion like this:
0 - 255 to 50 - 800 cm
That would also never be accurate enough, as there are too many variables left out in between.
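For contrast, the naive linear mapping warned against above would look like this sketch; it treats every grey level as an equal step in distance, which the real sensor does not:

```python
# The naive (and inaccurate) linear conversion from an 8-bit greyscale
# depth value to centimetres. Convention: 255 = close (50 cm), 0 = far (800 cm).

def naive_cm(grey):
    near, far = 50.0, 800.0
    return far - (grey / 255.0) * (far - near)

print(naive_cm(255))  # 50.0  (closest)
print(naive_cm(0))    # 800.0 (furthest)
# One grey level here is a fixed ~2.9 cm step, but the sensor's real depth
# resolution is finer up close and much coarser far away, so this mapping
# cannot be accurate across the whole range.
```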
You can access the depth map from the Kinect directly, which gives a scarily accurate measurement of depth.
Tracking the closest object // tracking the furthest object.
Tracking within a unique depth range only.
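Those tracking strategies reduce to simple scans over the raw depth values. A sketch over a made-up flat list of millimetre readings (a real depth map would be a 640x480 buffer):

```python
# Sketch: find the closest valid reading and the pixels inside a depth band.
# Raw readings are in millimetres; 0 means "no reading" (shadow / out of range).

def closest_index(depth_mm):
    """Index of the nearest valid pixel, or None if nothing is in range."""
    valid = [(d, i) for i, d in enumerate(depth_mm) if d > 0]
    return min(valid)[1] if valid else None

def in_range(depth_mm, near_mm, far_mm):
    """Indices of pixels inside a chosen depth band."""
    return [i for i, d in enumerate(depth_mm) if near_mm <= d <= far_mm]

depth_mm = [0, 2400, 610, 1800, 7400]
print(closest_index(depth_mm))         # 2   (610 mm is the nearest reading)
print(in_range(depth_mm, 1000, 3000))  # [1, 3]
```

Ignoring the zero "no reading" pixels is the important detail: treating them as distance 0 would make every shadow look like the closest object.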
Processing + openKinect Contributed Library
by Daniel Shiffman - http://www.shiffman.net/p5/kinect/
+ Accesses basic Depth Image & Colour Image
+ Point Cloud
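A point cloud is just each depth pixel back-projected into 3D. A sketch using the standard pinhole camera model; the intrinsics below are assumptions for a 640x480 image, not the Kinect's calibrated values:

```python
# Sketch: back-project a depth pixel (u, v, depth) into a 3D point.
# CX, CY, FX, FY are assumed pinhole intrinsics, not real calibration data.

CX, CY = 320.0, 240.0  # assumed optical centre of a 640x480 image
FX = FY = 580.0        # assumed focal length in pixels

def to_point(u, v, depth_cm):
    """Pixel (u, v) with a depth reading -> (x, y, z) in the depth's units."""
    x = (u - CX) * depth_cm / FX
    y = (v - CY) * depth_cm / FY
    return (x, y, depth_cm)

print(to_point(320, 240, 100.0))  # (0.0, 0.0, 100.0): on the optical axis
print(to_point(378, 240, 100.0))  # (10.0, 0.0, 100.0): 10 cm to the right
```

Running this over every depth pixel of a frame produces the cloud of 3D points that the library's point-cloud examples draw.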
Processing + SimpleOpenNI
OpenNI is middleware developed by PrimeSense.
SimpleOpenNI, developed by Max Rheiner, makes installing the middleware and using its functions with Processing very easy and simple.
+ All of Above
+ Skeletal Tracking
+ Skeletal Calibration Saving
User Tracking & Skeletal Data
How we can use
COM & Skeletal Tracking
A user list is established through the CoM process; from there, each skeleton is associated with the user. If the user walks out of range for a few minutes, that user list number is re-assigned.
Skeletal Data and Depth Data can be used together.
And relationships between joints can be assessed too - measuring the distance between joints is a fun interactive way of playing with the human body, i.e. flexing muscles or the distance between hands. (This could also include working with speed of movement.)
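Measuring the distance between two joints is a one-line calculation once you have their tracked positions. A sketch with made-up hand coordinates:

```python
import math

# Sketch: Euclidean distance between two tracked joints (e.g. the hands).
# The coordinates are made-up (x, y, z) positions in millimetres.

def joint_distance(a, b):
    """3D straight-line distance between two joint positions."""
    return math.dist(a, b)

left_hand  = (-300.0, 150.0, 2000.0)
right_hand = ( 300.0, 150.0, 2000.0)
print(joint_distance(left_hand, right_hand))  # 600.0 mm between the hands
```

Mapping that one number to a sound or visual parameter is already enough for a "stretch your arms" style interaction; speed of movement is just the change of the same value between frames.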
PSI for Calibration
But for OpenNI to see the mass in space as a body, there is a calibration pose that you will use when first setting up. Known as the "Psi" pose by PrimeSense, also the submissive pose.
But you can calibrate a single user and save that as the basic XML information that, when loaded, allows other users to be calibrated by the same data. Compare the long pre-setup for room and user "avatars" with the Xbox 360 games.
OpenNI allows us to track people through the depth data; the middleware does most of the work and allows us to simply track the user without having to look through all the depth data.
+ Multiple body silhouettes in space
+ Centers of Mass - COM for each
Uses the user tracking to identify "joints" or divisions in the body.
Head, Neck, Shoulders, Torso, Hips, Knees and Feet.
3. Why / Not
5. OpenNI + Skeletal Data
Chris O’Shea - Body Swap - skeleton
Design I/O - Puppet Parade - hand detect
Patricio González Vivo - Liquid - point cloud
Only on Windows
Easy to Use
that Send Kinect Data
IN: great for blobs, fingers, objects
OUT: TUIO, Flash or binary, to Flash, Processing or Max
Patricio González Vivo
Microsoft SDK allows for:
ofxKinect - http://ofxaddons.com/repos/2
depth, pointcloud, tilt
users, skeletons, CoM, hands
wrapper to integrate with Microsoft SDK
Major reference for this workshop:
"Making Things See" by Greg Borenstein
IN: OpenNI Skeletal Tracking
OUT: OSC to Max or any other OSC receiving program.
- Requires the Psi pose
Synapse by Ryan Challinor
Feeds all the Synapse OSC info to Max / MSP Jitter
- Requires the Psi pose
Kinect via Synapse by Jon Bellona
Feeds all the Synapse OSC info to Max / MSP Jitter
- Requires no Psi pose / now dated with the new SimpleOpenNI
But I have a quick fix for the workshop.
Kinect via Processing by Jon Bellona