Kinect Workshop 2014

Overview Workshop on the Kinect for "Free Particle"
by Tegan Bristow, 16 March 2014

Transcript of Kinect Workshop 2014

Base Motors
Ref: http://www.ifixit.com/Teardown/Microsoft-Kinect-Teardown/4066/1
What is Inside
From Servos to Depth Image
Inside the Kinect
Tegan Bristow
Experiences
Using the Kinect
Workshop: Introduction
Lecturer, Interactive Digital Media, Digital Arts Division, Wits School of the Arts.


I've used the Kinect for
1. Theater Production & Installation
2. Large Scale Interactive Art Exhibition
3. Teaching Interactive Arts, Sound & Video Interaction
Why & Why Not Use the Kinect

Why:
1. Excellent alternative to camera tracking - requires no special lighting.
2. A depth camera.
3. Can do multiple tasks beyond motion and position tracking, like skeletal tracking and facial detection.
4. Relatively affordable - more so than any tech of its kind before.
5. No controller or extra props and devices required.

Why Not:
1. Depth and width range is limited, as it was designed for a living room.
2. Does not work very close up.
3. Picks up interference from any other infrared emitter - i.e. security beams / sunlight.
Depth Camera
From Microsoft to Open Source
Short History
A peripheral for the Xbox 360 game system.
A very robust, non-light-specific computer vision system.
The development was led by Richard Szeliski, who also taught computer vision at the University of Washington.

Hardware developed by the Israeli company PrimeSense,
in collaboration with Microsoft software development.
Software licensed to Microsoft.
Huge uptake around the world because of the open-source drivers developed soon after its release, which allowed anyone to access the data from the primary hardware.
Works I Like & Why
Unnamed Sound Sculpture
http://wearechopchop.com/%E2%80%9Cunnamed-soundsculpture%E2%80%9D/
Picture of You, Pixel Project and Digital Fabric - http://www.pixelproject.com/2012/08/pictures-of-yo/
BlahBlah Lab, Be Your Own Souvenir
http://vimeo.com/21676294#
In the base are motors and gears that allow the Kinect to tilt up / down by about 30 degrees.
These are super delicate - the Kinect should not be forced into position.
From left to right 1. IR Projector 2. RGB Camera 3. IR Camera.
A 3rd eye = an IR projector = depth capabilities
Cameras & Infrared Transmitter & Receiver
Projected IR Speckle
Captured on an IR camera.
It is an IR laser projecting through a plastic speckled sheet.
The distance between the speckles allows the IR camera to understand how far away a surface is.
The displacement between the dots indicates the distance between objects - the information is turned into depth data by the Kinect's processors.
The only issue is the creation of "shadow", as the Kinect can only be used straight on.
PrimeSense diagram explaining how their reference platform works. The Kinect is the first (and only) implementation of this platform.

One camera (and one IR transmitter) provides input for the depth map (rumored to be just 320x240), while the third camera detects the human visual spectrum at 640x480 resolution.
Depth Image Conversion from IR Projector & IR Camera
Depth Camera & Colour Camera
The colour camera is a standard webcam and not a great resolution, but lining up the depth information with the colour information allows you to alter the colour image based on the depth image.
Benefits of the Depth Camera:
+ Track a user not only in x & y, but in z - depth in space.
+ Hide certain things beyond a certain depth (see the sketch below).
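As a rough illustration of that second benefit, here is a minimal depth-cutoff sketch. It assumes the 2014-era openKinect Processing library covered in the Interfaces section below - the Kinect class, enableDepth() and getDepthImage() calls are that library's API as I recall it, not verified against this exact release.

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;
int cutoff = 100;  // greyscale cutoff: anything darker counts as "too far"

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
}

void draw() {
  PImage depth = kinect.getDepthImage().get();  // copy, so we can edit pixels
  depth.loadPixels();
  for (int i = 0; i < depth.pixels.length; i++) {
    // in the greyscale depth image 0 = far and 255 = close,
    // so hide everything beyond the cutoff by painting it black
    if (brightness(depth.pixels[i]) < cutoff) {
      depth.pixels[i] = color(0);
    }
  }
  depth.updatePixels();
  image(depth, 0, 0);
}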
4 Microphones
Built into the "depth sensor" not just for sound capture; a bit like ears on a head, it can locate sound in space using the 4 mics. Designed for multi-player commands and positioning.

Not available with the open-source USB drivers, so not in the software I will show today.
Interfaces & the SDK

A Closer Look Through Processing:
Processing with openKinect & SimpleOpenNI
The Pixel Depth Image
Distance through Pixels?
Real World vs Kinect

RGBA, 8-bit - colour values: 0-255.
The depth image in Processing gives you a range based on the greyscale colour range: 0 = far, 255 = close.
Real-world range: from 50 cm to 8 meters.

Reflection in a mirror causes distortion - good & bad.

Greyscale allows for a cleaner analysis image - excluding the excessive colour information that the colour camera might bring (R, G, B - extra memory). This allows for quicker comparisons and faster interaction feedback.
Source: Kyle McDonald http://www.flickr.com/photos/kylemcdonald/5641883004/in/photostream/
Uses one Kinect to do the work of 5 - all sides of a single object.
It is not a straight conversion like this:
0-255 to 50-800 cm.
That would never be accurate enough, as there are too many variables left out in between.

You can instead access the depth map from the Kinect directly, which gives a scarily accurate measurement of depth.

Uses:

Tracking the closest object // Tracking the furthest object.
Tracking within a unique depth range only.
Processing + openKinect Contributed Library
by Daniel Shiffman - http://www.shiffman.net/p5/kinect/
+ Accesses basic Depth Image & Colour Image
+ Point Cloud
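A minimal closest-point sketch with this library, covering the "tracking the closest object" use above. It assumes getRawDepth() returns the 11-bit raw values (roughly 0-2047, where 0 means no reading and smaller values are closer) - again the API as I know it, so treat the names as assumptions.

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);

  int[] depth = kinect.getRawDepth();  // raw 11-bit values, roughly 0-2047
  int closestValue = Integer.MAX_VALUE;
  int closestX = 0;
  int closestY = 0;

  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int d = depth[x + y * 640];
      // 0 means "no reading"; smaller raw values are closer to the camera
      if (d > 0 && d < closestValue) {
        closestValue = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // mark the closest point - e.g. an outstretched hand
  fill(255, 0, 0);
  ellipse(closestX, closestY, 20, 20);
}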

Processing + SimpleOpenNI
OpenNI is middleware developed by PrimeSense.
SimpleOpenNI (http://code.google.com/p/simple-openni/), developed by Max Rheiner, makes installing the middleware and using its functions with Processing very easy and simple.
+ All of Above
+ Skeletal Tracking
+ Skeletal Calibration Saving
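A minimal skeleton-tracking sketch, assuming the OpenNI 1.x era SimpleOpenNI API - enableUser(), the Psi-pose calibration callbacks and drawLimb() as in Max Rheiner's examples. A sketch under those assumptions, not a checked reference.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  // ask OpenNI to find users and track full skeletons
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  if (context.isTrackingSkeleton(1)) {
    // draw one limb as a test: neck down to torso
    context.drawLimb(1, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_TORSO);
  }
}

// OpenNI calibration flow: new user -> wait for Psi pose -> calibrate -> track
void onNewUser(int userId) {
  context.startPoseDetection("Psi", userId);
}

void onStartPose(String pose, int userId) {
  context.stopPoseDetection(userId);
  context.requestCalibrationSkeleton(userId, true);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    context.startTrackingSkeleton(userId);
  } else {
    context.startPoseDetection("Psi", userId);
  }
}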
Basics of Skeletal Tracking
OpenNI: User Tracking & Skeletal Data
How we can use CoM & Skeletal Tracking
Calibration
A user list is established through the CoM process; from there, each skeleton is associated with the user. If the user walks out of range for a few minutes, that user list number is re-assigned.

Skeletal data and depth data can be used together.

And relationships between joints can be assessed too - measuring the distance between joints is a fun, interactive way of playing with the human body, i.e. flexing muscles or the distance between hands. (This could also include working with speed of movement.)
Psi for Calibration

For OpenNI to see the mass in space as a body, there is a calibration pose that you will use when first setting up. Known as the "Psi" pose by PrimeSense - also the submissive pose.

But you can calibrate a single user and save that as the base calibration (XML) information; loading it then allows other users to be calibrated from the same data. Compare the long pre-setup for room and user "avatars" with the Xbox 360 games.
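A sketch of that save-and-reuse idea. It assumes SimpleOpenNI's saveCalibrationDataSkeleton() / loadCalibrationDataSkeleton() calls exist as in the library's examples; the "calibration.skel" filename is just an example.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
}

void onNewUser(int userId) {
  // try to reuse a previously saved calibration so this user
  // does not have to strike the Psi pose at all
  if (context.loadCalibrationDataSkeleton(userId, "calibration.skel")) {
    context.startTrackingSkeleton(userId);
  } else {
    context.startPoseDetection("Psi", userId);
  }
}

void onStartPose(String pose, int userId) {
  context.stopPoseDetection(userId);
  context.requestCalibrationSkeleton(userId, true);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    // save the first successful calibration for everyone after this
    context.saveCalibrationDataSkeleton(userId, "calibration.skel");
    context.startTrackingSkeleton(userId);
  }
}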

User Tracking:
OpenNI allows us to track people through the depth data; the middleware does most of the work and allows us to simply track the user without having to look through all the depth data.
+ Multiple body silhouettes in space
+ Centre of Mass (CoM) for each
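A minimal CoM sketch. SKEL_PROFILE_NONE (users and CoM without skeleton calibration), getCoM() and convertRealWorldToProjective() are assumptions taken from the SimpleOpenNI examples as I remember them.

import SimpleOpenNI.*;

SimpleOpenNI context;
ArrayList<Integer> users = new ArrayList<Integer>();

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE);  // users + CoM only
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  for (int id : users) {
    PVector com = new PVector();
    if (context.getCoM(id, com)) {
      // CoM comes back in real-world millimetres; convert to screen pixels
      PVector proj = new PVector();
      context.convertRealWorldToProjective(com, proj);
      fill(255, 0, 0);
      ellipse(proj.x, proj.y, 15, 15);
    }
  }
}

void onNewUser(int userId)  { users.add(userId); }
void onLostUser(int userId) { users.remove(Integer.valueOf(userId)); }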

Skeletal Tracking:
Uses the user tracking to identify "joints" or divisions in the body:
Head, Neck, Shoulders, Torso, Hips, Knees and Feet.
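A sketch of the joint-relationship idea from above - measuring the distance between the hands. It assumes getJointPositionSkeleton() returning real-world millimetre coordinates, with the same Psi calibration flow as the earlier sketches.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  int userId = 1;  // first calibrated user
  if (context.isTrackingSkeleton(userId)) {
    PVector left = new PVector();
    PVector right = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, left);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, right);
    // joint positions are real-world millimetres, so dist() is in mm
    float handDistance = left.dist(right);
    fill(255);
    text("hands: " + nf(handDistance / 10, 0, 1) + " cm", 10, 20);
  }
}

// same Psi calibration callbacks as the earlier sketch
void onNewUser(int userId) { context.startPoseDetection("Psi", userId); }
void onStartPose(String pose, int userId) {
  context.stopPoseDetection(userId);
  context.requestCalibrationSkeleton(userId, true);
}
void onEndCalibration(int userId, boolean successful) {
  if (successful) context.startTrackingSkeleton(userId);
}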
1. Introduction
2. Mechanics
3. Why / Not
4. Depth
5. OpenNI + Skeletal Data
6. Interfaces
Chris O’Shea - Body Swap - skeleton
https://vimeo.com/20745353
Design I/O - Puppet Parade - hand detect
https://vimeo.com/34824490
Patricio González Vivo - Liquid - point cloud
www.youtube.com/watch?v=z__jq0r4ERo
Easy-to-Use Standalone Apps that Send Kinect Data
Kinect CoreVision by Patricio González Vivo
http://patriciogonzalezvivo.com/2011/kinectcorevision/
IN: great for blobs, fingers, objects
OUT: TUIO, flash, binary - to Flash, Processing or Max
TUIO: http://tuio.org
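To receive that TUIO data in Processing, here is a minimal sketch using the TUIO client library from tuio.org. The callback names follow the TUIO 1.1 Processing demo as I know it; treat them as assumptions.

import TUIO.*;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  tuioClient = new TuioProcessing(this);  // listens on the default TUIO port 3333
}

void draw() {
  background(0);
}

// a cursor is a tracked blob / finger; coordinates are normalised 0-1
void addTuioCursor(TuioCursor c) {
  println("blob down " + c.getCursorID() + " at " + c.getX() + ", " + c.getY());
}

void removeTuioCursor(TuioCursor c) {
  println("blob up " + c.getCursorID());
}

// the library expects the full callback set, even if unused
void updateTuioCursor(TuioCursor c) { }
void addTuioObject(TuioObject o) { }
void updateTuioObject(TuioObject o) { }
void removeTuioObject(TuioObject o) { }
void refresh(TuioTime bundleTime) { }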
Microsoft SDK allows for:
http://www.microsoft.com/en-us/kinectforwindows/
+ seated skeleton
+ facial tracking
+ voice recognition
- only on Windows
openFrameworks
C++ - openframeworks.cc
addons - http://ofxaddons.com

OpenKinect: ofxKinect - http://ofxaddons.com/repos/2
depth, point cloud, tilt
OpenNI: ofxOpenNI - http://ofxaddons.com/repos/472
users, skeletons, CoM, hands
Microsoft SDK: ofxMsKinect - http://ofxaddons.com/repos/28
wrapper to integrate with the Microsoft SDK
Interactive Art
Kinect Workshop

Major reference for this workshop:
"Making Things See" by Greg Borenstein
http://makingthingssee.com/
Synapse by Ryan Challinor
http://synapsekinect.tumblr.com/post/6307752257/maxmsp-jitter
IN: OpenNI skeletal tracking
OUT: OSC to Max or any other OSC-receiving program
- Requires the Psi pose
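You don't need Max - any OSC receiver works. Here is a minimal Processing sketch with the oscP5 library. The Synapse ports (sends on 12345, listens on 12346) and the /righthand_trackjointpos and /righthand_pos_screen message names are from Synapse's documentation as I remember it, so treat them as assumptions.

import oscP5.*;
import netP5.*;

OscP5 oscIn;
NetAddress synapse;

void setup() {
  size(640, 480);
  oscIn = new OscP5(this, 12345);               // Synapse sends joint data here
  synapse = new NetAddress("127.0.0.1", 12346); // Synapse listens here
}

void draw() {
  background(0);
  // Synapse needs a repeated "keep tracking this joint" request;
  // the argument picks the coordinate space (3 = screen coordinates)
  if (frameCount % 60 == 0) {
    OscMessage ask = new OscMessage("/righthand_trackjointpos");
    ask.add(3);
    oscIn.send(ask, synapse);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/righthand_pos_screen")) {
    float x = m.get(0).floatValue();
    float y = m.get(1).floatValue();
    println("right hand at " + x + ", " + y);
  }
}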
Kinect via Synapse by Jon Bellona
http://deecerecords.com/kinect/
Feeds all the Synapse OSC info to Max/MSP Jitter.
- Requires the Psi pose
Kinect via Processing by Jon Bellona
http://deecerecords.com/kinect/
Feeds all the Synapse OSC info to Processing.
- Requires no Psi pose; now dated with the new SimpleOpenNI, but I have a quick fix for the workshop.