Professor Andrew Davison: Robotic Vision. Inaugural Lecture.

by Andrew Davison, 11 August 2014

Transcript of Professor Andrew Davison: Robotic Vision. Inaugural Lecture.

Department of Computing Robotics Course (Third Year + MSc)
3D Scanning at the Imperial Festival 2012
Robotics Teaching
Robotic Vision
Professor Andrew Davison
Department of Computing
What is a Robot?
A physical, artificially intelligent device with sensors and actuation.
It can sense. It can act. It must think, or process information, to connect the two autonomously.
So is a washing machine a robot? Most people would say not!
A good distinction between an appliance and a robot: whether its workspace is inside or outside of its body.
The Classical Robotics Industry: Robot Arms
Mounted on fixed bases, and operating in highly controlled environments.
Robots for the Wider World
They need perception which gives them a suitable level of understanding of their complex and changing surroundings.
Robots for the Home
The clutter and complication of a home presents at least as much challenge as any other location.
The main barriers to performance like this are with perception and planning rather than the physical robot body.
Video from Stanford Personal Robotics Program, 2008. Teleoperated by a human!
My Life and Career
Growing up in Kent... family, friends, school, maths, languages, computers, radio control cars!
University of Oxford: BA in Physics
University of Oxford, Robotics Research Group, D. Phil.
Japan: EU-STF Research Fellow at AIST, Tsukuba
Robot Navigation using Active Vision
Yorick series of high performance "active heads" developed at the Active Vision Lab, Oxford, led by David Murray.
Can we put one on a mobile robot and use it for navigation?
From Local Navigation to Global Spatial Reasoning?
Servoing: closed loop control of steering based on fixation angle.
Serial fixation on multiple natural landmarks: can we make a coherent map?
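As an illustrative sketch of this kind of closed-loop servoing (my own toy example, not the Yorick system's controller): steer so that the fixation angle to the currently fixated landmark is driven to zero.

```python
# Minimal proportional steering law (illustrative only): command a steering
# rate proportional to the fixation angle, so the robot turns toward the
# fixated landmark and the angle decays toward zero.
def steering_command(fixation_angle_rad, gain=0.8):
    """Closed-loop control: steering rate opposes the fixation angle."""
    return -gain * fixation_angle_rad

# Simulate a few control steps (dt = 0.1 s): the heading error shrinks
# geometrically each step.
angle = 0.5
for _ in range(10):
    angle += steering_command(angle) * 0.1
```

With gain 0.8 and a 0.1 s step, the fixation angle decays by a factor of 0.92 per step; any gain with `gain * dt < 2` keeps the loop stable.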
Localisation and Mapping Results
AIST, Japan: 3D Robotic Inspection
Clarification of the general character of "Simultaneous Localisation and Mapping" problems: SLAM!
Visual SLAM....
First release of "SceneLib" open source library.
My Focus: Scene Tracking, Modelling and Understanding from Moving Cameras
Real-time computer vision systems which construct and track a coherent 3D scene model.
Emphasis on practical, low-cost cameras and platforms. Live demonstrations!
What else is it useful for besides robotics?
SLAM: starting from nothing, make a map from a moving sensor. How?
Factor Graph
(Visualisations by Frank Dellaert, Georgia Institute of Technology)
Visual SLAM using Double Window Optimisation
Probabilistic Inference in Robotics
Thanks
AIST, Japan: EU Science and Technology Fellow
University of Oxford, Post-Doc and EPSRC Advanced Research Fellow
Imperial College London, Department of Computing: Lecturer, Reader and Professor
Adrien Angeli
Gerardo Carrera
Ping-Lin Chang
Margarita Chli
Ankur Handa
Jan Jachnik
Hanme Kim
Steven Lovegrove
Robert Lukierski
Peter Mountney
Richard Newcombe
Lukas Platinsky
Renato Salas-Moreno
Hauke Strasdat
Akis Tsiotsios
Jacek Zienkiewicz
All of my collaborators and colleagues from around the world... but especially:
Pablo Alcantarilla
Mike Aldred
Javier Civera
Charles Collis
Nobuyuki Kita
John Leonard
Walterio Mayol
Nick Molton
Jose Maria Montiel
David Murray
Jose Neira
Ian Reid
Olivier Stasse
Mingo Tardos
Thanks
All of my colleagues at the Department of Computing, Imperial College, with a special mention to...
Fellow members of the VIP Section
The office and computing support staff
Susan Eisenbach
Jeff Magee
Guang-Zhong Yang
Murray Shanahan
Paul Kelly
Tony Field
Imperial College Corporate Partnerships Team
Thanks
Family and Friends, from Maidstone, Oxford, Japan, London, Madrid and beyond.
My Mum and Dad, and brother Steve
Lourdes
Rafa, Blanca and Adela
Thanks
The research councils and companies that have supported my research with long-term, unfettered funding.
MonoSLAM
Can we still do SLAM with a single unconstrained camera, flying through the world in 3D?
There were similar results from other groups around this time, but usually using specialised sensors, not standard cameras.
MonoSLAM Experimental Applications
Inverse Depth Feature Representation
(With David Murray and Ian Reid)
(With Nobuyuki Kita)
(With Ian Reid and Nick Molton)
(With Olivier Stasse)
(With Walterio Mayol and David Murray)
(With Jose Maria Montiel and Javier Civera)
(With Hauke Strasdat, Kurt Konolige and Jose Maria Montiel)
Tracking Faster Motion
Connecting More Widely...
Robot Floor Cleaners
Dyson and Robotics
Smartphone Gaming and Augmented Reality
Dense Reconstruction from a Single Camera
DTAM: Dense Tracking and Mapping
KinectFusion
(With Richard Newcombe, Shahram Izadi, et al. at Microsoft Research, Cambridge)
SLAM++: SLAM at the level of Objects
Meet our new professors
(With Renato Salas-Moreno, Richard Newcombe, Hauke Strasdat and Paul Kelly)
(With Richard Newcombe and Steven Lovegrove)
A Technological Singularity?
#roboteyes
Find matches in data that correspond to static scene features.
Jointly estimate the sensor trajectory and feature positions which best agree with the measurements.
All real sensor measurements are uncertain.
Bayesian theory provides the engine to digest uncertain data into probabilistic world models.
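The simplest instance of this Bayesian digestion of uncertain data (a sketch of the principle, not the lecture's actual system) is fusing two noisy Gaussian measurements of the same quantity:

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Bayesian fusion of two Gaussian estimates of the same quantity.

    The posterior is Gaussian: its precision is the sum of the two
    precisions, and its mean is the precision-weighted average.
    """
    var_post = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean_post = var_post * (mean_a / var_a + mean_b / var_b)
    return mean_post, var_post

# Two uncertain measurements of a landmark position (metres):
mean, var = fuse_gaussian(2.0, 0.5, 2.4, 0.5)  # -> (2.2, 0.25)
```

Note that the fused variance (0.25) is smaller than either input variance (0.5): every measurement, however noisy, adds information to the probabilistic world model.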
Tracking Using Whole Image Alignment
25Hz Video
100Hz Video
Active Matching: Guided by Information Theory
(With Margarita Chli, Ankur Handa and Hauke Strasdat)
Progress in Commodity Processors
Massively parallel processing from Graphics Processing Units (GPUs) highly suited to image processing tasks.
(With Richard Newcombe, Steven Lovegrove, Ankur Handa, Adrien Angeli, Javier Ibanez-Guzman)
(Image by Michael Galloy)
Self-Calibrating Dense Visual Odometry for Indoor Robotics
(With Jacek Zienkiewicz and Robert Lukierski)
Sequentially Updating a Probabilistic Map
Single Camera Dense Fusion
(From Richard Newcombe's PhD Thesis)
Low-Cost 3D Scanning
ASDA
ReconstructMe
Skanect
FabliTec
iRobot Roomba
Roomba Cleaning Pattern
Neato
Mint
Near Future Robot Vision-Enabled Products
Qualcomm
Ogmento
Metaio
DC06, 2004
PhD Students and Post-Docs of the Robot Vision Research Group
The Next 10 Years...?
Practical real-time scene understanding on smartphone-class embedded platforms... 3D maps of the whole world in the cloud?
Back to active vision and robotics! Manipulation of complex objects and scenes.
A return to biologically-inspired methods for their low power requirements and robustness.
(Image by Phil Scordis!)
(Image by J. N. Nielsen)
Accelerating Progress
Solve jointly for a depth value for every pixel in a reference image, requiring the overall solution to be smooth.
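A one-dimensional sketch of this joint, smoothness-regularised depth estimation (my simplification: a quadratic smoothness penalty and made-up data, where DTAM uses a robust photometric cost with a TV regulariser):

```python
import numpy as np

# Noisy per-pixel depth evidence along one image row (hypothetical data,
# with an outlier at index 2).
data = np.array([2.0, 2.1, 5.0, 2.0, 1.9, 2.05])

# Minimise  sum_i (d_i - data_i)^2  +  lam * sum_i (d_{i+1} - d_i)^2
# by gradient descent; the smoothness term couples neighbouring pixels,
# so all depths are solved for jointly rather than independently.
lam = 2.0
d = data.copy()
for _ in range(500):
    grad = 2.0 * (d - data)
    grad[:-1] += 2.0 * lam * (d[:-1] - d[1:])
    grad[1:] += 2.0 * lam * (d[1:] - d[:-1])
    d -= 0.05 * grad
# The outlier is pulled toward its neighbours; smooth regions barely move.
```

The step size 0.05 is safely below the stability limit for this quadratic cost (2 / L with Lipschitz constant L = 2 + 8·lam = 18 here), so the descent converges to the unique joint minimiser.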
SLAM State Vector and Covariance
Represents a joint Gaussian distribution over all uncertain parameters.
Each match provides a measurement of the relative position of the sensor and feature.
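A toy version of such an update (my illustration in a 1D world, not the MonoSLAM code): one relative measurement, processed with a standard Kalman update, corrects both the camera and the feature estimate and introduces correlation between them in the covariance.

```python
import numpy as np

# Joint state: [camera position, feature position], 1D world for brevity.
x = np.array([0.0, 5.0])      # current means
P = np.diag([1.0, 4.0])       # joint covariance (initially independent)

# Measurement: relative position z = feature - camera + noise.
H = np.array([[-1.0, 1.0]])   # measurement Jacobian
R = np.array([[0.1]])         # measurement noise variance
z = np.array([4.6])           # observed relative position

# Standard Kalman update: the gain spreads the innovation over both the
# camera and the feature estimates, and the covariance update correlates
# them, which is what lets later measurements improve both at once.
innovation = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ innovation
P = (np.eye(2) - K @ H) @ P
```

After the update the off-diagonal covariance term is non-zero: the camera and feature estimates are now correlated, the defining property of the joint Gaussian SLAM representation.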
See also modern re-implementation by Hanme Kim, SceneLib2.
Dense Tracking Applications and Evaluation
(Visualisation by Jacek Zienkiewicz)
Dyson Ground Truth System, 2013