Professor Andrew Davison: Robotic Vision. Inaugural Lecture.
3D Scanning at the Imperial Festival 2012
Professor Andrew Davison
Department of Computing
What is a Robot?
A physical, artificially intelligent device with sensors and actuation.
It can sense. It can act. It must think, or process information, to connect the two autonomously.
So is a washing machine a robot? Most people would say not!
A good distinction between an appliance and a robot: whether its workspace is inside or outside of its body.
The Classical Robotics Industry: Robot Arms
Mounted on fixed bases, and operating in highly controlled environments.
Robots for the Wider World
They need perception which gives them a suitable level of understanding of their complex and changing surroundings.
Robots for the Home
The clutter and complication of a home presents at least as much challenge as any other location.
The main barriers to performance like this lie in perception and planning rather than in the physical robot body.
Video from Stanford Personal Robotics Program, 2008. Teleoperated by a human!
My Life and Career
Growing up in Kent... family, friends, school, maths, languages, computers, radio control cars!
University of Oxford: BA in Physics
University of Oxford, Robotics Research Group, DPhil
Japan: EU-STF Research Fellow at AIST, Tsukuba
Robot Navigation using Active Vision
Yorick series of high performance "active heads" developed at the Active Vision Lab, Oxford, led by David Murray.
Can we put one on a mobile robot and use it for navigation?
From Local Navigation to Global Spatial Reasoning?
Servoing: closed loop control of steering based on fixation angle.
Serial fixation on multiple natural landmarks: can we make a coherent map?
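The servoing idea above can be sketched as a simple proportional control loop: steer so as to drive the head's fixation angle toward zero. This is a minimal illustration, not the actual controller; the gain and values are hypothetical.

```python
# Minimal sketch of fixation-based servoing (hypothetical gain and values):
# command a steering rate proportional to the current fixation angle,
# closing the loop so the robot turns toward the fixated landmark.
def steering_rate(fixation_angle_rad: float, gain: float = 1.5) -> float:
    """Steering rate command proportional to the fixation angle."""
    return gain * fixation_angle_rad

angle = 0.4   # initial fixation angle (rad): landmark off to one side
dt = 0.1      # control timestep (s)
for _ in range(50):
    angle -= steering_rate(angle) * dt   # steering reduces the fixation angle

print(round(angle, 4))  # angle driven close to zero
```

With the gain and timestep above the angle decays geometrically, which is the closed-loop behaviour the slide describes.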
Localisation and Mapping Results
AIST, Japan: 3D Robotic Inspection
Clarification of the general character of "Simultaneous Localisation and Mapping" problems: SLAM!
First release of "SceneLib" open source library.
My Focus: Scene Tracking, Modelling and Understanding from Moving Cameras
Real-time computer vision systems which construct and track a coherent 3D scene model.
Emphasis on practical, low-cost cameras and platforms. Live demonstrations!
What else is it useful for besides robotics?
SLAM: starting from nothing, make a map from a moving sensor. How?
(Visualisations by Frank Dellaert, Georgia Institute of Technology)
Visual SLAM using Double Window Optimisation
Probabilistic Inference in Robotics
AIST, Japan: EU Science and Technology Fellow
University of Oxford, Post-Doc and EPSRC Advanced Research Fellow
Imperial College London, Department of Computing: Lecturer, Reader and Professor
All of my collaborators and colleagues from around the world... but especially:
Jose Maria Montiel
All of my colleagues at the Department of Computing, Imperial College, with a special mention to...
Fellow members of the VIP Section
The office and computing support staff
Imperial College Corporate Partnerships Team
Family and Friends, from Maidstone, Oxford, Japan, London, Madrid and beyond.
My Mum and Dad, and brother Steve
Rafa, Blanca and Adela
The research councils and companies that have supported my research with long-term, unfettered funding.
Can we still do SLAM with a single unconstrained camera, flying through the world in 3D?
There were similar results from other groups around this time, but usually using specialised sensors, not standard cameras.
MonoSLAM Experimental Applications
Inverse Depth Feature Representation
(With David Murray and Ian Reid)
(With Nobuyuki Kita)
(With Ian Reid and Nick Molton)
(With Olivier Stasse)
(With Walterio Mayol and David Murray)
(With Jose Maria Montiel and Javier Civera)
(With Hauke Strasdat, Kurt Konolige and Jose Maria Montiel)
Tracking Faster Motion
Connecting More Widely...
Robot Floor Cleaners
Dyson and Robotics
Smartphone Gaming and Augmented Reality
Dense Reconstruction from a Single Camera
DTAM: Dense Tracking and Mapping
(With Richard Newcombe, Shahram Izadi, et al. at Microsoft Research, Cambridge)
SLAM++: SLAM at the level of Objects
(With Renato Salas-Moreno, Richard Newcombe, Hauke Strasdat and Paul Kelly)
(With Richard Newcombe and Steven Lovegrove)
A Technological Singularity?
Find matches in data that correspond to static scene features.
Jointly estimate the sensor trajectory and feature positions which best agree with the measurements.
All real sensor measurements are uncertain.
Bayesian theory provides the engine to digest uncertain data into probabilistic world models.
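The two steps above (match static features, then jointly estimate trajectory and feature positions) can be sketched as a tiny 1-D least-squares SLAM problem. All numbers are hypothetical, and every measurement is given equal Gaussian noise for simplicity, so the least-squares solution is the maximum-likelihood estimate.

```python
import numpy as np

# Toy 1-D SLAM: two robot poses x0, x1 and one landmark l.
# Noisy measurements (hypothetical values): a prior on x0, odometry
# u = x1 - x0, and relative landmark observations z0 = l - x0, z1 = l - x1.
prior_x0 = 0.0
u  = 1.1   # odometry: robot moved ~1.1
z0 = 2.0   # landmark seen 2.0 ahead from pose 0
z1 = 0.8   # landmark seen 0.8 ahead from pose 1

# Stack the linear measurement equations A @ [x0, x1, l] = b.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior:      x0      = 0.0
    [-1.0,  1.0, 0.0],   # odometry:  -x0 + x1 = u
    [-1.0,  0.0, 1.0],   # obs 0:     -x0 + l  = z0
    [ 0.0, -1.0, 1.0],   # obs 1:     -x1 + l  = z1
])
b = np.array([prior_x0, u, z0, z1])

# Least squares = maximum likelihood under equal Gaussian noise:
# the trajectory and the map are estimated jointly, not separately.
est, *_ = np.linalg.lstsq(A, b, rcond=None)
x0, x1, l = est
print(f"x0={x0:.3f}  x1={x1:.3f}  landmark={l:.3f}")
```

Note how the fused estimate of the robot's motion compromises between the odometry and what the two landmark observations imply, which is exactly the joint digestion of uncertain data the slide refers to.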
Tracking Using Whole Image Alignment
Active Matching: Guided by Information Theory
(With Margarita Chli, Ankur Handa and Hauke Strasdat)
Progress in Commodity Processors
Massively parallel processing on Graphics Processing Units (GPUs) is highly suited to image processing tasks.
(With Richard Newcombe, Steven Lovegrove, Ankur Handa, Adrien Angeli, Javier Ibanez-Guzman)
(Image by Michael Galloy)
Self-Calibrating Dense Visual Odometry for Indoor Robotics
(With Jacek Zienkiewicz and Robert Lukierski)
Sequentially Updating a Probabilistic Map
Single Camera Dense Fusion
(From Richard Newcombe's PhD Thesis)
Low-Cost 3D Scanning
Roomba Cleaning Pattern
Near Future Robot Vision-Enabled Products
PhD Students and Post-Docs of the Robot Vision Research Group
The Next 10 Years...?
Practical real-time scene understanding on smartphone-class embedded platforms... 3D maps of the whole world in the cloud?
Back to active vision and robotics! Manipulation of complex objects and scenes.
A return to biologically-inspired methods for their low power requirements and robustness.
(Image by Phil Scordis!)
(Image by J. N. Nielsen)
Solve jointly for a depth value for every pixel in a reference image, requiring the overall solution to be smooth.
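A toy 1-D version of that joint solve can be sketched as follows. This is illustrative only: the real method uses a photometric data term over many images and a robust, edge-preserving regulariser, whereas here we use hypothetical noisy per-pixel depth samples and a simple quadratic smoothness penalty.

```python
import numpy as np

# Minimise  sum_i (x_i - d_i)^2 + lam * sum_i (x_{i+1} - x_i)^2
# over a depth value x_i for every pixel in a 1-D image row.
rng = np.random.default_rng(0)
true_depth = np.full(20, 1.5)                    # hypothetical flat surface
d = true_depth + 0.2 * rng.standard_normal(20)   # noisy per-pixel estimates
lam = 2.0                                        # smoothness weight

n = len(d)
# The quadratic cost is minimised by the linear system (I + lam*L) x = d,
# where L is the 1-D graph Laplacian of the pixel neighbour graph.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                        # free boundaries
x = np.linalg.solve(np.eye(n) + lam * L, d)

# Solving all pixels jointly suppresses the independent per-pixel noise.
print(np.abs(d - true_depth).mean(), np.abs(x - true_depth).mean())
```

The key point the slide makes survives the simplification: requiring the whole depth map to be smooth couples every pixel's estimate to its neighbours, so the solution is found jointly rather than pixel by pixel.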
SLAM State Vector and Covariance
Represents a joint Gaussian distribution over all uncertain parameters.
Each match provides a measurement of the relative position of the sensor and feature.
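A minimal 1-D sketch of how such a state vector and covariance are updated by one relative measurement, using a standard Kalman update; all numbers are hypothetical.

```python
import numpy as np

# Joint SLAM state: robot position and one map feature, with a joint
# Gaussian covariance P that includes their correlation (hypothetical).
state = np.array([0.0, 2.0])          # [robot, feature]
P = np.array([[0.5, 0.2],
              [0.2, 1.0]])

# Each feature match measures the relative position z = feature - robot.
H = np.array([[-1.0, 1.0]])           # measurement Jacobian
R = np.array([[0.1]])                 # measurement noise variance
z = np.array([2.3])                   # observed relative position

# Standard Kalman update: fuse the measurement into state and covariance.
innovation = z - H @ state
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
state = state + (K @ innovation).ravel()
P = (np.eye(2) - K @ H) @ P

print(state)  # both robot and feature estimates shift
print(P)      # both variances shrink; the correlation grows
```

Because the measurement is relative, it updates robot and feature together and strengthens the correlation between them, which is the characteristic structure of the SLAM covariance matrix.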
See also modern re-implementation by Hanme Kim, SceneLib2.
Dense Tracking Applications and Evaluation
(Visualisation by Jacek Zienkiewicz)
Dyson Ground Truth System, 2013