
Transcript of Hand-held scanning of surface details

Hand-held Scanning of Surface Details
MSc Final Project 2013
Computer Graphics, Vision and Imaging
University College London
(image source: 4DDynamics)

Problem definition
Usage / Motivation
Related work
Existing solutions
Evaluation methods, Time plan

What is 3D scanning?
(image sources: 3dvisa.cch.kcl.ac.uk, emeraldinsight.com, youtu.be/watch?v=bswUKHVnoWA, pages.shanti.virginia.edu/medianet, 3dfocus.co.uk, usinenouvelle.com, dinavian.com)

What are Surface Details?
Active depth
Time-of-flight
3D geometry
Reconstruction from priors
Editing, Smoothing
source: thingiverse.com Existing solutions KinectFusion Photomodeler Autodesk Catch VSFM Microsoft Photosynth Thank You ! Timeplan References Google
PhotoTours, Google MapsGL
"Reconstructing the World's Museums"
Microsoft - at TechFest '13
"3D Object Reconstruction and Recognition"
KinectFusion into Windows SDK
University of Washington
Photocity, PointCraft
3D printing:
Makerbot Digitizer, Hypr3D.com, Thingverse.com source: youtube.com/watch?v=quGhaggn3cQ Existing solutions KinectFusion Photomodeler Autodesk Catch VSFM Microsoft Photosynth Existing solutions KinectFusion Photomodeler Autodesk Catch VSFM Microsoft Photosynth pocketnow.com Existing solutions KinectFusion Photomodeler Autodesk Catch VSFM + CMVS Microsoft Photosynth Existing solutions KinectFusion Photomodeler Autodesk Catch VSFM Microsoft Photosynth January - March:
Already done some testing

Early June:
Create tool-chain pipeline
Plan solution theory and design algorithms
Late June - Early August:
Implementation
Evaluation
Early August - September 1st week:
Write up, Submit
September 2nd week:
Prepare presentation d2t1xqejof9utc.cloudfront.net Usage / Motivation Medicine - 3D body scanning, prostheses on3dprinting.com photomodeler.com Usage / Motivation Medicine - 3D body scanning, prostheses
Education
Preservation of cultural heritage 3dprototyping.com.au Usage / Motivation Medicine - 3D body scanning, prostheses
Education
Preservation of cultural heritage
Additive manufacturing, Rapid prototyping www.creaform3d.com/ graphics.stanford.edu/projects/mich Usage / Motivation Medicine - 3D body scanning, prostheses
Education
Preservation of cultural heritage
Additive manufacturing, Rapid prototyping
Custom fit design Toshifumi Kitamura / AFP Usage / Motivation Medicine - 3D body scanning, prostheses
Education
Preservation of cultural heritage
Additive manufacturing, Rapid prototyping
Custom fit design
Forensics
Film, animation and special effects
Scene interaction with particle systems
"...highly accurate bump maps of real surfaces..." photomodeler.com Method olegalexander.com Makerbot Digitizer, source: mashable.com research.microsoft.com Methods
Hand-held acquisition
Offline refinement
Static scenes
Seeded with structure from motion (SFM)
Indoors and out [Lowe 2004], [Wu 2007], [Wu et al., 2011], [Furukawa et al., 2010], [Furukawa and Ponce, 2007] photosynth.net homes.cs.washington.edu/~ccwu/vsfm/ [Snavely et al. 2006] [Izadi et al., 2011], [Newcombe et al., 2011] 123dapp.com photomodeler.com Related work Active lighting Related work Related work Related work Result in 3D: Ground truth: Input: Related work
Sensor fusion What are Surface Details? [Diebel and Thrun, 2006] Problem definition
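
The super-resolution and sensor-fusion entries above share one idea: a coarse depth map is refined using a higher-resolution signal. As a hedged illustration (not part of the presentation), the sketch below implements joint bilateral upsampling in Python/NumPy in the spirit of [Yang et al., 2007]: low-resolution depth is upsampled with weights that respect both spatial distance and colour differences in a high-resolution RGB guide image. The function name and parameter values are illustrative only.

    import numpy as np

    def joint_bilateral_upsample(depth_lo, rgb_hi, sigma_space=2.0, sigma_color=0.1, radius=4):
        """depth_lo: (h, w) low-res depth; rgb_hi: (H, W, 3) high-res guide image in [0, 1].
        Returns an (H, W) upsampled depth map. Brute-force loops, for clarity only."""
        H, W, _ = rgb_hi.shape
        h, w = depth_lo.shape
        sy, sx = h / H, w / W                      # high-res -> low-res coordinate scale
        out = np.zeros((H, W), dtype=np.float64)
        for y in range(H):
            for x in range(W):
                acc, norm = 0.0, 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if not (0 <= yy < H and 0 <= xx < W):
                            continue
                        # nearest low-res depth sample under the high-res neighbour
                        d = depth_lo[min(int(yy * sy), h - 1), min(int(xx * sx), w - 1)]
                        w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_space ** 2))
                        dc = rgb_hi[y, x] - rgb_hi[yy, xx]
                        w_c = np.exp(-np.dot(dc, dc) / (2 * sigma_color ** 2))
                        acc += w_s * w_c * d
                        norm += w_s * w_c
                out[y, x] = acc / norm
        return out

A practical implementation would vectorise or approximate this filter; the explicit loops only keep the spatial and colour weighting visible, which is why depth edges end up following colour edges in the guide image.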

Existing solutions
KinectFusion [Izadi et al., 2011], [Newcombe et al., 2011]
PhotoModeler (photomodeler.com)
Autodesk 123D Catch (123dapp.com)
VSFM + CMVS (VisualSFM, homes.cs.washington.edu/~ccwu/vsfm/)
Microsoft Photosynth (photosynth.net) [Snavely et al., 2006]

Google: PhotoTours, Google MapsGL, "Reconstructing the World's Museums"
Microsoft, at TechFest '13: "3D Object Reconstruction and Recognition", KinectFusion into the Windows SDK
University of Washington: PhotoCity, PointCraft
3D printing: MakerBot Digitizer, Hypr3D.com, Thingiverse.com
(image sources: youtube.com/watch?v=quGhaggn3cQ, pocketnow.com, mashable.com)

Methods
Hand-held acquisition
Offline refinement
Static scenes
Seeded with structure from motion (SFM) [Lowe, 2004], [Wu, 2007], [Wu et al., 2011], [Furukawa et al., 2010], [Furukawa and Ponce, 2007]
Indoors and out
Calibration: IMU, Gyroscope, Magnetometer, GPS
Active depth: Infrared
Extra data: High-resolution RGB, Shape from shading, Active lighting, Training examples
[Nehab et al., 2005]
(image sources: olegalexander.com, research.microsoft.com)
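
To make the "seeded with structure from motion" step concrete, here is a minimal two-view SfM seed written with OpenCV (version 4.4 or later assumed): SIFT features [Lowe, 2004], ratio-test matching, a RANSAC essential-matrix estimate and triangulation of an initial sparse point cloud. It is a sketch, not the project's actual pipeline; the image file names and the intrinsic matrix K are placeholders.

    import cv2
    import numpy as np

    K = np.array([[1000.0, 0.0, 640.0],          # placeholder pinhole intrinsics
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input views
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Detect and describe SIFT keypoints in both views.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2. Match descriptors and keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 3. Estimate the essential matrix with RANSAC and recover the relative pose.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # 4. Triangulate an initial sparse point cloud that later stages could refine.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64), pts2.T.astype(np.float64))
    points_3d = (X_h[:3] / X_h[3]).T
    print(len(good), "matches,", int((inlier_mask > 0).sum()), "inliers,", points_3d.shape[0], "seed points")

Tools such as VisualSFM automate exactly this kind of seeding over many views before dense multi-view stereo; the sketch only shows the two-view core.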

Evaluation methods
Measure of success?
Qualitative
Quantitative
Strecha dataset (http://cvlabwww.epfl.ch/~strecha/testimages.html)
Rendered, synthetic data
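
One plausible quantitative setup (an assumption, not the presentation's stated protocol) is to compare a reconstructed depth map against ground truth such as rendered synthetic depth or Strecha-style reference geometry, for example:

    import numpy as np

    def depth_errors(pred, gt, valid_mask=None):
        """pred, gt: (H, W) depth maps in the same units; valid_mask: optional boolean mask
        of pixels with valid ground truth. Returns RMSE, mean absolute error, and the
        fraction of pixels within 1% of the ground-truth depth."""
        if valid_mask is None:
            valid_mask = np.isfinite(gt) & (gt > 0)
        d = pred[valid_mask] - gt[valid_mask]
        rmse = float(np.sqrt(np.mean(d ** 2)))
        mae = float(np.mean(np.abs(d)))
        within_1pct = float(np.mean(np.abs(d) <= 0.01 * gt[valid_mask]))
        return rmse, mae, within_1pct

    # Synthetic stand-in for a rendered ground-truth depth map and a noisy reconstruction.
    rng = np.random.default_rng(0)
    gt = 2.0 + rng.random((480, 640))                     # "ground truth" depths in metres
    pred = gt + rng.normal(scale=0.01, size=gt.shape)     # simulated reconstruction error
    print(depth_errors(pred, gt))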

Time plan
January - March: Already done some testing
Early June: Create tool-chain pipeline; Plan solution theory and design algorithms
Late June - Early August: Implementation; Evaluation
Early August - September 1st week: Write up, Submit
September 2nd week: Prepare presentation

Thank You!
Aron Monszpart
aron.monszpart.12@ucl.ac.uk
supervised by: Dr. Gabriel Brostow
UCL Department of Computer Science
20/03/2013

References
[Brostow et al., 2011] Brostow, G. J., Hernández, C., Vogiatzis, G., Stenger, B., and Cipolla, R. (2011). Video normals from colored lights. IEEE Trans. Pattern Anal. Mach. Intell., 33(10):2104–2114.
[Diebel and Thrun, 2005] Diebel, J. and Thrun, S. (2005). An application of Markov random fields to range sensing. In NIPS.
[Endres et al., 2012] Endres, F., Hess, J., Engelhard, N., Sturm, J., Cremers, D., and Burgard, W. (2012). An evaluation of the RGB-D SLAM system. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA).
[Fuhrmann and Goesele, 2011] Fuhrmann, S. and Goesele, M. (2011). Fusion of depth maps with multiple scales. ACM Trans. Graph., 30(6):148:1–148:8.
[Furukawa et al., 2010] Furukawa, Y., Curless, B., Seitz, S. M., and Szeliski, R. (2010). Towards internet-scale multi-view stereo. In CVPR, pages 1434–1441.
[Furukawa and Ponce, 2007] Furukawa, Y. and Ponce, J. (2007). Accurate, dense, and robust multi-view stereopsis. In CVPR.
[HaCohen et al., 2010] HaCohen, Y., Fattal, R., and Lischinski, D. (2010). Image upsampling via texture hallucination. In IEEE International Conference on Computational Photography (ICCP), pages 1–8.
[Hernández et al., 2007] Hernández, C., Vogiatzis, G., Brostow, G. J., Stenger, B., and Cipolla, R. (2007). Non-rigid photometric stereo with colored lights. In Proc. of the 11th IEEE Intl. Conf. on Comp. Vision (ICCV).
[Hiep et al., 2009] Hiep, V. H., Keriven, R., Labatut, P., and Pons, J.-P. (2009). Towards high-resolution large-scale multi-view stereo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1430–1437.
[Izadi et al., 2011] Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R. A., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A. J., and Fitzgibbon, A. W. (2011). KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In UIST, pages 559–568.
[Jancosek and Pajdla, 2011] Jancosek, M. and Pajdla, T. (2011). Multi-view reconstruction preserving weakly-supported surfaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3121–3128.
[Lowe, 2004] Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, pages 91–110.
[Mac Aodha et al., 2012] Mac Aodha, O., Campbell, N. D., Nair, A., and Brostow, G. J. (2012). Patch based synthesis for single depth image super-resolution. In ECCV (3), pages 71–84.
[Nehab et al., 2005] Nehab, D., Rusinkiewicz, S., Davis, J., and Ramamoorthi, R. (2005). Efficiently combining positions and normals for precise 3d geometry. In ACM SIGGRAPH 2005 Papers, pages 536–543.
[Newcombe et al., 2011] Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., Kohli, P., Shotton, J., Hodges, S., and Fitzgibbon, A. W. (2011). KinectFusion: Real-time dense surface mapping and tracking. In ISMAR, pages 127–136.
[Rusinkiewicz et al., 2002] Rusinkiewicz, S., Hall-Holt, O. A., and Levoy, M. (2002). Real-time 3d model acquisition. ACM Trans. Graph., 21(3):438–446.
[Schuon et al., 2009] Schuon, S., Theobalt, C., Davis, J., and Thrun, S. (2009). LidarBoost: Depth superresolution for ToF 3D shape scanning. In CVPR, pages 343–350.
[Snavely et al., 2006] Snavely, N., Seitz, S. M., and Szeliski, R. (2006). Photo tourism: exploring photo collections in 3d. In ACM Trans. Graph., pages 835–846.
[Sun et al., 2010] Sun, J., Zhu, J., and Tappen, M. (2010). Context-constrained hallucination for image super-resolution. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 231–238.
[Tappen and Liu, 2012] Tappen, M. F. and Liu, C. (2012). A bayesian approach to alignment-based image hallucination. In ECCV, pages 236–249.
[Tuite et al., 2011] Tuite, K., Snavely, N., Hsiao, D.-y., Tabing, N., and Popovic, Z. (2011). Photocity: training experts at large-scale image acquisition through a competitive game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, pages 1383–1392.
[Wu, 2007] Wu, C. (2007). SiftGPU: A GPU implementation of scale invariant feature transform (SIFT). http://cs.unc.edu/~ccwu/siftgpu. [Online; accessed 19 March 2013].
[Wu et al., 2011] Wu, C., Agarwal, S., Curless, B., and Seitz, S. M. (2011). Multicore bundle adjustment. In CVPR, pages 3057–3064.
[Yang et al., 2007] Yang, Q., Yang, R., Davis, J., and Nistér, D. (2007). Spatial-depth super resolution for range images. In CVPR.

All web addresses in this presentation were accessed on 17/03/2013.