Acquisition of RGB-D images: methods

DOI: 10.14313/PAR_203/82

Maciej Stefańczyk, Tomasz Kornuta
Instytut Automatyki i Informatyki Stosowanej, Politechnika Warszawska


Abstract

The two-part article is devoted to sensors enabling the acquisition of point clouds and depth maps. This first part focuses on the three main measurement methods: stereovision, structured light and time of flight, as those most commonly used in robotics. Besides the principle of operation of each method, their properties, computational complexity and potential applications are also analysed.

Keywords

point cloud, RGB-D sensor, time of flight, depth map, RGB-D image, stereovision, structured light

Acquisition of RGB-D images: methods

Abstract

The two-part article is devoted to sensors enabling the acquisition of depth information from the environment. This first part concentrates on three main methods of depth measurement: stereovision, structured light and time of flight (ToF). Along with the principle of operation of each method, we also discuss their properties, analyse the complexity of the required computations and present potential applications.
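
The three measurement principles named above rest on simple geometric and timing relations. Below is a minimal, illustrative Python sketch (not code from the article; all identifiers and numeric values are assumed for the example) of the standard formulas: depth from stereo disparity and depth from the time of flight of a light pulse. Structured-light sensors rely on the same triangulation as stereovision, with a projector taking the place of one of the cameras.

    # Illustrative only: standard depth relations behind stereovision and
    # time-of-flight measurement. Function names and example values are assumed.
    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        # Rectified stereo pair: Z = f * B / d
        return focal_px * baseline_m / disparity_px

    def depth_from_time_of_flight(round_trip_s: float) -> float:
        # The pulse travels to the surface and back, so Z = c * t / 2
        return SPEED_OF_LIGHT * round_trip_s / 2.0

    # Example: f = 525 px, B = 7.5 cm, d = 30 px  ->  Z is about 1.31 m
    print(depth_from_disparity(525.0, 0.075, 30.0))
    # Example: a 10 ns round trip corresponds to Z of about 1.5 m
    print(depth_from_time_of_flight(10e-9))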

Keywords

depth map, point cloud, RGB-D image, RGB-D sensor, stereovision, structured light, time-of-flight
