Point Cloud Filtering Using 2D-3D Matching Method

DOI: 10.14313/PAR_244/15

Karol Rzepka, Michał Kulczykowski, Paweł Wittels
Avicon, Al. Jerozolimskie 202, 02-486 Warszawa

Abstract

Precision is a key feature in the development of 3D measurement systems. Time-of-Flight cameras used for such measurements produce point clouds containing a large amount of noise, which can make them of little use for further analysis. To address this problem, we propose a new method for precise point cloud filtering. We use 2D information from a camera with a telecentric lens to remove outlier points from 3D measurements recorded with a Time-of-Flight camera. The telecentric camera provides the most precise information about the object's contour, which translates into precise filtering of the object's 3D reconstruction.
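As a rough illustration of this idea, the sketch below extracts the object silhouette from the telecentric image (Canny edges filled into a mask) and keeps only those Time-of-Flight points whose projection falls inside it. It is a minimal sketch, not the method described in the paper: the function name, the calibration parameters R and t, the pixels-per-millimetre scale, the pixel offset, and the simple orthographic projection model for the telecentric lens are all assumptions.

```python
import cv2
import numpy as np


def filter_points_with_silhouette(points_3d, image_2d, R, t, scale, offset):
    """Illustrative only: drop ToF points that project outside the 2D silhouette.

    points_3d : (N, 3) float array, points in the ToF camera frame
    image_2d  : 8-bit grayscale image from the telecentric camera
    R, t      : assumed rotation (3x3) and translation (3,) from the ToF frame
                to the telecentric camera frame (from prior calibration)
    scale     : assumed magnification of the telecentric lens, in pixels per mm
    offset    : assumed (2,) pixel offset of the projection origin in the image
    """
    # Object silhouette: Canny edges -> external contours -> filled binary mask.
    edges = cv2.Canny(image_2d, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(image_2d.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, contours, -1, color=255, thickness=cv2.FILLED)

    # Bring the ToF points into the telecentric camera frame.
    points_cam = points_3d @ R.T + t

    # Telecentric (orthographic) projection: the image position depends only on
    # X and Y scaled by the lens magnification; depth does not change it.
    uv = np.rint(points_cam[:, :2] * scale + offset).astype(int)

    # Keep points that land inside the image bounds and inside the silhouette mask.
    h, w = mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    keep = np.zeros(len(points_3d), dtype=bool)
    keep[inside] = mask[uv[inside, 1], uv[inside, 0]] > 0
    return points_3d[keep]
```

The telecentric lens is modelled here as an orthographic projection because its magnification does not vary with depth; in practice, the alignment between the ToF and 2D cameras would come from a calibration or 2D-3D matching step such as the one the paper describes.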

Keywords

2D matching, 3D point clouds, point cloud filtering

Bibliography

  1. Al-Yoonus M., Abdullah M.F.L., Jawad M.S., Al-Shargie F., Enhance quality control management for sensitive industrial products using 2D/3D image processing algorithms, [In:] Electrical Power, Electronics, Communications, Control and Informatics Seminar (EECCIS), 2014, 126–131, DOI: 10.1109/EECCIS.2014.7003732.
  2. Baek E.-T., Yang H.-J., Kim S., Lee G., Jeong H., Distance error correction in time-of-flight cameras using asynchronous integration time, “Sensors”, Vol. 20, No. 4, 2020, DOI: 10.3390/s20041156.
  3. Balta H., Velagic J., Bosschaerts W., De Cubber G., Siciliano B., Fast statistical outlier removal based method for large 3D point clouds of outdoor environments, “IFAC-PapersOnLine”, Vol. 51, No. 22, 2018, 348–353, 12th IFAC Symposium on Robot Control SYROCO 2018, DOI: 10.1016/j.ifacol.2018.11.566.
  4. Bay H., Ess A., Tuytelaars T., Van Gool L., Speeded-up robust features (SURF), “Computer Vision and Image Understanding”, Vol. 110, No. 3, 2008, 346–359, DOI: 10.1016/j.cviu.2007.09.014.
  5. Besl P., McKay N.D., A method for registration of 3-D shapes, “IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 14, No. 2, 1992, 239–256, DOI: 10.1109/34.121791.
  6. Canny J., A computational approach to edge detection, “IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. PAMI-8, No. 6, 1986, 679–698, DOI: 10.1109/TPAMI.1986.4767851.
  7. DeTone D., Malisiewicz T., Rabinovich A., SuperPoint: Self-supervised interest point detection and description, [In:] The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018.
  8. Duan Y., Yang C., Li H., Low-complexity adaptive radius outlier removal filter based on PCA for lidar point cloud denoising, “Applied Optics”, Vol. 60, No. 20, 2021, E1–E7, DOI: 10.1364/AO.416341.
  9. Fischler M.A., Bolles R.C., Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, “Communications of the ACM”, Vol. 24, No. 6, 1981, 381–395, DOI: 10.1145/358669.358692.
  10. Frank M., Plaue M., Rapp H., Köthe U., Jähne B., Hamprecht F., Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras, “Optical Engineering”, Vol. 48, No. 1, 2009, DOI: 10.1117/1.3070634.
  11. He Y., Liang B., Zou Y., He J., Yang J., Depth errors analysis and correction for time-of-flight (ToF) cameras, “Sensors”, Vol. 17, No. 1, 2017, DOI: 10.3390/s17010092.
  12. Hoegg T., Lefloch D., Kolb A., Time-of-Flight camera based 3D point cloud reconstruction of a car, “Computers in Industry”, Vol. 64, No. 9, 2013, 1099–1114, DOI: 10.1016/j.compind.2013.06.002.
  13. Hu G., Peng Q., Forrest A.R., Mean shift denoising of point-sampled surfaces, “The Visual Computer”, Vol. 22, 2006, 147–157, DOI: 10.1007/s00371-006-0372-0.
  14. Huang H., Li D., Zhang H., Ascher U., Cohen-Or D., Consolidation of unorganized point clouds for surface reconstruction, “ACM Transactions on Graphics”, Vol. 28, No. 5, 2009, DOI: 10.1145/1618452.1618522.
  15. Huhle B., Schairer T., Jenke P., Strasser W., Robust non-local denoising of colored depth data, [In:] IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, DOI: 10.1109/CVPRW.2008.4563158.
  16. Jenke P., Wand M., Bokeloh M., Schilling A., Strasser W., Bayesian point cloud reconstruction, “Computer Graphics Forum”, Vol. 25, No. 3, 2006, 379–388, DOI: 10.1111/j.1467-8659.2006.00957.x.
  17. Johansen S., Juselius K., Maximum likelihood estimation and inference on cointegration–with applications to the demand for money, “Oxford Bulletin of Economics and Statistics”, Vol. 52, No. 2, 1990, 169–210.
  18. Leal E., Leal N., Point cloud denoising using robust principal component analysis, 2006, 51–58, DOI: 10.5220/0001358900510058.
  19. Liao B., Xiao C., Jin L., Efficient Feature-preserving Local Projection Operator for Geometry Reconstruction, [In:] Eurographics 2011 – Short Papers, Avis N., Lefebvre S., eds., The Eurographics Association, DOI: 10.2312/EG2011.short.013-016.
  20. Lipman Y., Cohen-Or D., Levin D., Tal-Ezer H., Parameterization-free projection for geometry reconstruction, “ACM Transactions on Graphics”, Vol. 26, No. 3, 2007, DOI: 10.1145/1276377.1276405.
  21. Lowe D.G., Object recognition from local scale-invariant features, [In:] Proceedings of the 7th IEEE International Conference on Computer Vision, 1999, DOI: 10.1109/ICCV.1999.790410.
  22. Melekhov I., Tiulpin A., Sattler T., Pollefeys M., Rahtu E., Kannala J., DGC-Net: Dense geometric correspondence network, 2018, DOI: 10.48550/arXiv.1810.08393.
  23. Muja M., Lowe D.G., Fast matching of binary features, [In:] Ninth Conference on Computer and Robot Vision, 2012, 404–410, DOI: 10.1109/CRV.2012.60.
  24. Muja M., Lowe D.G., Scalable nearest neighbor algorithms for high dimensional data, “IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 36, No. 11, 2014, DOI: 10.1109/TPAMI.2014.2321376.
  25. Muthukrishnan S., Sahinalp S.C., Simple and practical sequence nearest neighbors with block operations, [In:] Proceedings of the 13th Annual Symposium on Combinatorial Pattern Matching, 2002, 262–278, DOI: 10.5555/647821.736373.
  26. Park J., Kim H., Tai Y.-W., Brown M.S., Kweon I., High quality depth map upsampling for 3D-TOF cameras, [In:] International Conference on Computer Vision, 2011, 1623–1630, DOI: 10.1109/ICCV.2011.6126423.
  27. Revaud J., Weinzaepfel P., Harchaoui Z., Schmid C., DeepMatching: Hierarchical deformable dense matching, “International Journal of Computer Vision”, Vol. 120, 2016, 300–323, DOI: 10.1007/s11263-016-0908-3.
  28. Reynolds M., Doboš J., Peel L., Weyrich T., Brostow G.J., Capturing Time-of-Flight data with confidence, [In:] CVPR 2011, 945–952, DOI: 10.1109/CVPR.2011.5995550.
  29. Rosli N.A.I.M., Ramli A., Mapping bootstrap error for bilateral smoothing on point set, [In:] AIP Conference Proceedings, Vol. 1605, No. 1, 2014, 149–154, DOI: 10.1063/1.4887580.
  30. Rosten E., Drummond T., Fusing points and lines for high performance tracking, [In:] 10th IEEE International Conference on Computer Vision, Vol. 1, 2005, 1508–1515, DOI: 10.1109/ICCV.2005.104.
  31. Rublee E., Rabaud V., Konolige K., Bradski G.R., ORB: An efficient alternative to SIFT or SURF, [In:] International Conference on Computer Vision, 2011, 2564–2571, DOI: 10.1109/ICCV.2011.6126544.
  32. Rusu R.B., Cousins S., 3D is here: Point cloud library (PCL), [In:] IEEE International Conference on Robotics and Automation, 2011, 1–4, DOI: 10.1109/ICRA.2011.5980567.
  33. Schall O., Belyaev A.G., Seidel H.-P., Adaptive feature-preserving non-local denoising of static and time-varying range data, “Computer-Aided Design”, Vol. 40, No. 6, 2008, 701–707, DOI: 10.1016/j.cad.2008.01.011.
  34. Schall O., Belyaev A., Seidel H.-P., Robust filtering of noisy scattered point data, [In:] Proceedings Eurographics/IEEE VGTC Symposium Point-Based Graphics, 2005, 71–144, DOI: 10.1109/PBG.2005.194067.
  35. Wang J., Yu Z., Zhu W., Cao J., Feature-preserving surface reconstruction from unoriented, noisy point data, “Computer Graphics Forum”, Vol. 32, No. 1, 2013, 164–176, DOI: 10.1111/cgf.12006.
  36. Xie H., McDonnell K.T., Qin H., Surface reconstruction of noisy and defective data sets, [In:] IEEE Visualization, 2004, 259–266, DOI: 10.1109/VISUAL.2004.101.
  37. Zaman F., Wong Y.-P., Ng B.-Y., Density-based denoising of point cloud, [In:] Proceeding of 9th International Conference on Robotics, Vision, Signal Processing & Power Applications (ROVISP), 2016, DOI: 10.48550/arXiv.1602.05312.
  38. Zhang D., Lu X., Qin H., He Y., Pointfilter: Point cloud filtering via encoder-decoder modeling, “IEEE Transactions on Visualization and Computer Graphics”, Vol. 27, 2021, 2015–2027, DOI: 10.1109/TVCG.2020.3027069.
  39. Zhao J., Wang Y., Cao Y., Guo M., Huang X., Zhang R., Dou X., Niu X., Cui Y., Wang J., The fusion strategy of 2D and 3D information based on deep learning: A review, “Remote Sensing”, Vol. 13, No. 20, 2021, DOI: 10.3390/rs13204029.