Safe and Reliable Movement of Fast LiDAR-based Self-driving Vehicle

DOI: 10.14313/PAR_253/109

Tomasz Buratowski *, Mariusz Giergiel *, Piotr Wójcicki *, Jerzy Garus **, Rafał Kot ** * AGH University of Krakow, al. A. Mickiewicza 30, 30-059 Krakow, Poland ** Polish Naval Academy, Śmidowicza Street 69, 81-127 Gdynia, Poland

Abstract

Object classification is an important technique that allows autonomous ground vehicles to identify their surrounding environment and plan a safe path. In this paper, a method based on horizontal segmentation is proposed to detect cone-shaped objects in the vehicle's vicinity using a LiDAR sensor. The captured point cloud is divided into five layers based on height information, and the detected objects are divided into two groups, cones and others, using classifiers available in MATLAB toolboxes. To separate the classified conical objects into the four types used to mark the route, a dedicated recognition algorithm was developed and applied. The proposed solution, verified in navigation experiments under real conditions using an unmanned racing car, gave good results: a high classification rate for cone-shaped objects, a short processing time, and a low computational load. The tests also made it possible to diagnose the causes of incorrect object classification. The experimental results thus indicate that the approach presented in this work can be used in real time for autonomous, collision-free driving along marked routes.
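As a rough illustration of the horizontal segmentation step described above, the sketch below splits a LiDAR point cloud into five layers by height. The layer bounds and the `split_into_layers` helper are illustrative assumptions for this example, not the parameters or code used by the authors.

```python
import numpy as np

def split_into_layers(points, n_layers=5, z_min=0.0, z_max=0.5):
    """Split an (N, 3) point cloud into n_layers horizontal slices by z.

    Layer bounds are evenly spaced between z_min and z_max; points outside
    the range fall into the bottom or top layer. All values are illustrative.
    """
    edges = np.linspace(z_min, z_max, n_layers + 1)
    # Assign each point's z coordinate to one of n_layers bins (0..n_layers-1)
    idx = np.digitize(points[:, 2], edges[1:-1])
    return [points[idx == k] for k in range(n_layers)]

# Synthetic cloud standing in for one LiDAR scan around a cone-sized region
rng = np.random.default_rng(0)
cloud = rng.uniform([-1.0, -1.0, 0.0], [1.0, 1.0, 0.5], size=(1000, 3))
layers = split_into_layers(cloud)
```

Per-layer features (e.g., point count and horizontal extent of each slice) could then be fed to a classifier to separate cone-shaped objects from others, in the spirit of the method summarized in the abstract.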

Keywords

autonomous driving, LiDAR, object classification, wheeled ground vehicle

