Multimodal Panoptic Segmentation of 3D Point Clouds
Material type:
- text
- computer
- online resource
- KSP/1000161158
| Item type | Home library | Collection | Status |
|---|---|---|---|
| | OPJGU Sonepat- Campus | E-Books Open Access | Available |
Open Access: unrestricted online access
The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and the point clouds they record are particularly interesting for this challenge, since they provide accurate 3D information about the environment. This work presents a multimodal, deep-learning-based approach to panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.
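As background to the abstract above: panoptic segmentation assigns every point both a semantic class and, for countable "thing" classes, an instance id. The following minimal sketch (not code from this work) illustrates one common convention for packing the two into a single panoptic label, as used by lidar benchmarks such as SemanticKITTI; the class/instance values are illustrative assumptions.

```python
def encode_panoptic(semantic_label: int, instance_id: int) -> int:
    """Pack a per-point semantic class and instance id into one label.

    Convention assumed here (SemanticKITTI-style): the lower 16 bits
    hold the semantic class, the upper 16 bits hold the instance id.
    """
    return (instance_id << 16) | (semantic_label & 0xFFFF)


def decode_panoptic(panoptic_label: int) -> tuple[int, int]:
    """Unpack a panoptic label into (semantic_label, instance_id)."""
    return panoptic_label & 0xFFFF, panoptic_label >> 16


# Hypothetical example: a point of semantic class 10 belonging to instance 3.
label = encode_panoptic(10, 3)
print(decode_panoptic(label))  # (10, 3)
```

Points of "stuff" classes (e.g. road, vegetation) would simply carry instance id 0 under this scheme, so the same encoding covers both semantic and instance output.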
License: Creative Commons CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
English