Indexed in:
Abstract:
6D pose estimation is widely used in robot tasks such as sorting and grasping. RGB-D-based methods have recently achieved remarkable success, but they remain susceptible to heavy occlusion. Our key insight is that the color and geometry information in RGB-D images are complementary, and that the crux of pose estimation under occlusion is fully leveraging both. To this end, we propose a new color and geometry feature fusion module that efficiently exploits these two complementary data sources in RGB-D images. Unlike prior fusion methods, we adopt a two-stage strategy that performs color-depth fusion and local-global fusion successively. Specifically, in the first stage we fuse the color features extracted from the RGB image into the point cloud. In the second stage, we extract local and global features from the fused point cloud using an ASSANet-like network and splice them together to obtain the final fused features. Experiments on the widely used LineMod and YCB-Video datasets show that our method improves prediction accuracy while reducing training time. © 2022 IEEE.
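The two-stage fusion the abstract describes can be sketched in a toy form: stage 1 attaches per-point color features to the point cloud, and stage 2 splices per-point (local) features with a pooled (global) feature. This is a minimal NumPy sketch, not the authors' network; the feature dimensions, the random-projection "extractor," and the max-pooling stand-in for the ASSANet-like backbone are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C_rgb = 1024, 32                      # assumed: number of points, color-feature dim

color_feat = rng.standard_normal((N, C_rgb))  # stand-in for per-point features from an RGB CNN
points = rng.standard_normal((N, 3))          # point cloud lifted from the depth image (xyz)

# Stage 1: color-depth fusion -- append each point's color features to its coordinates.
fused_points = np.concatenate([points, color_feat], axis=1)   # shape (N, 3 + C_rgb)

# Stage 2: local-global fusion -- a toy per-point MLP stands in for the
# ASSANet-like local extractor; channel-wise max-pooling gives a global feature.
W_local = rng.standard_normal((fused_points.shape[1], 64))
local_feat = np.maximum(fused_points @ W_local, 0.0)          # (N, 64), ReLU projection
global_feat = np.repeat(local_feat.max(axis=0, keepdims=True), N, axis=0)  # (N, 64)

# Splice local and global features into the final fused representation.
final_feat = np.concatenate([local_feat, global_feat], axis=1)  # (N, 128)
print(final_feat.shape)
```

The splice at the end is the point of the second stage: each point's descriptor carries both its local neighborhood evidence and scene-level context, which is what helps under heavy occlusion.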
Keywords:
Corresponding author:
Email address:
Source:
Year: 2022
Pages: 83-88
Language: English
Affiliated department: