Indexed in:
Abstract:
To grasp the target object stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and to obtain an intelligent manipulation order, enabling more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and the manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to gather contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods on reasoning about object relationships in object-stacking scenes. The GVMRN model was also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
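The abstract's core idea, propagating per-object features through a GCN and then scoring ordered object pairs for manipulation relationships, can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the feature dimensions, random weights, the `gcn_layer` normalization, and the pair-scoring head are all illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One graph convolution with symmetric normalization:
    # H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def relation_scores(H, V):
    # Score every ordered object pair (i, j) by concatenating the two
    # node embeddings; a real model would use a learned classifier head.
    n = H.shape[0]
    scores = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([H[i], H[j]])
                scores[(i, j)] = float(pair @ V)
    return scores

# Toy scene: 3 detected objects, fully connected object graph.
rng = np.random.default_rng(0)
A = np.ones((3, 3)) - np.eye(3)        # adjacency between detections
X = rng.standard_normal((3, 4))        # stand-in per-object visual features
W = rng.standard_normal((4, 4))        # GCN weight matrix (random here)
H = gcn_layer(A, X, W)                 # context-aware object embeddings
V = rng.standard_normal(8)             # stand-in pair-scoring weights
scores = relation_scores(H, V)         # 6 ordered pairs for 3 objects
```

In the paper, a relationship filtering network would prune unlikely pairs before this scoring step; in the sketch all ordered pairs are scored.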
Keywords:
Corresponding Author:
Email:
Source:
FRONTIERS IN NEUROROBOTICS
ISSN: 1662-5218
Year: 2021
Volume: 15
3.100
JCR@2022
ESI Discipline: COMPUTER SCIENCE
ESI Highly Cited Threshold: 87
JCR Quartile: Q2
Affiliated Department: