Indexed in:
Abstract:
Most man-made indoor and urban scenes are composed of sets of orthogonal and parallel planes. In robotics and computer vision, such scenes are typically represented by the Manhattan-World model. Accurate estimation of the Manhattan Frame, the triplet of orthogonal directions used to represent the Manhattan World, plays an important role in many applications, such as SLAM, scene understanding, and 3D reconstruction. In this paper, a new method is proposed for accurately recovering the Manhattan Frame from a single RGB-D image by using orientation relevance. It first extracts planes from the input RGB-D image. Three orthogonal dominant planes are then determined by introducing the concept of orientation relevance. Finally, the Manhattan Frame is recovered directly from these three orthogonal dominant planes. Experiments on an open dataset validate the proposed method: its overall performance, taking both accuracy and speed into account, is superior to that of state-of-the-art methods. The method is also applied to scene annotation to confirm its applicability. © 2017 IEEE.
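The abstract outlines a three-step pipeline (plane extraction, selection of three orthogonal dominant planes, frame recovery). Below is a minimal sketch of the last two steps, assuming plane normals have already been extracted from the RGB-D image; the paper's orientation-relevance score is approximated here by a generic per-plane weight (e.g. inlier count), and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations

def manhattan_frame_from_planes(normals, weights=None, ortho_tol_deg=10.0):
    """Pick three mutually near-orthogonal dominant plane normals and
    orthonormalize them into a Manhattan Frame rotation matrix.
    `weights` stands in for the paper's orientation-relevance score (assumption)."""
    normals = np.asarray(normals, dtype=float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    if weights is None:
        weights = np.ones(len(normals))

    # a pair is accepted if its angle is within ortho_tol_deg of 90 degrees
    cos_tol = np.cos(np.deg2rad(90.0 - ortho_tol_deg))
    best, best_score = None, -np.inf
    for i, j, k in combinations(range(len(normals)), 3):
        trio = normals[[i, j, k]]
        dots = np.abs([trio[0] @ trio[1], trio[0] @ trio[2], trio[1] @ trio[2]])
        if np.all(dots < cos_tol):
            score = weights[i] + weights[j] + weights[k]
            if score > best_score:
                best, best_score = trio, score
    if best is None:
        return None

    # project the only-approximately-orthogonal triplet onto SO(3) via SVD
    U, _, Vt = np.linalg.svd(best.T)
    if np.linalg.det(U @ Vt) < 0:   # enforce a right-handed frame
        U[:, -1] *= -1
    R = U @ Vt
    return R  # columns are the three Manhattan directions
```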
Keywords:
Corresponding author:
Email address:
Source:
Year: 2017
Pages: 4574-4579
Language: English
Affiliated department: