Abstract:
Visual context between objects is an important cue for object position perception, and how to represent visual context effectively is a key research issue. Some past work introduced task-driven methods for object perception, which led to a large coding quantity. This paper proposes an approach that incorporates a feature-driven mechanism into object-driven context representation for object locating. As an example, the paper discusses how a neuronal network encodes the visual context between feature-salient regions and human eye centers with as little coding quantity as possible. A group of experiments on the efficiency of visual context coding and on object searching is analyzed and discussed, showing that the proposed method effectively decreases the coding quantity and improves object-searching accuracy.