Abstract:
Visual context between objects is an important cue for object position perception, and how to represent this context effectively is a key research issue. Some past work introduced task-driven methods for object perception, which led to a large coding quantity. This paper proposes an approach that incorporates a feature-driven mechanism into object-driven context representation for object locating. As an example, the paper discusses how a neuronal network encodes the visual context between feature-salient regions and human eye centers with as little coding quantity as possible. A group of experiments on the efficiency of visual context coding and on object searching are analyzed and discussed, showing that the proposed method decreases the coding quantity and effectively improves object-searching accuracy.
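The abstract gives no implementation details, but the general idea of compactly coding the spatial context between salient regions and a target center can be sketched. The snippet below is a minimal illustration only, assuming salient regions are reduced to centroids and the context is coded as quantized offset vectors from each centroid to the target; the function names and the quantization step are hypothetical and not taken from the paper.

```python
import numpy as np

def encode_context(salient_centroids, target_center, step=8):
    """Code the target position relative to each salient-region centroid.

    Each context code is the offset (dx, dy) from a centroid to the target,
    quantized to a grid of `step` pixels so fewer bits are needed per region.
    """
    centroids = np.asarray(salient_centroids, dtype=float)
    offsets = np.asarray(target_center, dtype=float) - centroids
    return np.round(offsets / step).astype(int)  # one small code per region

def predict_target(salient_centroids, codes, step=8):
    """Recover an estimate of the target position from the stored codes."""
    centroids = np.asarray(salient_centroids, dtype=float)
    votes = centroids + codes * step   # each region votes for a position
    return votes.mean(axis=0)          # combine votes by averaging

# Toy usage: three salient regions and a known eye-center position.
regions = [(40, 60), (120, 55), (85, 140)]
eye_center = (90, 70)
codes = encode_context(regions, eye_center)
print(codes)                           # compact context representation
print(predict_target(regions, codes))  # approximate reconstruction of (90, 70)
```

Coarser quantization (a larger `step`) reduces the coding quantity at the cost of localization accuracy, which mirrors the trade-off the abstract discusses.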
Source: 2008 IEEE International Joint Conference on Neural Networks, Vols 1-8
ISSN: 2161-4393
Year: 2008
Page: 3800-
Language: English