
Author:

Liang, Fangfang | Duan, Lijuan | Ma, Wei | Qiao, Yuanhua | Cai, Zhi | Miao, Jun | Ye, Qixiang

Indexed by:

EI; Scopus; SCIE

Abstract:

Many convolutional neural network (CNN)-based approaches for stereoscopic salient object detection involve fusing either low-level or high-level features from the color and disparity channels. The former method generally produces incomplete objects, whereas the latter tends to blur object boundaries. In this paper, a coupled CNN (CoCNN) is proposed to fuse color and disparity features from low to high layers in a unified deep model. It consists of three parts: two parallel multilinear span networks, a cascaded span network and a conditional random field module. We first apply the multilinear span network to compute multiscale saliency predictions based on RGB and disparity individually. Each prediction, learned under separate supervision, utilizes the multilevel features extracted by the multilinear span network. Second, a proposed cascaded span network, under deep supervision, is designed as a coupling unit to fuse the two feature streams at each scale and integrate all fused features in a supervised manner to construct a saliency map. Finally, we formulate a constraint in the form of a conditional random field model to refine the saliency map based on the a priori assumption that objects with similar saliency values have similar colors and disparities. Experiments conducted on two commonly used datasets demonstrate that the proposed method outperforms previous state-of-the-art methods. (C) 2020 Published by Elsevier Ltd.
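To make the described pipeline easier to picture, a minimal PyTorch sketch of the two-stream fusion scheme follows. The module names (SpanBranch, CoupledFusion), layer widths, and three-scale setup are illustrative assumptions, not the paper's actual multilinear and cascaded span networks, and the deep-supervision losses and the conditional random field refinement are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanBranch(nn.Module):
    # Stand-in for one multilinear span network: extracts multi-scale features
    # from a single stream (RGB or disparity) and emits a per-scale saliency
    # prediction that could receive its own supervision signal.
    def __init__(self, in_ch, widths=(16, 32, 64)):
        super().__init__()
        blocks, prev = [], in_ch
        for w in widths:
            blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2)))
            prev = w
        self.blocks = nn.ModuleList(blocks)
        self.heads = nn.ModuleList([nn.Conv2d(w, 1, 1) for w in widths])

    def forward(self, x):
        feats, preds = [], []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            feats.append(x)
            preds.append(torch.sigmoid(head(x)))   # per-scale saliency map
        return feats, preds

class CoupledFusion(nn.Module):
    # Stand-in for the cascaded span network: fuses the two feature streams at
    # each scale, then integrates all fused scales into one saliency map.
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.fuse = nn.ModuleList([nn.Conv2d(2 * w, w, 3, padding=1) for w in widths])
        self.out = nn.Conv2d(sum(widths), 1, 1)

    def forward(self, rgb_feats, disp_feats, size):
        fused = []
        for conv, a, b in zip(self.fuse, rgb_feats, disp_feats):
            m = F.relu(conv(torch.cat([a, b], dim=1)))            # per-scale fusion
            fused.append(F.interpolate(m, size=size, mode='bilinear',
                                       align_corners=False))      # back to input size
        return torch.sigmoid(self.out(torch.cat(fused, dim=1)))   # integrated map

if __name__ == "__main__":
    rgb = torch.randn(1, 3, 128, 128)        # color image
    disparity = torch.randn(1, 1, 128, 128)  # disparity map
    rgb_feats, _ = SpanBranch(3)(rgb)
    disp_feats, _ = SpanBranch(1)(disparity)
    saliency = CoupledFusion()(rgb_feats, disp_feats, size=rgb.shape[-2:])
    print(saliency.shape)   # torch.Size([1, 1, 128, 128])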

Keyword:

Salient object detection; Coupled CNN; Cascaded span network; Stereoscopic images

Author Community:

  • [ 1 ] [Liang, Fangfang]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 2 ] [Duan, Lijuan]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 3 ] [Ma, Wei]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 4 ] [Cai, Zhi]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 5 ] [Duan, Lijuan]Beijing Key Lab Trusted Comp, Beijing, Peoples R China
  • [ 6 ] [Ma, Wei]Beijing Key Lab Trusted Comp, Beijing, Peoples R China
  • [ 7 ] [Duan, Lijuan]Natl Engn Lab Crit Technol Informat Secur Classif, Beijing, Peoples R China
  • [ 8 ] [Ma, Wei]Natl Engn Lab Crit Technol Informat Secur Classif, Beijing, Peoples R China
  • [ 9 ] [Qiao, Yuanhua]Beijing Univ Technol, Coll Appl Sci, Beijing, Peoples R China
  • [ 10 ] [Cai, Zhi]Beijing Key Lab Integrat & Anal Large Scale Strea, Beijing, Peoples R China
  • [ 11 ] [Miao, Jun]Beijing Informat Sci & Technol Univ, Sch Comp Sci, Beijing Key Lab Internet Culture & Digital Dissem, Beijing, Peoples R China
  • [ 12 ] [Ye, Qixiang]Univ Chinese Acad Sci, Beijing, Peoples R China

Reprint Author's Address:

  • [Duan, Lijuan] 100 Pingleyuan, Beijing 100124, Peoples R China



Source:

PATTERN RECOGNITION

ISSN: 0031-3203

Year: 2020

Volume: 104

Impact Factor (JCR 2022): 8.000

ESI Discipline: ENGINEERING;

ESI HC Threshold: 115

Cited Count:

WoS CC Cited Count: 12

SCOPUS Cited Count: 14

ESI Highly Cited Papers on the List: 0


