
Author:

Li, Mengran | Zhang, Ronghui | Zhang, Yong | Piao, Xinglin | Zhao, Shiyu | Yin, Baocai

Indexed by:

EI; Scopus; SCIE

Abstract:

Describing an object from multiple perspectives often yields incomplete data representations. Learning consistent representations for missing data across multiple views has therefore become a central problem in Incomplete Multi-view Representation Learning (IMRL). In recent years, strategies such as subspace learning, matrix decomposition, and deep learning have produced numerous IMRL methods. This article focuses on two main challenges in IMRL: first, effectively integrating intra-view similarity and contextual structure into a unified framework; second, effectively facilitating information exchange and fusion across multiple views. To address these challenges, we propose a deep learning approach, the Structural Contrastive Auto-Encoder (SCAE). SCAE comprises two major components: intra-view structural representation learning and inter-view contrastive representation learning. The former captures intra-view similarity by minimizing the Dirichlet energy of the feature matrix, while also applying spatial dispersion regularization to capture intra-view contextual structure. The latter maximizes the mutual information of inter-view representations, facilitating information exchange and fusion across views. Experimental results demonstrate that our approach significantly improves model accuracy and robustly addresses IMRL problems. The code is available at https://github.com/limengran98/SCAE.
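The Dirichlet energy term mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it only shows the standard quantity being minimized: for a feature matrix X (n samples × d features) and a symmetric similarity graph with weighted adjacency W, the Dirichlet energy is tr(XᵀLX), where L = D − W is the graph Laplacian. Minimizing it pulls the representations of similar (strongly connected) samples together. The example data below is synthetic.

```python
import numpy as np

def dirichlet_energy(X, W):
    """Dirichlet energy tr(X^T L X) with unnormalized Laplacian L = D - W."""
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian
    return np.trace(X.T @ L @ X)

def dirichlet_energy_pairwise(X, W):
    """Equivalent pairwise form: 0.5 * sum_ij W_ij * ||x_i - x_j||^2."""
    diffs = X[:, None, :] - X[None, :, :]
    return 0.5 * np.sum(W * np.sum(diffs**2, axis=-1))

# Synthetic data: 5 samples, 3 features, random symmetric similarity graph.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W = rng.random((5, 5))
W = (W + W.T) / 2            # symmetrize the weights
np.fill_diagonal(W, 0.0)     # no self-loops

print(np.isclose(dirichlet_energy(X, W), dirichlet_energy_pairwise(X, W)))
```

The pairwise form makes the smoothing effect explicit: each edge weight W_ij penalizes the squared distance between the two connected feature vectors, so gradient descent on this term drives connected samples toward similar representations.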

Keyword:

MC-VAE; Dirichlet energy; mutual information maximization; incomplete multi-view representation learning; contrastive learning

Author Community:

  • [ 1 ] [Li, Mengran]Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangdong Prov Key Lab Intelligent Transport Syst, Guangzhou 510006, Peoples R China
  • [ 2 ] [Zhang, Ronghui]Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangdong Prov Key Lab Intelligent Transport Syst, Guangzhou 510006, Peoples R China
  • [ 3 ] [Zhang, Yong]Beijing Univ Technol, Beijing Inst Artificial Intelligence, Dept Informat Sci, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 4 ] [Piao, Xinglin]Beijing Univ Technol, Beijing Inst Artificial Intelligence, Dept Informat Sci, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 5 ] [Zhao, Shiyu]Beijing Univ Technol, Beijing Inst Artificial Intelligence, Dept Informat Sci, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 6 ] [Yin, Baocai]Beijing Univ Technol, Beijing Inst Artificial Intelligence, Dept Informat Sci, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China

Reprint Author's Address:

  • [Zhang, Yong]Beijing Univ Technol, Beijing Inst Artificial Intelligence, Dept Informat Sci, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China


Source:

ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS

ISSN: 1551-6857

Year: 2024

Issue: 9

Volume: 20

Impact Factor: 5.100 (JCR@2022)

Cited Count:

WoS CC Cited Count: 55

SCOPUS Cited Count: 5

ESI Highly Cited Papers on the List: 0

