Indexed in:
Abstract:
Content curation social networks (CCSNs), which provide users a platform to share their interests through multimedia information, are among the most rapidly growing social networks in recent years. Since large-scale multimodal data are generated by CCSN users, learning multimodal representations of content has become key to the progress of many applications, such as user interest analysis and recommender systems for curation networks. Learning representations for CCSNs faces a vital challenge: the sparsity of multimodal data. Most existing approaches struggle to learn effective representations for multimodal CCSNs because they do not provide a solution for modeling sparse and noisy multimodal data. In this paper, we propose a two-step approach to learn accurate multimodal representations from sparse multimodal data. First, we propose a novel Board-Image-Word (BIW) graph to model the multimodal data. Benefiting from the unique board-image relation on CCSNs, embeddings of images and texts that encode semantic relations are learned from the network topology of the BIW graph. In the second step, a deep vision model with a modified loss function is trained by minimizing the distance between the visual features of contents and their corresponding semantic relation embeddings, so as to learn representations that incorporate both visual information and graph-based semantic relations. Experiments on a dataset from Huaban.com demonstrate that, under the circumstance of a sparser text modality, our method significantly outperforms multimodal DBN, DBM, and unimodal representation-learning methods on pin classification and board recommendation tasks. © 2019 IEEE.
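The abstract's second step can be read as fitting a vision model to precomputed graph embeddings. Below is a minimal sketch of that idea only, assuming a squared-L2 distance and a ResNet-18 backbone; the paper's actual "modified loss function", backbone, and embedding dimension are not specified in the abstract, and the class/function names here are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 2 (as described in the abstract): train a deep vision model so that the
# visual feature of each pin image moves toward its corresponding semantic
# relation embedding learned from the BIW graph in step 1.
class VisualToSemantic(nn.Module):
    def __init__(self, embed_dim=128):  # embed_dim is an assumed value
        super().__init__()
        backbone = models.resnet18(weights=None)  # backbone choice is an assumption
        backbone.fc = nn.Identity()               # expose 512-d visual features
        self.backbone = backbone
        self.proj = nn.Linear(512, embed_dim)     # map into the graph-embedding space

    def forward(self, images):
        return self.proj(self.backbone(images))

def train_step(model, optimizer, images, graph_embeddings):
    """One update minimizing the distance between visual features and their
    corresponding BIW-graph embeddings (squared L2 used here as a stand-in
    for the paper's modified loss)."""
    optimizer.zero_grad()
    visual = model(images)                                    # (batch, embed_dim)
    loss = ((visual - graph_embeddings) ** 2).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, the projected visual feature would serve as the multimodal representation of a pin, usable even when its text modality is sparse or missing.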
Keywords:
Corresponding author:
Email address: