Indexed in:
Abstract:
RGBT target tracking has recently gained popularity owing to the complementary information provided by RGB and thermal images. Although numerous RGBT tracking methods have been proposed, effectively exploiting dual-modality information remains challenging. To address this problem, we design a dual-modality feature extraction network that extracts both common and modality-specific features. For the modality-specific features, we design two separate feature extraction networks that learn the independent information of each modality. For the common features, we propose a common feature extraction network based on graph attention, which learns the information shared between the two modalities. Experiments on the RGBT234 and LasHeR datasets show that our proposed method performs well. © 2022 ACM.
Keywords:
Corresponding author:
Email address:
Source:
Year: 2022
Pages: 248-253
Language: English
Affiliated department:
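
The abstract above describes a dual-modality feature extraction network with two modality-specific branches and a graph-attention-based common-feature module. Below is a minimal PyTorch sketch of such an architecture; the class names, layer sizes, and the fully connected node graph used by the attention layer are assumptions for illustration only and are not taken from the paper itself.

```python
# Hypothetical sketch (not the authors' released code): two modality-specific CNN
# branches plus a graph-attention module that aggregates common RGB/thermal features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpecificBranch(nn.Module):
    """Small CNN backbone learning modality-specific features (assumed design)."""
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, out_ch, H/4, W/4)


class GraphAttentionCommon(nn.Module):
    """GAT-style attention over spatial nodes built from both modalities.

    Each spatial location of the channel-concatenated RGB/thermal feature maps is
    treated as a graph node; attention over all node pairs yields the shared
    (common) representation. The fully connected graph is an assumption here.
    """
    def __init__(self, in_dim=128, out_dim=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, rgb_feat, tir_feat):
        # Flatten spatial maps into a node set: (B, N, C)
        nodes = torch.cat([rgb_feat, tir_feat], dim=1).flatten(2).transpose(1, 2)
        h_nodes = self.proj(nodes)                          # (B, N, D)
        n = h_nodes.size(1)
        # Pairwise attention scores over the fully connected node graph
        hi = h_nodes.unsqueeze(2).expand(-1, -1, n, -1)     # (B, N, N, D)
        hj = h_nodes.unsqueeze(1).expand(-1, n, -1, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = F.softmax(e, dim=-1)                        # (B, N, N)
        common = torch.bmm(alpha, h_nodes)                  # aggregated common features
        return common.mean(dim=1)                           # (B, D) global descriptor


class DualModalityNet(nn.Module):
    """Combines the modality-specific branches with the graph-attention common branch."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = SpecificBranch()
        self.tir_branch = SpecificBranch()
        self.common = GraphAttentionCommon(in_dim=128, out_dim=64)

    def forward(self, rgb, tir):
        f_rgb = self.rgb_branch(rgb)
        f_tir = self.tir_branch(tir)
        f_common = self.common(f_rgb, f_tir)
        return f_rgb, f_tir, f_common


if __name__ == "__main__":
    net = DualModalityNet()
    rgb = torch.randn(1, 3, 64, 64)
    tir = torch.randn(1, 3, 64, 64)
    f_rgb, f_tir, f_common = net(rgb, tir)
    print(f_rgb.shape, f_tir.shape, f_common.shape)
```

In this sketch the specific branches keep separate weights so each modality retains its own cues, while the common module operates on features from both branches jointly, mirroring the common/specific split described in the abstract; how the three outputs are fused for tracking is not specified here.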