
Author:

Wang, Lichun (王立春) | Li, Shuang | Wang, Shaofan | Kong, Dehui (孔德慧) | Yin, Baocai (尹宝才)

Indexed by:

EI; Scopus; SCIE

Abstract:

Sparse representation is a powerful tool in many visual applications, since images can be represented effectively and efficiently with a dictionary. Conventional dictionary learning methods usually treat each training sample equally, which degrades recognition performance when the samples from the same category are widely dispersed. This is because the dictionary focuses more on easy samples (highly clustered samples), while hard samples (widely distributed samples) are easily ignored. As a result, test samples that exhibit high dissimilarity to most intra-category samples tend to be misclassified. To circumvent this issue, this paper proposes a simple and effective hardness-aware dictionary learning (HADL) method, which treats training samples discriminatively based on the AdaBoost mechanism. Instead of learning one optimal dictionary, HADL learns a set of dictionaries and corresponding sub-classifiers jointly in an iterative fashion. In each iteration, HADL learns a dictionary and a sub-classifier, and updates the sample weights based on the classification errors given by the current sub-classifier. Correctly classified samples are assigned small weights, while incorrectly classified samples are assigned large weights. Through this iterated learning procedure, the hard samples are associated with different dictionaries. Finally, HADL combines the learned sub-classifiers linearly to form a strong classifier, which effectively improves the overall recognition accuracy. Experiments on well-known benchmarks show that HADL achieves promising classification results.
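The boosting loop described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the per-round "dictionary" is replaced by weighted class centroids with a nearest-centroid sub-classifier (the actual HADL jointly learns sparse-coding dictionaries and linear sub-classifiers), while the AdaBoost-style error computation, sample reweighting, and linear combination of sub-classifiers follow the mechanism the abstract describes. All function and variable names are invented for this sketch.

```python
import numpy as np

def hadl_sketch(X, y, n_rounds=5):
    """AdaBoost-style joint learning loop (toy stand-in for HADL).

    Each round "learns a dictionary" (here: weighted class centroids,
    so heavily weighted hard samples pull the model toward them) plus a
    sub-classifier, then reweights samples by that round's errors.
    """
    n = len(y)
    classes = np.unique(y)
    w = np.full(n, 1.0 / n)            # uniform sample weights initially
    rounds = []                        # (alpha, centroids) per iteration

    for _ in range(n_rounds):
        # Weighted centroid per class: hard (large-weight) samples dominate.
        cents = np.stack([
            np.average(X[y == c], axis=0, weights=w[y == c]) for c in classes
        ])
        # Sub-classifier: assign each sample to its nearest centroid.
        dists = ((X[:, None, :] - cents[None]) ** 2).sum(-1)
        pred = classes[np.argmin(dists, axis=1)]

        # Weighted classification error of this round's sub-classifier.
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # sub-classifier weight
        rounds.append((alpha, cents))

        # Correctly classified samples get small weights, errors get large.
        w *= np.exp(np.where(pred == y, -alpha, alpha))
        w /= w.sum()

    def strong(Xq):
        # Strong classifier: linear combination of sub-classifier votes.
        votes = np.zeros((len(Xq), len(classes)))
        for alpha, cents in rounds:
            d = ((Xq[:, None, :] - cents[None]) ** 2).sum(-1)
            votes[np.arange(len(Xq)), np.argmin(d, axis=1)] += alpha
        return classes[votes.argmax(axis=1)]

    return strong
```

On easy data a single round already suffices; the reweighting only matters when some samples are repeatedly misclassified, which is exactly the "hard sample" regime the paper targets.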

Keyword:

Dictionaries; Training; Classification; Boosting; Visualization; AdaBoost; Dictionary learning; Task analysis; Face recognition; Sparse representation

Author Community:

  • [ 1 ] [Wang, Lichun]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 2 ] [Li, Shuang]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 3 ] [Wang, Shaofan]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 4 ] [Kong, Dehui]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 5 ] [Yin, Baocai]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China

Reprint Author's Address:

  • [Wang, Shaofan]Beijing Univ Technol, Fac Informat Technol, Beijing Artificial Intelligence Inst, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China


Source :

IEEE TRANSACTIONS ON MULTIMEDIA

ISSN: 1520-9210

Year: 2021

Volume: 23

Page: 2857-2867

Impact Factor: 7.300 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE;

ESI HC Threshold:87

JCR Journal Grade:1

Cited Count:

WoS CC Cited Count: 8

SCOPUS Cited Count: 10

ESI Highly Cited Papers on the List: 0

