Authors:

Liu, Yutao | Gu, Ke (scholar: 顾锞) | Wang, Shiqi | Zhao, Debin | Gao, Wen

Indexed in:

EI Scopus SCIE

Abstract:

Camera images in reality are easily affected by various distortions, such as blur, noise, and blockiness, which damage image quality. The complexity of distortions in camera images poses a significant challenge for precisely predicting their perceptual quality. In this paper, we present an image quality assessment (IQA) approach that aims to solve this challenging problem to some extent. In the proposed method, we first extract low-level and high-level statistical features, which can capture quality degradations effectively. On the one hand, the first kind of statistical features is extracted from the locally mean subtracted and contrast normalized coefficients, which represent the low-level features of early human vision. On the other hand, recent brain theory and neuroscience, especially the free-energy principle, reveal that the human brain tries to explain its encountered visual scenes through an internal generative model, with which the brain produces a projection of the image. The perceived quality can then be reflected by the divergence between the image and its brain projection. Based on this, we extract the second type of features from this brain perception mechanism, which represent the high-level features. The low-level and high-level statistical features play complementary roles in quality prediction. After feature extraction, we design a neural network to integrate all the features and convert them to the final quality score. Extensive tests on two real camera image datasets demonstrate the validity of our method and its advantageous prediction ability over competing IQA models.
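
This record does not include the authors' implementation. As a rough, minimal sketch of the low-level feature stage described above (assuming Python with NumPy and SciPy; the function name, Gaussian window scale, and stabilizing constant are illustrative choices, not details taken from the paper), the locally mean subtracted and contrast normalized (MSCN) coefficients can be computed as follows:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn_coefficients(gray_image, window_sigma=7/6, c=1.0):
        # Locally mean subtracted and contrast normalized (MSCN) coefficients.
        # gray_image: 2-D grayscale array with values in [0, 255].
        # window_sigma: scale of the Gaussian weighting window (illustrative value).
        # c: small constant that keeps the denominator away from zero.
        img = gray_image.astype(np.float64)
        local_mean = gaussian_filter(img, window_sigma)                 # local mean
        local_var = gaussian_filter(img * img, window_sigma) - local_mean ** 2
        local_std = np.sqrt(np.maximum(local_var, 0.0))                 # local std
        return (img - local_mean) / (local_std + c)                     # MSCN map

Statistics of this MSCN map (for example, parameters of a distribution fitted to its histogram) would serve as the low-level features; the free-energy-based high-level features and the neural-network regression to a final quality score are not reproduced in this sketch.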

Keywords:

neural network; natural image statistics; no-reference (NR)/blind; image quality assessment (IQA); free-energy principle

Author affiliations:

  • [ 1 ] [Liu, Yutao]Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
  • [ 2 ] [Zhao, Debin]Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
  • [ 3 ] [Gao, Wen]Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
  • [ 4 ] [Gu, Ke]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 5 ] [Wang, Shiqi]City Univ Hong Kong, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
  • [ 6 ] [Gao, Wen]Peking Univ, Sch Elect Engn & Comp Sci, Natl Engn Lab Video Technol, Beijing 100871, Peoples R China
  • [ 7 ] [Gao, Wen]Peking Univ, Sch Elect Engn & Comp Sci, Key Lab Machine Percept, Beijing 100871, Peoples R China

Corresponding author:

  • Gu, Ke (顾锞)

    [Gu, Ke]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China

Source:

IEEE TRANSACTIONS ON MULTIMEDIA

ISSN: 1520-9210

Year: 2019

Issue: 1

Volume: 21

Pages: 135-146

Impact factor: 7.300 (JCR@2022)

ESI discipline: COMPUTER SCIENCE

ESI highly cited threshold: 147

JCR quartile: Q1

Citations:

WoS Core Collection citation count: 53

Scopus citation count: 66

ESI highly cited paper (currently listed): 0
