Abstract:
Speech emotion recognition relies mainly on differences in acoustic characteristics between emotions. Traditional recognition methods are based on hand-crafted features such as MFCC and LPCC, and have achieved good results, but it remains unclear which features best reflect the characteristics of human emotion in speech. As Convolutional Neural Networks (CNNs) have shown strong ability in image classification, more researchers have been attracted to applying CNNs to learning spectrogram features. However, existing studies of speech emotion rely either on traditional hand-crafted features or entirely on the speech spectrogram; traditional features and spectrogram features have not yet been combined. In this paper, we propose a fusion neural network model that combines traditional features with spectrogram features. This multimodal CNN is trained in two stages. First, two pre-trained CNN models are fine-tuned separately on the corresponding labeled audio datasets. Second, the outputs of the two CNN models are connected to a fusion network of fully-connected layers, which is trained to obtain a joint feature representation for emotion recognition. Recognition results on an emotional speech database show that the proposed algorithm achieves a higher speech emotion recognition rate and better robustness.
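A minimal PyTorch sketch of the two-branch fusion described in the abstract. The paper does not give layer sizes or input shapes, so the feature dimension, branch architectures, spectrogram size, and number of emotion classes below are all assumptions for illustration only.

    import torch
    import torch.nn as nn

    class FusionEmotionNet(nn.Module):
        def __init__(self, feat_dim=39, num_classes=6):
            super().__init__()
            # Branch 1: hand-crafted features (e.g. an MFCC/LPCC vector;
            # 39 dimensions is an assumed size).
            self.feat_branch = nn.Sequential(
                nn.Linear(feat_dim, 128), nn.ReLU(),
                nn.Linear(128, 64), nn.ReLU(),
            )
            # Branch 2: a small CNN over the spectrogram
            # (1 x 128 x 128 input assumed).
            self.spec_branch = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
                nn.Flatten(),             # -> (N, 32)
            )
            # Fusion network: fully-connected layers over the
            # concatenated branch outputs, as the abstract describes.
            self.fusion = nn.Sequential(
                nn.Linear(64 + 32, 64), nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, feats, spec):
            joint = torch.cat(
                [self.feat_branch(feats), self.spec_branch(spec)], dim=1)
            return self.fusion(joint)

    # Stage 1 would fine-tune each branch on the labeled audio data;
    # stage 2 trains the fusion layers on the concatenated outputs.
    model = FusionEmotionNet()
    logits = model(torch.randn(8, 39), torch.randn(8, 1, 128, 128))
    print(logits.shape)  # torch.Size([8, 6])

The two-stage schedule in the comments mirrors the training procedure stated in the abstract; whether the branches are frozen or jointly updated in the second stage is not specified there.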
Source:
PROCEEDINGS OF THE 2017 2ND INTERNATIONAL CONFERENCE ON MATERIALS SCIENCE, MACHINERY AND ENERGY ENGINEERING (MSMEE 2017)
ISSN: 2352-5401
Year: 2017
Volume: 123
Page: 1071-1074
Language: English
Cited Count:
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0