Abstract:
Video-based action recognition in realistic scenes is a core technology for human-computer interaction and smart surveillance. Although trajectory features combined with the bag-of-visual-words model have shown promising performance, they cannot effectively encode spatiotemporal interactive information, which is valuable for classification. To address this issue, we propose a spatiotemporal semantic feature (ST-SF) and convert it into an auxiliary classification criterion based on information-entropy theory. First, we present a text-based relevance analysis method to estimate the textual labels of the objects most relevant to actions; these labels are used to train more targeted detectors based on a deep network. False detections are corrected using inter-frame cooperativity and dynamic programming to construct valid object tubes. Then, we design the ST-SF to encode the interactive information and define the concept and calculation of feature entropy based on the spatial distribution of ST-SFs over the training set. Finally, we implement a two-stage classification strategy using the resulting decision gains. Experimental results on three publicly available datasets demonstrate that our method is robust and improves upon state-of-the-art algorithms.
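The feature-entropy idea in the abstract (entropy computed from the spatial distribution of ST-SFs over the training set) can be illustrated with a minimal sketch. The histogram binning, function name, and normalized-coordinate assumption below are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def feature_entropy(positions, bins=8):
    """Shannon entropy of a feature's spatial distribution.

    positions: (N, 2) array of feature locations, normalized to [0, 1]
    bins: histogram bins per axis (illustrative choice, not from the paper)
    """
    hist, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) = 0 by convention
    return float(-np.sum(p * np.log2(p)))
```

A feature that always appears in the same spatial region yields low entropy (it is highly discriminative for localization), while one scattered uniformly over the frame yields high entropy; such a value could serve as a per-feature decision gain in a two-stage classifier.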
Source:
VISUAL COMPUTER
ISSN: 0178-2789
Year: 2020
Issue: 7
Volume: 37
Page: 1673-1690
3.500 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE;
ESI HC Threshold: 132
Cited Count:
WoS CC Cited Count: 2
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0