
Your search:

Scholar name: Zhang Jing

A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features SCIE CSCD
Journal article | 2021, 30(2), 289-295 | CHINESE JOURNAL OF ELECTRONICS
Citations in WoS Core Collection: 1

Abstract:

A two-level hierarchical scheme for video-based person re-identification (re-id) is presented, with the aim of learning a pedestrian appearance model through more complete walking-cycle extraction. Specifically, given a video with consecutive frames, the objective of the first level is to detect the key frame with a lightweight convolutional neural network (CNN), PCANet, to reflect the summary of the video content. At the second level, on the basis of the detected key frame, the pedestrian walking cycle is extracted from the long video sequence. Moreover, Local maximal occurrence (LOMO) features of the walking cycle are extracted to represent the pedestrian's appearance information. In contrast to existing walking-cycle-based person re-id approaches, the proposed scheme relaxes the limit on the number of steps in a walking cycle, making it flexible and less affected by noisy frames. Experiments are conducted on two benchmark datasets: PRID 2011 and iLIDS-VID. The experimental results demonstrate that our proposed scheme outperforms six state-of-the-art video-based re-id methods and is more robust to severe video noise and variations in pose, lighting, and camera viewpoint.
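
For orientation, here is a minimal Python sketch of the two-level flow described in the abstract. `embed` (a PCANet-style frame embedding) and `extract_lomo` (the handcrafted LOMO descriptor) are assumed callables, and the centroid-based key-frame rule and 16-frame window are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np

def detect_key_frame(frames, embed):
    """Level 1: pick the frame whose embedding lies closest to the
    sequence centroid -- a stand-in for the PCANet-based detector."""
    feats = np.stack([embed(f) for f in frames])               # (T, D)
    dists = np.linalg.norm(feats - feats.mean(axis=0), axis=1)
    return int(dists.argmin())

def reid_descriptor(frames, embed, extract_lomo, cycle_len=16):
    """Level 2: take a walking-cycle window around the key frame and
    max-pool handcrafted LOMO descriptors over it."""
    k = detect_key_frame(frames, embed)
    start = max(0, k - cycle_len // 2)
    cycle = frames[start:start + cycle_len]
    return np.stack([extract_lomo(f) for f in cycle]).max(axis=0)
```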

Keywords:

Video-based person re-identification; Convolutional neural network; Key frame detection; Walking cycle extraction

Cite:

Copy and paste a preformatted citation, or use one of the links to import it into reference management software.

GB/T 7714 Youjiao, Li, Li, Zhuo, Jiafeng, Li, et al. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features [J]. | CHINESE JOURNAL OF ELECTRONICS, 2021, 30(2): 289-295.
MLA Youjiao, Li, et al. "A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features." | CHINESE JOURNAL OF ELECTRONICS 30.2 (2021): 289-295.
APA Youjiao, Li, Li, Zhuo, Jiafeng, Li, Jing, Zhang. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features. | CHINESE JOURNAL OF ELECTRONICS, 2021, 30(2), 289-295.
Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network SCIE
Journal article | 2021 | IET IMAGE PROCESSING
Citations in WoS Core Collection: 2

Abstract:

Since the advent of the we-media era, the live video industry has grown explosively. For large-scale live video streaming, especially streams containing crowd events that may have great social impact, effectively identifying and supervising crowd activity is of great value for the healthy development of the live video industry. Existing crowd activity recognition mainly uses visual information and rarely fully exploits the correlations among crowd content or external knowledge. Therefore, a crowd activity recognition method for live video streaming is proposed based on 3D-ResNet and a region graph convolution network (ReGCN). (1) After deep spatiotemporal features are extracted from live video streaming with 3D-ResNet, region proposals are generated by a region proposal network. (2) A weakly supervised ReGCN is constructed by taking the region proposals as graph nodes and their correlations as edges. (3) Crowd activity in live video streaming is recognised by combining the output of ReGCN, the deep spatiotemporal features, and the crowd motion intensity as external knowledge. Four experiments are conducted on the public collective activity extended dataset and a real-world dataset, BJUT-CAD. The competitive results demonstrate that our method can effectively recognise crowd activity in live video streaming.
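
A rough sketch of step (2): a one-layer graph convolution over region-proposal features, with cosine similarities as edge weights. The class name, feature size, and similarity-based adjacency are assumptions for illustration, not the published ReGCN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionGCN(nn.Module):
    """Aggregates region-proposal features over a correlation graph."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, regions):                     # regions: (N, D)
        sim = F.normalize(regions, dim=1) @ F.normalize(regions, dim=1).T
        adj = F.softmax(sim, dim=1)                 # row-normalised edges
        return F.relu(self.proj(adj @ regions))     # aggregate, then transform

regions = torch.randn(16, 512)                      # 16 proposals, 512-d each
ctx = RegionGCN(512)(regions).mean(dim=0)           # graph-refined crowd context
```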

Cite:

GB/T 7714 Kang, Junpeng, Zhang, Jing, Li, Wensheng, et al. Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network [J]. | IET IMAGE PROCESSING, 2021.
MLA Kang, Junpeng, et al. "Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network." | IET IMAGE PROCESSING (2021).
APA Kang, Junpeng, Zhang, Jing, Li, Wensheng, Zhuo, Li. Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network. | IET IMAGE PROCESSING, 2021.
Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning SCIE
Journal article | 2021, 453, 383-392 | NEUROCOMPUTING
Citations in WoS Core Collection: 13

Abstract:

Live video hosted by streamers is being sought after by more and more Internet users. A few streamers show inappropriate actions within otherwise normal live video content for profit and popularity, bringing great harm to the network environment. To effectively regulate streamer behavior in live video, a streamer action recognition method with spatial-temporal attention and deep dictionary learning is proposed in this paper. First, deep features with spatial context are extracted by a spatial attention network to focus on the action region of the streamer after sampling frames from the live video. Then, the deep features of the video are fused by assigning weights with a temporal attention network to learn the frame attention for an action. Finally, deep dictionary learning is used to sparsely represent the deep features and thereby recognize streamer actions. Four experiments are conducted on a real-world dataset, and the competitive results demonstrate that our method can improve the accuracy and speed of streamer action recognition in live video.
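
A minimal sketch of the temporal-attention step, assuming 2048-d per-frame CNN features; the single-linear-layer scorer is a simplification of the paper's temporal attention network, and the dictionary-learning stage is omitted.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Scores each frame and returns the attention-weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frames):                        # frames: (T, D)
        w = torch.softmax(self.score(frames), dim=0)  # (T, 1) frame weights
        return (w * frames).sum(dim=0)                # pooled clip feature (D,)

video = torch.randn(32, 2048)               # 32 sampled frames of deep features
clip_feat = TemporalAttention(2048)(video)  # would feed the dictionary classifier
```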

Keywords:

Action recognition; Deep dictionary learning; Live video; Spatial-temporal attention; Streamer

Cite:

GB/T 7714 Li, Chenhao, Zhang, Jing, Yao, Jiacheng. Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning [J]. | NEUROCOMPUTING, 2021, 453: 383-392.
MLA Li, Chenhao, et al. "Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning." | NEUROCOMPUTING 453 (2021): 383-392.
APA Li, Chenhao, Zhang, Jing, Yao, Jiacheng. Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning. | NEUROCOMPUTING, 2021, 453, 383-392.
Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation SCIE CSCD
Journal article | 2021, 30(6), 1096-1102 | CHINESE JOURNAL OF ELECTRONICS
Citations in WoS Core Collection: 3

Abstract:

Although deep learning has achieved high accuracy in video content analysis, it cannot satisfy the practical demands of porn streamer recognition in live video because of the large number of parameters and complex structure of deep network models. To improve the efficiency of porn streamer recognition in live video, a deep network model compression method based on multimodal knowledge distillation is proposed. First, the teacher model is trained with a visual-speech deep network to obtain the corresponding porn video prediction score. Second, a lightweight student model constructed with MobileNetV2 and Xception transfers the knowledge from the teacher model by using a multimodal knowledge distillation strategy. Finally, porn streamers in live video are recognized by combining the lightweight student model of the visual-speech network with the bullet-screen text recognition network. Experimental results demonstrate that the proposed method can effectively reduce the computation cost and improve the recognition speed while maintaining proper accuracy.
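
The distillation strategy can be illustrated with the standard knowledge-distillation objective: soften teacher and student logits with a temperature and mix the KL term with ordinary cross-entropy. The temperature and mixing weight below are illustrative values, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style KD: KL between softened distributions + hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale soft-target gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```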

Keywords:

Knowledge distillation; Lightweight student model; Live video; Multimodal; Porn streamer recognition

Cite:

GB/T 7714 Wang Liyuan, Zhang Jing, Yao Jiacheng, et al. Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation [J]. | CHINESE JOURNAL OF ELECTRONICS, 2021, 30(6): 1096-1102.
MLA Wang Liyuan, et al. "Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation." | CHINESE JOURNAL OF ELECTRONICS 30.6 (2021): 1096-1102.
APA Wang Liyuan, Zhang Jing, Yao Jiacheng, Zhuo Li. Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation. | CHINESE JOURNAL OF ELECTRONICS, 2021, 30(6), 1096-1102.
Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention SCIE
Journal article | 2021, 42(15), 5754-5773 | INTERNATIONAL JOURNAL OF REMOTE SENSING
Citations in WoS Core Collection: 16

Abstract:

The complex backgrounds and spatial distributions of high-resolution remote sensing images pose great challenges to object detection. In view of the various scales, arbitrary orientations, shape variations, and dense arrangement of objects, a multiscale object detection method for high-resolution remote sensing images is proposed using rotation-invariant deep features driven by channel attention. First, a channel attention module is added to our feature fusion and scaling-based single shot detector (FS-SSD) to strengthen the long-term semantic dependence between objects and improve the discriminative ability of the deep features. Then, an oriented response convolution generates feature maps with orientation channels to produce rotation-invariant deep features. Finally, multiscale objects are predicted in a high-resolution remote sensing image by fusing feature maps of various scales with the multiscale feature module in FS-SSD. Five experiments are conducted on the NWPU VHR-10 dataset, achieving better detection performance than the state-of-the-art methods.
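
One common way to realize a channel attention module is squeeze-and-excitation-style gating, sketched below; this is a generic illustration, not the exact module added to FS-SSD.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels from globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pool
        return x * w[:, :, None, None]         # excite: per-channel gating
```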

Cite:

GB/T 7714 Zhao, Xiaolei, Zhang, Jing, Tian, Jimiao, et al. Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention [J]. | INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42(15): 5754-5773.
MLA Zhao, Xiaolei, et al. "Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention." | INTERNATIONAL JOURNAL OF REMOTE SENSING 42.15 (2021): 5754-5773.
APA Zhao, Xiaolei, Zhang, Jing, Tian, Jimiao, Zhuo, Li, Zhang, Jie. Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention. | INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42(15), 5754-5773.
Multi-level prediction Siamese network for real-time UAV visual tracking SCIE
Journal article | 2020, 103 | IMAGE AND VISION COMPUTING
Citations in WoS Core Collection: 12

Abstract:

Existing deployed unmanned aerial vehicle (UAV) visual trackers are usually based on the correlation filter framework. Although these methods have the advantage of low computational complexity, their tracking performance on small targets and fast-motion scenarios is not satisfactory. In this paper, we present a novel multi-level prediction Siamese network (MLPS) for object tracking in UAV videos, which consists of a Siamese feature extraction module and a multi-level prediction module. The multi-level prediction module makes full use of the characteristics of the features at each layer to achieve robust evaluation of targets at different scales. Meanwhile, for small-size target tracking, we design a residual feature fusion block, which constrains the low-level feature representation with high-level abstract semantics and improves the tracker's ability to distinguish scene details. In addition, we propose a layer attention fusion block that is sensitive to the informative features of each layer, achieving adaptive fusion of different levels of correlation responses by dynamically balancing the multi-layer features. Sufficient experiments on several UAV tracking benchmarks demonstrate that MLPS achieves state-of-the-art performance and runs at over 97 FPS.
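
A simplified reading of the layer attention fusion block: learn one weight per feature level and blend the per-level correlation responses. The scalar-weight design and map sizes below are assumptions.

```python
import torch
import torch.nn as nn

class LayerAttentionFusion(nn.Module):
    """Blends multi-level response maps with learned softmax weights."""
    def __init__(self, num_levels=3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_levels))

    def forward(self, responses):               # list of (B, 1, H, W) maps
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * r for wi, r in zip(w, responses))

maps = [torch.randn(1, 1, 25, 25) for _ in range(3)]  # three-level responses
fused = LayerAttentionFusion(3)(maps)                  # final tracking response
```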

Keywords:

Feature fusion; Multi-level prediction; Small target; UAV tracking

Cite:

GB/T 7714 Zhu, Mu, Zhang, Hui, Zhang, Jing, et al. Multi-level prediction Siamese network for real-time UAV visual tracking [J]. | IMAGE AND VISION COMPUTING, 2020, 103.
MLA Zhu, Mu, et al. "Multi-level prediction Siamese network for real-time UAV visual tracking." | IMAGE AND VISION COMPUTING 103 (2020).
APA Zhu, Mu, Zhang, Hui, Zhang, Jing, Zhuo, Li. Multi-level prediction Siamese network for real-time UAV visual tracking. | IMAGE AND VISION COMPUTING, 2020, 103.
Porn Streamer Recognition in Live Video Streaming via Attention-Gated Multimodal Deep Features SCIE
Journal article | 2020, 30(12), 4876-4886 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Citations in WoS Core Collection: 13

Abstract:

Live video streaming platforms have attracted millions of streamers and daily active users. For profit and popularity, some streamers mix pornographic content into live content to evade online supervision. Accurate recognition of porn streamers in live video streaming has therefore become a challenging task. Porn streamers in live video present multimodal characteristics spanning visual and acoustic content, so a porn streamer recognition method using attention-gated multimodal deep features is proposed. Our contributions include the following: (1) multimodal deep features, i.e., spatial, motion, and audio, are extracted from live video streaming using convolutional neural networks (CNNs), and the temporal context of the multimodal features is obtained with a bi-directional gated recurrent unit (Bi-GRU); (2) a tri-attention gated mechanism maps the associations between different modalities by assigning higher weights to important features, further reducing the redundancy of the multimodal features; (3) porn streamers in live video streaming are recognized via the attention-gated multimodal deep features. Six experiments are conducted on a real-world dataset, and the competitive results demonstrate that our method can effectively recognize porn streamers in live video streaming.
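
Contribution (1) can be pictured as a Bi-GRU adding temporal context to one modality's per-frame features; the sizes below are illustrative, and the same pattern would repeat per modality.

```python
import torch
import torch.nn as nn

class TemporalContext(nn.Module):
    """Bi-directional GRU over per-frame features of one modality."""
    def __init__(self, dim=1024, hidden=256):
        super().__init__()
        self.gru = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                # x: (B, T, dim) modality features
        out, _ = self.gru(x)             # (B, T, 2*hidden) with temporal context
        return out.mean(dim=1)           # clip-level representation

spatial = torch.randn(2, 16, 1024)       # 2 clips, 16 timesteps, 1024-d features
clip_repr = TemporalContext()(spatial)   # one such branch per modality
```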

Keywords:

attention-gated; bi-directional gated recurrent unit; Computational modeling; Feature extraction; Live video streaming; Logic gates; multimodal deep features; porn streamer recognition; Redundancy; Streaming media; Task analysis; Visualization

Cite:

GB/T 7714 Wang, Liyuan, Zhang, Jing, Tian, Qi, et al. Porn Streamer Recognition in Live Video Streaming via Attention-Gated Multimodal Deep Features [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30(12): 4876-4886.
MLA Wang, Liyuan, et al. "Porn Streamer Recognition in Live Video Streaming via Attention-Gated Multimodal Deep Features." | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 30.12 (2020): 4876-4886.
APA Wang, Liyuan, Zhang, Jing, Tian, Qi, Li, Chenhao, Zhuo, Li. Porn Streamer Recognition in Live Video Streaming via Attention-Gated Multimodal Deep Features. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30(12), 4876-4886.
Multilevel fusion of multimodal deep features for porn streamer recognition in live video EI
Journal article | 2020, 140, 150-157 | Pattern Recognition Letters

Abstract:

Live video hosted by streamers is being sought after by an increasing number of Internet users. Some streamers mix pornographic content into live video for profit and popularity, which greatly harms the network environment. To effectively identify porn streamers, a multilevel fusion method of multimodal deep features for porn streamer recognition in live video is proposed in this paper. (1) Visual and audio features, including spatial, audio, motion, and temporal context in live video, are extracted by a multimodal deep network. (2) Audio-visual attention features are obtained by fusing visual and audio features at the feature level based on a multimodal attention mechanism. (3) Text features are extracted by the bullet-screen text network based on the BERT (bidirectional encoder representations from transformers) model after collecting text from the viewers' bullet-screen comments. (4) The prediction results of the audio-visual deep network and the bullet-screen text network are fused at the decision level to improve porn streamer recognition accuracy. We build a real-world dataset of porn streamers, and experiments demonstrate that our method can improve porn streamer recognition accuracy.
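
Step (4), decision-level fusion, amounts to combining the two networks' class probabilities; the weighted average below, with an assumed weight of 0.6, is one simple realization, not necessarily the paper's rule.

```python
import torch

def decision_fusion(av_probs, text_probs, w=0.6):
    """Late fusion: weighted average of per-class probabilities."""
    return w * av_probs + (1 - w) * text_probs

av = torch.tensor([[0.2, 0.8]])    # P(normal), P(porn) from the audio-visual net
txt = torch.tensor([[0.4, 0.6]])   # same classes from the BERT bullet-screen net
pred = decision_fusion(av, txt).argmax(dim=1)   # fused decision per sample
```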

Keywords:

Behavioral research; Character recognition

Cite:

GB/T 7714 Wang, Liyuan, Zhang, Jing, Wang, Meng, et al. Multilevel fusion of multimodal deep features for porn streamer recognition in live video [J]. | Pattern Recognition Letters, 2020, 140: 150-157.
MLA Wang, Liyuan, et al. "Multilevel fusion of multimodal deep features for porn streamer recognition in live video." | Pattern Recognition Letters 140 (2020): 150-157.
APA Wang, Liyuan, Zhang, Jing, Wang, Meng, Tian, Jimiao, Zhuo, Li. Multilevel fusion of multimodal deep features for porn streamer recognition in live video. | Pattern Recognition Letters, 2020, 140, 150-157.
In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse SCIE
Journal article | 2020, 399 | JOURNAL OF HAZARDOUS MATERIALS
Citations in WoS Core Collection: 33

Abstract:

Nitroaromatic compounds (NACs) in the environment can cause serious public health and environmental problems due to their potential toxicity. This study established quantitative structure-toxicity relationship (QSTR) models for the acute oral toxicity of NACs towards rats, following the stringent OECD principles for QSTR modelling. All models were assessed by various internationally accepted validation metrics and the OECD criteria. The best QSTR model contains seven simple and interpretable 2D descriptors with defined physicochemical meaning. Mechanistic interpretation indicated that van der Waals surface area, the presence of C-F at topological distance 6, heteroatom content, and the frequency of C-N at topological distance 9 are the main factors responsible for the toxicity of NACs. The proposed model was successfully applied to a true external set (295 compounds), and its prediction reliability was analysed and discussed. Moreover, rat-mouse and mouse-rat interspecies quantitative toxicity-toxicity relationship (iQTTR) models were also constructed, validated, and employed in toxicity prediction for true external sets of 67 and 265 compounds, respectively. These models showed good external predictivity and can be used to rapidly predict the acute oral toxicity of new or untested NACs falling within their applicability domain, benefiting environmental risk assessment and regulatory purposes.
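
As a schematic of descriptor-based QSTR fitting, the snippet below trains a plain multiple linear regression on seven 2D descriptors and reports cross-validated R². The data are random placeholders, and the paper's actual descriptor selection and validation workflow is far more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 7))   # placeholder: seven 2D descriptors per compound
y = rng.normal(size=60)        # placeholder: acute oral toxicity endpoint

model = LinearRegression().fit(X, y)
q2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()  # predictivity
print(f"cross-validated R^2: {q2:.2f}")
```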

Keywords:

Acute oral toxicity; iQTTR; Mechanism of toxicity; Nitroaromatic compounds; QSTR; Risk assessment

Cite:

GB/T 7714 Hao, Yuxing, Sun, Guohui, Fan, Tengjiao, et al. In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse [J]. | JOURNAL OF HAZARDOUS MATERIALS, 2020, 399.
MLA Hao, Yuxing, et al. "In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse." | JOURNAL OF HAZARDOUS MATERIALS 399 (2020).
APA Hao, Yuxing, Sun, Guohui, Fan, Tengjiao, Tang, Xiaoyu, Zhang, Jing, Liu, Yongdong, et al. In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse. | JOURNAL OF HAZARDOUS MATERIALS, 2020, 399.
Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning SCIE
Journal article | 2020, 20(2) | SENSORS
Citations in WoS Core Collection: 66

Abstract:

Object tracking in RGB-thermal (RGB-T) videos is increasingly used in many fields owing to the all-weather, all-day working capability of the dual-modality imaging system and the rapid development of low-cost, miniaturized infrared camera technology. However, it is still very challenging to effectively fuse dual-modality information to build a robust RGB-T tracker. In this paper, an RGB-T object tracking algorithm based on a modal-aware attention network and competitive learning (MaCNet) is proposed, which comprises a feature extraction network, a modal-aware attention network, and a classification network. The feature extraction network adopts a two-stream form to extract features from each modality image. The modal-aware attention network integrates the original data, establishes an attention model that characterizes the importance of different feature layers, and then guides the feature fusion to enhance the information interaction between modalities. The classification network constructs a modality-egoistic loss function through three parallel binary classifiers acting on the RGB branch, the thermal infrared branch, and the fusion branch, respectively. Guided by a competitive learning training strategy, the entire network is fine-tuned towards the optimal fusion of the two modalities. Extensive experiments on several publicly available RGB-T datasets show that our tracker outperforms other recent RGB-T and RGB tracking approaches.
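
The modal-aware fusion idea can be caricatured as a learned gate that mixes the RGB and thermal feature streams; the gating layer and feature size below are assumptions, not the published MaCNet architecture.

```python
import torch
import torch.nn as nn

class ModalAwareFusion(nn.Module):
    """Predicts per-channel weights from both streams and mixes them."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, rgb, tir):                  # each: (B, dim)
        g = self.gate(torch.cat([rgb, tir], dim=1))
        return g * rgb + (1 - g) * tir            # attention-guided fusion

rgb, tir = torch.randn(4, 512), torch.randn(4, 512)
fused = ModalAwareFusion(512)(rgb, tir)           # fed to the classifiers
```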

Keywords:

competitive learning; cross-modal data fusion; modal-aware attention network; RGB-T object tracking

Cite:

GB/T 7714 Zhang, Hui, Zhang, Lei, Zhuo, Li, et al. Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning [J]. | SENSORS, 2020, 20(2).
MLA Zhang, Hui, et al. "Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning." | SENSORS 20.2 (2020).
APA Zhang, Hui, Zhang, Lei, Zhuo, Li, Zhang, Jing. Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning. | SENSORS, 2020, 20(2).