
Your search:

Scholar name: Zhang Jing

A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features SCIE CSCD
Journal article | 2021, 30 (2), 289-295 | CHINESE JOURNAL OF ELECTRONICS
WoS Core Collection citations: 1

Abstract:

A two-level hierarchical scheme for video-based person re-identification (re-id) is presented, with the aim of learning a pedestrian appearance model through more complete walking cycle extraction. Specifically, given a video of consecutive frames, the first level detects the key frame with PCANet, a lightweight convolutional neural network (CNN), to summarize the video content. At the second level, the pedestrian walking cycle is extracted from the long video sequence on the basis of the detected key frame. Local maximal occurrence (LOMO) features of the walking cycle are then extracted to represent the pedestrian's appearance. In contrast to existing walking-cycle-based person re-id approaches, the proposed scheme relaxes the limit on the number of steps in a walking cycle, making it flexible and less affected by noisy frames. Experiments are conducted on two benchmark datasets: PRID 2011 and iLIDS-VID. The results demonstrate that our scheme outperforms six state-of-the-art video-based re-id methods and is more robust to severe video noise and to variations in pose, lighting, and camera viewpoint.
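As a quick illustration of the two-level hierarchy described above, the sketch below selects a key frame by score, takes a flexible-length window around it as the walking cycle, and pools per-frame appearance features. `pcanet_score` and `lomo_features` are hypothetical stand-ins for the paper's PCANet and LOMO components, and the window length is illustrative, not the authors' setting.

```python
import numpy as np

def pcanet_score(frame: np.ndarray) -> float:
    # Stand-in for the PCANet-based key-frame score (assumption).
    return float(frame.mean())

def lomo_features(frame: np.ndarray) -> np.ndarray:
    # Stand-in for the LOMO appearance descriptor (dimension is illustrative).
    return frame.reshape(-1)[:128].astype(np.float32)

def appearance_model(frames, max_cycle_len=16):
    # Level 1: detect the key frame that best summarizes the video.
    key = int(np.argmax([pcanet_score(f) for f in frames]))
    # Level 2: extract a walking cycle around the key frame; the window
    # length is flexible rather than fixed to a step count.
    lo = max(0, key - max_cycle_len // 2)
    cycle = frames[lo:lo + max_cycle_len]
    # Pool LOMO features over the cycle into one appearance vector.
    return np.mean([lomo_features(f) for f in cycle], axis=0)

video = [np.random.rand(128, 64) for _ in range(40)]  # dummy 40-frame sequence
print(appearance_model(video).shape)  # (128,)
```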

Keywords:

Video-based person re-identification; Convolutional neural network; Key frame detection; Walking cycle extraction

Citations:

GB/T 7714 Li, Youjiao , Zhuo, Li , Li, Jiafeng et al. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features [J]. | CHINESE JOURNAL OF ELECTRONICS , 2021 , 30 (2) : 289-295 .
MLA Li, Youjiao et al. "A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features" . | CHINESE JOURNAL OF ELECTRONICS 30 . 2 (2021) : 289-295 .
APA Li, Youjiao , Zhuo, Li , Li, Jiafeng , Zhang, Jing . A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features . | CHINESE JOURNAL OF ELECTRONICS , 2021 , 30 (2) , 289-295 .
Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network SCIE
Journal article | 2021 | IET IMAGE PROCESSING
WoS Core Collection citations: 2

Abstract:

Since the advent of the we-media era, the live video industry has grown explosively. For large-scale live video streaming, especially streams containing crowd events that may have great social impact, effectively identifying and supervising crowd activity is of great value for the healthy development of the live video industry. Existing crowd activity recognition mainly uses visual information and rarely fully exploits the correlations among crowd content or external knowledge. Therefore, a crowd activity recognition method for live video streaming is proposed based on 3D-ResNet and a region graph convolution network (ReGCN). (1) After deep spatiotemporal features are extracted from the live video stream with 3D-ResNet, region proposals are generated by a region proposal network. (2) A weakly supervised ReGCN is constructed by taking the region proposals as graph nodes and their correlations as edges. (3) Crowd activity in the live video stream is recognised by combining the output of the ReGCN, the deep spatiotemporal features, and the crowd motion intensity as external knowledge. Four experiments are conducted on the public collective activity extended dataset and a real-world dataset, BJUT-CAD. The competitive results demonstrate that our method can effectively recognise crowd activity in live video streaming.
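The fusion step in (3) can be pictured with a short PyTorch sketch: one graph-convolution layer over region-proposal nodes, mean-pooled and concatenated with the clip-level 3D-ResNet feature and a scalar motion-intensity cue. All dimensions, the adjacency normalization, and the single-layer ReGCN are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ReGCNLayer(nn.Module):
    """One graph convolution over region-proposal nodes: H' = ReLU(A H W)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.w = nn.Linear(dim_in, dim_out, bias=False)
    def forward(self, h, adj):
        return torch.relu(adj @ self.w(h))

class CrowdActivityHead(nn.Module):
    def __init__(self, vid_dim=512, node_dim=256, n_classes=6):
        super().__init__()
        self.gcn = ReGCNLayer(node_dim, node_dim)
        # +1 for the scalar crowd-motion-intensity cue (external knowledge).
        self.cls = nn.Linear(vid_dim + node_dim + 1, n_classes)
    def forward(self, vid_feat, node_feats, adj, motion_intensity):
        g = self.gcn(node_feats, adj).mean(dim=0)       # pool graph nodes
        x = torch.cat([vid_feat, g, motion_intensity])  # late fusion
        return self.cls(x)

head = CrowdActivityHead()
adj = torch.softmax(torch.randn(8, 8), dim=-1)  # row-normalized dummy adjacency
logits = head(torch.randn(512), torch.randn(8, 256), adj, torch.randn(1))
print(logits.shape)  # torch.Size([6])
```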

Citations:

GB/T 7714 Kang, Junpeng , Zhang, Jing , Li, Wensheng et al. Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network [J]. | IET IMAGE PROCESSING , 2021 .
MLA Kang, Junpeng et al. "Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network" . | IET IMAGE PROCESSING (2021) .
APA Kang, Junpeng , Zhang, Jing , Li, Wensheng , Zhuo, Li . Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network . | IET IMAGE PROCESSING , 2021 .
Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning SCIE
Journal article | 2021, 453, 383-392 | NEUROCOMPUTING
WoS Core Collection citations: 13

Abstract:

Live video hosted by streamers is sought after by more and more Internet users. A few streamers show inappropriate actions within otherwise normal live video content for profit and popularity, which does great harm to the online environment. To effectively regulate streamer behavior in live video, a streamer action recognition method based on spatial-temporal attention and deep dictionary learning is proposed in this paper. First, after sampling frames from the live video, deep features with spatial context are extracted by a spatial attention network that focuses on the streamer's action region. Then, the per-frame deep features are fused by assigning weights with a temporal attention network, which learns the attention of each frame within an action. Finally, deep dictionary learning is used to sparsely represent the deep features and recognize streamer actions. Four experiments are conducted on a real-world dataset, and the competitive results demonstrate that our method improves both the accuracy and the speed of streamer action recognition in live video.
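The two final stages can be sketched compactly: a learned temporal-attention pooling over frame features, followed by sparse coding against a dictionary. The ISTA-style coding step and all sizes are generic formulations assumed here; the paper's deep dictionary learning may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionPool(nn.Module):
    # Learns a weight per frame and fuses frame features into a clip feature.
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Linear(dim, 1)
    def forward(self, frame_feats):                        # (T, dim)
        w = torch.softmax(self.score(frame_feats), dim=0)  # (T, 1)
        return (w * frame_feats).sum(dim=0)                # (dim,)

def sparse_code(x, D, lam=0.1, n_iter=50, lr=0.1):
    # ISTA-style sparse coding of x against dictionary D (assumption).
    a = torch.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.t() @ (D @ a - x)
        a = F.softshrink(a - lr * grad, lam * lr)  # soft-thresholding step
    return a

feats = torch.randn(16, 512)           # 16 sampled frames (dummy features)
clip = TemporalAttentionPool()(feats)  # temporal-attention fusion
codes = sparse_code(clip, torch.randn(512, 256))
print(codes.shape)  # torch.Size([256])
```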

Keywords:

Action recognition; Deep dictionary learning; Live video; Spatial-temporal attention; Streamer

Citations:

GB/T 7714 Li, Chenhao , Zhang, Jing , Yao, Jiacheng . Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning [J]. | NEUROCOMPUTING , 2021 , 453 : 383-392 .
MLA Li, Chenhao et al. "Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning" . | NEUROCOMPUTING 453 (2021) : 383-392 .
APA Li, Chenhao , Zhang, Jing , Yao, Jiacheng . Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning . | NEUROCOMPUTING , 2021 , 453 , 383-392 .
Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation SCIE CSCD
Journal article | 2021, 30 (6), 1096-1102 | CHINESE JOURNAL OF ELECTRONICS
WoS Core Collection citations: 3

Abstract:

Although deep learning has achieved high accuracy in video content analysis, deep network models, with their numerous parameters and complex structures, still cannot meet the practical demands of porn streamer recognition in live video. To improve recognition efficiency, a deep network model compression method based on multimodal knowledge distillation is proposed. First, the teacher model is trained with a visual-speech deep network to obtain the corresponding porn video prediction score. Second, a lightweight student model built from MobileNetV2 and Xception transfers knowledge from the teacher model using a multimodal knowledge distillation strategy. Finally, porn streamers in live video are recognized by combining the lightweight visual-speech student model with a bullet-screen text recognition network. Experimental results demonstrate that the proposed method effectively reduces the computational cost and improves recognition speed while maintaining adequate accuracy.
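The knowledge-transfer step can be sketched with the standard temperature-scaled distillation loss. The abstract does not spell out the multimodal strategy, so the generic single-stream formulation is shown, with illustrative values for the temperature `T` and mixing weight `alpha`.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets from the frozen visual-speech teacher (KL at temperature T)
    # blended with the hard-label loss; T and alpha are illustrative values.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2, requires_grad=True)  # student logits: porn / normal
t = torch.randn(8, 2)                      # teacher predictions (no grad)
print(distill_loss(s, t, torch.randint(0, 2, (8,))))
```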

Keywords:

Knowledge distillation; Lightweight student model; Live video; Multimodal; Porn streamer recognition

Citations:

GB/T 7714 Wang Liyuan , Zhang Jing , Yao Jiacheng et al. Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation [J]. | CHINESE JOURNAL OF ELECTRONICS , 2021 , 30 (6) : 1096-1102 .
MLA Wang Liyuan et al. "Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation" . | CHINESE JOURNAL OF ELECTRONICS 30 . 6 (2021) : 1096-1102 .
APA Wang Liyuan , Zhang Jing , Yao Jiacheng , Zhuo Li . Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation . | CHINESE JOURNAL OF ELECTRONICS , 2021 , 30 (6) , 1096-1102 .
Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention SCIE
Journal article | 2021, 42 (15), 5754-5773 | INTERNATIONAL JOURNAL OF REMOTE SENSING
WoS Core Collection citations: 16

Abstract:

Complex backgrounds and varied spatial distributions pose great challenges to object detection in high-resolution remote sensing images. In view of the various scales, arbitrary orientations, shape variations, and dense arrangement of objects, a multiscale object detection method for high-resolution remote sensing images is proposed using rotation invariant deep features driven by channel attention. First, a channel attention module is added to our feature fusion and scaling-based single shot detector (FS-SSD) to strengthen the long-range semantic dependence between objects and improve the discriminative ability of the deep features. Then, an oriented response convolution generates feature maps with orientation channels to produce rotation invariant deep features. Finally, multiscale objects are predicted by fusing feature maps of various scales with the multiscale feature module in FS-SSD. Five experiments are conducted on the NWPU VHR-10 dataset, achieving better detection performance than the state-of-the-art methods.
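A squeeze-and-excitation-style gate is one plausible reading of the channel attention module described above; this is an assumption for illustration, and the paper's exact module may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # SE-style channel attention: global average pooling -> bottleneck
    # MLP -> per-channel sigmoid gates (reduction ratio is illustrative).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                # (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze spatial dims -> (N, C)
        return x * w[:, :, None, None]   # re-weight feature channels

fmap = torch.randn(2, 256, 38, 38)       # a dummy FS-SSD feature map
print(ChannelAttention(256)(fmap).shape)  # torch.Size([2, 256, 38, 38])
```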

Citations:

GB/T 7714 Zhao, Xiaolei , Zhang, Jing , Tian, Jimiao et al. Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention [J]. | INTERNATIONAL JOURNAL OF REMOTE SENSING , 2021 , 42 (15) : 5754-5773 .
MLA Zhao, Xiaolei et al. "Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention" . | INTERNATIONAL JOURNAL OF REMOTE SENSING 42 . 15 (2021) : 5754-5773 .
APA Zhao, Xiaolei , Zhang, Jing , Tian, Jimiao , Zhuo, Li , Zhang, Jie . Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention . | INTERNATIONAL JOURNAL OF REMOTE SENSING , 2021 , 42 (15) , 5754-5773 .
Multi-level prediction Siamese network for real-time UAV visual tracking SCIE
Journal article | 2020, 103 | IMAGE AND VISION COMPUTING
WoS Core Collection citations: 12

Abstract:

Existing deployed Unmanned Aerial Vehicle (UAV) visual trackers are usually based on the correlation filter framework. Although these methods have the advantage of low computational complexity, their tracking performance on small targets and fast-motion scenarios is not satisfactory. In this paper, we present a novel multi-level prediction Siamese network (MLPS) for object tracking in UAV videos, which consists of a Siamese feature extraction module and a multi-level prediction module. The multi-level prediction module makes full use of the features at each layer to achieve robust evaluation of targets at different scales. Meanwhile, for small-size target tracking, we design a residual feature fusion block that constrains the low-level feature representation with high-level abstract semantics, improving the tracker's ability to distinguish scene details. In addition, we propose a layer attention fusion block that is sensitive to the informative features of each layer and achieves adaptive fusion of different levels of correlation responses by dynamically balancing the multi-layer features. Extensive experiments on several UAV tracking benchmarks demonstrate that MLPS achieves state-of-the-art performance and runs at over 97 FPS.
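The residual feature fusion idea, high-level semantics constraining a low-level map, can be sketched as below. The 1x1 projection, bilinear upsampling, and channel counts are assumptions made for illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFeatureFusion(nn.Module):
    # Projects high-level semantics to the low-level channel count,
    # upsamples, and adds them as a residual constraint on the low-level map.
    def __init__(self, low_c=256, high_c=512):
        super().__init__()
        self.proj = nn.Conv2d(high_c, low_c, kernel_size=1)
    def forward(self, low, high):
        high = F.interpolate(self.proj(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low + high  # residual fusion of semantics into details

low = torch.randn(1, 256, 31, 31)   # shallow, detail-rich feature map
high = torch.randn(1, 512, 15, 15)  # deep, semantic feature map
print(ResidualFeatureFusion()(low, high).shape)  # torch.Size([1, 256, 31, 31])
```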

Keywords:

Feature fusion; Multi-level prediction; Small target; UAV tracking

Citations:

GB/T 7714 Zhu, Mu , Zhang, Hui , Zhang, Jing et al. Multi-level prediction Siamese network for real-time UAV visual tracking [J]. | IMAGE AND VISION COMPUTING , 2020 , 103 .
MLA Zhu, Mu et al. "Multi-level prediction Siamese network for real-time UAV visual tracking" . | IMAGE AND VISION COMPUTING 103 (2020) .
APA Zhu, Mu , Zhang, Hui , Zhang, Jing , Zhuo, Li . Multi-level prediction Siamese network for real-time UAV visual tracking . | IMAGE AND VISION COMPUTING , 2020 , 103 .
Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning SCIE
Journal article | 2020, 20 (2) | SENSORS
WoS Core Collection citations: 55

Abstract:

Object tracking in RGB-thermal (RGB-T) videos is increasingly used in many fields due to the all-weather, all-day working capability of the dual-modality imaging system and the rapid development of low-cost, miniaturized infrared camera technology. However, it remains very challenging to effectively fuse dual-modality information to build a robust RGB-T tracker. In this paper, an RGB-T object tracking algorithm based on a modal-aware attention network and competitive learning (MaCNet) is proposed, which comprises a feature extraction network, a modal-aware attention network, and a classification network. The feature extraction network adopts a two-stream form to extract features from each modality image. The modal-aware attention network integrates the original data, establishes an attention model that characterizes the importance of different feature layers, and then guides the feature fusion to enhance the information interaction between modalities. The classification network constructs a modality-egoistic loss function through three parallel binary classifiers acting on the RGB branch, the thermal infrared branch, and the fusion branch, respectively. Guided by the training strategy of competitive learning, the entire network is fine-tuned toward the optimal fusion of the two modalities. Extensive experiments on several publicly available RGB-T datasets show that our tracker achieves superior performance compared with the latest RGB-T and RGB tracking approaches.
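A rough sketch of the three parallel binary classifiers is below. The abstract does not give the modality-egoistic weighting or the competitive-learning schedule, so equal branch weights are assumed here purely for illustration.

```python
import torch
import torch.nn.functional as F

def three_branch_loss(rgb_logits, tir_logits, fused_logits, labels):
    # Three parallel binary classifiers on the RGB, thermal-infrared, and
    # fusion branches; equal weighting is an assumption made in this sketch.
    return (F.cross_entropy(rgb_logits, labels)
            + F.cross_entropy(tir_logits, labels)
            + F.cross_entropy(fused_logits, labels))

labels = torch.randint(0, 2, (16,))  # target vs. background samples
loss = three_branch_loss(torch.randn(16, 2), torch.randn(16, 2),
                         torch.randn(16, 2), labels)
print(loss.item())
```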

Keywords:

competitive learning; cross-modal data fusion; modal-aware attention network; RGB-T object tracking

Citations:

GB/T 7714 Zhang, Hui , Zhang, Lei , Zhuo, Li et al. Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning [J]. | SENSORS , 2020 , 20 (2) .
MLA Zhang, Hui et al. "Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning" . | SENSORS 20 . 2 (2020) .
APA Zhang, Hui , Zhang, Lei , Zhuo, Li , Zhang, Jing . Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning . | SENSORS , 2020 , 20 (2) .
Video Quality of Experience Metric for Dynamic Adaptive Streaming Services Using DASH Standard and Deep Spatial-Temporal Representation of Video SCIE
Journal article | 2020, 10 (5) | APPLIED SCIENCES-BASEL
WoS Core Collection citations: 6

Abstract:

DASH (Dynamic Adaptive Streaming over HTTP) is a universal, unified multimedia streaming standard that selects the appropriate video bitrate to improve the user's Quality of Experience (QoE) according to network conditions, client status, and other factors. Since quantitatively expressing the user's QoE is itself difficult, this paper studies the distortion caused by video compression, network transmission, and other factors, and then proposes a video QoE metric for dynamic adaptive streaming services. Three-dimensional convolutional neural networks (3D CNN) and Long Short-Term Memory (LSTM) are used together to extract deep spatial-temporal features that represent the content characteristics of the video. While accounting for the effect on QoE of quality fluctuations caused by bitrate switching, other factors such as video content characteristics, video quality, and video fluency are combined to form the input feature vector. The ridge regression method is adopted to establish a QoE metric that dynamically describes the relationship between the input feature vector and the Mean Opinion Score (MOS). Experimental results on different datasets demonstrate that the prediction accuracy of the proposed method surpasses the state-of-the-art methods, proving that the proposed QoE model can effectively guide the client's bitrate selection in dynamic adaptive streaming media services.
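The final mapping from the input feature vector to MOS is ridge regression, which is compact enough to sketch directly via its closed form, w = (XᵀX + λI)⁻¹Xᵀy. The feature dimension and data below are synthetic placeholders, not the paper's features.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.random.rand(200, 32)      # 200 clips x 32-dim input feature vectors
y = 1 + 4 * np.random.rand(200)  # MOS labels on a 1-5 scale (dummy)
w = ridge_fit(X, y)
pred_mos = X @ w                 # predicted MOS for each clip
print(pred_mos.shape)            # (200,)
```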

Keywords:

DASH; deep spatial-temporal representation; metric; mobile video; quality of experience

Citations:

GB/T 7714 Du, Lina , Zhuo, Li , Li, Jiafeng et al. Video Quality of Experience Metric for Dynamic Adaptive Streaming Services Using DASH Standard and Deep Spatial-Temporal Representation of Video [J]. | APPLIED SCIENCES-BASEL , 2020 , 10 (5) .
MLA Du, Lina et al. "Video Quality of Experience Metric for Dynamic Adaptive Streaming Services Using DASH Standard and Deep Spatial-Temporal Representation of Video" . | APPLIED SCIENCES-BASEL 10 . 5 (2020) .
APA Du, Lina , Zhuo, Li , Li, Jiafeng , Zhang, Jing , Li, Xiaoguang , Zhang, Hui . Video Quality of Experience Metric for Dynamic Adaptive Streaming Services Using DASH Standard and Deep Spatial-Temporal Representation of Video . | APPLIED SCIENCES-BASEL , 2020 , 10 (5) .
Residual Dense Network Based on Channel-Spatial Attention for the Scene Classification of a High-Resolution Remote Sensing Image SCIE
Journal article | 2020, 12 (11) | REMOTE SENSING
WoS Core Collection citations: 31

Abstract:

Scene classification of remote sensing images, an important task in understanding remote sensing imagery, has been widely used in various fields. A high-resolution remote sensing scene in particular contains rich information and complex content. Since the scene content of a remote sensing image is tightly coupled with its spatial relationships, designing an effective feature extraction network that fully mines the spatial information in a high-resolution remote sensing image directly decides the quality of classification. In recent years, convolutional neural networks (CNNs) have achieved excellent performance in remote sensing image classification; in particular, the residual dense network (RDN), one of the representative CNNs, shows a strong feature learning ability because it fully utilizes the information of all convolutional layers. Therefore, we design an RDN based on channel-spatial attention for scene classification of high-resolution remote sensing images. First, multi-layer convolutional features are fused with residual dense blocks. Then, a channel-spatial attention module is added to obtain more effective feature representations. Finally, a softmax classifier is applied to classify the scene, after a data augmentation strategy is adopted to meet the training requirements of the network parameters. Five experiments are conducted on the UC Merced Land-Use Dataset (UCM) and the Aerial Image Dataset (AID), and the competitive results demonstrate that our method extracts more effective features and is more conducive to scene classification.
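One plausible reading of the spatial half of the channel-spatial attention module is CBAM-style spatial gating, sketched below; this is an assumption for illustration, and the paper's module may be structured differently. (The channel half would resemble the SE-style gate sketched for the object-detection record above.)

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: pool over channels (avg + max), then a
    # conv produces a per-position sigmoid gate (kernel size is illustrative).
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):                  # (N, C, H, W)
        avg = x.mean(dim=1, keepdim=True)  # (N, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                    # re-weight spatial positions

feat = torch.randn(2, 64, 56, 56)          # dummy scene feature map
print(SpatialAttention()(feat).shape)      # torch.Size([2, 64, 56, 56])
```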

Keywords:

channel-spatial attention; high-resolution remote sensing image; residual dense network; scene classification

Citations:

GB/T 7714 Zhao, Xiaolei , Zhang, Jing , Tian, Jimiao et al. Residual Dense Network Based on Channel-Spatial Attention for the Scene Classification of a High-Resolution Remote Sensing Image [J]. | REMOTE SENSING , 2020 , 12 (11) .
MLA Zhao, Xiaolei et al. "Residual Dense Network Based on Channel-Spatial Attention for the Scene Classification of a High-Resolution Remote Sensing Image" . | REMOTE SENSING 12 . 11 (2020) .
APA Zhao, Xiaolei , Zhang, Jing , Tian, Jimiao , Zhuo, Li , Zhang, Jie . Residual Dense Network Based on Channel-Spatial Attention for the Scene Classification of a High-Resolution Remote Sensing Image . | REMOTE SENSING , 2020 , 12 (11) .
In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse SCIE
Journal article | 2020, 399 | JOURNAL OF HAZARDOUS MATERIALS
WoS Core Collection citations: 31

Abstract:

Nitroaromatic compounds (NACs) in the environment can cause serious public health and environmental problems due to their potential toxicity. This study established quantitative structure-toxicity relationship (QSTR) models for the acute oral toxicity of NACs towards rats, following the stringent OECD principles for QSTR modelling. All models were assessed with various internationally accepted validation metrics and the OECD criteria. The best QSTR model contains seven simple, interpretable 2D descriptors with defined physicochemical meaning. Mechanistic interpretation indicated that van der Waals surface area, presence of C-F at topological distance 6, heteroatom content, and frequency of C-N at topological distance 9 are the main factors responsible for the toxicity of NACs. The proposed model was successfully applied to a true external set (295 compounds), and its prediction reliability was analysed and discussed. Moreover, rat-mouse and mouse-rat interspecies quantitative toxicity-toxicity relationship (iQTTR) models were also constructed, validated, and employed to predict toxicity for true external sets of 67 and 265 compounds, respectively. These models showed good external predictivity and can be used to rapidly predict the rat oral acute toxicity of new or untested NACs falling within their applicability domain, which is beneficial for environmental risk assessment and regulatory purposes.
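The modelling workflow, a handful of interpretable 2D descriptors regressed against an acute-toxicity endpoint and checked on held-out compounds, can be sketched as below. The descriptor matrix and labels are synthetic placeholders, not the paper's data, and a plain linear model stands in for whatever regression the authors used.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Placeholder data: n compounds x 7 2D descriptors (values are random here;
# the paper's descriptors include e.g. van der Waals surface area).
X = np.random.rand(314, 7)
y = np.random.rand(314)  # acute oral toxicity endpoint, e.g. log(1/LD50)

# Hold out compounds to mimic external validation of the QSTR model.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

# External R^2 on held-out compounds (meaningless on random data; shown
# only to illustrate the train / external-validation split).
print(model.score(X_te, y_te))
```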

Keywords:

Acute oral toxicity; iQTTR; Mechanism of toxicity; Nitroaromatic compounds; QSTR; Risk assessment

Citations:

GB/T 7714 Hao, Yuxing , Sun, Guohui , Fan, Tengjiao et al. In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse [J]. | JOURNAL OF HAZARDOUS MATERIALS , 2020 , 399 .
MLA Hao, Yuxing et al. "In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse" . | JOURNAL OF HAZARDOUS MATERIALS 399 (2020) .
APA Hao, Yuxing , Sun, Guohui , Fan, Tengjiao , Tang, Xiaoyu , Zhang, Jing , Liu, Yongdong et al. In vivo toxicity of nitroaromatic compounds to rats: QSTR modelling and interspecies toxicity relationship with mouse . | JOURNAL OF HAZARDOUS MATERIALS , 2020 , 399 .