Your search:
Scholar name: 张菁
Abstract:
The invention provides a low-light image enhancement method, apparatus, electronic device, and storage medium. The method comprises: acquiring a low-light image to be enhanced; feeding the low-light image into an image decomposition network to obtain a first reflectance component map and a first illumination component map; feeding the first reflectance component map and the first illumination component map into a reflectance adjustment network to obtain a second reflectance component map, and feeding the first illumination component map into an illumination adjustment network to obtain a second illumination component map; and obtaining the enhanced image from the second reflectance component map and the second illumination component map. The method uses the image decomposition network to decompose the low-light image accurately, and uses the reflectance and illumination adjustment networks to refine the decomposed components in a coarse-to-fine manner, thereby effectively improving the accuracy of the enhanced image.
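The pipeline in this abstract follows the Retinex model, in which an image is the element-wise product of a reflectance component and an illumination component, and the final step recombines the two adjusted components. A minimal numpy sketch of that recomposition step (the decomposition and adjustment networks themselves are assumed and replaced here by constant toy maps):

```python
import numpy as np

def recompose(reflectance: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Retinex recomposition: enhanced image = reflectance * illumination,
    clipped to the valid [0, 1] intensity range."""
    return np.clip(reflectance * illumination, 0.0, 1.0)

# Toy example: a dim image decomposed into reflectance and a low illumination
# map; boosting the illumination map brightens the recomposed result.
reflectance = np.full((4, 4, 3), 0.8)   # scene colors, lighting-independent
low_light   = np.full((4, 4, 1), 0.1)   # dim illumination map
boosted     = np.full((4, 4, 1), 0.9)   # illumination after the adjustment network

dark   = recompose(reflectance, low_light)
bright = recompose(reflectance, boosted)
```

The single-channel illumination map broadcasts across the three color channels, which is the usual convention for Retinex-style decompositions.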
Citation:
Copy and paste one of the preformatted citation styles, or use one of the links to import it into reference-management software.
GB/T 7714 | 李嘉锋, 郝帅, 况玲艳, et al. 低光图像的增强方法、装置、电子设备及存储介质: CN202310028246.9[P]. 2023-01-09.
MLA | 李嘉锋, et al. "低光图像的增强方法、装置、电子设备及存储介质": CN202310028246.9. 2023-01-09.
APA | 李嘉锋, 郝帅, 况玲艳, 张菁, 卓力. 低光图像的增强方法、装置、电子设备及存储介质: CN202310028246.9. 2023-01-09.
Abstract:
A two-level hierarchical scheme for video-based person re-identification (re-id) is presented, with the aim of learning a pedestrian appearance model through more complete walking-cycle extraction. Specifically, given a video with consecutive frames, the first level detects the key frame with a lightweight convolutional neural network (CNN), PCANet, to summarize the video content. At the second level, based on the detected key frame, the pedestrian walking cycle is extracted from the long video sequence. Local Maximal Occurrence (LOMO) features of the walking cycle are then extracted to represent the pedestrian's appearance. In contrast to existing walking-cycle-based person re-id approaches, the proposed scheme relaxes the limit on the number of steps per walking cycle, making it flexible and less affected by noisy frames. Experiments on two benchmark datasets, PRID 2011 and iLIDS-VID, demonstrate that the proposed scheme outperforms six state-of-the-art video-based re-id methods and is more robust to severe video noise and to variations in pose, lighting, and camera viewpoint.
Keywords:
Video-based person re-identification; Convolutional neural network; Walking cycle extraction; Key frame detection
Citation:
GB/T 7714 | Youjiao, Li, Li, Zhuo, Jiafeng, Li, et al. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features[J]. CHINESE JOURNAL OF ELECTRONICS, 2021, 30(2): 289-295.
MLA | Youjiao, Li, et al. "A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features." CHINESE JOURNAL OF ELECTRONICS 30.2 (2021): 289-295.
APA | Youjiao, Li, Li, Zhuo, Jiafeng, Li, Jing, Zhang. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features. CHINESE JOURNAL OF ELECTRONICS, 2021, 30(2), 289-295.
Abstract:
Since the era of we-media began, the live video industry has shown explosive growth. For large-scale live video streaming, especially streams containing crowd events that may have great social impact, effectively identifying and supervising crowd activity is of great value to the healthy development of the industry. Existing crowd activity recognition mainly uses visual information and rarely exploits the correlations among crowd content or external knowledge. Therefore, a crowd activity recognition method for live video streaming is proposed using 3D-ResNet and a region graph convolution network (ReGCN). (1) After deep spatiotemporal features are extracted from the live video stream with 3D-ResNet, region proposals are generated by a region proposal network. (2) A weakly supervised ReGCN is constructed by taking region proposals as graph nodes and their correlations as edges. (3) Crowd activity is recognized by combining the output of the ReGCN, the deep spatiotemporal features, and the crowd motion intensity as external knowledge. Four experiments are conducted on the public Collective Activity Extended dataset and a real-world dataset, BJUT-CAD. The competitive results demonstrate that the method can effectively recognize crowd activity in live video streaming.
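The ReGCN step above treats region proposals as graph nodes and their correlations as edges. A minimal sketch of one graph-convolution layer over such a region graph, assuming a plain row-normalized adjacency with self-loops (the actual ReGCN layer and its learned weights are not specified in the abstract):

```python
import numpy as np

def graph_conv(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: aggregate neighbor features through a
    row-normalized adjacency (with self-loops), project, then apply ReLU."""
    a = adj + np.eye(adj.shape[0])         # add self-loops
    a = a / a.sum(axis=1, keepdims=True)   # row-normalize the adjacency
    return np.maximum(a @ feats @ weight, 0.0)

# Three region proposals as nodes; edges encode pairwise correlation.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.random.default_rng(0).normal(size=(3, 4))  # node features
weight = np.eye(4)                                    # identity projection for the sketch
out = graph_conv(adj, feats, weight)
```

Each output row mixes a node's own features with those of its correlated neighbors, which is how the graph structure propagates context between regions.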
Citation:
GB/T 7714 | Kang, Junpeng, Zhang, Jing, Li, Wensheng, et al. Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network[J]. IET IMAGE PROCESSING, 2021, 15(14): 3476-3486.
MLA | Kang, Junpeng, et al. "Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network." IET IMAGE PROCESSING 15.14 (2021): 3476-3486.
APA | Kang, Junpeng, Zhang, Jing, Li, Wensheng, Zhuo, Li. Crowd activity recognition in live video streaming via 3D-ResNet and region graph convolution network. IET IMAGE PROCESSING, 2021, 15(14), 3476-3486.
Abstract:
Complex backgrounds and spatial distributions make object detection in high-resolution remote sensing images highly challenging. In view of the objects' various scales, arbitrary orientations, shape variations, and dense arrangement, a multiscale object detection method for high-resolution remote sensing images is proposed using rotation-invariant deep features driven by channel attention. First, a channel attention module is added to our feature fusion and scaling-based single shot detector (FS-SSD) to strengthen long-term semantic dependence between objects and improve the discriminative ability of the deep features. Then, an oriented response convolution generates feature maps with orientation channels to produce rotation-invariant deep features. Finally, multiscale objects are predicted by fusing feature maps of various scales with the multiscale feature module in FS-SSD. Five experiments conducted on the NWPU VHR-10 dataset achieve better detection performance than the state-of-the-art methods.
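The channel-attention module above can be illustrated with a squeeze-and-excitation style sketch: pool each channel to a descriptor, pass it through a small bottleneck, and rescale the channels with the resulting gates. The weights here are random stand-ins, not the paper's learned parameters:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation style channel attention: global-average-pool
    each channel, pass through a reduction bottleneck, and rescale channels
    with sigmoid gates in (0, 1)."""
    squeeze = feat.mean(axis=(0, 1))                       # (C,) channel descriptor
    excite = sigmoid(np.maximum(squeeze @ w1, 0.0) @ w2)   # (C,) per-channel gates
    return feat * excite                                   # reweight each channel

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 8, 4))   # H x W x C feature map
w1 = rng.normal(size=(4, 2))        # reduction to C/2
w2 = rng.normal(size=(2, 4))        # restore to C
out = channel_attention(feat, w1, w2)
```

Because every gate lies strictly between 0 and 1, the module can only attenuate channels, letting training emphasize the informative ones.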
Citation:
GB/T 7714 | Zhao, Xiaolei, Zhang, Jing, Tian, Jimiao, et al. Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention[J]. INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42(15): 5754-5773.
MLA | Zhao, Xiaolei, et al. "Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention." INTERNATIONAL JOURNAL OF REMOTE SENSING 42.15 (2021): 5754-5773.
APA | Zhao, Xiaolei, Zhang, Jing, Tian, Jimiao, Zhuo, Li, Zhang, Jie. Multiscale object detection in high-resolution remote sensing images via rotation invariant deep features driven by channel attention. INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42(15), 5754-5773.
Abstract:
Live video hosted by streamers is sought after by more and more Internet users. A few streamers insert inappropriate actions into otherwise normal live video content for profit and popularity, greatly harming the network environment. To effectively regulate streamer behavior in live video, a streamer action recognition method with spatial-temporal attention and deep dictionary learning is proposed in this paper. First, after sampling frames from the live video, deep features with spatial context are extracted by a spatial attention network that focuses on the streamer's action region. Then, the per-frame deep features are fused by a temporal attention network that assigns weights to learn each frame's contribution to an action. Finally, deep dictionary learning sparsely represents the deep features to recognize streamer actions. Four experiments are conducted on a real-world dataset, and the competitive results demonstrate that the method improves both the accuracy and the speed of streamer action recognition in live video. (c) 2021 Elsevier B.V. All rights reserved.
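The temporal-attention fusion step (assigning weights to per-frame deep features before combining them) can be sketched as a softmax-weighted sum; the attention scores below are hypothetical stand-ins for the network's learned outputs:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def temporal_fuse(frame_feats: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Fuse per-frame features (T, D) into one video-level feature (D,)
    using softmax-normalized attention scores (T,)."""
    weights = softmax(scores)     # weights sum to 1 over the frames
    return weights @ frame_feats  # weighted sum over the time axis

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])            # 3 frames, 2-D features each
scores = np.array([0.1, 0.1, 5.0])        # third frame dominates the action
video_feat = temporal_fuse(feats, scores)
```

With a much higher score on the third frame, the fused feature is pulled almost entirely toward that frame's representation, which is the intended effect of frame attention.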
Keywords:
Streamer; Action recognition; Live video; Spatial-temporal attention; Deep dictionary learning
Citation:
GB/T 7714 | Li, Chenhao, Zhang, Jing, Yao, Jiacheng. Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning[J]. NEUROCOMPUTING, 2021, 453: 383-392.
MLA | Li, Chenhao, et al. "Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning." NEUROCOMPUTING 453 (2021): 383-392.
APA | Li, Chenhao, Zhang, Jing, Yao, Jiacheng. Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning. NEUROCOMPUTING, 2021, 453, 383-392.
Abstract:
Although deep learning has reached high accuracy in video content analysis, it does not meet the practical demands of porn streamer recognition in live video because of the many parameters and complex structure of deep network models. To improve recognition efficiency, a deep network model compression method based on multimodal knowledge distillation is proposed. First, the teacher model is trained with a visual-speech deep network to obtain the corresponding porn video prediction score. Second, a lightweight student model constructed with MobileNetV2 and Xception transfers knowledge from the teacher model using a multimodal knowledge distillation strategy. Finally, porn streamers in live video are recognized by combining the lightweight visual-speech student model with a bullet-screen text recognition network. Experimental results demonstrate that the proposed method effectively reduces the computational cost and improves recognition speed while maintaining adequate accuracy.
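The knowledge-transfer step above hinges on a distillation objective: the student is trained to match the teacher's temperature-softened output distribution. A sketch with hypothetical two-class logits (the paper's exact loss formulation is not given in the abstract):

```python
import numpy as np

def softmax_t(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-softened softmax; higher T flattens the distribution."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 4.0) -> float:
    """KL divergence between the teacher's soft targets and the student's
    softened predictions -- the core of a knowledge-distillation objective."""
    p = softmax_t(teacher_logits, T)   # soft targets from the teacher
    q = softmax_t(student_logits, T)   # student predictions
    return float(np.sum(p * np.log(p / q)))

teacher       = np.array([3.0, 1.0])   # hypothetical porn/normal logits
student_good  = np.array([2.8, 1.2])   # mimics the teacher closely
student_bad   = np.array([1.0, 3.0])   # disagrees with the teacher

loss_good = distillation_loss(student_good, teacher)
loss_bad  = distillation_loss(student_bad, teacher)
```

A student whose softened distribution tracks the teacher's incurs a lower loss, so minimizing it pushes the lightweight model toward the teacher's behavior.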
Keywords:
Lightweight student model; Knowledge distillation; Live video; Multimodal; Porn streamer recognition
Citation:
GB/T 7714 | Wang Liyuan, Zhang Jing, Yao Jiacheng, et al. Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation[J]. CHINESE JOURNAL OF ELECTRONICS, 2021, 30(6): 1096-1102.
MLA | Wang Liyuan, et al. "Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation." CHINESE JOURNAL OF ELECTRONICS 30.6 (2021): 1096-1102.
APA | Wang Liyuan, Zhang Jing, Yao Jiacheng, Zhuo Li. Porn Streamer Recognition in Live Video Based on Multimodal Knowledge Distillation. CHINESE JOURNAL OF ELECTRONICS, 2021, 30(6), 1096-1102.
Abstract:
An automatic localization method for the apex of the cardiac mitral valve based on CT images belongs to the field of medical image analysis. The invention first preprocesses CT images with a deep neural network to extract and represent their key features; then a deep reinforcement learning model performs agent-based landmark localization in the CT images to automatically detect the position of the mitral valve apex. The invention proposes an optimal-path search strategy that conveniently locates the mitral valve apex in CT images for physicians' diagnosis; it also offers a degree of extensibility as the manually specified target position changes, supporting machine image understanding in the context of medical image analysis.
Citation:
GB/T 7714 | 卓力, 王宗浩, 张辉, et al. 一种基于CT影像的心脏二尖瓣顶点的自动定位方法: CN202110274467.5[P]. 2021-03-12.
MLA | 卓力, et al. "一种基于CT影像的心脏二尖瓣顶点的自动定位方法": CN202110274467.5. 2021-03-12.
APA | 卓力, 王宗浩, 张辉, 张菁, 李晓光. 一种基于CT影像的心脏二尖瓣顶点的自动定位方法: CN202110274467.5. 2021-03-12.
Abstract:
A tongue posture abnormality discrimination method based on a lightweight convolutional neural network belongs to the fields of computer vision and tongue diagnosis in traditional Chinese medicine. The invention designs a shallow, easy-to-train convolutional neural network for classifying tongue posture. The method comprises three steps: first, constructing a tongue posture abnormality classification dataset; second, designing the classification network; third, training the network on the constructed dataset to obtain a classification model for discriminating abnormal tongue posture. The optimal network architecture was determined through multiple rounds of experiments; it achieves high classification accuracy with few convolutional and pooling layers and low computational complexity, meeting practical application demands.
Citation:
GB/T 7714 | 卓力, 韩翰, 张辉, et al. 一种基于轻型卷积神经网络的舌体姿态异常判别方法: CN202110243772.8[P]. 2021-03-05.
MLA | 卓力, et al. "一种基于轻型卷积神经网络的舌体姿态异常判别方法": CN202110243772.8. 2021-03-05.
APA | 卓力, 韩翰, 张辉, 李晓光, 张菁. 一种基于轻型卷积神经网络的舌体姿态异常判别方法: CN202110243772.8. 2021-03-05.
Abstract:
A collaborative classification method for tongue color and coating color in traditional Chinese medicine based on a convolutional neural network belongs to the fields of computer vision and TCM diagnostics. Since both tongue color and coating color are recognized from color features extracted over the tongue region, the two tasks are similar. The method first designs a shared deep neural network architecture to extract the deep features common to tongue color and coating color, as well as the semantic features specific to the tongue image; then the tongue-color and coating-color labels are encoded and combined into a joint label vector; finally, a deep neural network is trained to build a mapping model between the shared deep features and the joint label vector. This mapping recognizes both diagnostic attributes simultaneously; it is simple to implement, fully exploits the intrinsic correlation between the two attributes, and achieves higher recognition accuracy.
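The joint-label idea above can be sketched as encoding each (tongue color, coating color) pair into a single combined class index, so one classifier head predicts both attributes at once. The class names are hypothetical placeholders, not the patent's actual label sets:

```python
TONGUE_COLORS = ["pale", "red", "crimson", "purple"]   # hypothetical tongue-color classes
COATING_COLORS = ["white", "yellow", "gray-black"]     # hypothetical coating-color classes

def combine_labels(tongue_idx: int, coating_idx: int) -> int:
    """Encode the (tongue color, coating color) pair as one joint class
    index, so a single classification head covers both attributes."""
    return tongue_idx * len(COATING_COLORS) + coating_idx

def split_label(joint: int) -> tuple:
    """Recover the two attribute indices from the joint class index."""
    return divmod(joint, len(COATING_COLORS))

joint = combine_labels(1, 2)   # ("red", "gray-black")
recovered = split_label(joint)
```

The joint space has len(TONGUE_COLORS) x len(COATING_COLORS) classes, which lets the network model correlations between the two attributes instead of predicting them independently.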
Citation:
GB/T 7714 | 卓力, 孙亮亮, 张辉, et al. 一种基于卷积神经网络的中医舌色苔色协同分类方法: CN202110216858.1[P]. 2021-02-26.
MLA | 卓力, et al. "一种基于卷积神经网络的中医舌色苔色协同分类方法": CN202110216858.1. 2021-02-26.
APA | 卓力, 孙亮亮, 张辉, 张菁, 李晓光. 一种基于卷积神经网络的中医舌色苔色协同分类方法: CN202110216858.1. 2021-02-26.
Abstract:
The invention provides an image dehazing method, electronic device, storage medium, and computer program product. The method acquires a target hazy image to be dehazed and feeds it into a dehazing model, which processes the image and outputs the dehazed result. The dehazing model is obtained by unsupervised training on a training set of unpaired clear and hazy images; the model to be trained comprises a multi-scale attention module that performs both haze-adding and haze-removing transformations, and a discriminator that distinguishes real images in the training set from images generated by the multi-scale attention module. Because the dehazing model is trained on unpaired images, the constraint of requiring a paired training set is avoided, improving dehazing performance.
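Unpaired training with paired haze-adding and haze-removing transformations typically relies on a cycle-consistency constraint: adding haze and then removing it should reconstruct the original image. This is a common assumption for such setups, not a formulation stated in the patent; the stand-in "generators" below use a toy single-transmission haze model purely for illustration:

```python
import numpy as np

def cycle_consistency_loss(x: np.ndarray, dehaze, rehaze) -> float:
    """L1 cycle-consistency loss: rehaze(dehaze(x)) should reconstruct x,
    which is what makes training possible without paired clear/hazy images."""
    return float(np.abs(rehaze(dehaze(x)) - x).mean())

# Toy stand-in generators: a global-transmission haze model and its inverse.
t = 0.6  # hypothetical global transmission value
dehaze = lambda img: (img - (1 - t)) / t   # remove haze (invert the blend)
rehaze = lambda img: img * t + (1 - t)     # add haze (blend toward white)

hazy = np.random.default_rng(1).uniform(0.4, 1.0, size=(8, 8, 3))
loss = cycle_consistency_loss(hazy, dehaze, rehaze)
```

Since the toy pair inverts exactly, the loss is near zero; with learned networks the loss is nonzero and is minimized jointly with the discriminator's adversarial objective.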
Citation:
GB/T 7714 | 李嘉锋, 李耀鹏, 贾童瑶, et al. 图像去雾方法、电子设备、存储介质和计算机程序产品: CN202111234337.5[P]. 2021-10-22.
MLA | 李嘉锋, et al. "图像去雾方法、电子设备、存储介质和计算机程序产品": CN202111234337.5. 2021-10-22.
APA | 李嘉锋, 李耀鹏, 贾童瑶, 张菁, 卓力. 图像去雾方法、电子设备、存储介质和计算机程序产品: CN202111234337.5. 2021-10-22.