Your search:

Scholar name: 毋立芳 (Wu, Lifang)

Rationet: Ratio prediction network for object detection EI
Journal article | 2021, 21 (5), 1-14 | Sensors

Abstract:

In object detection of remote sensing images, anchor-free detectors often suffer from false boxes and sample imbalance, due to the use of single oriented features and the key-point-based boxing strategy. This paper presents a simple and effective anchor-free approach, RatioNet, with fewer parameters and higher accuracy for remote sensing images, which assigns all points in ground-truth boxes as positive samples to alleviate the problem of sample imbalance. To deal with false boxes from single oriented features, global features of objects are investigated to build a novel regression that predicts boxes through the width and height of objects and the corresponding ratios l_ratio and t_ratio, which reflect the location of objects. Besides, we introduce ratio-center to assign different weights to pixels, which successfully preserves high-quality boxes and effectively facilitates the performance. On the MS-COCO test-dev set, the proposed RatioNet achieves 49.7% AP. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
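The box regression described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decoding rule assumes l_ratio and t_ratio give the fractions of the box width and height lying to the left of and above the predicting pixel.

```python
import numpy as np

def decode_box(px, py, w, h, l_ratio, t_ratio):
    """Recover an axis-aligned box from one pixel's predictions.

    Assumed decoding (inferred from the abstract): l_ratio and t_ratio
    are the fractions of the box width/height to the left of / above
    the predicting pixel (px, py).
    """
    x1 = px - l_ratio * w
    y1 = py - t_ratio * h
    return np.array([x1, y1, x1 + w, y1 + h])
```

A pixel at the box center would predict l_ratio = t_ratio = 0.5 under this convention.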

Keywords:

Forecasting; Object detection; Object recognition; Remote sensing

Citation:

GB/T 7714 Zhao, Kuan, Zhao, Boxuan, Wu, Lifang, et al. Rationet: Ratio prediction network for object detection [J]. Sensors, 2021, 21 (5): 1-14.
MLA Zhao, Kuan, et al. "Rationet: Ratio prediction network for object detection". Sensors 21.5 (2021): 1-14.
APA Zhao, Kuan, Zhao, Boxuan, Wu, Lifang, Jian, Meng, Liu, Xu. Rationet: Ratio prediction network for object detection. Sensors, 2021, 21 (5), 1-14.
An image sentiment polarity classification method combining graph convolutional neural networks (一种结合图卷积神经网络的图像情感极性分类方法) incoPat
Patent | 2021-01-07 | CN202110019810.1

Abstract:

An image sentiment polarity classification method combining graph convolutional neural networks, in the fields of intelligent media computing and computer vision. First, object information is extracted from the training samples, and a graph model is built from the object information and visual features of each image. Next, a graph convolutional network extracts the object-interaction information contained in the graph model and fuses it with the features of a convolutional neural network. The preprocessed training samples are then fed into the network, and a loss function and optimizer iteratively update the model parameters until convergence, completing training. Finally, test data are fed into the network to obtain the model's predictions and classification accuracy. By extracting the interaction features of image objects in the sentiment space, the invention makes the classification features better match the objects' sentimental characteristics and the mechanism of human emotion triggering; adding high-level semantic features on top of visual features helps improve the performance of sentiment classification algorithms in practical application scenarios.

Citation:

GB/T 7714 毋立芳, 张恒, 邓斯诺, et al. 一种结合图卷积神经网络的图像情感极性分类方法: CN202110019810.1 [P]. 2021-01-07.
MLA 毋立芳, et al. "一种结合图卷积神经网络的图像情感极性分类方法": CN202110019810.1. 2021-01-07.
APA 毋立芳, 张恒, 邓斯诺, 石戈, 简萌, 相叶. 一种结合图卷积神经网络的图像情感极性分类方法: CN202110019810.1. 2021-01-07.
Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction SCIE
Journal article | 2021, 11 (4) | APPLIED SCIENCES-BASEL
WoS Core Collection citations: 3

Abstract:

With the popularity of online opinion expressing, automatic sentiment analysis of images has gained considerable attention. Most methods focus on effectively extracting the sentimental features of images, such as enhancing local features through saliency detection or instance segmentation tools. However, as a high-level abstraction, the sentiment is difficult to accurately capture with the visual element because of the "affective gap". Previous works have overlooked the contribution of the interaction among objects to the image sentiment. We aim to utilize interactive characteristics of objects in the sentimental space, inspired by human sentimental principles that each object contributes to the sentiment. To achieve this goal, we propose a framework to leverage the sentimental interaction characteristic based on a Graph Convolutional Network (GCN). We first utilize an off-the-shelf tool to recognize objects and build a graph over them. Visual features represent nodes, and the emotional distances between objects act as edges. Then, we employ GCNs to obtain the interaction features among objects, which are fused with the CNN output of the whole image to predict the final results. Experimental results show that our method exceeds the state-of-the-art algorithm, demonstrating that the rational use of interaction features can improve performance for sentiment analysis.
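The interaction-modeling step above can be sketched as a plain one-layer GCN over the object graph followed by feature fusion. The normalization, mean pooling, and concatenation choices below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 X W).

    A: (n, n) edge weights derived from emotional distances between objects.
    X: (n, d) visual node features. W: (d, k) learnable weights.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU activation

def fuse(cnn_feat, node_feats):
    """Mean-pool object-interaction features and concatenate them
    with the whole-image CNN feature before the final classifier."""
    return np.concatenate([cnn_feat, node_feats.mean(axis=0)])
```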

Keywords:

convolutional neural networks; graph convolutional networks; sentiment classification; visual sentiment analysis

Citation:

GB/T 7714 Wu, Lifang, Zhang, Heng, Deng, Sinuo, et al. Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction [J]. APPLIED SCIENCES-BASEL, 2021, 11 (4).
MLA Wu, Lifang, et al. "Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction". APPLIED SCIENCES-BASEL 11.4 (2021).
APA Wu, Lifang, Zhang, Heng, Deng, Sinuo, Shi, Ge, Liu, Xu. Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction. APPLIED SCIENCES-BASEL, 2021, 11 (4).
Visual presentation for monitoring layer-wise curing quality in DLP 3D printing SCIE
Journal article | 2021 | RAPID PROTOTYPING JOURNAL
WoS Core Collection citations: 4

Abstract:

Purpose: This paper aims to address the problem of uncertain product quality in digital light processing (DLP) three-dimensional (3D) printing; a scheme is proposed to qualitatively estimate whether a layer is cured with qualified quality or not. Design/methodology/approach: A thermochromic pigment whose color fades at 45 degrees C is prepared as the indicator and mixed with the resin. A visual surveillance framework is proposed to monitor the visual variation over the entire curing process. The exposure region is divided into 30 x 30 sub-regions; gray-level variation curves (curing curves) in all sub-regions are classified as normal or abnormal, and a corresponding printing control strategy is designed to improve the percentage of qualified printed objects. Findings: The temperature variation caused by the released reaction heat on the exposure surface is consistent across regions under the homogenized light intensity. The temperature in depth begins to rise at different times: regions near the light source rise earlier, and those far from the light source rise later. Thus, the color of resin mixed with the thermochromic pigment fades gradually over the entire solidification process. The color variation in regions with defects such as bubbles or insufficient material filling differs markedly from that in normal curing regions. Originality/value: A temperature-sensitive organic chromatic pigment is prepared to present the visual variation over the entire curing process. A novel 3D printing scheme with visual surveillance is proposed to monitor the layer-wise curing quality and to stop, in time, possibly unqualified printing resulting from bubbles, insufficient material filling, etc.
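The per-sub-region curing curves can be sketched as below. Both `region_curves` and the fade threshold in `is_normal` are illustrative assumptions; the paper's actual normal/abnormal classifier and threshold are not reproduced here.

```python
import numpy as np

def region_curves(frames, n=30):
    """Mean gray level per sub-region over time.

    frames: (T, H, W) grayscale frames covering one layer's curing period.
    Returns an (n, n, T) array: one gray-level (curing) curve per sub-region.
    """
    T, H, W = frames.shape
    hs, ws = H // n, W // n
    # crop to a multiple of the block size, then block-average via reshape
    blocks = frames[:, :hs * n, :ws * n].reshape(T, n, hs, n, ws)
    return np.transpose(blocks.mean(axis=(2, 4)), (1, 2, 0))

def is_normal(curve, min_fade=20.0):
    """Flag a curve as normal if the pigment color changes enough over the
    curing period; the threshold and direction here are illustrative only."""
    return abs(float(curve[-1]) - float(curve[0])) >= min_fade
```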

Keywords:

Curing curves; DLP 3D printing; Indicator; Organic thermochromic pigment; Solidification status; Visual surveillance

Citation:

GB/T 7714 Wu, Lifang, Liu, Zechao, Guan, Yupeng, et al. Visual presentation for monitoring layer-wise curing quality in DLP 3D printing [J]. RAPID PROTOTYPING JOURNAL, 2021.
MLA Wu, Lifang, et al. "Visual presentation for monitoring layer-wise curing quality in DLP 3D printing". RAPID PROTOTYPING JOURNAL (2021).
APA Wu, Lifang, Liu, Zechao, Guan, Yupeng, Cui, Kejian, Jian, Meng, Qin, Yuanyuan, et al. Visual presentation for monitoring layer-wise curing quality in DLP 3D printing. RAPID PROTOTYPING JOURNAL, 2021.
Weakly-supervised video object localization with attentive spatio-temporal correlation EI
Journal article | 2021, 145, 232-239 | Pattern Recognition Letters

Abstract:

Weakly-supervised video object localization is a challenging yet important task. The system should spatially localize the object of interest in videos, where only the descriptive sentences and their corresponding video segments are given in the training stage. Recent efforts propose to apply image-based Multiple Instance Learning (MIL) theory in this video task, and propagate the supervision from the video into frames by applying different frame-weighting strategies. Despite their promising progress, the spatio-temporal correlation between different object regions in videos has been largely ignored. To fill the research gap, in this work we introduce a simple but effective feature expression and aggregation framework, which utilizes the self-attention mechanism to capture the latent spatio-temporal correlation between multimodal object features and designs a multimodal interaction module to model the similarity between the semantic query in sentences and the object regions in videos. We conduct extensive experimental evaluation on the YouCookII and ActivityNet-Entities datasets, which demonstrates significant improvements over multiple competitive baselines. © 2021
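The correlation-capturing step can be sketched with standard scaled dot-product self-attention over region features. This stand-in omits the paper's multimodal interaction module, and the projection matrices are assumed learnable parameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over object-region features.

    X: (n, d) features of n object regions across the video segment;
    Wq, Wk, Wv: (d, k) projection matrices (learned in practice).
    Returns (n, k) features in which each region aggregates information
    from spatio-temporally correlated regions.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    a /= a.sum(axis=1, keepdims=True)  # each row: attention over all regions
    return a @ V
```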

Keywords:

Computation theory; Object recognition; Semantics

Citation:

GB/T 7714 Wang, Mingui, Cui, Di, Wu, Lifang, et al. Weakly-supervised video object localization with attentive spatio-temporal correlation [J]. Pattern Recognition Letters, 2021, 145: 232-239.
MLA Wang, Mingui, et al. "Weakly-supervised video object localization with attentive spatio-temporal correlation". Pattern Recognition Letters 145 (2021): 232-239.
APA Wang, Mingui, Cui, Di, Wu, Lifang, Jian, Meng, Chen, Yukun, Wang, Dong, et al. Weakly-supervised video object localization with attentive spatio-temporal correlation. Pattern Recognition Letters, 2021, 145, 232-239.
Latent label mining for group activity recognition in basketball videos SCIE
Journal article | 2021 | IET IMAGE PROCESSING
WoS Core Collection citations: 3

Abstract:

Motion information has been widely exploited for group activity recognition in sports video. However, in order to model and extract the various motion information between the adjacent frames, existing algorithms only use the coarse video-level labels as supervision cues. This may lead to the ambiguity of extracted features and the omission of changing rules of motion patterns that are also important for sports video recognition. In this paper, a latent label mining strategy for group activity recognition in basketball videos is proposed. The authors' novel strategy allows them to obtain the latent label set for marking different frames in an unsupervised way, and to build the frame-level and video-level representations with two separate levels of supervision signal. Firstly, the latent labels of motion patterns are mined using the unsupervised hierarchical clustering technique. The generated latent labels are then taken as the frame-level supervision signal to train a deep CNN for frame-level feature extraction. Lastly, the frame-level features are fed into an LSTM network to build the spatio-temporal representation for group activity recognition. Experimental results on the public NCAA dataset demonstrate that the proposed algorithm achieves state-of-the-art performance.
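The unsupervised latent-label step can be sketched with off-the-shelf hierarchical clustering; the Ward linkage choice below is an assumption, not necessarily the authors' exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def mine_latent_labels(frame_feats, n_labels):
    """Assign a latent motion-pattern label to every frame.

    frame_feats: (T, d) per-frame motion features; n_labels: number of
    latent motion patterns. Hierarchical (Ward) clustering groups frames
    without any manual annotation; the cluster ids then serve as the
    frame-level supervision signal for training the CNN.
    """
    Z = linkage(frame_feats, method="ward")
    return fcluster(Z, t=n_labels, criterion="maxclust")
```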

Citation:

GB/T 7714 Wu, Lifang, Li, Zeyu, Xiang, Ye, et al. Latent label mining for group activity recognition in basketball videos [J]. IET IMAGE PROCESSING, 2021.
MLA Wu, Lifang, et al. "Latent label mining for group activity recognition in basketball videos". IET IMAGE PROCESSING (2021).
APA Wu, Lifang, Li, Zeyu, Xiang, Ye, Jian, Meng, Shen, Jialie. Latent label mining for group activity recognition in basketball videos. IET IMAGE PROCESSING, 2021.
Identity-constrained noise modeling with metric learning for face anti-spoofing SCIE
Journal article | 2021, 434, 149-164 | NEUROCOMPUTING
WoS Core Collection citations: 8

Abstract:

Face presentation attack detection (PAD) has become a key component in face-based application systems. Typical face de-spoofing algorithms estimate the noise pattern of a spoof image to detect presentation attacks. These algorithms are device-independent and have good generalization ability. However, the noise modeling is not very effective because there is no ground truth (GT) with identity information for training the noise modeling network. To address this issue, we propose using the bona fide image of the corresponding subject in the training set as a type of GT called appr-GT with the identity information of the spoof image. A metric learning module is proposed to constrain the bona fide images generated from the spoof images so that they are near the appr-GT and far from the input images. This can reduce the influence of imaging environment differences between the appr-GT and GT of a spoof image. Extensive experimental results demonstrate that the reconstructed bona fide image and noise with high discriminative quality can be clearly separated from a spoof image. The proposed algorithm achieves competitive performance. (c) 2020 Published by Elsevier B.V.
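The metric-learning constraint can be sketched as a triplet-style margin loss: pull the reconstructed bona fide image toward the appr-GT and push it away from the spoof input. This is an illustrative stand-in for the paper's module; `margin` is a hypothetical hyperparameter, and distances would be computed in feature space in practice.

```python
import numpy as np

def identity_metric_loss(reconstructed, appr_gt, spoof_input, margin=1.0):
    """Triplet-style constraint: reconstruction near appr-GT (same identity),
    far from the spoof input it was generated from."""
    d_pos = np.linalg.norm(reconstructed - appr_gt)    # pull toward appr-GT
    d_neg = np.linalg.norm(reconstructed - spoof_input)  # push from spoof
    return max(d_pos - d_neg + margin, 0.0)
```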

Keywords:

Identity constrain; Metric learning; Noise modeling; Presentation attack detection

Citation:

GB/T 7714 Xu, Yaowen, Wu, Lifang, Jian, Meng, et al. Identity-constrained noise modeling with metric learning for face anti-spoofing [J]. NEUROCOMPUTING, 2021, 434: 149-164.
MLA Xu, Yaowen, et al. "Identity-constrained noise modeling with metric learning for face anti-spoofing". NEUROCOMPUTING 434 (2021): 149-164.
APA Xu, Yaowen, Wu, Lifang, Jian, Meng, Zheng, Wei-Shi, Ma, Yukun, Wang, Zhuming. Identity-constrained noise modeling with metric learning for face anti-spoofing. NEUROCOMPUTING, 2021, 434, 149-164.
Cosine metric supervised deep hashing with balanced similarity EI
Journal article | 2021, 448, 94-105 | Neurocomputing

Abstract:

Deep supervised hashing offers the prominent advantages of low storage cost, high computational efficiency and good retrieval performance, which have drawn attention in the field of large-scale image retrieval. However, similarity-preserving, quantization errors and imbalanced data are still great challenges in deep supervised hashing. This paper proposes a pairwise similarity-preserving deep hashing scheme to handle the aforementioned problems in a unified framework, termed Cosine Metric Supervised Deep Hashing with Balanced Similarity (BCMDH). BCMDH integrates contrastive cosine similarity and cosine distance entropy quantization to preserve the original semantic distribution and reduce the quantization errors simultaneously. Furthermore, a weighted similarity measure with cosine metric entropy is designed to reduce the impact of imbalanced data, which adaptively assigns weights according to sample attributes (pos/neg and easy/hard) in the embedding process of similarity-preserving. The experimental results on four widely-used datasets demonstrate that the proposed method is capable of generating hash codes of high quality and improving large-scale image retrieval performance. © 2021
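The weighted similarity-preserving idea can be sketched as a per-pair weighted cosine loss. The squared-error form and the scalar weight `w`, standing in for BCMDH's adaptive pos/neg, easy/hard weighting, are assumptions rather than the paper's exact objective.

```python
import numpy as np

def balanced_cosine_loss(h1, h2, similar, w=1.0):
    """Weighted pairwise cosine loss over (relaxed) hash codes h1, h2.

    Similar pairs are pulled toward cosine similarity +1, dissimilar
    pairs toward -1; w models the adaptive per-pair weight.
    """
    s = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2))
    target = 1.0 if similar else -1.0
    return w * (s - target) ** 2
```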

Keywords:

Computational efficiency; Deep learning; Digital storage; Entropy; Hash functions; Image enhancement; Image retrieval; Large dataset; Semantics

Citation:

GB/T 7714 Hu, Wenjin, Wu, Lifang, Jian, Meng, et al. Cosine metric supervised deep hashing with balanced similarity [J]. Neurocomputing, 2021, 448: 94-105.
MLA Hu, Wenjin, et al. "Cosine metric supervised deep hashing with balanced similarity". Neurocomputing 448 (2021): 94-105.
APA Hu, Wenjin, Wu, Lifang, Jian, Meng, Chen, Yukun, Yu, Hui. Cosine metric supervised deep hashing with balanced similarity. Neurocomputing, 2021, 448, 94-105.
Semantic manifold modularization-based ranking for image recommendation SCIE
Journal article | 2021, 120 | PATTERN RECOGNITION
WoS Core Collection citations: 13

Abstract:

As the Internet confronts the multimedia explosion, it becomes urgent to investigate personalized recommendation for alleviating information overload and improving users' experience. Most personalized recommendation approaches pay their attention to collaborative filtering over users' interactions, which suffers greatly from the highly sparse interactions. In image recommendation, visual correlations among images that users consumed provide a piece of intrinsic evidence to reveal users' interests. It inspires us to investigate image recommendation over the dense visual graph of images instead of the sparse user interaction graph. In this paper, we propose a semantic manifold modularization-based ranking (MMR) for image recommendation. MMR leverages the dense visual manifold to propagate users' historical records and infer user-image correlations for image recommendation. Especially, it constrains interest propagation within semantically compact visual groups by manifold modularization to make a tradeoff between users' personality and graph smoothness in propagation. Experimental results demonstrate that user-consumed visual correlations actively help capture users' interests, and the proposed MMR can infer user-image correlations via visual manifold propagation for image recommendation. (c) 2021 Elsevier Ltd. All rights reserved.
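The propagation at the core of MMR can be sketched as a classic graph-ranking iteration over the visual affinity graph; the modularization constraint is omitted here, and `alpha` is a hypothetical smoothing parameter.

```python
import numpy as np

def manifold_ranking(S, y, alpha=0.9, iters=200):
    """Propagate a user's interaction record y over a normalized visual
    affinity matrix S: iterate f <- alpha*S@f + (1-alpha)*y, which
    converges to (1-alpha) * (I - alpha*S)^-1 @ y. Larger entries of the
    resulting f indicate images more correlated with the user's history."""
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1.0 - alpha) * y
    return f
```

The fixed point trades off graph smoothness (the `alpha*S@f` term) against fidelity to the user's own records (the `(1-alpha)*y` term), which mirrors the personality/smoothness tradeoff the abstract describes.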

Keywords:

Image recommendation; Manifold propagation; Modularization; User interest

Citation:

GB/T 7714 Jian, Meng, Guo, Jingjing, Zhang, Chenlin, et al. Semantic manifold modularization-based ranking for image recommendation [J]. PATTERN RECOGNITION, 2021, 120.
MLA Jian, Meng, et al. "Semantic manifold modularization-based ranking for image recommendation". PATTERN RECOGNITION 120 (2021).
APA Jian, Meng, Guo, Jingjing, Zhang, Chenlin, Jia, Ting, Wu, Lifang, Yang, Xun, et al. Semantic manifold modularization-based ranking for image recommendation. PATTERN RECOGNITION, 2021, 120.
Key frame extraction based on global motion statistics for team-sport videos SCIE
Journal article | 2021 | MULTIMEDIA SYSTEMS
WoS Core Collection citations: 9

Abstract:

Key frame extraction is an important manner of video summarization. It can be used to interpret video content quickly. Existing approaches first partition the entire video into video clips by shot boundary detection, and then extract key frames by frame clustering. However, in most team-sport videos, a video clip usually includes many events, and it is difficult to extract the key frames related to all of these events accurately, because different events of a game shot can have features of similar appearance. As is well known, most events in team-sport videos are attack and defense conversions, which are related to global translation. Therefore, by using fine-grained partition based on the global motion, a shot could be further partitioned into more video clips, from which more key frames could be extracted that are related to the events. In this study, global horizontal motion is introduced to further partition video clips into fine-grained video clips. Furthermore, global motion statistics are utilized to extract candidate key frames. Finally, the representative key frames are extracted based on spatial-temporal consistency and hierarchical clustering, and the redundant frames are removed. A dataset called SportKF is built, which includes 25 videos of 197,878 frames in 112 min and 764 key frames from four types of sports (basketball, football, American football and field hockey). The experimental results demonstrate that the proposed scheme achieves state-of-the-art performance by introducing global motion statistics.
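The global-motion partitioning can be sketched as follows; the median-flow estimate of camera motion and the sign-change cut rule are simplified assumptions about the paper's statistics, and `thresh` is an illustrative parameter.

```python
import numpy as np

def global_horizontal_motion(flow):
    """Dominant horizontal camera motion for one frame pair.

    flow: (H, W, 2) dense optical flow; the median of the x-component
    suppresses local (player/ball) motion and keeps the global translation.
    """
    return float(np.median(flow[..., 0]))

def partition_on_direction_change(motions, thresh=0.5):
    """Cut a shot into fine-grained clips wherever the dominant horizontal
    motion reverses sign, i.e. at attack/defense conversions."""
    cuts, prev = [], 0
    for i, m in enumerate(motions):
        s = 1 if m > thresh else (-1 if m < -thresh else 0)
        if s and prev and s != prev:  # direction flipped: start a new clip
            cuts.append(i)
        if s:
            prev = s
    return cuts
```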

Keywords:

Fine-grained video partition; Global motion statistics; Key frame; Optical flow; Redundant frame removal

Citation:

GB/T 7714 Yuan, Yuan, Lu, Zhe, Yang, Zhou, et al. Key frame extraction based on global motion statistics for team-sport videos [J]. MULTIMEDIA SYSTEMS, 2021.
MLA Yuan, Yuan, et al. "Key frame extraction based on global motion statistics for team-sport videos". MULTIMEDIA SYSTEMS (2021).
APA Yuan, Yuan, Lu, Zhe, Yang, Zhou, Jian, Meng, Wu, Lifang, Li, Zeyu, et al. Key frame extraction based on global motion statistics for team-sport videos. MULTIMEDIA SYSTEMS, 2021.
Address: Beijing University of Technology Library (100 Pingleyuan, Chaoyang District, Beijing, 100124). Contact: 010-67392185.
Copyright: Beijing University of Technology Library. Site built and maintained by Beijing Aiqinhai Lezhi Technology Co., Ltd.