Your search:
Scholar name: 毋立芳 (Wu, Lifang)
Abstract:
In recent years, a series of continuous fabrication technologies based on digital light processing (DLP) 3D printing have emerged and significantly improved the speed of 3D printing. However, limited by the resin filling speed, these technologies are suitable only for printing hollow structures. In this paper, an optimized protocol for developing a hybrid continuous and layer-wise DLP 3D printing mode is proposed based on computational fluid dynamics (CFD). The volume of fluid (VOF) method is used to simulate the behavior of resin flow, while Poiseuille flow, the Jacobs working curve, and the Beer-Lambert law are used to optimize the key control parameters for continuous and layer-wise printing. This strategy provides a novel simulation-based development scenario for establishing printing control parameters applicable to arbitrary structures. Experiments verified that the printing control parameters obtained by simulation can effectively improve the printing efficiency and the applicability of DLP 3D printing.
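The exposure-related control parameters mentioned above follow from the Jacobs working curve, which is itself derived from Beer-Lambert attenuation of light in the resin. A minimal sketch of that relationship, using hypothetical resin and projector values rather than the paper's own parameters:

```python
import math

def exposure_time_for_cure_depth(cure_depth_um, Dp_um, Ec_mj_cm2, intensity_mw_cm2):
    """Required exposure time (s) to cure a layer of the given thickness.

    Uses the Jacobs working curve, Cd = Dp * ln(E / Ec), where E = I * t is the
    areal energy dose delivered by the projector.
    """
    required_dose = Ec_mj_cm2 * math.exp(cure_depth_um / Dp_um)  # mJ/cm^2
    return required_dose / intensity_mw_cm2  # (mJ/cm^2) / (mW/cm^2) = seconds

# Hypothetical resin/projector parameters (not taken from the paper):
if __name__ == "__main__":
    t = exposure_time_for_cure_depth(cure_depth_um=100.0, Dp_um=80.0,
                                     Ec_mj_cm2=6.0, intensity_mw_cm2=10.0)
    print(f"exposure time per 100 um layer: {t:.2f} s")
```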
Keywords: Computational fluid dynamics; Resin filling; Control parameters; DLP 3D printing; Continuous printing; Layer-wise printing
Citation:
GB/T 7714: Zhao, Lidong, Zhang, Yan, Wu, Lifang, et al. Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation [J]. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2023, 125(3-4): 1511-1529.
MLA: Zhao, Lidong, et al. "Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation." INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY 125.3-4 (2023): 1511-1529.
APA: Zhao, Lidong, Zhang, Yan, Wu, Lifang, Zhao, Zhi, Men, Zening, Yang, Feng. Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2023, 125(3-4), 1511-1529.
Abstract:
In object detection for remote sensing images, anchor-free detectors often suffer from false boxes and sample imbalance due to the use of single oriented features and the keypoint-based boxing strategy. This paper presents a simple and effective anchor-free approach, RatioNet, with fewer parameters and higher accuracy for remote sensing images, which assigns all points in ground-truth boxes as positive samples to alleviate the problem of sample imbalance. To deal with false boxes caused by single oriented features, global features of objects are investigated to build a novel regression that predicts boxes via the width and height of objects and the corresponding ratios l_ratio and t_ratio, which reflect the location of objects. Besides, we introduce ratio-center to assign different weights to pixels, which preserves high-quality boxes and effectively improves performance. On the MS-COCO test-dev set, the proposed RatioNet achieves 49.7% AP. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
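One plausible reading of the ratio-based box regression described above; the abstract does not spell out the exact convention, so the decoding below is an assumption: at a pixel inside an object, the network predicts the box width and height plus l_ratio and t_ratio, taken here as the fractions of the box lying to the left of and above that pixel.

```python
import numpy as np

def decode_ratio_box(px, py, w, h, l_ratio, t_ratio):
    """Decode an (x1, y1, x2, y2) box from RatioNet-style predictions at pixel (px, py).

    Assumed convention (not given in the abstract): l_ratio / t_ratio are the
    fractions of the box width / height lying to the left of / above the pixel.
    """
    x1 = px - l_ratio * w
    y1 = py - t_ratio * h
    return np.array([x1, y1, x1 + w, y1 + h])

# Example: a pixel at (120, 80) predicting a 60x40 box, with the pixel 25% from
# the left edge and 50% from the top edge of the box.
print(decode_ratio_box(120, 80, 60, 40, 0.25, 0.5))  # -> [105.  60. 165. 100.]
```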
Keywords: Forecasting; Object detection; Object recognition; Remote sensing
Citation:
GB/T 7714: Zhao, Kuan, Zhao, Boxuan, Wu, Lifang, et al. Rationet: Ratio prediction network for object detection [J]. Sensors, 2021, 21(5): 1-14.
MLA: Zhao, Kuan, et al. "Rationet: Ratio prediction network for object detection." Sensors 21.5 (2021): 1-14.
APA: Zhao, Kuan, Zhao, Boxuan, Wu, Lifang, Jian, Meng, Liu, Xu. Rationet: Ratio prediction network for object detection. Sensors, 2021, 21(5), 1-14.
Abstract:
An image sentiment polarity classification method combining a graph convolutional neural network, in the fields of intelligent media computing and computer vision. First, object information is extracted from the training samples, and a graph model is built from the object information and visual features in each image. Second, a graph convolutional network extracts the object interaction information contained in the graph model, which is then fused with the features of a convolutional neural network. Next, the preprocessed training samples are fed into the network, and the model parameters are iteratively updated with a loss function and an optimizer until convergence, completing training. Finally, the test data are fed into the network to obtain the model's predictions and the classification accuracy. By extracting the interaction features of objects in the sentiment space, the invention makes the classification features better match the sentimental characteristics of objects and the mechanism by which human emotion is triggered; it adds high-level semantic features on top of visual features and helps improve the performance of sentiment classification algorithms in practical application scenarios.
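A minimal sketch of the per-image graph construction step described above. The Gaussian kernel over pairwise feature distances is only a stand-in for the patent's (unspecified) emotional-space distance:

```python
import numpy as np

def build_object_graph(node_feats, sigma=1.0):
    """Build a per-image graph over detected objects.

    node_feats: (N, D) visual features of the N detected objects.
    Returns a symmetric adjacency whose edge weights decay with pairwise
    feature distance; the real method measures distances in an emotional
    space, which this kernel merely approximates for illustration.
    """
    diff = node_feats[:, None, :] - node_feats[None, :, :]  # (N, N, D)
    dist = np.linalg.norm(diff, axis=-1)                     # (N, N)
    adj = np.exp(-dist ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(adj, 1.0)                               # self-loops
    return adj

# Three hypothetical detected objects with 4-d features:
feats = np.random.rand(3, 4).astype(np.float32)
print(build_object_graph(feats).round(2))
```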
Citation:
GB/T 7714: 毋立芳, 张恒, 邓斯诺, et al. 一种结合图卷积神经网络的图像情感极性分类方法: CN202110019810.1 [P]. 2021-01-07.
MLA: 毋立芳, et al. "一种结合图卷积神经网络的图像情感极性分类方法": CN202110019810.1. 2021-01-07.
APA: 毋立芳, 张恒, 邓斯诺, 石戈, 简萌, 相叶. 一种结合图卷积神经网络的图像情感极性分类方法: CN202110019810.1. 2021-01-07.
Abstract:
The motion information used in existing video action recognition schemes is a mixture of global motion (GM) and local motion (LM). In fact, GM and LM carry their own respective semantic concepts, so it is promising to decouple GM and LM from the mixed motions. Numerous efforts have been made on the design of global motion models for video encoding, video dejittering, video denoising, and so on. Nevertheless, some of these models are too basic to cover the camera motions found in action recognition videos, while others are over-complicated. In this paper, we focus on the characteristics of action recognition and propose a novel independent univariate GM model. It ignores camera rotation, which appears rarely in action recognition videos, and represents the GM in the x and y directions separately. Furthermore, GM is position invariant because it originates from the overall camera motion: pixels with global motion obey the same parametric model, and pixels with mixed motion can be treated as outliers. Motivated by this, we develop an iterative optimization scheme for GM estimation that removes outlier points step by step and estimates global motion in a coarse-to-fine manner. Finally, the LM is estimated through a spatio-temporal threshold-based method. Experimental results demonstrate that the proposed GM model achieves a better trade-off between model complexity and robustness, and that the iterative optimization scheme is more effective than existing algorithms. Comparative experiments using four popular action recognition models on UCF-101 (for action recognition) and NCAA (for group activity recognition) demonstrate that local motions are more effective than mixed motions. © 2021
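A simplified sketch of the iterative outlier-removal idea above, assuming a univariate linear motion model per axis; the paper's exact parameterisation may differ:

```python
import numpy as np

def estimate_global_motion_1d(coords, flow, n_iters=5, trim_percentile=80):
    """Robustly fit a univariate global-motion model: flow = a * coord + b.

    Following the abstract, x- and y-direction motion are handled
    independently, and pixels with large residuals (likely local or mixed
    motion) are removed as outliers iteration by iteration.
    """
    mask = np.ones(len(coords), dtype=bool)
    a, b = 0.0, 0.0
    for _ in range(n_iters):
        a, b = np.polyfit(coords[mask], flow[mask], deg=1)
        residual = np.abs(flow - (a * coords + b))
        # Keep only the points that fit the current global model well.
        mask = residual <= np.percentile(residual[mask], trim_percentile)
    return a, b, mask  # mask=False marks likely local motion

# Synthetic example: a 2-px global shift plus a few locally moving pixels.
xs = np.linspace(0, 100, 50)
u = np.full_like(xs, 2.0)
u[::10] += 8.0  # local-motion outliers
a, b, inliers = estimate_global_motion_1d(xs, u)
print(round(a, 3), round(b, 3), int(inliers.sum()))
```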
Keywords: Cameras; Economic and social effects; Image coding; Iterative methods; Motion estimation; Pixels; Semantics; Statistics; Video signal processing
Citation:
GB/T 7714: Wu, Lifang, Yang, Zhou, Jian, Meng, et al. Global motion estimation with iterative optimization-based independent univariate model for action recognition [J]. Pattern Recognition, 2021, 116.
MLA: Wu, Lifang, et al. "Global motion estimation with iterative optimization-based independent univariate model for action recognition." Pattern Recognition 116 (2021).
APA: Wu, Lifang, Yang, Zhou, Jian, Meng, Shen, Jialie, Yang, Yuchen, Lang, Xianglong. Global motion estimation with iterative optimization-based independent univariate model for action recognition. Pattern Recognition, 2021, 116.
Abstract:
With the popularity of online opinion expression, automatic sentiment analysis of images has gained considerable attention. Most methods focus on effectively extracting the sentimental features of images, such as enhancing local features through saliency detection or instance segmentation tools. However, as a high-level abstraction, sentiment is difficult to capture accurately from visual elements alone because of the "affective gap". Previous works have overlooked the contribution of the interaction among objects to the image sentiment. We aim to utilize the interactive characteristics of objects in the sentimental space, inspired by the human sentimental principle that each object contributes to the sentiment. To achieve this goal, we propose a framework that leverages the sentimental interaction characteristic based on a Graph Convolutional Network (GCN). We first utilize an off-the-shelf tool to recognize objects and build a graph over them: visual features represent nodes, and the emotional distances between objects act as edges. Then, we employ GCNs to obtain the interaction features among objects, which are fused with the CNN output of the whole image to predict the final results. Experimental results show that our method exceeds the state-of-the-art algorithms, demonstrating that the rational use of interaction features can improve performance in sentiment analysis.
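A minimal sketch of the GCN propagation and CNN-feature fusion described above. The weight matrix, mean pooling and concatenation-based fusion are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def gcn_layer(adj, H, W):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Hypothetical shapes: 3 object nodes with 4-d features, a 4x8 weight matrix,
# and a 16-d CNN feature of the whole image (all random stand-ins).
adj = np.random.rand(3, 3); adj = (adj + adj.T) / 2
H = np.random.rand(3, 4)
W = np.random.rand(4, 8)
cnn_feat = np.random.rand(16)

interaction_feat = gcn_layer(adj, H, W).mean(axis=0)   # pool node features
fused = np.concatenate([interaction_feat, cnn_feat])    # fusion by concatenation
print(fused.shape)  # (24,)
```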
Keywords: sentiment classification; graph convolutional networks; visual sentiment analysis; convolutional neural networks
Citation:
GB/T 7714: Wu, Lifang, Zhang, Heng, Deng, Sinuo, et al. Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction [J]. APPLIED SCIENCES-BASEL, 2021, 11(4).
MLA: Wu, Lifang, et al. "Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction." APPLIED SCIENCES-BASEL 11.4 (2021).
APA: Wu, Lifang, Zhang, Heng, Deng, Sinuo, Shi, Ge, Liu, Xu. Discovering Sentimental Interaction via Graph Convolutional Network for Visual Sentiment Prediction. APPLIED SCIENCES-BASEL, 2021, 11(4).
Abstract:
Deep supervised hashing offers the prominent advantages of low storage cost, high computational efficiency and good retrieval performance, which has drawn attention in the field of large-scale image retrieval. However, similarity preservation, quantization error and imbalanced data are still great challenges in deep supervised hashing. This paper proposes a pairwise similarity-preserving deep hashing scheme to handle the aforementioned problems in a unified framework, termed Cosine Metric Supervised Deep Hashing with Balanced Similarity (BCMDH). BCMDH integrates contrastive cosine similarity and cosine distance entropy quantization to preserve the original semantic distribution and reduce the quantization error simultaneously. Furthermore, a weighted similarity measure with cosine metric entropy is designed to reduce the impact of imbalanced data, adaptively assigning weights according to sample attributes (pos/neg and easy/hard) during similarity-preserving embedding. Experimental results on four widely used datasets demonstrate that the proposed method is capable of generating high-quality hash codes and improving large-scale image retrieval performance. © 2021
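A generic weighted contrastive cosine loss in the spirit of the description above; it is not the exact BCMDH formulation (the cosine distance entropy quantization term and the hardness-dependent weights are omitted):

```python
import numpy as np

def weighted_cosine_contrastive(u, v, similar, pos_weight=2.0, neg_weight=1.0, margin=0.2):
    """Weighted contrastive loss on cosine similarity for one pair of codes.

    similar=1 pulls the pair's cosine similarity toward 1; similar=0 pushes it
    below the margin. The pos/neg weights illustrate how imbalanced pairs can
    be re-weighted; BCMDH's actual weighting also depends on pair hardness.
    """
    cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    if similar:
        return pos_weight * (1.0 - cos)
    return neg_weight * max(0.0, cos - margin)

u, v = np.random.randn(48), np.random.randn(48)  # 48-bit-like continuous codes
print(weighted_cosine_contrastive(u, v, similar=1))
```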
Keywords: Computational efficiency; Deep learning; Digital storage; Entropy; Hash functions; Image enhancement; Image retrieval; Large dataset; Semantics
Citation:
GB/T 7714: Hu, Wenjin, Wu, Lifang, Jian, Meng, et al. Cosine metric supervised deep hashing with balanced similarity [J]. Neurocomputing, 2021, 448: 94-105.
MLA: Hu, Wenjin, et al. "Cosine metric supervised deep hashing with balanced similarity." Neurocomputing 448 (2021): 94-105.
APA: Hu, Wenjin, Wu, Lifang, Jian, Meng, Chen, Yukun, Yu, Hui. Cosine metric supervised deep hashing with balanced similarity. Neurocomputing, 2021, 448, 94-105.
Abstract:
Motion information has been widely exploited for group activity recognition in sports video. However, to model and extract the various motion information between adjacent frames, existing algorithms use only coarse video-level labels as supervision cues. This may lead to ambiguity in the extracted features and to the omission of the changing rules of motion patterns, which are also important for sports video recognition. In this paper, a latent label mining strategy for group activity recognition in basketball videos is proposed. The novel strategy obtains a set of latent labels for marking different frames in an unsupervised way, and builds the frame-level and video-level representations with two separate levels of supervision signal. First, the latent labels of motion patterns are mined using an unsupervised hierarchical clustering technique. The generated latent labels are then taken as the frame-level supervision signal to train a deep CNN for frame-level feature extraction. Finally, the frame-level features are fed into an LSTM network to build the spatio-temporal representation for group activity recognition. Experimental results on the public NCAA dataset demonstrate that the proposed algorithm achieves state-of-the-art performance.
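A minimal sketch of the latent-label mining step above, assuming SciPy's agglomerative (ward) clustering and an illustrative cluster count; the paper's clustering settings are not given in the abstract:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mine_latent_labels(frame_feats, n_clusters=5):
    """Assign each frame a latent motion-pattern label via hierarchical clustering.

    frame_feats: (num_frames, dim) motion features. The ward linkage and the
    cluster count are illustrative choices, not those of the paper.
    """
    Z = linkage(frame_feats, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")  # labels in 1..n_clusters

frames = np.random.rand(20, 32)  # hypothetical motion features for 20 frames
print(mine_latent_labels(frames))
```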
Citation:
GB/T 7714: Wu, Lifang, Li, Zeyu, Xiang, Ye, et al. Latent label mining for group activity recognition in basketball videos [J]. IET IMAGE PROCESSING, 2021.
MLA: Wu, Lifang, et al. "Latent label mining for group activity recognition in basketball videos." IET IMAGE PROCESSING (2021).
APA: Wu, Lifang, Li, Zeyu, Xiang, Ye, Jian, Meng, Shen, Jialie. Latent label mining for group activity recognition in basketball videos. IET IMAGE PROCESSING, 2021.
Abstract:
Face presentation attack detection (PAD) has become a key component in face-based application systems. Typical face de-spoofing algorithms estimate the noise pattern of a spoof image to detect presentation attacks. These algorithms are device-independent and have good generalization ability. However, the noise modeling is not very effective because there is no ground truth (GT) with identity information for training the noise modeling network. To address this issue, we propose using the bona fide image of the corresponding subject in the training set as a type of GT, called appr-GT, that carries the identity information of the spoof image. A metric learning module is proposed to constrain the bona fide images generated from the spoof images so that they are near the appr-GT and far from the input images. This reduces the influence of imaging environment differences between the appr-GT and the GT of a spoof image. Extensive experimental results demonstrate that the reconstructed bona fide image and noise, with high discriminative quality, can be clearly separated from a spoof image. The proposed algorithm achieves competitive performance. (c) 2020 Published by Elsevier B.V.
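A triplet-style sketch of the metric-learning constraint described above: pull the reconstructed bona fide image toward the appr-GT and push it away from the spoof input. The distance and margin form are assumptions, not the paper's exact loss:

```python
import numpy as np

def identity_metric_loss(generated, appr_gt, spoof_input, margin=1.0):
    """Margin-based constraint sketched from the abstract.

    generated: bona fide image reconstructed from the spoof input.
    appr_gt:   a real image of the same subject, used as approximate ground truth.
    """
    d_pos = np.mean((generated - appr_gt) ** 2)     # should be small
    d_neg = np.mean((generated - spoof_input) ** 2)  # should be large
    return max(0.0, d_pos - d_neg + margin)

g, gt, spoof = (np.random.rand(64, 64, 3) for _ in range(3))  # toy images
print(identity_metric_loss(g, gt, spoof))
```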
Keywords: Presentation attack detection; Metric learning; Identity constrain; Noise modeling
Citation:
GB/T 7714: Xu, Yaowen, Wu, Lifang, Jian, Meng, et al. Identity-constrained noise modeling with metric learning for face anti-spoofing [J]. NEUROCOMPUTING, 2021, 434: 149-164.
MLA: Xu, Yaowen, et al. "Identity-constrained noise modeling with metric learning for face anti-spoofing." NEUROCOMPUTING 434 (2021): 149-164.
APA: Xu, Yaowen, Wu, Lifang, Jian, Meng, Zheng, Wei-Shi, Ma, Yukun, Wang, Zhuming. Identity-constrained noise modeling with metric learning for face anti-spoofing. NEUROCOMPUTING, 2021, 434, 149-164.
Abstract:
Weakly-supervised video object localization is a challenging yet important task. The system should spatially localize the object of interest in videos, where only the descriptive sentences and their corresponding video segments are given in the training stage. Recent efforts propose to apply image-based Multiple Instance Learning (MIL) theory to this video task and propagate the supervision from the video into frames by applying different frame-weighting strategies. Despite their promising progress, the spatio-temporal correlation between different object regions in videos has been largely ignored. To fill this research gap, in this work we introduce a simple but effective feature expression and aggregation framework, which utilizes the self-attention mechanism to capture the latent spatio-temporal correlation between multimodal object features and designs a multimodal interaction module to model the similarity between the semantic query in sentences and the object regions in videos. We conduct an extensive experimental evaluation on the YouCookII and ActivityNet-Entities datasets, which demonstrates significant improvements over multiple competitive baselines. © 2021
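A minimal sketch of the two pieces named above: self-attention over region features, followed by a similarity score between the attended regions and the sentence query with an MIL-style max. Plain dot-product attention without learned projections is an assumption made here for brevity:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over region features (no learned projections)."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

def localize(region_feats, query_feat):
    """Score each region against the sentence query and pick the best one (MIL-style max)."""
    ctx = self_attention(region_feats)
    sims = ctx @ query_feat / (np.linalg.norm(ctx, axis=1) * np.linalg.norm(query_feat) + 1e-12)
    return int(np.argmax(sims)), sims

regions = np.random.rand(6, 128)  # hypothetical region features from one frame
query = np.random.rand(128)       # hypothetical sentence embedding
best, sims = localize(regions, query)
print(best, sims.round(2))
```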
Keywords: Computation theory; Semantics; Object recognition
Citation:
GB/T 7714: Wang, Mingui, Cui, Di, Wu, Lifang, et al. Weakly-supervised video object localization with attentive spatio-temporal correlation [J]. Pattern Recognition Letters, 2021, 145: 232-239.
MLA: Wang, Mingui, et al. "Weakly-supervised video object localization with attentive spatio-temporal correlation." Pattern Recognition Letters 145 (2021): 232-239.
APA: Wang, Mingui, Cui, Di, Wu, Lifang, Jian, Meng, Chen, Yukun, Wang, Dong, et al. Weakly-supervised video object localization with attentive spatio-temporal correlation. Pattern Recognition Letters, 2021, 145, 232-239.
Abstract:
Face anti-spoofing plays a vital role in face recognition systems. Existing deep learning approaches have effectively improved the performance of presentation attack detection (PAD). However, they learn a uniform feature for different types of presentation attacks, which ignores the diversity of the inherent cues presented in different spoofing types. As a result, they cannot effectively represent the intrinsic differences between different spoof faces and live faces, and their performance drops on cross-domain databases. In this paper, we introduce the inherent cues of different spoofing types, learned non-uniformly, as complements to uniform features. Two lightweight sub-networks are designed to learn inherent motion patterns from photo attacks and inherent texture cues from video attacks. Furthermore, an element-wise weighting fusion strategy is proposed to integrate the non-uniform inherent cues and the uniform features. Extensive experiments on four public databases demonstrate that our approach outperforms state-of-the-art methods and achieves a superior performance of 3.7% ACER on the cross-domain Protocol 4 of the Oulu-NPU database.
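A minimal sketch of the element-wise weighting fusion mentioned above. In the real model the gate would be produced by a small learned layer; here the gate logits are passed in directly so the gating idea can be shown without any framework:

```python
import numpy as np

def elementwise_weighted_fusion(uniform_feat, inherent_feat, gate_logits):
    """Fuse uniform and inherent-cue features with element-wise weights in (0, 1)."""
    w = 1.0 / (1.0 + np.exp(-gate_logits))  # sigmoid gate per element
    return w * uniform_feat + (1.0 - w) * inherent_feat

uni = np.random.rand(256)    # feature from the uniform (shared) branch
inh = np.random.rand(256)    # feature from a spoof-type-specific sub-network
gates = np.random.randn(256)
print(elementwise_weighted_fusion(uni, inh, gates).shape)
```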
Citation:
GB/T 7714: Xu, Yaowen, Wang, Zhuming, Han, Hu, et al. Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021.
MLA: Xu, Yaowen, et al. "Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection." 2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021) (2021).
APA: Xu, Yaowen, Wang, Zhuming, Han, Hu, Wu, Lifang, Liu, Yongluo. Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection. 2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021.