Query:
Scholar name: 毋立芳 (Wu Lifang)
Abstract:
Due to their high reliability, security, and anti-counterfeiting properties, finger-based biometrics (such as finger vein and finger knuckle print) have recently received considerable attention. Despite recent advances in finger-based biometrics, most approaches rely on substantial prior information and are not robust across different modalities or scenarios. To address this problem, we propose a structured Robust and Sparse Least Square Regression (RSLSR) framework that adaptively learns discriminative features for personal identification. To achieve a powerful representation of the input data, RSLSR synchronously integrates robust projection learning, noise decomposition, and discriminant sparse representation into a unified learning framework. Specifically, RSLSR jointly learns the most discriminative information from the original pixels of the finger images by introducing the ℓ2,1-norm. A sparse transformation matrix and a reconstruction error term are simultaneously enforced to enhance robustness to noise, making RSLSR adaptable to multiple scenarios. Extensive experiments on five contact-based and contactless finger databases demonstrate the clear superiority of the proposed RSLSR in terms of recognition accuracy and computational efficiency.
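The ℓ2,1-norm regularization at the heart of this kind of regression can be illustrated with a minimal iteratively reweighted least squares (IRLS) solver. This is a generic sketch of ℓ2,1-regularized least squares, not the authors' full RSLSR (which additionally learns a noise decomposition and a sparse transformation matrix); the parameter names `lam`, `iters`, and `eps` are illustrative:

```python
import numpy as np

def l21_least_squares(X, Y, lam=0.1, iters=50, eps=1e-8):
    """IRLS sketch of min_W ||X W - Y||_F^2 + lam * ||W||_{2,1}.

    The l2,1 norm sums the Euclidean norms of W's rows, so whole rows
    (i.e. input features/pixels) are driven to zero: a row-sparse projection.
    """
    d = X.shape[1]
    # ridge initialization so the row norms are non-zero
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    for _ in range(iters):
        # reweighting diagonal: D_ii = 1 / (2 * ||w_i||_2)
        row_norms = np.maximum(np.linalg.norm(W, axis=1), eps)
        D = np.diag(1.0 / (2.0 * row_norms))
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
    return W
```

Each IRLS step solves a weighted ridge problem; as a row's norm shrinks, its penalty weight grows, pushing the row toward exact zero.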
Keywords:
Feature extraction; Biometrics (access control); sparse transformation matrix; least square regression (LSR); Representation learning; projection learning; Matrix converters; Sparse matrices; Matrix decomposition; Finger-based biometrics; Image recognition
Cite:
GB/T 7714 | Li, Shuyi , Zhang, Bob , Wu, Lifang et al. Robust and Sparse Least Square Regression for Finger Vein and Finger Knuckle Print Recognition [J]. | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2024 , 19 : 2709-2719 . |
MLA | Li, Shuyi et al. "Robust and Sparse Least Square Regression for Finger Vein and Finger Knuckle Print Recognition" . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 19 (2024) : 2709-2719 . |
APA | Li, Shuyi , Zhang, Bob , Wu, Lifang , Ma, Ruijun , Ning, Xin . Robust and Sparse Least Square Regression for Finger Vein and Finger Knuckle Print Recognition . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2024 , 19 , 2709-2719 . |
Abstract:
Over the last decades, finger vein biometric recognition has attracted increasing attention because of its high security, accuracy, and natural anti-counterfeiting. However, most existing finger vein recognition approaches rely on image enhancement or require much prior knowledge, which limits their generalization to different databases and scenarios. Additionally, these methods rarely take into account the interference of noise elements in feature representation, which is detrimental to the final recognition results. To tackle these problems, we propose a novel joint embedding model, called Joint Discriminative Analysis with Low-Rank Projection (JDA-LRP), to simultaneously extract the noise component and salient information from the raw image pixels. Specifically, JDA-LRP decomposes the input image into noise and clean components via low-rank representation and transforms the clean data into a subspace to adaptively learn salient features. To further extract the most representative features, the proposed JDA-LRP enforces a discriminative class-induced constraint on the training samples as well as a sparse constraint on the embedding matrix, aggregating the embedded data of each class in their respective subspace. In this way, the discriminant ability of the joint embedding model is greatly improved, such that JDA-LRP can be adapted to multiple scenarios. Comprehensive experiments conducted on three commonly used finger vein databases and four palm-based biometric databases illustrate the superiority of our proposed model in recognition accuracy, computational efficiency, and domain adaptation.
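The "decompose into noise and clean components" step can be sketched with a generic proximal alternating scheme that splits a data matrix into a low-rank clean part (via singular-value thresholding) and a sparse noise part (via soft thresholding). The thresholds `tau` and `lam` are illustrative, and this is not the authors' exact JDA-LRP formulation:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_split(D, tau=1.0, lam=0.1, iters=30):
    """Alternate L = SVT(D - S) and S = soft(D - L) to approximate
    D ~ L (low-rank clean part) + S (sparse noise part)."""
    S = np.zeros_like(D)
    L = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)
        S = soft(D - L, lam)
    return L, S
```

By construction the final residual `D - L - S` is elementwise bounded by `lam`, so the split accounts for the data up to the sparsity threshold.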
Keywords:
discriminative analysis; Image recognition; Databases; low-rank representation; domain adaptation; jointly embedding; Data models; Sparse matrices; Biometrics (access control); Finger vein recognition; Adaptation models; Feature extraction
Cite:
GB/T 7714 | Li, Shuyi , Ma, Ruijun , Zhou, Jianhang et al. Joint Discriminative Analysis With Low-Rank Projection for Finger Vein Feature Extraction [J]. | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2024 , 19 : 959-969 . |
MLA | Li, Shuyi et al. "Joint Discriminative Analysis With Low-Rank Projection for Finger Vein Feature Extraction" . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 19 (2024) : 959-969 . |
APA | Li, Shuyi , Ma, Ruijun , Zhou, Jianhang , Zhang, Bob , Wu, Lifang . Joint Discriminative Analysis With Low-Rank Projection for Finger Vein Feature Extraction . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2024 , 19 , 959-969 . |
Abstract:
Vat photopolymerization 3D printing has gained significant attention due to its fast printing speed and high precision. However, the absence of effective quality assurance methods greatly limits its applications. Current vat photopolymerization cannot deal with commonly occurring defects during the printing process, such as random bubbles and resin underfill, making it challenging to consistently produce products that match designed geometry and functions. To address this, we propose an innovative vat photopolymerization solution via visual-guided in-situ repair to effectively eliminate printing defects. By utilizing an enhanced YOLOv5 network and K-means algorithm, real-time detection of bubbles and resin underfill can be achieved using image analysis. The optimal method for defect repair was then automatically generated via the adaptive scraper control and the repair slice edge smoothing generation algorithm, which was immediately received by hardware to adjust the ongoing printing parameters without interrupting the continuous printing process. Experimental results demonstrate that the aforementioned strategy can accurately differentiate typical defects such as bubbles and resin underfill, and precisely carry out in-situ defect repairs. Comparisons between repaired samples and defect-free samples show minimal differences in surface morphology and fracture strength (<0.6%). The proposed solution is also applicable to various precursors. Clearly, the strategy above offers a universal approach to avoid defects in vat photopolymerization, which effectively enhances printing efficiency, reduces material wastage, and ensures the quality, accuracy, and reliability of printed products.
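The K-means step used to group detected defect pixels into repair regions can be sketched in plain NumPy. The coordinates and cluster count below are made up for illustration; in the paper this clustering is paired with an enhanced YOLOv5 detector, which is not reproduced here:

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Plain k-means: assign each point to the nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance from every point to every center
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# two synthetic defect blobs (e.g. bubble pixel coordinates on the print layer)
blob_a = np.random.default_rng(1).normal([5, 5], 0.5, size=(50, 2))
blob_b = np.random.default_rng(2).normal([50, 50], 0.5, size=(50, 2))
centers, labels = kmeans(np.vstack([blob_a, blob_b]), k=2)
```

Each recovered center could then serve as the anchor of a repair region for the adaptive scraper control.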
Keywords:
Printing defects; Visual-guided; Vat photopolymerization; In-situ repair
Cite:
GB/T 7714 | Zhao, Lidong , Zhao, Zhi , Ma, Limin et al. Limiting defect in vat photopolymerization via visual-guided in-situ repair [J]. | ADDITIVE MANUFACTURING , 2024 , 79 . |
MLA | Zhao, Lidong et al. "Limiting defect in vat photopolymerization via visual-guided in-situ repair" . | ADDITIVE MANUFACTURING 79 (2024) . |
APA | Zhao, Lidong , Zhao, Zhi , Ma, Limin , Men, Zening , Ma, Yukun , Wu, Lifang . Limiting defect in vat photopolymerization via visual-guided in-situ repair . | ADDITIVE MANUFACTURING , 2024 , 79 . |
Abstract:
The information era brings both opportunities and challenges to information services. Confronting information overload, recommendation technology is dedicated to filtering personalized content to meet users' requirements. Extremely sparse interaction records and their imbalanced distribution are a major obstacle to building a high-quality recommendation model. In this article, we propose a Swarm self-supervised Hypergraph Embedding (SHE) model to predict users' interests via hypergraph convolution and self-supervised discrimination. SHE builds a hypergraph with multiple interest clues to alleviate the interaction-sparsity issue and performs interest propagation to embed CF signals through hybrid learning on the hypergraph. It adds an auxiliary local view, via similar-hypergraph construction and interest propagation, to restrain unnecessary propagation between user swarms. Besides, interest contrast further inserts self-discrimination to address the long-tail bias issue and enhance interest modeling, aiding recommendation through multi-task learning optimization. Experiments on public datasets show that the proposed SHE outperforms state-of-the-art models, demonstrating the effectiveness of hypergraph-based interest propagation and swarm-aware interest contrast in enhancing embeddings for recommendation.
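Hypergraph convolution of the kind described above is commonly written as the normalized incidence-matrix propagation X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X. Below is a generic sketch of that operator (not SHE's exact propagation rule; the toy incidence matrix is made up):

```python
import numpy as np

def hypergraph_conv(H, X, eps=1e-12):
    """One propagation step over a hypergraph.

    H : (num_nodes, num_edges) incidence matrix, H[v, e] = 1 if node v
        belongs to hyperedge e (e.g. users sharing an interest clue).
    X : (num_nodes, dim) node embeddings.
    """
    Dv = H.sum(axis=1)                          # node degrees
    De = H.sum(axis=0)                          # hyperedge sizes
    dv = 1.0 / np.sqrt(np.maximum(Dv, eps))
    de = 1.0 / np.maximum(De, eps)
    # gather node features into hyperedges, then scatter back to nodes
    Hn = H * dv[:, None]
    return Hn @ (de[:, None] * (Hn.T @ X))

# 4 nodes, 2 hyperedges: {0, 1, 2} and {2, 3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = np.eye(4)
X1 = hypergraph_conv(H, X)
```

Nodes sharing a hyperedge exchange information in one step, which is how a hyperedge built from an interest clue spreads CF signals across its members.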
Keywords:
graph convolution; contrastive learning; hypergraph; Recommendation; user interest
Cite:
GB/T 7714 | Jian, Meng , Bai, Yulong , Guo, Jingjing et al. Swarm Self-supervised Hypergraph Embedding for Recommendation [J]. | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2024 , 18 (4) . |
MLA | Jian, Meng et al. "Swarm Self-supervised Hypergraph Embedding for Recommendation" . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 18 . 4 (2024) . |
APA | Jian, Meng , Bai, Yulong , Guo, Jingjing , Wu, Lifang . Swarm Self-supervised Hypergraph Embedding for Recommendation . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2024 , 18 (4) . |
Abstract:
Face anti-spoofing is critical to securing face recognition systems against presentation attacks. Existing methods often suffer from performance degradation due to image quality issues, such as blurring, overexposure, or varied backgrounds, which cause distribution deviations of face images in the quality space and hinder the learning of effective liveness features. In this paper, we propose a novel method that interactively co-reinforces the liveness and Face Quality representations for Face Anti-Spoofing (FQ-FAS). Specifically, to enhance the discrimination of the face quality representation, FQ-FAS first designs a face quality learning module that naturally mitigates interference from the background. Subsequently, a quality-spoofing feature interaction module is devised to co-reinforce both liveness and face quality representations. Meanwhile, we propose a quality-aware triplet loss to align the distribution of face images from two aspects: one pulls homogeneous face images with different quality together, while the other pushes inhomogeneous samples with similar quality away in the feature space. In this way, FQ-FAS can learn reliable and discriminative representations for face anti-spoofing. Extensive intra-dataset and cross-dataset experiments clearly demonstrate that our method obtains better performance than previous state-of-the-art methods.
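The pull/push behaviour described above follows the standard triplet margin form. Below is a generic NumPy sketch of that margin loss; the quality-aware weighting in FQ-FAS is an extension on top of this and is not reproduced here:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin), averaged over the batch.

    Pulls the positive (same liveness label) toward the anchor and
    pushes the negative at least `margin` further away than the positive.
    """
    d_ap = np.linalg.norm(anchor - positive, axis=-1)
    d_an = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()
```

When the negative is already farther than the positive by more than the margin, the loss is zero and the triplet stops contributing gradients.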
Cite:
GB/T 7714 | Liu, Yongluo , Li, Zun , Li, Shuyi et al. Face Anti-spoofing via Interaction Learning with Face Image Quality Alignment [J]. | 2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024 , 2024 . |
MLA | Liu, Yongluo et al. "Face Anti-spoofing via Interaction Learning with Face Image Quality Alignment" . | 2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024 (2024) . |
APA | Liu, Yongluo , Li, Zun , Li, Shuyi , Wang, Zhuming , Wu, Lifang . Face Anti-spoofing via Interaction Learning with Face Image Quality Alignment . | 2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024 , 2024 . |
Abstract:
Over the past few decades, hand-based multimodal biometric systems have attracted significant attention because of their high security, accuracy, and anti-counterfeiting. Various hand physiological biometric modalities have been explored for identity authentication, i.e., fingerprint, finger knuckle print, palmprint, palm vein, and dorsal hand vein traits. This study provides a comprehensive review focusing on the interplay of different hand biometric traits and presents an overview of hand-based multimodal biometric methods. The framework of this paper is divided into three main categories. Firstly, we introduce the characteristics of four levels of hand-based biometrics in detail. Following this, several typical image capturing devices and image preprocessing techniques for various hand-based biometrics are reviewed. Moreover, existing publicly available and widely used hand-based multimodal biometric databases are then summarized. Subsequently, hand-based multimodal biometric methods are categorized into sensor-level fusion, feature-level fusion, score-level fusion, rank-level fusion, and decision-level fusion. Additionally, recent hybrid fusion-based and deep learning-based hand multimodal biometric approaches are analyzed and discussed. Furthermore, we conduct a performance analysis of the abovementioned algorithms from the recent literature. Finally, challenges, trends, and some recommendations related to hand-based multimodal biometrics are drawn to suggest research directions.
Keywords:
Survey; Hand-based biometrics; Feature fusion; Multimodal
Cite:
GB/T 7714 | Li, Shuyi , Fei, Lunke , Zhang, Bob et al. Hand-based multimodal biometric fusion: A review [J]. | INFORMATION FUSION , 2024 , 109 . |
MLA | Li, Shuyi et al. "Hand-based multimodal biometric fusion: A review" . | INFORMATION FUSION 109 (2024) . |
APA | Li, Shuyi , Fei, Lunke , Zhang, Bob , Ning, Xin , Wu, Lifang . Hand-based multimodal biometric fusion: A review . | INFORMATION FUSION , 2024 , 109 . |
Abstract:
Motions in videos are typically a mixture of local dynamic object motions and global camera motion, which are inconsistent in some cases and even interfere with each other, causing difficulties in various downstream applications, such as video stabilization, which requires the global motion, and action recognition, which consumes local motions. Therefore, it is crucial to estimate them separately. Existing methods separate the two motions from mixed motion fields such as optical flow; however, the quality of the mixed motion then determines the upper bound of the performance. In this work, we propose a framework, GLOCAL, to directly estimate global and local motions simultaneously from adjacent frames in a self-supervised manner. GLOCAL consists of a Global Motion Estimation (GME) module and a Local Motion Estimation (LME) module. The GME module involves a mixed motion estimation backbone, an implicit bottleneck structure for feature dimension reduction, and an explicit bottleneck for global motion recovery based on global motion bases with a foreground mask, under the training guidance of the proposed global reconstruction loss. An attention U-Net is adopted for the LME, which produces local motions while excluding the motion of irrelevant regions under the guidance of the proposed local reconstruction loss. Our method achieves better performance than existing methods on the homography estimation dataset DHE and the action recognition datasets NCAA and UCF-101.
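The global/local split itself can be illustrated by fitting a parametric global motion to a flow field and treating the residual as local object motion. This is a least-squares toy using an affine model, not GLOCAL's learned motion bases and foreground masks:

```python
import numpy as np

def split_global_local(coords, flow):
    """Fit flow ~ [x, y, 1] @ A (affine global/camera motion) by least
    squares; the residual is attributed to local object motion.

    coords : (N, 2) pixel coordinates
    flow   : (N, 2) motion vectors at those pixels
    """
    P = np.hstack([coords, np.ones((len(coords), 1))])   # (N, 3)
    A, *_ = np.linalg.lstsq(P, flow, rcond=None)         # (3, 2) affine params
    global_flow = P @ A
    local_flow = flow - global_flow
    return global_flow, local_flow
```

For a pure camera translation, the affine fit absorbs the entire field and the local residual vanishes; moving objects would show up as non-zero residuals.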
Keywords:
motion estimation; motion pattern; Video understanding; optical flow
Cite:
GB/T 7714 | Zheng, Yihao , Luo, Kunming , Liu, Shuaicheng et al. GLOCAL: A self-supervised learning framework for global and local motion estimation [J]. | PATTERN RECOGNITION LETTERS , 2024 , 178 : 91-97 . |
MLA | Zheng, Yihao et al. "GLOCAL: A self-supervised learning framework for global and local motion estimation" . | PATTERN RECOGNITION LETTERS 178 (2024) : 91-97 . |
APA | Zheng, Yihao , Luo, Kunming , Liu, Shuaicheng , Li, Zun , Xiang, Ye , Wu, Lifang et al. GLOCAL: A self-supervised learning framework for global and local motion estimation . | PATTERN RECOGNITION LETTERS , 2024 , 178 , 91-97 . |
Abstract:
Recommender systems actively filter information to meet users' personalized interests. Existing graph-based models typically extract users' interests from a heterogeneous interaction graph. They do not distinguish learning between users and items, ignoring the heterogeneous property. In addition, the interaction sparsity and long-tail bias issues still limit recommendation performance significantly. Fortunately, hidden homogeneous correlations, which exist in considerable volume, can entangle abundant CF signals. In this paper, we propose a Light Dual Hypergraph Convolution (LDHC) for collaborative filtering, which designs a hypergraph involving heterogeneous and homogeneous correlations with more CF signals to confront these challenges. Over the integrated hypergraph, a two-level interest propagation is performed within the heterogeneous interaction graph and between the homogeneous user/item graphs to model users' interests, where learning on users and items is distinguished and coordinated by the homogeneous propagation. Specifically, the hypergraph convolution is lightened by removing unnecessary parameters to propagate users' interests. Extensive experiments on publicly available datasets demonstrate that the proposed LDHC outperforms state-of-the-art baselines.
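"Lightening" a graph convolution by removing weight matrices and nonlinearities is in the spirit of LightGCN: symmetric normalized neighborhood averaging over the user-item graph. The sketch below uses made-up sizes and a plain bipartite graph, not the paper's dual-hypergraph design:

```python
import numpy as np

def light_propagate(R, E0, layers=2):
    """LightGCN-style propagation: no trainable weights, no activation.

    R  : (num_users, num_items) binary interaction matrix
    E0 : (num_users + num_items, dim) initial embeddings
    Returns the mean of the layer-wise embeddings (layer combination).
    """
    n_u, n_i = R.shape
    # adjacency of the bipartite user-item graph
    A = np.block([[np.zeros((n_u, n_u)), R],
                  [R.T, np.zeros((n_i, n_i))]])
    d = A.sum(axis=1)
    d_is = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    A_norm = d_is[:, None] * A * d_is[None, :]   # D^{-1/2} A D^{-1/2}
    E, out = E0, [E0]
    for _ in range(layers):
        E = A_norm @ E
        out.append(E)
    return np.mean(out, axis=0)
```

Recommendation scores are then typically the dot products between the propagated user and item embeddings.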
Keywords:
Graph convolution; Personalized recommendation; User interest; Collaborative filtering; Hypergraph
Cite:
GB/T 7714 | Jian, Meng , Lang, Langchen , Guo, Jingjing et al. Light dual hypergraph convolution for collaborative filtering [J]. | PATTERN RECOGNITION , 2024 , 154 . |
MLA | Jian, Meng et al. "Light dual hypergraph convolution for collaborative filtering" . | PATTERN RECOGNITION 154 (2024) . |
APA | Jian, Meng , Lang, Langchen , Guo, Jingjing , Li, Zun , Wang, Tuo , Wu, Lifang . Light dual hypergraph convolution for collaborative filtering . | PATTERN RECOGNITION , 2024 , 154 . |
Abstract:
Previous recommendation models build interest embeddings relying heavily on the observed interactions and optimize the embeddings with a contrast between the interactions and randomly sampled negative instances. To our knowledge, negative interest signals remain unexplored in interest encoding, merely serving as losses for backpropagation. Besides, the sparse, undifferentiated interactions inherently bring implicit bias in revealing users' interests, leading to suboptimal interest prediction. Negative interest signals would be promising evidence to support detailed interest modeling. In this work, we propose a Perturbed graph Contrastive learning with Negative Propagation (PCNP) for recommendation, which introduces negative interest to assist interest modeling in a contrastive learning (CL) architecture. An auxiliary channel of negative interest learning generates a contrastive graph by negative sampling and propagates complementary embeddings of users and items to encode negative signals. The proposed PCNP contrasts positive and negative embeddings to promote interest modeling for recommendation. Extensive experiments demonstrate the capability of PCNP, using two-level CL, to alleviate interaction sparsity and bias issues for recommendation.
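Contrasting two embedding views typically uses an InfoNCE-style objective. Below is a generic NumPy sketch; the temperature `tau` is illustrative, and PCNP's perturbed-graph views and negative-propagation channel are not reproduced here:

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE between two views: row i of z1 and row i of z2 form the
    positive pair; all other rows in the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    # numerically stable log-sum-exp over each row
    m = logits.max(axis=1, keepdims=True)
    lse = m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(logits - lse)))
```

The loss is the negative log-probability of picking the true positive among the batch, so well-aligned views score lower than misaligned ones.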
Keywords:
graph convolution; Contrastive learning; user interest; recommender system
Cite:
GB/T 7714 | Liu, Meishan , Jian, Meng , Bai, Yulong et al. Graph Contrastive Learning With Negative Propagation for Recommendation [J]. | IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS , 2024 , 11 (3) : 4255-4266 . |
MLA | Liu, Meishan et al. "Graph Contrastive Learning With Negative Propagation for Recommendation" . | IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS 11 . 3 (2024) : 4255-4266 . |
APA | Liu, Meishan , Jian, Meng , Bai, Yulong , Wu, Jiancan , Wu, Lifang . Graph Contrastive Learning With Negative Propagation for Recommendation . | IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS , 2024 , 11 (3) , 4255-4266 . |
Abstract:
A method for characterizing social media users' cognitive states based on a self-attention neural network, in the fields of big data, data mining, and deep learning applied to user cognition. The invention first uses a dedicated web crawler to collect Weibo data on trending events and applies user-screening rules to obtain user ids. It then downloads the HTML file of the Weibo homepage corresponding to each user id, and parses and cleans the HTML with XPath. Next, a self-attention neural network extracts features from the text data, and a feed-forward neural network model produces scene and sentiment labels. Following the ABC theory of emotion, the user's cognitive state is estimated from the scene and sentiment labels, and the cognitive states corresponding to different behaviors of the same user are accumulated into an individual cognitive state. Finally, users are divided into groups according to similarity across different dimensions of the cognitive state, and users' cognitive states are presented in multiple dimensions through visualization.
Cite:
GB/T 7714 | 毋立芳 , 陈研 , 石戈 et al. A method for characterizing social network users' cognitive states based on a self-attention neural network : CN202310199714.9[P]. | 2023-03-05 . |
MLA | 毋立芳 et al. "A method for characterizing social network users' cognitive states based on a self-attention neural network" : CN202310199714.9. | 2023-03-05 . |
APA | 毋立芳 , 陈研 , 石戈 , 邓斯诺 , 邢乐豪 . A method for characterizing social network users' cognitive states based on a self-attention neural network : CN202310199714.9. | 2023-03-05 . |