
Query:

Scholar name: Yang Jinfu (杨金福)

Results: 19 pages in total.
A Level Set Annotation Framework With Single-Point Supervision for Infrared Small Target Detection SCIE
Journal article | 2024, 31, 451-455 | IEEE SIGNAL PROCESSING LETTERS

Abstract:

Infrared small target detection is the challenging task of separating small targets from a cluttered infrared background. Recently, deep learning paradigms have achieved promising results. However, these data-driven methods need plenty of manual annotations. Due to the small size of infrared targets, manual annotation consumes considerable resources and restricts the development of this field. This letter proposes a labor-efficient annotation framework based on level sets, which obtains a high-quality pseudo mask with only one cursory click. A variational level set formulation with an expectation difference energy functional is designed, in which the zero level contour is intrinsically maintained during the level set evolution. This solves the issue of the zero level contour disappearing due to small target size and excessive regularization. Experiments on the NUAA-SIRST and IRSTD-1k datasets demonstrate that our approach achieves superior performance.
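The letter's exact expectation-difference functional is not reproduced here, but the core idea — growing a level set from one cursory click so that interior and exterior intensity statistics separate — can be sketched with a Chan-Vese-style toy (function name, constants, and the energy form are illustrative assumptions, not the paper's method):

```python
import numpy as np

def pseudo_mask_from_click(image, click, radius=2, iters=50, dt=0.5):
    """Grow a pseudo mask from a single click via a mean-difference energy.

    Illustrative sketch only: the letter's expectation-difference
    functional and zero-level-contour preservation are not modeled.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Signed distance to a small disc around the click (phi > 0 inside).
    phi = radius - np.hypot(yy - click[0], xx - click[1])
    for _ in range(iters):
        inside = phi > 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[~inside].mean() if (~inside).any() else 0.0
        # Push the contour toward pixels that better match the inside mean.
        force = (image - c_out) ** 2 - (image - c_in) ** 2
        phi = phi + dt * force
    return phi > 0
```

On a synthetic image with a bright blob, the mask grows from the click to cover the blob while the dark background stays excluded.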

Keywords:

infrared small target detection; level set; Deep learning

Cite:

Li, Haoqing; Yang, Jinfu; Xu, Yifei; Wang, Runshi. A Level Set Annotation Framework With Single-Point Supervision for Infrared Small Target Detection. IEEE SIGNAL PROCESSING LETTERS, 2024, 31: 451-455.
Multi-scale Structural Asymmetric Convolution for Wireframe Parsing CPCI-S
Journal article | 2024, 14450, 239-251 | NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV

Abstract:

Extracting salient line segments together with their corresponding junctions is a promising approach to structural environment recognition. However, conventional methods extract these structural features using square convolution, which greatly restricts model performance and yields imprecise wireframes because square kernels are geometrically incompatible with these primitives. In this paper, we propose Multi-scale Structural Asymmetric Convolution for Wireframe Parsing (MSACWP) to simultaneously infer prominent junctions and line segments from images. Benefiting from the similar geometric properties of asymmetric convolution and line segments, the proposed Multi-Scale Asymmetric Convolution (MSAC) effectively captures long-range context features and suppresses irrelevant information from adjacent pixels. Besides, feature maps obtained from different stages of the decoder layers are combined by a Multi-Scale Feature Combination (MSFC) module to promote the multi-scale feature representation capacity of the backbone network. Extensive experiments on two public datasets (Wireframe and YorkUrban) demonstrate the advantages of our proposed MSACWP over previous state-of-the-art methods.
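The geometric intuition — a 1×k kernel responds along horizontal structure and a k×1 kernel along vertical structure, whereas a square kernel mixes in off-line pixels — can be illustrated with fixed averaging kernels. This is not MSAC itself (its multi-scale kernels are learned); `asymmetric_conv` is an illustrative name:

```python
import numpy as np

def asymmetric_conv(feat, k=5):
    """Sum of a 1 x k and a k x 1 mean filter, same-padded.

    Sketch of the asymmetric-convolution idea only: learned weights
    are replaced by fixed averaging kernels.
    """
    pad = k // 2
    h, w = feat.shape
    padded = np.pad(feat, pad, mode="edge")
    horiz = np.zeros_like(feat)
    vert = np.zeros_like(feat)
    for i in range(h):
        for j in range(w):
            horiz[i, j] = padded[i + pad, j:j + k].mean()  # 1 x k strip
            vert[i, j] = padded[i:i + k, j + pad].mean()   # k x 1 strip
    return horiz + vert
```

On a feature map containing a single horizontal line, the 1×k branch responds at full strength along the line while the k×1 branch contributes only a small cross-term, so line-aligned pixels stand out.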

Keywords:

Asymmetric Convolution; Multi-Scale Feature; Wireframe Parsing; Long-Range Context

Cite:

Zhang, Jiahui; Yang, Jinfu; Fu, Fuji; Ma, Jiaqi. Multi-scale Structural Asymmetric Convolution for Wireframe Parsing. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450: 239-251.
Multi-branch evolutionary generative adversarial networks based on covariance crossover operators SCIE
Journal article | 2024, 304 | KNOWLEDGE-BASED SYSTEMS

Abstract:

Generative Adversarial Networks (GANs) have brought surprises in image generation and processing, particularly excelling at alleviating the problem of sparse samples in few-shot learning. Evolutionary Generative Adversarial Networks (EGANs) convert the generation task into an optimization problem by integrating multiple loss functions to minimize the gap between the generated distribution and the data distribution. However, EGANs' heavy reliance on mutation operations introduces excessive randomness, resulting in unstable generator updates and reduced diversity of generated samples, which leads to challenges such as mode collapse and gradient vanishing. In this paper, inspired by the grafting mechanism, we propose a novel Multi-branch Evolutionary Generative Adversarial Network (ME-GAN) based on a covariance crossover operator, which includes two distinct branches: an Evolutionary GAN (E-GAN) and a Conditional Evolutionary GAN (CE-GAN). Specifically, the E-GAN branch involves a mutation process to introduce randomness into the generated samples, while the CE-GAN branch aims to provide more realistic and diverse generated samples through conditional enhancement and relevant mutation processes. We propose a crossover operation that utilizes a covariance similarity metric to transfer different feature attributes between offspring of different generations, thereby generating diverse samples. Extensive experiments on the CIFAR-10, STL-10, and CelebA datasets demonstrate the effectiveness and superiority of the proposed framework.
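ME-GAN's exact operator is not specified in this abstract; a minimal sketch of a correlation-guided crossover between two offspring's flattened parameter vectors might look like this (the blend rule and all names are assumptions for illustration):

```python
import numpy as np

def covariance_crossover(parent_a, parent_b, rng=None):
    """Blend two offspring parameter vectors guided by their correlation.

    Hypothetical sketch: strongly correlated parents carry similar
    attributes, so a wide random blend is harmless; decorrelated
    parents are mixed evenly (alpha near 0.5) to transfer distinct
    feature attributes between branches.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(parent_a, dtype=float)
    b = np.asarray(parent_b, dtype=float)
    sim = np.corrcoef(a, b)[0, 1]          # covariance similarity in [-1, 1]
    alpha = 0.5 + 0.5 * abs(sim) * rng.uniform(-1.0, 1.0)
    return alpha * a + (1.0 - alpha) * b   # child stays between the parents
```

Because alpha always lands in [0, 1], every child parameter is a convex combination of the two parents, so crossover cannot throw the generator outside the span of its ancestors.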

Keywords:

Evolutionary algorithm; Covariance similarity; Generative adversarial networks; Crossover operation

Cite:

Shang, Qingzhen; Yang, Jinfu; Ma, Jiaqi. Multi-branch evolutionary generative adversarial networks based on covariance crossover operators. KNOWLEDGE-BASED SYSTEMS, 2024, 304.
Dynamic visual SLAM based on probability screening and weighting for deep features SCIE
Journal article | 2024, 236 | MEASUREMENT

Abstract:

Most Simultaneous Localization and Mapping (SLAM) systems rely heavily on the static-environment assumption, leading to low pose estimation accuracy in dynamic environments. Dynamic Visual SLAM (VSLAM) methods have exhibited remarkable advantages in eliminating the negative effects of dynamic elements. However, most current methods, built only on traditional indirect VSLAM with hand-crafted features, are still inadequate in utilizing and processing deep features. To this end, this paper proposes a dynamic VSLAM algorithm based on probability screening and weighting for deep features. Specifically, a deep feature extraction module is designed to generate the deep features leveraged throughout the overall pipeline. Then, a probability screening and weighting scheme is proposed for processing deep features, through which dynamic deep feature points are eliminated in a coarse-to-fine manner and the varying contributions of static ones are distinguished. Extensive quantitative and qualitative experiments show that our proposed method is superior to its counterparts in terms of localization accuracy.
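The coarse-to-fine scheme can be sketched as follows, assuming each feature comes with a dynamic probability (how the paper actually estimates that probability is not covered here; threshold value and names are hypothetical):

```python
import numpy as np

def screen_and_weight(dyn_prob, hard_thresh=0.7):
    """Coarse-to-fine handling of per-feature dynamic probabilities.

    Hypothetical sketch: features whose dynamic probability exceeds a
    hard threshold are eliminated (coarse screening); survivors are
    down-weighted by their residual dynamic probability (fine
    weighting), so "more static" points count more in pose optimization.
    """
    p = np.asarray(dyn_prob, dtype=float)
    keep = p < hard_thresh                  # coarse step: drop dynamic points
    weights = np.where(keep, 1.0 - p, 0.0)  # fine step: weight static ones
    return keep, weights
```

The returned weights would then scale each surviving feature's reprojection residual in the pose optimizer, rather than treating all static points equally.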

Keywords:

Probability screening and weighting; Deep feature; Visual SLAM; Dynamic environments; Pose estimation

Cite:

Fu, Fuji; Yang, Jinfu; Ma, Jiaqi; Zhang, Jiahui. Dynamic visual SLAM based on probability screening and weighting for deep features. MEASUREMENT, 2024, 236.
Mitigate Target-Level Insensitivity of Infrared Small Target Detection via Posterior Distribution Modeling SCIE
Journal article | 2024, 17, 13188-13201 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING

Abstract:

Infrared small target detection (IRSTD) aims to segment small targets from infrared clutter background. Existing methods mainly focus on discriminative approaches, i.e., pixel-level front-background binary segmentation. Since infrared small targets are small and have low signal-to-clutter ratios, the empirical risk is barely perturbed when some false alarms and missed detections exist, which seriously limits further improvement of such methods. Motivated by dense-prediction generative methods, in this article we complement pixel-level discrimination with mask posterior distribution modeling. Specifically, we propose a diffusion model framework for IRSTD. This generative framework maximizes the posterior distribution of the small target mask to surmount the performance bottleneck associated with minimizing discriminative empirical risk. The transition from the discriminative paradigm to a generative one enables us to bypass target-level insensitivity. Furthermore, we design a low-frequency isolation in the wavelet domain to suppress the interference of intrinsic infrared noise on the diffusion noise estimation. The low-frequency component of the infrared image in the wavelet domain is processed by a neural network, and the high-frequency component is utilized to restore target information and estimate the residuals of the enhanced features. Experiments show that the proposed method achieves competitive performance gains over state-of-the-art methods on the NUAA-SIRST, NUDT-SIRST, and IRSTD-1k datasets.
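The wavelet-domain split the article relies on can be illustrated with a one-level 2-D Haar transform; the network that processes the low band and the residual estimation are beyond this sketch (the mean-preserving /4 normalization is a choice made here, not necessarily the paper's):

```python
import numpy as np

def haar_split(img):
    """One-level 2-D Haar transform: returns (low, (lh, hl, hh)).

    Sketch of the low-frequency isolation idea only. Each output band
    is half the input resolution; `low` is a local 2x2 mean, so the
    detail bands carry the high-frequency (noise and edge) content.
    """
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    low = (a + b + c + d) / 4.0   # approximation (low frequency)
    lh = (a - b + c - d) / 4.0    # horizontal detail
    hl = (a + b - c - d) / 4.0    # vertical detail
    hh = (a - b - c + d) / 4.0    # diagonal detail
    return low, (lh, hl, hh)
```

On a constant image all three detail bands vanish, which is exactly why isolating the low band lets noise estimation operate on the detail bands without the smooth background interfering.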

Keywords:

Task analysis; diffusion model; Training; Noise; infrared small target detection (IRSTD); Deep learning; generative model; Wavelet domain; Object detection

Cite:

Li, Haoqing; Yang, Jinfu; Xu, Yifei; Wang, Runshi. Mitigate Target-Level Insensitivity of Infrared Small Target Detection via Posterior Distribution Modeling. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17: 13188-13201.
PlaneAC: Line-guided planar 3D reconstruction based on self-attention and convolution hybrid model SCIE
Journal article | 2024, 153 | PATTERN RECOGNITION

Abstract:

Planar 3D reconstruction aims to simultaneously extract plane instances and reconstruct the local 3D model from the estimated plane parameters. Existing methods achieve promising results through either self-attention or convolutional neural networks (CNNs), but usually ignore their complementary properties. In this paper, we propose a line-guided planar 3D reconstruction method, PlaneAC, which leverages the advantages of self-attention and CNNs to capture long-range dependencies while alleviating the computational burden. In addition, an explicit connection between adjacent attention layers is built to better leverage transferable knowledge and facilitate the information flow between tokenized features from different layers, so that each attention layer can directly interact with previous results. Finally, a line segment filtering method is presented to remove irrelevant guiding information from indistinct line segments extracted from the image. Extensive experiments on the ScanNet and NYUv2 public datasets demonstrate the preferable performance of our proposed method; PlaneAC achieves a better trade-off between accuracy and computation cost than other state-of-the-art methods.

Keywords:

Transferable knowledge; Line segments; Self-attention; Planar 3D reconstruction; Convolution neural network

Cite:

Zhang, Jiahui; Yang, Jinfu; Fu, Fuji; Ma, Jiaqi. PlaneAC: Line-guided planar 3D reconstruction based on self-attention and convolution hybrid model. PATTERN RECOGNITION, 2024, 153.
PLI-VIO: Real-time Monocular Visual-inertial Odometry Using Point and Line Interrelated Features SCIE
Journal article | 2023, 21 (6), 2004-2019 | INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS

Abstract:

As a popular technology, visual-inertial odometry (VIO) has been widely applied in fields such as autonomous robots and unmanned aerial vehicles (UAVs). However, the trade-off between localization accuracy and real-time performance still needs to be optimized. This paper presents a real-time, tightly-coupled monocular VIO system using point and line interrelated features (PLI-VIO) under the sliding window optimization framework. In the line feature extraction part of PLI-VIO, a line segment extraction and coalescence algorithm based on EDLines is proposed, which extracts line features in real time without compromising feature quality. To obtain efficient and robust line tracking, PLI-VIO also presents a line-to-point tracking method that fully utilizes the interrelation between points and lines: each line feature is divided into a group of points that are tracked by a pyramidal implementation of the Lucas-Kanade feature tracker. The proposed line feature tracking method effectively and robustly reduces the time consumed by tracking. Extensive evaluations on the EuRoC and TUM-VI public datasets demonstrate the preferable performance of our proposed system; PLI-VIO obtains better localization accuracy at lower computational cost than other state-of-the-art VIO algorithms.
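The line-to-point idea — represent each segment by sample points, track those individually with pyramidal Lucas-Kanade, then refit the segment to the tracked points — can be sketched as below. The LK tracker itself is omitted, and `refit_line` is an illustrative least-squares refit, not PLI-VIO's exact procedure:

```python
import numpy as np

def line_to_points(p0, p1, n=8):
    """Sample n evenly spaced points along a segment from p0 to p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * np.asarray(p0, dtype=float) + t * np.asarray(p1, dtype=float)

def refit_line(points):
    """Refit a segment to tracked points via the principal direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the point cloud (least-squares line fit).
    _, _, vt = np.linalg.svd(pts - centroid)
    d = vt[0]
    proj = (pts - centroid) @ d
    return centroid + proj.min() * d, centroid + proj.max() * d
```

In a real pipeline the sampled points would be displaced by the tracker between frames before `refit_line` reconstructs the segment, which is what makes line tracking as cheap as point tracking.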

Keywords:

point and line interrelated feature; Autonomous robots; localization; visual-inertial odometry

Cite:

Zhang, Jiahui; Yang, Jinfu; Shang, Qingzhen; Li, Mingai. PLI-VIO: Real-time Monocular Visual-inertial Odometry Using Point and Line Interrelated Features. INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS, 2023, 21 (6): 2004-2019.
Structural asymmetric convolution for wireframe parsing SCIE
Journal article | 2023, 128 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract:

Simultaneously extracting junctions and their corresponding line segments from images is a promising approach to structural environment cognition. However, conventional methods employ square convolution for line feature extraction, which excludes long-range dependencies and generates suboptimal wireframe predictions. In this paper, we introduce an efficient and concise parsing method named the Structural Asymmetric Convolution-based Wireframe Parser (SACWP). Taking advantage of the inherent similarity between structural asymmetric convolution and the predominant distribution of line segments in man-made environments, we propose a Structural Asymmetric Convolution (SAC) module that captures long-range contextual features while efficiently filtering out irrelevant information from neighboring pixels. Additionally, we introduce a feature aggregation module based on dilated convolution (DCFA) to seamlessly integrate contextual information from multiple receptive fields. We thoroughly evaluate our approach on the Wireframe and YorkUrban datasets, achieving preferable results of 69.3% and 29.7% msAP, respectively. These promising results demonstrate the effectiveness of SACWP on the wireframe parsing task.

Keywords:

Wireframe parsing; Neural network; Structural asymmetric convolution; Contextual information

Cite:

Zhang, Jiahui; Yang, Jinfu; Fu, Fuji; Ma, Jiaqi. Structural asymmetric convolution for wireframe parsing. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 128.
Online multiple object tracker with enhanced features SCIE
Journal article | 2023, 32 (1) | JOURNAL OF ELECTRONIC IMAGING

Abstract:

Online multi-object tracking (MOT), which integrates object detection and tracking into a single network, has made breakthroughs in recent years. However, most online trackers predict the tracking offset between two consecutive frames in a rather simplistic way, which may be unreliable in extreme situations such as occlusion and object deformation. Once the tracking offset of an object is biased, its corresponding tracklet no longer maintains temporal consistency, which seriously degrades tracking performance. In this paper, we propose a new online multiple object tracker with a feature enhancement mechanism, named En-Tracker. In En-Tracker, a multi-branch kinematic analysis network (MKANet) is designed to address the above problems. MKANet estimates the pixel offset and instance offset of each object in parallel, imitating how humans jointly reason about position and appearance representations. These two types of offsets compensate for and facilitate each other to handle extreme scenarios effectively. In addition, we propose a kinematic-assisted feature synthesis enhancement (KFSE) module with a more comprehensive enhancement mechanism. Specifically, KFSE propagates previous tracking information to the current frame based on kinematic trend analysis while enhancing the characterization of detection features and appearance embeddings, which not only assists object detection but also ensures the uniqueness of the appearance embeddings. Extensive experiments on MOT16 and MOT17 verify the effectiveness and advantages of our model.

Keywords:

multiobject tracking; online tracking; kinematic analysis; feature fusion

Cite:

Ma, Jiaqi; Yang, Jinfu; Zhang, Jiahui; Fu, Fuji. Online multiple object tracker with enhanced features. JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (1).
Monocular Visual SLAM with Robust and Efficient Line Features CPCI-S
Journal article | 2023, 2325-2330 | 2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC

Abstract:

Point features are predominantly used in Visual Simultaneous Localization and Mapping (VSLAM). However, using only point features often lacks robustness in scenes with low texture, illumination variation, or structured environments. To overcome these limitations, line features have been actively employed in previous studies. Yet not all line features help improve the robustness of SLAM, since they are easily affected by image noise and feature detection strategies. Meanwhile, line features degenerate more easily than point features when they are parallel to epipolar lines. In this paper, we propose a monocular visual SLAM system using robust line features. First, a Robust Line Segment Detector (RLSD) algorithm is proposed to reduce the influence of noise and outliers by using gradient operators, length suppression, and line segment fusion, and a line feature matching method based on optical flow is employed to reduce instability in similar and structured environments. Then, a direction constraint strategy is proposed to solve triangulation failures caused by degradation, which benefits subsequent mapping and tracking. Finally, a Visual-Inertial Odometry (VIO) system combining point and line features is implemented. Experimental results on the EuRoC dataset demonstrate that our proposed system outperforms other state-of-the-art methods.
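RLSD's fusion criteria (gradient support, length suppression) are not given in the abstract; a toy version of the merge step — accept two segments as one if they are nearly collinear and their endpoints nearly touch — could look like this (tolerances and names are assumptions):

```python
import numpy as np

def maybe_merge(seg_a, seg_b, angle_tol=np.deg2rad(3.0), gap_tol=5.0):
    """Merge two nearly collinear, nearby segments into one, else None.

    Hypothetical sketch of a line-segment-fusion step; RLSD's real
    criteria are not reproduced. Segments are ((x0, y0), (x1, y1)).
    """
    a0, a1 = np.asarray(seg_a, dtype=float)
    b0, b1 = np.asarray(seg_b, dtype=float)
    da, db = a1 - a0, b1 - b0
    # Angle between directions, ignoring orientation (abs of the cosine).
    cos = abs(da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    if np.arccos(np.clip(cos, -1.0, 1.0)) > angle_tol:
        return None
    # Smallest endpoint-to-endpoint gap between the two segments.
    gap = min(np.linalg.norm(p - q) for p in (a0, a1) for q in (b0, b1))
    if gap > gap_tol:
        return None
    # Fused segment: the farthest-apart pair among all four endpoints.
    pts = [a0, a1, b0, b1]
    best = max(((p, q) for p in pts for q in pts),
               key=lambda pq: np.linalg.norm(pq[0] - pq[1]))
    return np.array(best)
```

Two collinear fragments with a small gap fuse into one long segment, while a perpendicular pair is rejected, which is the behavior a fusion pass needs to suppress fragmented detections.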

Keywords:

line feature degeneracy; Visual-Inertial Odometry (VIO); point and line feature; line matching

Cite:

Long, Limin; Yang, Jinfu. Monocular Visual SLAM with Robust and Efficient Line Features. 2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023: 2325-2330.
Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China). Contact: 010-67392185.
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.