Query:

Scholar name: Zhu Qing

DualFLAT: Dual Flat-Lattice Transformer for domain-specific Chinese named entity recognition SCIE SSCI
Journal article | 2024, 62 (1) | INFORMATION PROCESSING & MANAGEMENT

Abstract :

Recently, lexicon-enhanced methods for Chinese Named Entity Recognition (NER) have achieved great success, but they require a high-quality lexicon. For domain-specific Chinese NER, obtaining such a lexicon is challenging because of the distribution gap between a general lexicon and domain-specific data and the high cost of constructing a domain lexicon. To address these challenges, we introduce dual-source lexicons (i.e., a general lexicon and a domain lexicon) to acquire enriched lexical knowledge. Considering that the general lexicon often contains more noise than its domain counterpart, we further propose a dual-stream model, the Dual Flat-LAttice Transformer (DualFLAT), designed to mitigate the impact of noise originating from the general lexicon while comprehensively harnessing the knowledge contained in the dual-source lexicons. Experimental results on three public domain-specific Chinese NER datasets (i.e., News, Novel and E-commerce) demonstrate that our method consistently outperforms single-source lexicon-enhanced approaches, achieving state-of-the-art results. Specifically, DualFLAT consistently outperforms the FLAT baseline, with F1-score gains of up to 1.52%, 4.84% and 1.34% on the News, Novel and E-commerce datasets, respectively.
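The core mechanism described above, building a flat lattice from lexicon matches, can be illustrated with a short sketch. The Python snippet below is an assumption-laden illustration (not the authors' implementation): it matches every span of a sentence against a general and a domain lexicon and records each match with its span and source, which is the character-word sequence a flat-lattice model such as DualFLAT consumes; the lexicon entries are hypothetical.

```python
# Minimal sketch (not the authors' code): building a flat lattice from dual-source
# lexicons by matching every span of a sentence against each lexicon. The lexicons
# and the source tag attached to each matched word are assumptions for illustration.

def build_flat_lattice(sentence, general_lexicon, domain_lexicon, max_word_len=4):
    """Return characters plus matched words, each with (start, end) span and source."""
    lattice = [(ch, i, i, "char") for i, ch in enumerate(sentence)]
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + max_word_len, len(sentence)) + 1):
            span = sentence[i:j]
            if span in general_lexicon:
                lattice.append((span, i, j - 1, "general"))
            if span in domain_lexicon:
                lattice.append((span, i, j - 1, "domain"))
    return lattice

if __name__ == "__main__":
    general = {"北京", "大学"}       # hypothetical general-lexicon entries
    domain = {"北京工业大学"}         # hypothetical domain-lexicon entry
    for item in build_flat_lattice("北京工业大学", general, domain, max_word_len=6):
        print(item)
```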

Keyword :

Attention mechanism; Lattice structure; Chinese named entity recognition; Domain-specific; Transformer

Cite:

GB/T 7714 Xiao, Yinlong, Ji, Zongcheng, Li, Jianqiang, et al. DualFLAT: Dual Flat-Lattice Transformer for domain-specific Chinese named entity recognition [J]. INFORMATION PROCESSING & MANAGEMENT, 2024, 62 (1).
MLA Xiao, Yinlong, et al. "DualFLAT: Dual Flat-Lattice Transformer for domain-specific Chinese named entity recognition." INFORMATION PROCESSING & MANAGEMENT 62.1 (2024).
APA Xiao, Yinlong, Ji, Zongcheng, Li, Jianqiang, Zhu, Qing. DualFLAT: Dual Flat-Lattice Transformer for domain-specific Chinese named entity recognition. INFORMATION PROCESSING & MANAGEMENT, 2024, 62 (1).
CLART: A cascaded lattice-and-radical transformer network for Chinese medical named entity recognition SCIE
Journal article | 2023, 9 (10) | HELIYON

Abstract :

Chinese medical named entity recognition (NER) is a fundamental task in Chinese medical natural language processing that aims to recognize Chinese medical entities in unstructured medical texts. It poses significant challenges, mainly due to the extensive use of medical terms in Chinese medical texts. Although previous studies have attempted to incorporate lexical or radical knowledge to improve the comprehension of medical texts, they either focus on only one of these aspects or combine the two features with a basic concatenation operation, which fails to fully exploit the potential of lexical and radical knowledge. In this paper, we propose a novel Cascaded LAttice-and-Radical Transformer (CLART) network that exploits both lexical and radical information for Chinese medical NER. Specifically, given a sentence, a medical lexicon, and a radical dictionary, we first construct a flat lattice (i.e., a character-word sequence) for the sentence and the radical components of each Chinese character through word matching and radical parsing, respectively. We then employ a lattice Transformer module to capture the dense interactions between characters and matched words, facilitating better use of lexical knowledge. Subsequently, we design a radical Transformer module to model the dense interactions between the lattice and radical features, facilitating better fusion of the lexical and radical knowledge. Finally, we feed the updated lattice-and-radical-aware character representations into a Conditional Random Field (CRF) decoder to obtain the predicted labels. Experimental results on two publicly available Chinese medical NER datasets show the effectiveness of the proposed method.
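As a rough illustration of the cascade described above, the PyTorch sketch below (an assumption, not the authors' code) chains a lattice encoder and a radical-fusion encoder and produces per-character emission scores; the paper's attention-based fusion and the CRF decoding step are simplified or omitted here.

```python
# Minimal PyTorch sketch (assumptions, not the authors' code): a cascade of two
# Transformer encoders, one over the flat lattice and one fusing radical features,
# followed by per-character emission scores. The real CLART feeds these emissions
# into a CRF decoder; that step is omitted for brevity.
import torch
import torch.nn as nn

class CascadedLatticeRadical(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_labels=9):
        super().__init__()
        lattice_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        radical_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.lattice_encoder = nn.TransformerEncoder(lattice_layer, num_layers=2)
        self.radical_encoder = nn.TransformerEncoder(radical_layer, num_layers=2)
        self.emission = nn.Linear(d_model, num_labels)

    def forward(self, lattice_emb, radical_emb):
        # 1) model character-word interactions over the flat lattice
        h = self.lattice_encoder(lattice_emb)
        # 2) fuse lattice-aware states with radical features (simple addition here;
        #    the paper models their dense interactions with attention)
        h = self.radical_encoder(h + radical_emb)
        return self.emission(h)            # per-token label scores for a CRF/argmax

if __name__ == "__main__":
    model = CascadedLatticeRadical()
    lattice = torch.randn(2, 20, 128)      # (batch, sequence, d_model), toy inputs
    radical = torch.randn(2, 20, 128)
    print(model(lattice, radical).shape)   # torch.Size([2, 20, 9])
```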

Keyword :

Transformer; Lattice structure; Attention mechanism; Radical information; Chinese medical named entity recognition

Cite:

GB/T 7714 Xiao, Yinlong, Ji, Zongcheng, Li, Jianqiang, et al. CLART: A cascaded lattice-and-radical transformer network for Chinese medical named entity recognition [J]. HELIYON, 2023, 9 (10).
MLA Xiao, Yinlong, et al. "CLART: A cascaded lattice-and-radical transformer network for Chinese medical named entity recognition." HELIYON 9.10 (2023).
APA Xiao, Yinlong, Ji, Zongcheng, Li, Jianqiang, Zhu, Qing. CLART: A cascaded lattice-and-radical transformer network for Chinese medical named entity recognition. HELIYON, 2023, 9 (10).
Boundary Attentive Spatial Multi-scale Network for Cardiac MRI Image Segmentation CPCI-S
Journal article | 2023, 14257, 75-86 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV

Abstract :

Accurate automatic segmentation of cardiac MRI images can be used for clinical parameter calculation and provides visual guidance for surgery, which is important for both the diagnosis and treatment of cardiac diseases. Existing automatic segmentation methods for cardiac MRI are based on U-shaped network structures and introduce operations such as global pooling and attention to extract more effective features. These approaches, however, suffer from a mismatch between the receptive field and the resolution, and they pay insufficient attention to object boundaries. In this paper, we propose a new boundary-attentive multi-scale network based on a U-shaped network for automatic segmentation of cardiac MRI images. Effective features are extracted from shallow features using channel attention. To increase segmentation accuracy, multi-scale features are extracted using densely coupled multi-scale dilated convolutions. To improve the ability to learn precise object boundaries, a gated boundary-aware branch is introduced to concentrate on the object border region. The effectiveness and robustness of the network are confirmed by evaluating the method on the ACDC cardiac MRI dataset, producing segmentation predictions for the left ventricle, right ventricle, and myocardium. Comparative studies demonstrate that the proposed method produces superior segmentation results compared with other cardiac MRI segmentation methods.
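The densely coupled multi-scale dilated convolutions mentioned above can be sketched as follows. This PyTorch block is illustrative only (channel sizes and dilation rates are assumptions, not the paper's configuration): each dilated branch receives the concatenation of the input and all previous branch outputs, enlarging the receptive field without downsampling.

```python
# Minimal PyTorch sketch (an assumption, not the paper's exact module): densely
# coupled dilated convolutions, where each branch sees the concatenation of the
# input and all previous branches, so the receptive field grows without losing
# resolution.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    def __init__(self, in_ch=64, growth=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth                      # dense coupling: next branch sees all features
        self.fuse = nn.Conv2d(ch, in_ch, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))

if __name__ == "__main__":
    y = DenseDilatedBlock()(torch.randn(1, 64, 56, 56))
    print(y.shape)                            # torch.Size([1, 64, 56, 56])
```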

Keyword :

Cardiac MRI Image; Multi-Scale; Image Segmentation; Dilated Convolutions

Cite:

GB/T 7714 You, Ran, Zhu, Qing, Wang, Zhiqiang. Boundary Attentive Spatial Multi-scale Network for Cardiac MRI Image Segmentation [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257: 75-86.
MLA You, Ran, et al. "Boundary Attentive Spatial Multi-scale Network for Cardiac MRI Image Segmentation." ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV 14257 (2023): 75-86.
APA You, Ran, Zhu, Qing, Wang, Zhiqiang. Boundary Attentive Spatial Multi-scale Network for Cardiac MRI Image Segmentation. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257, 75-86.
A Semantic Segmentation Method with Emphasis on the Edges for Automatic Vessel Wall Analysis SCIE
Journal article | 2022, 12 (14) | APPLIED SCIENCES-BASEL
WoS CC Cited Count: 3

Abstract :

The aim of this work is to develop a precise semantic segmentation method, with an emphasis on the edges, for automated segmentation of the arterial vessel wall and plaque based on a convolutional neural network (CNN), in order to facilitate the quantitative assessment of plaque in patients with ischemic stroke. MR vessel wall images from 124 subjects were used to train, validate, and test the deep learning model. We propose an end-to-end architecture that emphasizes edge information, the Edge Vessel Segmentation Network (EVSegNet), for automated segmentation of the arterial vessel wall. The EVSegNet consists of two workflows: one achieves fine, multi-scale segmentation by combining Dense Upsampling Convolution (DUC) and Hybrid Dilated Convolution (HDC) modules with different dilation rates, and the other exploits edge information and is fused with the first workflow to segment the vessel wall. The proposed network demonstrates robust segmentation of the vessel wall and better performance, with a Dice score of 87.5%, compared with 81.0% for the traditional U-net and with other U-net-based models on the test dataset. The results suggest that the proposed edge-emphasizing segmentation method effectively improves segmentation accuracy and will facilitate the quantitative assessment of atherosclerosis.
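The two named building blocks, Hybrid Dilated Convolution (HDC) and Dense Upsampling Convolution (DUC), can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under assumed channel counts and dilation rates, not the EVSegNet configuration; DUC predicts r×r sub-pixel maps per class and rearranges them with PixelShuffle.

```python
# Minimal PyTorch sketch (assumed shapes, not the authors' code) of the two blocks
# named in the abstract: Hybrid Dilated Convolution (a stack of 3x3 convolutions with
# different dilation rates) and Dense Upsampling Convolution (a convolution predicting
# r*r sub-pixel maps per class, rearranged by PixelShuffle).
import torch
import torch.nn as nn

def hdc_block(channels, rates=(1, 2, 5)):
    layers = []
    for r in rates:
        layers += [nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                   nn.BatchNorm2d(channels),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

def duc_head(in_channels, num_classes, upscale=8):
    return nn.Sequential(
        nn.Conv2d(in_channels, num_classes * upscale ** 2, kernel_size=3, padding=1),
        nn.PixelShuffle(upscale),            # (C*r*r, H, W) -> (C, H*r, W*r)
    )

if __name__ == "__main__":
    feat = torch.randn(1, 64, 28, 28)        # toy encoder feature map
    feat = hdc_block(64)(feat)
    logits = duc_head(64, num_classes=2, upscale=8)(feat)   # vessel wall vs. background
    print(logits.shape)                      # torch.Size([1, 2, 224, 224])
```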

Keyword :

MR vessel wall image; automated segmentation; edge information

Cite:

GB/T 7714 Xu, Wenjing, Zhu, Qing. A Semantic Segmentation Method with Emphasis on the Edges for Automatic Vessel Wall Analysis [J]. APPLIED SCIENCES-BASEL, 2022, 12 (14).
MLA Xu, Wenjing, et al. "A Semantic Segmentation Method with Emphasis on the Edges for Automatic Vessel Wall Analysis." APPLIED SCIENCES-BASEL 12.14 (2022).
APA Xu, Wenjing, Zhu, Qing. A Semantic Segmentation Method with Emphasis on the Edges for Automatic Vessel Wall Analysis. APPLIED SCIENCES-BASEL, 2022, 12 (14).
Progressive GAN-Based Transfer Network for Low-Light Image Enhancement CPCI-S
Journal article | 2022, 13142, 292-304 | MULTIMEDIA MODELING, MMM 2022, PT II
WoS CC Cited Count: 3

Abstract :

Images captured in low-light conditions usually suffer from very low contrast and underexposure and cannot be used directly in subsequent computer vision tasks such as object recognition, detection, identification, and tracking. Existing methods, including histogram equalization (HE)-based methods, Retinex-theory-based methods, and deep learning methods, may produce undesirable enhanced results with amplified noise, biased colors, and boundary artifacts. To address this problem, we combine the prior knowledge of Retinex theory with a data-driven GAN and propose a progressive GAN-based transfer network (ProGAN) for low-light enhancement. In this paper, the image is decomposed by the JieP method, based on the Retinex model, into reflectance and illumination components; a reflectance decomposition network (RefDecN) learns the relationship between the reflectance components of the low-light image and the normal-light image and then generates the reflectance component of the low-light image. Another illumination transferring network (IllumTransN) then transfers the illumination of the normal-light image onto that reflectance component to realize low-light enhancement. Experimental results on the RAISE, LOL, and MEF low-light image enhancement datasets demonstrate that ProGAN outperforms state-of-the-art methods in terms of objective and subjective quality.
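The pipeline rests on the Retinex relation I = R ⊙ L: keep the low-light image's reflectance and pair it with a normal-light illumination map. The NumPy sketch below illustrates only that composition step with toy maps; it is not the JieP decomposition or the learned RefDecN/IllumTransN networks.

```python
# Minimal NumPy sketch of the Retinex relation the pipeline relies on (an illustration,
# not the JieP decomposition itself): an image I is the element-wise product of a
# reflectance map R and an illumination map L, so pairing the low-light image's
# reflectance with a normal-light illumination map brightens the result.
import numpy as np

def compose(reflectance, illumination):
    """I = R * L, with the illumination map broadcast over the color channels."""
    return np.clip(reflectance * illumination[..., None], 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reflectance_low = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # toy reflectance of a low-light image
    illumination_low = np.full((4, 4), 0.1)                   # dim illumination
    illumination_normal = np.full((4, 4), 0.8)                # illumination transferred from a normal-light image
    dark = compose(reflectance_low, illumination_low)
    enhanced = compose(reflectance_low, illumination_normal)
    print(dark.mean(), enhanced.mean())                       # mean brightness rises after the transfer
```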

Keyword :

Generative adversarial network; Retinex model; Reflection component; Low-light enhancement; Illumination component

Cite:

GB/T 7714 Jin, Shuang, Qi, Na, Zhu, Qing, et al. Progressive GAN-Based Transfer Network for Low-Light Image Enhancement [J]. MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142: 292-304.
MLA Jin, Shuang, et al. "Progressive GAN-Based Transfer Network for Low-Light Image Enhancement." MULTIMEDIA MODELING, MMM 2022, PT II 13142 (2022): 292-304.
APA Jin, Shuang, Qi, Na, Zhu, Qing, Ouyang, Haoran. Progressive GAN-Based Transfer Network for Low-Light Image Enhancement. MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142, 292-304.
SUnet++: Joint Demosaicing and Denoising of Extreme Low-Light Raw Image CPCI-S
Journal article | 2022, 13142, 171-181 | MULTIMEDIA MODELING, MMM 2022, PT II

Abstract :

Despite the rapid development of photography equipment, shooting high-definition RAW images in extreme low-light environments remains a difficult problem. Existing methods use neural networks to automatically learn the mapping from extremely low-light, noisy RAW images to long-exposure RGB images for joint denoising and demosaicing of extreme low-light images, but their performance on other datasets is unsatisfactory. To address this problem, we present a separable Unet++ (SUnet++) network structure to improve the generalization ability of joint denoising and demosaicing for extreme low-light images. We introduce Unet++ to adapt the model to other datasets and then replace the conventional convolutions of Unet++ with M sets of depthwise separable convolutions, which greatly reduces the number of parameters without sacrificing performance. Experimental results on the SID and ELD datasets demonstrate that our proposed SUnet++ outperforms state-of-the-art methods in terms of subjective and objective results, which further validates the robust generalization of the proposed method.
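The key substitution, replacing standard convolutions with depthwise separable ones, is easy to show concretely. The PyTorch sketch below (illustrative, not the authors' code) compares parameter counts for a standard 3×3 convolution and its depthwise separable counterpart at assumed channel sizes.

```python
# Minimal PyTorch sketch of the substitution described above (illustrative, not the
# authors' code): a depthwise separable convolution replaces a standard 3x3 convolution,
# cutting parameters roughly from k*k*Cin*Cout to k*k*Cin + Cin*Cout.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    standard = nn.Conv2d(64, 128, 3, padding=1)
    separable = DepthwiseSeparableConv(64, 128)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(standard), count(separable))   # 73856 vs. 8960 parameters
```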

Keyword :

Unet++; Raw image; Extreme low-light image; Unet; Joint denoising and demosaicing

Cite:

GB/T 7714 Qi, Jingzhong, Qi, Na, Zhu, Qing. SUnet++: Joint Demosaicing and Denoising of Extreme Low-Light Raw Image [J]. MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142: 171-181.
MLA Qi, Jingzhong, et al. "SUnet++: Joint Demosaicing and Denoising of Extreme Low-Light Raw Image." MULTIMEDIA MODELING, MMM 2022, PT II 13142 (2022): 171-181.
APA Qi, Jingzhong, Qi, Na, Zhu, Qing. SUnet++: Joint Demosaicing and Denoising of Extreme Low-Light Raw Image. MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142, 171-181.
TENSOR-BASED LIGHT FIELD DENOISING BY EXPLOITING NON-LOCAL SIMILARITIES ACROSS MULTIPLE RESOLUTIONS CPCI-S
Conference paper | 2020, 1078-1082 | IEEE International Conference on Image Processing (ICIP)
WoS CC Cited Count: 1

Abstract :

A light field is a kind of 4D signal that contains rich information about the position and angle of light rays and can therefore represent a scene more accurately. Light fields are easily affected by noise because of hardware sensitivity. This paper utilizes the intrinsic tensor sparsity model and integrates super-resolution (SR) into a unified, tensor-based light field denoising method. By avoiding vectorization, we make full use of the correlations within the light field; by exploiting the SR method, we avoid sub-pixel misalignment when searching for similar patches. Experimental results validate that the proposed method outperforms state-of-the-art methods in terms of both objective and subjective quality on the old HCI light field dataset.
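The non-local step, grouping similar patches across sub-aperture views before tensor-sparsity filtering, can be illustrated with a small NumPy sketch. Everything below (patch size, stride, number of neighbors) is an assumption for illustration, and the tensor-sparse filtering of the grouped patches is not reproduced.

```python
# Minimal NumPy sketch (illustrative only) of the non-local step: for a reference patch,
# gather the k most similar patches across the sub-aperture views of a light field and
# stack them into a small tensor for later filtering.
import numpy as np

def gather_similar_patches(views, ref_view, ref_yx, patch=8, k=16, stride=4):
    """views: (V, H, W) grayscale sub-aperture images."""
    y0, x0 = ref_yx
    ref = views[ref_view, y0:y0 + patch, x0:x0 + patch]
    candidates = []
    for v in range(views.shape[0]):
        for y in range(0, views.shape[1] - patch + 1, stride):
            for x in range(0, views.shape[2] - patch + 1, stride):
                p = views[v, y:y + patch, x:x + patch]
                candidates.append((np.sum((p - ref) ** 2), p))
    candidates.sort(key=lambda c: c[0])                 # most similar first
    return np.stack([p for _, p in candidates[:k]])     # (k, patch, patch) patch group

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    light_field = rng.normal(size=(9, 64, 64))          # 9 toy sub-aperture views
    group = gather_similar_patches(light_field, ref_view=4, ref_yx=(16, 16))
    print(group.shape)                                  # (16, 8, 8)
```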

Keyword :

Light field; super-resolution; tensor sparsity; image denoising

Cite:

GB/T 7714 Wang, Chen, Qi, Na, Zhu, Qing. TENSOR-BASED LIGHT FIELD DENOISING BY EXPLOITING NON-LOCAL SIMILARITIES ACROSS MULTIPLE RESOLUTIONS [C]. 2020: 1078-1082.
MLA Wang, Chen, et al. "TENSOR-BASED LIGHT FIELD DENOISING BY EXPLOITING NON-LOCAL SIMILARITIES ACROSS MULTIPLE RESOLUTIONS." (2020): 1078-1082.
APA Wang, Chen, Qi, Na, Zhu, Qing. TENSOR-BASED LIGHT FIELD DENOISING BY EXPLOITING NON-LOCAL SIMILARITIES ACROSS MULTIPLE RESOLUTIONS. (2020): 1078-1082.
Light Field Image Compression Using Multi-Branch Spatial Transformer Networks Based View Synthesis CPCI-S
Conference paper | 2020, 397-397 | Data Compression Conference (DCC)
WoS CC Cited Count: 6

Cite:

GB/T 7714 Wang, Jin, Wang, Qianwen, Xiong, Ruiqin, et al. Light Field Image Compression Using Multi-Branch Spatial Transformer Networks Based View Synthesis [C]. 2020: 397-397.
MLA Wang, Jin, et al. "Light Field Image Compression Using Multi-Branch Spatial Transformer Networks Based View Synthesis." (2020): 397-397.
APA Wang, Jin, Wang, Qianwen, Xiong, Ruiqin, Zhu, Qing, Yin, Baocai. Light Field Image Compression Using Multi-Branch Spatial Transformer Networks Based View Synthesis. (2020): 397-397.
A Research of ORB Feature Matching Algorithm Based on Fusion Descriptor CPCI-S
Conference paper | 2020, 417-420 | IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC)
WoS CC Cited Count: 7

Abstract :

In the past few decades, visual SLAM has been successfully applied to technologies such as virtual reality and robot positioning, in which feature detection and matching are key components. Aiming at the large scale-related matching errors and high mismatch rate of the binary descriptor algorithm ORB (Oriented FAST and Rotated BRIEF), an improved ORB feature matching algorithm is proposed that addresses both scale and descriptors. Building on ORB, the algorithm constructs a pyramid-like scale space and detects oFAST keypoints on each layer to improve scale invariance. For the descriptor, a 128-bit improved FREAK descriptor replaces the last 128 low-variance bits of the rBRIEF descriptor, making fuller use of image information to improve matching accuracy and robustness. Experimental results show that, compared with traditional ORB, the proposed algorithm greatly improves the feature matching rate and robustness under scale change, rotation, and brightness change, and meets the requirements for fast and accurate matching of complex images.
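For reference, the baseline ORB detection-and-matching pipeline that the paper improves on can be run with OpenCV as below; this is the standard pipeline, not the proposed fusion-descriptor variant (the 128-bit FREAK substitution is not exposed by OpenCV's ORB), and the image file names are placeholders.

```python
# Baseline ORB matching with OpenCV for reference. File names are placeholders.
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input images
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)                      # oFAST keypoints + rBRIEF descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with Hamming distance; cross-check filters asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches, best distance {matches[0].distance if matches else 'n/a'}")
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("orb_matches.png", vis)
```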

Keyword :

ORB; descriptor; scale invariance; feature detection; SLAM

Cite:

GB/T 7714 Li, Shuo, Wang, Zhiqiang, Zhu, Qing. A Research of ORB Feature Matching Algorithm Based on Fusion Descriptor [C]. 2020: 417-420.
MLA Li, Shuo, et al. "A Research of ORB Feature Matching Algorithm Based on Fusion Descriptor." (2020): 417-420.
APA Li, Shuo, Wang, Zhiqiang, Zhu, Qing. A Research of ORB Feature Matching Algorithm Based on Fusion Descriptor. (2020): 417-420.
Generative image completion with image-to-image translation (vol 32, pg 7333, 2020) SCIE
Journal article | 2020, 32 (23), 17809-17809 | NEURAL COMPUTING & APPLICATIONS
WoS CC Cited Count: 1

Cite:

GB/T 7714 Xu, Shuzhen, Zhu, Qing, Wang, Jin. Generative image completion with image-to-image translation (vol 32, pg 7333, 2020) [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (23): 17809-17809.
MLA Xu, Shuzhen, et al. "Generative image completion with image-to-image translation (vol 32, pg 7333, 2020)." NEURAL COMPUTING & APPLICATIONS 32.23 (2020): 17809-17809.
APA Xu, Shuzhen, Zhu, Qing, Wang, Jin. Generative image completion with image-to-image translation (vol 32, pg 7333, 2020). NEURAL COMPUTING & APPLICATIONS, 2020, 32 (23), 17809-17809.