
Your search:

Scholar name: Duan Lijuan (段立娟)

Enhancing zero-shot object detection with external knowledge-guided robust contrast learning SCIE
Journal article | 2024, 185, 152-159 | PATTERN RECOGNITION LETTERS

Abstract:

Zero-shot object detection aims to identify objects from unseen categories not present during training. Existing methods rely on category labels to create pseudo-features for unseen categories, but they are limited in exploiting semantic information and lack robustness. To address these issues, we introduce a novel framework, EKZSD, which enhances zero-shot object detection by incorporating external knowledge and contrastive paradigms. The framework enriches semantic diversity, improving discriminative ability and robustness. Specifically, we introduce a novel external knowledge extraction module that leverages attribute and relationship prompts to enrich semantic information. Moreover, a novel external knowledge contrastive learning module is proposed to enhance the model's discriminative and robust capabilities by exploring pseudo-visual features. Additionally, we use cycle consistency learning to align generated visual features with the original semantic features, and adversarial learning to align visual features with semantic features. Trained jointly with contrast learning, cycle consistency, adversarial, and classification losses, our framework achieves superior performance on the MSCOCO and Ship-43 datasets, as demonstrated by the experimental results.
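The abstract combines four losses during joint training; as an illustration of the contrastive component, here is a minimal sketch of a supervised contrastive loss. The batch layout, temperature value, and function name are assumptions, not the paper's implementation:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: pull together features that share a
    label, push apart the rest (one scalar loss for the whole batch)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = np.exp(f @ f.T / temperature)          # pairwise similarities
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(sim[i, j] for j in range(n) if j != i)
        total -= np.mean([np.log(sim[i, p] / denom) for p in positives])
    return total / n
```

A batch whose same-label features agree scores a much lower loss than one where labels cut across the feature clusters.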

Keywords:

External knowledge; Zero-shot object detection; Supervised contrastive learning

Citations:

Copy and paste one of the preformatted citations, or use one of the links to import it into reference-management software.

GB/T 7714 Duan, Lijuan , Liu, Guangyuan , En, Qing et al. Enhancing zero-shot object detection with external knowledge-guided robust contrast learning [J]. | PATTERN RECOGNITION LETTERS , 2024 , 185 : 152-159 .
MLA Duan, Lijuan et al. "Enhancing zero-shot object detection with external knowledge-guided robust contrast learning" . | PATTERN RECOGNITION LETTERS 185 (2024) : 152-159 .
APA Duan, Lijuan , Liu, Guangyuan , En, Qing , Liu, Zhaoying , Gong, Zhi , Ma, Bian . Enhancing zero-shot object detection with external knowledge-guided robust contrast learning . | PATTERN RECOGNITION LETTERS , 2024 , 185 , 152-159 .
Link Prediction Based on Data Augmentation and Metric Learning Knowledge Graph Embedding SCIE
Journal article | 2024, 14 (8) | APPLIED SCIENCES-BASEL

Abstract:

A knowledge graph is a repository that represents a vast amount of information in the form of triplets. During knowledge graph completion training, the graph contains only positive examples, which makes reliable link prediction difficult, especially for complex relations. At the same time, current techniques that rely on distance models embed entities in Euclidean space, limiting their ability to depict nuanced relationships and failing to capture their semantic importance. This research offers a strategy based on Gibbs sampling and relation embedding to improve the model's competency at link prediction over complex relationships. Gibbs sampling is first used to obtain high-quality negative samples. The triplet entities are then mapped onto a hyperplane defined by the relation; through metric learning, this procedure produces complex relationship embeddings imbued with semantic meaning. Finally, the method's effectiveness is demonstrated on three link prediction benchmark datasets: FB15k-237, WN18RR, and FB15k.
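The hyperplane-mapping step can be illustrated with a TransH-style projection. This is a hedged sketch (the function names and the distance-based score are assumptions, not the paper's exact model):

```python
import numpy as np

def project(entity, w_r):
    """Project an entity embedding onto the hyperplane whose normal is w_r."""
    w = w_r / np.linalg.norm(w_r)
    return entity - np.dot(entity, w) * w

def triple_score(h, r, t, w_r):
    """Distance-based plausibility: lower means the triple (h, r, t) fits
    better, because h and t are compared only inside the relation's plane."""
    return np.linalg.norm(project(h, w_r) + r - project(t, w_r))
```

Projecting both entities first lets one entity take different effective positions under different relations, which is what makes complex (one-to-many, many-to-many) relations tractable for distance models.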

Keywords:

knowledge graph embedding; metric learning; negative sampling; relation fusion; semantic extraction; link prediction

Citations:

GB/T 7714 Duan, Lijuan , Han, Shengwen , Jiang, Wei et al. Link Prediction Based on Data Augmentation and Metric Learning Knowledge Graph Embedding [J]. | APPLIED SCIENCES-BASEL , 2024 , 14 (8) .
MLA Duan, Lijuan et al. "Link Prediction Based on Data Augmentation and Metric Learning Knowledge Graph Embedding" . | APPLIED SCIENCES-BASEL 14 . 8 (2024) .
APA Duan, Lijuan , Han, Shengwen , Jiang, Wei , He, Meng , Qiao, Yuanhua . Link Prediction Based on Data Augmentation and Metric Learning Knowledge Graph Embedding . | APPLIED SCIENCES-BASEL , 2024 , 14 (8) .
MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images SCIE
Journal article | 2024, 84 | DISPLAYS

Abstract:

Recently, remote sensing images have been widely used in many scenarios and have gradually become a focus of attention. Nevertheless, limited annotation for scarce classes severely reduces segmentation performance, a problem that is especially prominent in remote sensing image segmentation. Given this, we focus on image fusion and model feedback, proposing a multi-strategy method called MSAug to address the class imbalance in remote sensing. Firstly, at the image-patch level, we crop rare-class images multiple times based on prior knowledge to provide more balanced samples. Secondly, at the model-feedback level, we design an adaptive image enhancement module to accurately classify rare classes at each stage, dynamically pasting and masking different classes to further improve the model's recognition capability. MSAug is highly flexible and plug-and-play: experimental results on remote sensing image segmentation datasets show that adding MSAug to any remote sensing semantic segmentation network brings varying degrees of performance improvement.
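The patch-level step (oversampling rare classes by cropping around their pixels) might look like the following sketch; the function name, crop policy, and parameters are assumptions for illustration:

```python
import numpy as np

def rare_class_crops(image, mask, rare_ids, size, n_crops, rng=None):
    """Sample fixed-size crops centred on rare-class pixels so that rare
    categories appear in more training patches."""
    if rng is None:
        rng = np.random.default_rng(0)
    ys, xs = np.where(np.isin(mask, rare_ids))   # locations of rare pixels
    h, w = mask.shape
    crops = []
    for _ in range(min(n_crops, len(ys))):
        k = rng.integers(len(ys))
        cy = int(np.clip(ys[k], size // 2, h - size // 2))
        cx = int(np.clip(xs[k], size // 2, w - size // 2))
        window = (slice(cy - size // 2, cy + size // 2),
                  slice(cx - size // 2, cx + size // 2))
        crops.append((image[window], mask[window]))
    return crops
```

Every crop is guaranteed to contain at least one rare-class pixel, so the rebalanced batch exposes the network to minority classes far more often than uniform cropping would.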

Keywords:

Semantic segmentation; Remote sensing images; Data augmentation; Rare classes

Citations:

GB/T 7714 Gong, Zhi , Duan, Lijuan , Xiao, Fengjin et al. MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images [J]. | DISPLAYS , 2024 , 84 .
MLA Gong, Zhi et al. "MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images" . | DISPLAYS 84 (2024) .
APA Gong, Zhi , Duan, Lijuan , Xiao, Fengjin , Wang, Yuxi . MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images . | DISPLAYS , 2024 , 84 .
A dual-teacher sleep-staging feature transfer method based on knowledge distillation and domain adaptation incoPat
Patent | 2023-02-22 | CN202310189447.7

Abstract:

A dual-teacher sleep-staging feature transfer method based on knowledge distillation and domain adaptation, in the field of signal processing and pattern recognition. First, sleep EEG and EOG signals are preprocessed to obtain multimodal sleep-signal samples. Next, for every channel of each sample in the source and target domains, Morlet wavelet transforms at different resolutions extract time-frequency features, which are fed to a source-domain teacher and a target-domain teacher for pre-training. When training the student, the two teachers, with their feature extractors frozen, guide the student to learn features common to the source and target domains as well as target-domain-specific features. Experiments show that the proposed model fully exploits the data's features for transfer, performs well even with little target-domain data, and effectively addresses the accuracy drop that existing automatic sleep-staging methods suffer on new datasets.
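The multi-resolution Morlet time-frequency step described above can be sketched as follows; the cycle count and normalisation are assumptions, and varying `n_cycles` per band is one way to realise the "different resolutions":

```python
import numpy as np

def morlet_tf_features(signal, fs, freqs, n_cycles=7.0):
    """Time-frequency power via convolution with complex Morlet wavelets,
    one wavelet per target frequency."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)           # envelope width
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sum(np.abs(wavelet))             # amplitude normalisation
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return out
```

A pure 10 Hz oscillation lights up the 10 Hz row of the output while neighbouring bands stay near zero, which is the property the per-channel feature maps rely on.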

Citations:

GB/T 7714 段立娟 , 张岩 . 基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法 : CN202310189447.7[P]. | 2023-02-22 .
MLA 段立娟 et al. "基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法" : CN202310189447.7. | 2023-02-22 .
APA 段立娟 , 张岩 . 基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法 : CN202310189447.7. | 2023-02-22 .
A multimodal, multi-scale sleep-staging method that reduces N1-stage class confusion incoPat
Patent | 2023-02-22 | CN202310152184.2

Abstract:

This invention discloses a multimodal, multi-scale sleep-staging method that reduces N1-stage class confusion. Raw sleep data are preprocessed into samples. To counter the scarcity of N1-stage data, an overlap-based data augmentation algorithm generates additional N1 samples, mitigating the impact of N1 scarcity on staging. To exploit the sleep data more fully, a multimodal multi-scale feature-extraction module processes each modality differently and applies multi-scale extraction to the EEG modality for fine-grained features, improving feature effectiveness, providing an initial answer to the difficulty of distinguishing the N1 stage, and raising N1 classification accuracy. To address the confusion of N1 with the N2 and REM stages, contrastive learning raises feature similarity within the same stage and lowers it across stages, further improving N1 separability. The invention achieves the highest N1 accuracy on the sleep-staging task.
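The overlap-based augmentation for the scarce N1 stage can be sketched as an overlapping sliding window over a contiguous N1 recording; the window and stride values below are assumptions:

```python
import numpy as np

def overlap_augment(recording, epoch_len, stride):
    """Cut overlapping epochs from a contiguous recording; with
    stride < epoch_len this yields more samples than disjoint epoching."""
    n = (len(recording) - epoch_len) // stride + 1
    return np.stack([recording[i * stride : i * stride + epoch_len]
                     for i in range(n)])
```

With a half-epoch stride, a stretch of signal that would give three disjoint epochs yields five overlapping ones, directly inflating the minority class.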

Citations:

GB/T 7714 段立娟 , 尹悦 . 一种改善N1期类别混淆的多模态多尺度睡眠分期方法 : CN202310152184.2[P]. | 2023-02-22 .
MLA 段立娟 et al. "一种改善N1期类别混淆的多模态多尺度睡眠分期方法" : CN202310152184.2. | 2023-02-22 .
APA 段立娟 , 尹悦 . 一种改善N1期类别混淆的多模态多尺度睡眠分期方法 : CN202310152184.2. | 2023-02-22 .
A resting-state multichannel EEG signal recognition method based on graph attention networks and sparse coding incoPat
Patent | 2023-02-22 | CN202310152183.8

Abstract:

This invention concerns a multichannel EEG signal recognition method based on graph attention networks and sparse coding. Multichannel EEG signals are first preprocessed into data samples. Each sample is split into five frequency sub-bands, and the two feature-extraction schemes above are used to construct five brain functional networks. These networks are then fused: their node features are concatenated to form the fused node features, while the five sets of connectivity features are averaged and thresholded to remove invalid connections, yielding the fused connectivity features. The fused network is passed through a graph attention network model to recover the true brain functional connectivity, and an autoencoder reduces the dimensionality of the sparse connectivity features and enhances them, after which the two kinds of features are fused and classified. The invention achieves the highest classification accuracy.
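The fusion step (concatenating node features, then averaging and thresholding connectivity across the five bands) can be sketched as below; the names and the threshold value are assumptions:

```python
import numpy as np

def fuse_networks(node_feats, conn_mats, thresh):
    """Fuse per-band brain functional networks: concatenate node features
    along the feature axis, average connectivity, zero out weak links."""
    fused_nodes = np.concatenate(node_feats, axis=1)   # (n_nodes, 5 * d)
    avg = np.mean(conn_mats, axis=0)                   # (n_nodes, n_nodes)
    fused_conn = np.where(np.abs(avg) >= thresh, avg, 0.0)
    return fused_nodes, fused_conn
```

Thresholding the averaged matrix is what removes the "invalid connections" the abstract mentions, leaving a sparse graph for the attention network.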

Citations:

GB/T 7714 段立娟 , 邹鑫宇 , 乔元华 . 一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法 : CN202310152183.8[P]. | 2023-02-22 .
MLA 段立娟 et al. "一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法" : CN202310152183.8. | 2023-02-22 .
APA 段立娟 , 邹鑫宇 , 乔元华 . 一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法 : CN202310152183.8. | 2023-02-22 .
MMT: Cross Domain Few-Shot Learning via Meta-Memory Transfer SCIE
Journal article | 2023, 45 (12), 15018-15035 | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Abstract:

Few-shot learning aims to recognize novel categories relying solely on a few labeled samples, and existing few-shot methods focus primarily on categories sampled from the same distribution. Nevertheless, this assumption cannot always be ensured, and the resulting domain shift significantly reduces the performance of few-shot learning. To remedy this problem, we investigate an interesting and challenging cross-domain few-shot learning task, where the training and testing tasks employ different domains. Specifically, we propose a Meta-Memory scheme to bridge the domain gap between source and target domains, leveraging style-memory and content-memory components. The former stores intra-domain style information from source domain instances and provides a richer feature distribution. The latter stores semantic information through exploration of the knowledge of different categories. Under the contrastive learning strategy, our model effectively alleviates the cross-domain problem in few-shot learning. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance on cross-domain few-shot semantic segmentation tasks on the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets and positively affects few-shot classification tasks on Meta-Dataset.
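The style-memory idea (re-stylising target features with stored source-domain statistics) resembles adaptive instance normalisation; here is a minimal sketch under that assumption, with hypothetical names:

```python
import numpy as np

def apply_memorised_style(content, style_mean, style_std, eps=1e-5):
    """Normalise (C, H, W) target features per channel, then rescale with
    channel statistics retrieved from the style memory (AdaIN-style)."""
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True) + eps
    return (content - mean) / std * style_std + style_mean
```

After the transfer, the feature map carries the memorised style's first- and second-order statistics while keeping the target's spatial content, which is how stored source styles can diversify the target feature distribution.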

Keywords:

Memory; few-shot learning; cross-domain; semantic segmentation

Citations:

GB/T 7714 Wang, Wenjian , Duan, Lijuan , Wang, Yuxi et al. MMT: Cross Domain Few-Shot Learning via Meta-Memory Transfer [J]. | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2023 , 45 (12) : 15018-15035 .
MLA Wang, Wenjian et al. "MMT: Cross Domain Few-Shot Learning via Meta-Memory Transfer" . | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45 . 12 (2023) : 15018-15035 .
APA Wang, Wenjian , Duan, Lijuan , Wang, Yuxi , Fan, Junsong , Zhang, Zhaoxiang . MMT: Cross Domain Few-Shot Learning via Meta-Memory Transfer . | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2023 , 45 (12) , 15018-15035 .
A graph-attention-network-based equipment recommendation method for MOBA games incoPat
Patent | 2022-02-22 | CN202210164356.3

Abstract:

This invention, in the field of recommender systems, targets equipment recommendation in Multiplayer Online Battle Arena (MOBA) games and proposes a graph-attention-network-based equipment recommendation method. First, a Transformer-based local-and-global attention feature extractor performs fine-grained extraction of the multi-attribute features of the in-match teams, so that the model considers both allied-support and enemy-constraint information when recommending equipment, enabling effective information exchange. Second, a global multi-aggregation method based on graph attention networks deepens the aggregated features by computing influence-factor weights, continually reinforcing hero-hero and hero-equipment interactions. The invention clearly outperforms previous methods on the Precision and MAP metrics, yielding more accurate and effective equipment recommendations for MOBA games.
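The influence-weighted aggregation over hero-hero and hero-equipment edges resembles a single graph-attention head; a hedged numpy sketch follows (the bilinear score function and shapes are assumptions, not the patent's formulation):

```python
import numpy as np

def attention_aggregate(node, neighbours, w):
    """Aggregate neighbour features with softmax-normalised influence
    weights computed from a bilinear score, as in one graph-attention head."""
    scores = neighbours @ w @ node        # one influence factor per edge
    alpha = np.exp(scores - scores.max()) # stable softmax
    alpha /= alpha.sum()
    return alpha @ neighbours             # weighted sum of neighbour features
```

Neighbours that score higher against the centre node (here, the first neighbour, which matches it exactly) dominate the aggregated feature, which is the "influence factor" behaviour the abstract describes.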

Citations:

GB/T 7714 段立娟 , 李舒欣 , 张文博 et al. 一种基于图注意力网络的MOBA游戏装备推荐方法 : CN202210164356.3[P]. | 2022-02-22 .
MLA 段立娟 et al. "一种基于图注意力网络的MOBA游戏装备推荐方法" : CN202210164356.3. | 2022-02-22 .
APA 段立娟 , 李舒欣 , 张文博 , 乔元华 . 一种基于图注意力网络的MOBA游戏装备推荐方法 : CN202210164356.3. | 2022-02-22 .
An adaptive, fast, unsupervised feature selection method for face recognition incoPat
Patent | 2022-02-25 | CN202210183736.1

Abstract:

This invention concerns an adaptive, fast, unsupervised feature selection method for face recognition, addressing the analysis difficulties caused by the many meaningless and redundant features in high-dimensional face images. Concretely, an adaptive fast density-peak clustering method first clusters the face-image features; a feature-importance evaluation function is then defined, and the most representative feature in each cluster is added to the feature subset, completing the selection. The invention yields a more accurate feature subset and faster feature selection.
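The "adaptive fast density-peak clustering" step builds on the standard density-peak statistics of Rodriguez and Laio; a plain sketch of those statistics (the cutoff distance `dc` is an assumption the adaptive variant would tune):

```python
import numpy as np

def density_peak_stats(X, dc):
    """For each point: rho = neighbours within cutoff dc; delta = distance
    to the nearest higher-density point (max distance for the densest).
    Cluster centres are points where both rho and delta are large."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1                 # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return rho, delta
```

Features clustered this way group into density peaks, and picking one representative per peak is what shrinks the high-dimensional feature set.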

Citations:

GB/T 7714 段立娟 , 解晨瑶 , 张文博 et al. 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1[P]. | 2022-02-25 .
MLA 段立娟 et al. "一种应用于人脸识别的自适应快速无监督特征选择方法" : CN202210183736.1. | 2022-02-25 .
APA 段立娟 , 解晨瑶 , 张文博 , 乔元华 . 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1. | 2022-02-25 .
Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer CPCI-S
Conference paper | 2022, 7055-7064 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
WoS Core Collection citations: 20

Abstract:

Few-shot semantic segmentation intends to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution. Nevertheless, this assumption cannot always be ensured, and the resulting domain shift significantly reduces the performance of few-shot learning. To remedy this problem, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, where the training and test tasks are performed on different domains. Specifically, we first propose a meta-memory bank to improve the generalization of the segmentation network by bridging the domain gap between source and target domains. The meta-memory stores intra-domain style information from source domain instances and transfers it to target samples. Subsequently, we adopt a new contrastive learning strategy to explore the knowledge of different categories during the training stage. The negative and positive pairs are obtained from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that our proposed method achieves promising results on cross-domain few-shot semantic segmentation tasks on the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets.

Citations:

GB/T 7714 Wang, Wenjian , Duan, Lijuan , Wang, Yuxi et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 : 7055-7064 .
MLA Wang, Wenjian et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer" . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022) : 7055-7064 .
APA Wang, Wenjian , Duan, Lijuan , Wang, Yuxi , En, Qing , Fan, Junsong , Zhang, Zhaoxiang . Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 , 7055-7064 .
Address: Beijing University of Technology Library (100 Pingleyuan, Chaoyang District, Beijing, 100124) Contact: 010-67392185
Copyright: Beijing University of Technology Library