
Your search:

Scholar name: 王立春 (Wang, Lichun)


Learning From Teacher's Failure: A Reflective Learning Paradigm for Knowledge Distillation SCIE
Journal article | 2024, 34 (1), 384-396 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Knowledge distillation transfers knowledge learned by a teacher network to a student network. A common mode of knowledge transfer directly uses the teacher network's experience for all samples, without differentiating whether that experience is successful. According to common sense, experience varies with its nature: successful experience is used for guidance, and failed experience is used for correction. Inspired by that, this paper analyzes the teacher's failures and proposes a reflective learning paradigm, which, besides following the teacher's authority, additionally uses heuristic knowledge extracted from those failures. Specifically, this paper defines Mutual Error Distance (MED) based on the teacher's wrong predictions. MED measures the adequacy of the decision boundary learned by the teacher, which concretizes the teacher's failures. Then, this paper proposes divide-and-conquer grouping distillation (DCGD), which critically transfers the teacher's knowledge by grouping the target task into small-scale subtasks and designing multi-branch networks on the basis of MED. Finally, a switchable training mechanism is designed to integrate a regular student, providing a student-network option with no added parameters compared with the multi-branch student network. Extensive experiments on three image classification benchmarks (CIFAR-10, CIFAR-100 and TinyImageNet) show the effectiveness of the proposed paradigm. In particular, on CIFAR-100 the average error of students using DCGD+DKD decreased by 4.28%. In addition, the experimental results show that the paradigm is also applicable to self-distillation.
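
The abstract does not spell out the MED formula, so the following is only a minimal sketch of one plausible reading, assuming MED shrinks as two classes are more often mutually confused by the teacher (the inversion and normalization are assumptions):

    import numpy as np

    def mutual_error_distance(teacher_preds, labels, num_classes):
        """Hypothetical MED: count the teacher's wrong predictions and turn
        mutual confusion between two classes into a small distance, i.e. an
        inadequately learned decision boundary between them."""
        confusion = np.zeros((num_classes, num_classes))
        for pred, true in zip(teacher_preds, labels):
            if pred != true:
                confusion[true, pred] += 1          # a teacher failure
        mutual = confusion + confusion.T            # symmetric mutual errors
        rate = mutual / max(mutual.sum(), 1.0)      # normalized error rate
        med = 1.0 / (1.0 + rate)                    # more confusion -> smaller distance
        np.fill_diagonal(med, 0.0)
        return med

Classes with small pairwise MED would then be grouped into the same small-scale subtask for DCGD's multi-branch student.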

Keywords:

decision boundary; Training; mutual error distance; divide-and-conquer; Knowledge engineering; Task analysis; Birds; Dogs; Marine vehicles; reflective learning paradigm; Automobiles; Knowledge distillation

Cite:


GB/T 7714: Xu, Kai, Wang, Lichun, Xin, Jianjia, et al. Learning From Teacher's Failure: A Reflective Learning Paradigm for Knowledge Distillation [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (1): 384-396.
MLA: Xu, Kai, et al. "Learning From Teacher's Failure: A Reflective Learning Paradigm for Knowledge Distillation." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.1 (2024): 384-396.
APA: Xu, Kai, Wang, Lichun, Xin, Jianjia, Li, Shuang, & Yin, Baocai. Learning From Teacher's Failure: A Reflective Learning Paradigm for Knowledge Distillation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (1), 384-396.
Self-Distillation With Augmentation in Feature Space SCIE
Journal article | 2024, 34 (10), 9578-9590 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Compared with traditional knowledge distillation, self-distillation does not require a pre-trained teacher network, which makes it more concise. Among self-distillation approaches, data-augmentation-based methods provide an elegant solution that modifies neither the network structure nor the memory consumption. However, when data augmentation is applied in the input space, the forward propagations for augmented data incur additional computation, and the augmentation methods must be adapted to the modality of the input data. Meanwhile, we note that, from a generalization perspective, as long as classes remain distinguishable from one another, a dispersed intra-class feature distribution is superior to a compact one, especially for categories with larger sample differences. Based on these considerations, this paper proposes a feature-augmentation-based self-distillation method (FASD) built on the idea of feature extrapolation. For each source feature, two augmentations are generated by subtraction between features: one subtracts the temporary class center computed from samples of the same category, and the other subtracts the closest sample feature belonging to another category. The predicted outputs of the augmented features are then constrained to be consistent with that of the source feature. The consistency constraint on the former augmented feature expands the learned class feature distribution, leading to greater overlap with the unknown feature distribution of test samples and thereby improving the generalization performance of the network. The consistency constraint on the latter augmented feature increases the distance between samples from different categories, which enhances the distinguishability between categories. Experimental results on image classification tasks demonstrate the effectiveness and efficiency of the proposed method. Meanwhile, experiments on text and audio tasks prove the universality of the method for classification tasks with different modalities.
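
As a rough illustration of the two augmentations described above (the extrapolation scale and the KL form of the consistency constraint are assumptions; the abstract only fixes the subtraction directions):

    import torch
    import torch.nn.functional as F

    def fasd_augment(feats, labels):
        """Sketch of FASD's two feature augmentations for a batch of
        features (N, D); assumes the batch contains at least two classes."""
        aug_center = torch.empty_like(feats)
        aug_other = torch.empty_like(feats)
        for i in range(feats.size(0)):
            same = feats[labels == labels[i]]
            center = same.mean(dim=0)                       # temporary class center
            aug_center[i] = feats[i] + (feats[i] - center)  # extrapolate away from the center
            other = feats[labels != labels[i]]
            nearest = other[(other - feats[i]).norm(dim=1).argmin()]
            aug_other[i] = feats[i] + (feats[i] - nearest)  # extrapolate away from the nearest other-class sample
        return aug_center, aug_other

    def consistency(logits_src, logits_aug):
        """Constrain the augmented prediction to match the source prediction."""
        return F.kl_div(F.log_softmax(logits_aug, dim=1),
                        F.softmax(logits_src, dim=1), reduction="batchmean")

Because the augmentation happens after feature extraction, no extra forward passes through the backbone are needed, which is where the efficiency claim comes from.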

Keywords:

Knowledge distillation; classification task; Training; Predictive models; generalization performance; feature augmentation; Knowledge engineering; Feature extraction; self-distillation; Extrapolation; Data augmentation; Task analysis

Cite:


GB/T 7714: Xu, Kai, Wang, Lichun, Li, Shuang, et al. Self-Distillation With Augmentation in Feature Space [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10): 9578-9590.
MLA: Xu, Kai, et al. "Self-Distillation With Augmentation in Feature Space." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.10 (2024): 9578-9590.
APA: Xu, Kai, Wang, Lichun, Li, Shuang, Xin, Jianjia, & Yin, Baocai. Self-Distillation With Augmentation in Feature Space. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10), 9578-9590.
Area-keywords cross-modal alignment for referring image segmentation SCIE
Journal article | 2024, 581 | NEUROCOMPUTING

Abstract:

Referring image segmentation aims to segment the instance corresponding to a given language description, which requires aligning information from two modalities. Existing approaches usually align cross-modal information based on different forms of feature units, such as pixel-sentence, pixel-word and patch-word. The problem is that the semantic information embodied by these feature units may be mismatched; for example, the semantics carried by a pixel are only part of the semantics of a sentence. When such inconsistent information is used to model the relationship between feature units from the two modalities, the obtained inter-modal relationships are imprecise, resulting in inaccurate cross-modal features. In this paper, we propose to generate scalable area and keyword features to ensure that the feature units from the two modalities have comparable semantic granularity. Meanwhile, the scalable features provide sparse representations for image and text, which reduces the computational complexity of computing cross-modal features. In addition, we design a novel multi-source-driven dynamic convolution to map the area-keywords cross-modal features back to the image to predict the mask. Experimental results on three benchmark datasets demonstrate that our proposed framework achieves advanced performance while greatly reducing the model's computational cost.
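
A toy sketch of the granularity-matching idea (the pooling schemes below are guesses, not the paper's actual modules): pixels are softly pooled into a few area features and words are reduced to keyword features before the two sparse sets are aligned:

    import torch
    import torch.nn.functional as F

    def align_areas_keywords(pixel_feats, word_feats, area_proj, word_scores, k):
        """pixel_feats: (HW, D); word_feats: (L, D); area_proj: an assumed
        learned nn.Linear(D, K) producing soft pixel-to-area assignments;
        word_scores: (L,) assumed keyword-importance scores."""
        assign = F.softmax(area_proj(pixel_feats), dim=0)   # (HW, K) soft assignment
        areas = assign.T @ pixel_feats                      # (K, D) area features
        keywords = word_feats[word_scores.topk(k).indices]  # (k, D) keyword features
        a = F.normalize(areas, dim=1)
        w = F.normalize(keywords, dim=1)
        return a @ w.T                                      # (K, k) cross-modal affinity

Because K areas and k keywords are far fewer than pixels and words, the affinity matrix is cheap to compute, which matches the abstract's claim of reduced computation.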

Keywords:

Cross-modal alignment; Referring image segmentation; Dynamic convolution

Cite:


GB/T 7714: Zhang, Huiyong, Wang, Lichun, Li, Shuang, et al. Area-keywords cross-modal alignment for referring image segmentation [J]. NEUROCOMPUTING, 2024, 581.
MLA: Zhang, Huiyong, et al. "Area-keywords cross-modal alignment for referring image segmentation." NEUROCOMPUTING 581 (2024).
APA: Zhang, Huiyong, Wang, Lichun, Li, Shuang, Xu, Kai, & Yin, Baocai. Area-keywords cross-modal alignment for referring image segmentation. NEUROCOMPUTING, 2024, 581.
A Novel Encoder and Label Assignment for Instance Segmentation CPCI-S
Conference paper | 2023, 14259, 305-316 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI

Abstract:

SparseInst, a recent lightweight instance segmentation network, achieves a good balance between efficiency and precision. However, the single-layer features output by its encoder are not informative enough, and its label assignment strategy leads to an imbalance between positive and negative samples. To further improve instance segmentation performance, we propose the LAIS network, which includes a novel feature encoding module and a Multi-Step Hungarian matching strategy (MSH). By combining multi-scale feature extraction with inter-layer information fusion, the encoder outputs features carrying more detailed and comprehensive information. By performing multiple rounds of one-to-one Hungarian matching, MSH eliminates the imbalance and duplication during sample allocation. Experiments show that LAIS is more accurate than SparseInst without significantly increasing parameters or GFLOPs. In particular, LAIS reaches 33.8% AP on COCO val, 1.0% higher than SparseInst with the same ResNet-50 backbone.
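
The exact matching procedure is not given in this summary; a minimal sketch, assuming MSH simply repeats one-to-one Hungarian matching and retires matched predictions between rounds:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    BIG = 1e9  # sentinel cost that keeps already-matched predictions out of later rounds

    def multi_step_hungarian(cost, num_rounds):
        """cost: (num_preds, num_gts) matching costs, with
        num_preds >= num_rounds * num_gts. Each round assigns every ground
        truth one new positive prediction, so positives accumulate without
        duplicate assignments."""
        cost = cost.astype(float).copy()
        matches = []
        for _ in range(num_rounds):
            rows, cols = linear_sum_assignment(cost)
            matches.extend(zip(rows.tolist(), cols.tolist()))
            cost[rows, :] = BIG   # exclude these predictions from later rounds
        return matches

For example, multi_step_hungarian(np.random.rand(100, 8), num_rounds=3) yields 24 positive pairs where a single round would yield only 8, easing the positive/negative imbalance.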

Keywords:

Feature Extraction; Single-Output; Instance Segmentation; Label Assignment

Cite:


GB/T 7714: Zhang, Huiyong, Wang, Lichun, Li, Shuang, et al. A Novel Encoder and Label Assignment for Instance Segmentation [C]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259: 305-316.
MLA: Zhang, Huiyong, et al. "A Novel Encoder and Label Assignment for Instance Segmentation." ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI 14259 (2023): 305-316.
APA: Zhang, Huiyong, Wang, Lichun, Li, Shuang, Xu, Kai, & Yin, Baocai. A Novel Encoder and Label Assignment for Instance Segmentation. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259, 305-316.
Hierarchical Coupled Discriminative Dictionary Learning for Zero-Shot Learning SCIE
Journal article | 2023, 33 (9), 4973-4984 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Zero-shot learning (ZSL) aims to recognize images of novel classes without using any images of those classes during model training, which is realized by exploiting auxiliary semantic information. Recently, most ZSL methods focus on learning visual-semantic embeddings to transfer knowledge from seen classes to novel classes. A visual-semantic embedding is usually established between the visual features of images and the semantic information of classes, i.e., class attributes. However, image features are extracted at the individual level, while class attributes are defined at the group level, so the two kinds of features differ in granularity, which makes them difficult to match. To tackle this problem, we propose the hierarchical coupled discriminative dictionary learning (HCDDL) method, which hierarchically establishes visual-semantic embeddings at the class level and the image level in a coarse-to-fine way. First, a class-level coupled dictionary is trained to build a basic, coarse-grained connection between the visual space and the semantic space. Using the class-level coupled dictionary, image attributes are generated. Based on the fine-grained image attributes and image features, an image-level coupled dictionary is then learned. In addition, during the learning of the hierarchical coupled dictionaries, discriminative losses are adopted to ensure the dictionaries learn more accurate representations, which benefits the recognition task. Unseen images are recognized by searching, in multiple spaces, for the class nearest to the unseen image. Experiments on four widely used benchmark datasets show the effectiveness of the proposed method, and thorough ablation experiments demonstrate that the coarse-to-fine design leads to good performance.
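
As a toy illustration of a coupled dictionary pair (one level only; the paper's discriminative losses and hierarchy are omitted, and ridge-regularized alternating least squares is an assumed solver, not the paper's):

    import numpy as np

    def coupled_dictionary(X_vis, X_sem, n_atoms, n_iter=50, lam=1e-3):
        """Learn D_vis (d_v, n_atoms) and D_sem (d_s, n_atoms) sharing one
        code matrix A (n_atoms, N), so a code inferred in one space can
        reconstruct the paired signal in the other.
        X_vis: (d_v, N) visual features; X_sem: (d_s, N) class attributes."""
        rng = np.random.default_rng(0)
        A = rng.standard_normal((n_atoms, X_vis.shape[1]))
        I = np.eye(n_atoms)
        for _ in range(n_iter):
            G = np.linalg.inv(A @ A.T + lam * I)
            D_vis = X_vis @ A.T @ G                 # dictionary updates given shared codes
            D_sem = X_sem @ A.T @ G
            H = D_vis.T @ D_vis + D_sem.T @ D_sem + lam * I
            A = np.linalg.solve(H, D_vis.T @ X_vis + D_sem.T @ X_sem)
        return D_vis, D_sem, A

The shared code matrix A is what couples the two spaces: encoding a class attribute vector with D_sem yields a code whose reconstruction through D_vis lives in the visual space.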

Keywords:

Zero-shot learning; image attribute generation; embedding-based; hierarchical coupled dictionary learning

Cite:


GB/T 7714: Li, Shuang, Wang, Lichun, Wang, Shaofan, et al. Hierarchical Coupled Discriminative Dictionary Learning for Zero-Shot Learning [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (9): 4973-4984.
MLA: Li, Shuang, et al. "Hierarchical Coupled Discriminative Dictionary Learning for Zero-Shot Learning." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.9 (2023): 4973-4984.
APA: Li, Shuang, Wang, Lichun, Wang, Shaofan, Kong, Dehui, & Yin, Baocai. Hierarchical Coupled Discriminative Dictionary Learning for Zero-Shot Learning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (9), 4973-4984.
A Balanced Relation Prediction Framework for Scene Graph Generation CPCI-S
Conference paper | 2023, 14257, 216-228 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV

Abstract:

It has become a consensus that regular scene graph generation (SGG) is limited in practical applications due to overfitting of head predicates. A series of debiasing methods, i.e., unbiased SGG, has been proposed to solve this problem. However, existing unbiased SGG methods tend instead to fit the tail predicates, which is another type of bias. This paper aims to eliminate one-way overfitting to either head or tail predicates. To provide more balanced relation prediction, we propose a new framework, DCL (Dual-branch Cumulative Learning), which integrates a regular relation prediction process and a debiasing relation prediction process through a cumulative learning mechanism. The learning process of DCL enhances the discrimination of tail predicates without reducing the model's discrimination performance on head predicates. DCL is model-agnostic and compatible with existing debiasing methods of different types. Experiments on the Visual Genome dataset show that, among all model-agnostic methods, DCL achieves the best overall performance considering both R@K and mR@K.
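
A sketch of a cumulative-learning mechanism as commonly formulated in the long-tail literature (the parabolic schedule is an assumption borrowed from that literature; DCL's actual weighting may differ):

    def dcl_loss(loss_regular, loss_debias, epoch, total_epochs):
        """Shift training emphasis from the regular branch (which fits head
        predicates) to the debiasing branch (which recovers tail predicates)."""
        alpha = 1.0 - (epoch / total_epochs) ** 2   # decays from 1 to 0 over training
        return alpha * loss_regular + (1.0 - alpha) * loss_debias

Early epochs are dominated by the regular branch, so head-predicate discrimination is established first; the debiasing branch then gradually takes over without erasing it.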

Keywords:

Long-tailed Problem; Cumulative Learning; Scene Graph Generation

Cite:


GB/T 7714: Xu, Kai, Wang, Lichun, Li, Shuang, et al. A Balanced Relation Prediction Framework for Scene Graph Generation [C]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257: 216-228.
MLA: Xu, Kai, et al. "A Balanced Relation Prediction Framework for Scene Graph Generation." ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV 14257 (2023): 216-228.
APA: Xu, Kai, Wang, Lichun, Li, Shuang, Zhang, Huiyong, & Yin, Baocai. A Balanced Relation Prediction Framework for Scene Graph Generation. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257, 216-228.
Learning Interaction Regions and Motion Trajectories Simultaneously From Egocentric Demonstration Videos SCIE
Journal article | 2023, 8 (10), 6635-6642 | IEEE ROBOTICS AND AUTOMATION LETTERS

Abstract:

Learning to interact with objects is significant for robots integrating into human environments. When the interaction semantics are definite, manually guiding the manipulator is a commonly used method to teach robots how to interact with objects. However, the learning results are robot-dependent because mechanical parameters differ between robots, meaning the learning process must be executed again for each robot. Moreover, during manual guiding, operators are responsible for recognizing the region being contacted and providing expert motion programming, which limits the robot's autonomy. To raise the level of automation in object interaction, this letter proposes IRMT-Net (Interaction Region and Motion Trajectory prediction Network), which predicts the interaction region and the motion trajectory simultaneously from images. IRMT-Net achieves state-of-the-art interaction region prediction on the Epic-Kitchens dataset, generates reasonable motion trajectories, and supports robot interaction in real situations.
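
The architecture is not detailed here; purely as an illustration of predicting both outputs from one shared encoding, a hypothetical two-head network might look like:

    import torch
    import torch.nn as nn

    class TwoHeadSketch(nn.Module):
        """Not IRMT-Net itself: a shared image encoder feeding an
        interaction-region heatmap head and a motion-trajectory head."""
        def __init__(self, feat_dim=256, traj_len=8):
            super().__init__()
            self.traj_len = traj_len
            self.encoder = nn.Sequential(
                nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU())
            self.region_head = nn.Conv2d(feat_dim, 1, 1)   # per-pixel interaction logits
            self.traj_head = nn.Sequential(                # traj_len 2-D waypoints
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(feat_dim, traj_len * 2))
        def forward(self, img):
            f = self.encoder(img)
            return self.region_head(f), self.traj_head(f).view(-1, self.traj_len, 2)

Sharing the encoder is what makes the prediction robot-independent: the image-level outputs can then be mapped to any manipulator's kinematics.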

Keywords:

deep learning for visual perception; Computer vision for automation; dataset for robotic vision

Cite:


GB/T 7714: Xin, Jianjia, Wang, Lichun, Xu, Kai, et al. Learning Interaction Regions and Motion Trajectories Simultaneously From Egocentric Demonstration Videos [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (10): 6635-6642.
MLA: Xin, Jianjia, et al. "Learning Interaction Regions and Motion Trajectories Simultaneously From Egocentric Demonstration Videos." IEEE ROBOTICS AND AUTOMATION LETTERS 8.10 (2023): 6635-6642.
APA: Xin, Jianjia, Wang, Lichun, Xu, Kai, Yang, Chao, & Yin, Baocai. Learning Interaction Regions and Motion Trajectories Simultaneously From Egocentric Demonstration Videos. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (10), 6635-6642.
A point cloud semantic segmentation method based on point dilated directional convolution (一种基于点空洞方向卷积的点云语义分割方法) incoPat
Patent | 2022-04-17 | CN202210400811.5

Abstract:

A point cloud semantic segmentation method based on point dilated directional convolution, applicable to the computer vision field. It is a hierarchical encoder-decoder network that alternates a point dilated directional convolution module, an edge-preserving pooling module and an edge-preserving unpooling module. The point dilated directional encoding unit performs equivalent sparse sampling of neighborhood points by varying the dilation rate, while taking the direction and distance information of local neighborhood points into account; it can encode feature information in eight directions while arbitrarily enlarging its receptive field, thereby capturing local neighborhood information more comprehensively. Multiple point dilated directional encoding units are then stacked into a point dilated directional convolution module, which is scale-aware and portable. The edge-preserving pooling and unpooling modules preserve edge features, recover the high-dimensional features of the point cloud, and improve segmentation accuracy. The method covers local neighborhood selection and feature extraction for point clouds to achieve better semantic segmentation performance.
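
The "equivalent sparse sampling by varying the dilation rate" can be pictured as dilated k-nearest neighbors; a minimal sketch (the eight-direction encoding itself is omitted):

    import numpy as np

    def dilated_knn(points, center_idx, k, dilation):
        """Take every `dilation`-th point among the k*dilation nearest
        neighbors, enlarging the receptive field without adding neighbors.
        points: (N, 3) array; returns indices of k dilated neighbors."""
        dist = np.linalg.norm(points - points[center_idx], axis=1)
        order = np.argsort(dist)[1:1 + k * dilation]   # skip the center point itself
        return order[::dilation]

With dilation=1 this is plain kNN; dilation=2 covers roughly twice the radius with the same k neighbors, which is the receptive-field enlargement the patent describes.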

Cite:


GB/T 7714: 王少帆, 刘蓥, 王立春, et al. 一种基于点空洞方向卷积的点云语义分割方法: CN202210400811.5 [P]. 2022-04-17.
MLA: 王少帆, et al. "一种基于点空洞方向卷积的点云语义分割方法": CN202210400811.5. 2022-04-17.
APA: 王少帆, 刘蓥, 王立春, 孙艳丰, & 尹宝才. 一种基于点空洞方向卷积的点云语义分割方法: CN202210400811.5 [P]. 2022-04-17.
An adaptive context modeling method and apparatus for scene graph generation (一种用于场景图生成的自适应上下文建模方法及装置) incoPat
Patent | 2022-08-22 | CN202211008807.0

Abstract:

An adaptive context modeling method and apparatus for scene graph generation that can adaptively serialize the objects in a scene according to its content, thereby improving the quality of the generated scene graph. The method comprises: (1) using a pre-trained object detector to detect objects in the input image, outputting a series of object proposals and keeping the 80 with the highest confidence as the objects present in the scene; (2) mapping the refined semantic labels to 200-dimensional vector representations and concatenating them with the objects' visual features and context features to form complete object representations; feeding the features O of the n objects in the image into an object-selects-position branch and a position-selects-object branch, measuring how well each object matches each position in the chain structure, computing an object-position matching score matrix, and solving object serialization as an assignment problem; (3) fusing context information and predicting relations.
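
Step (2) casts serialization as a classic assignment problem; solving it with the Hungarian algorithm is straightforward (how the score matrix is produced by the two branches is the patent's contribution and is not shown here):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def serialize_objects(score):
        """score: (n, n) object-position matching scores.
        Returns the object index assigned to each position in the chain."""
        objs, positions = linear_sum_assignment(score, maximize=True)
        order = np.empty(len(positions), dtype=int)
        order[positions] = objs
        return order

The assignment guarantees each object occupies exactly one position, giving a content-dependent ordering instead of a fixed (e.g. left-to-right) serialization.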

Cite:


GB/T 7714: 王立春, 徐凯, 尹宝才. 一种用于场景图生成的自适应上下文建模方法及装置: CN202211008807.0 [P]. 2022-08-22.
MLA: 王立春, et al. "一种用于场景图生成的自适应上下文建模方法及装置": CN202211008807.0. 2022-08-22.
APA: 王立春, 徐凯, & 尹宝才. 一种用于场景图生成的自适应上下文建模方法及装置: CN202211008807.0 [P]. 2022-08-22.
An unsupervised domain adaptation semantic segmentation method (一种无监督领域自适应语义分割方法) incoPat
Patent | 2021-01-08 | CN202110026447.6

Abstract:

The invention discloses an unsupervised domain adaptation semantic segmentation method: train a neural network on source-domain images; use the trained network to compute pseudo-labels for target-domain images; then retrain the network with the source-domain images and the pseudo-labeled target-domain images, further improving pseudo-label accuracy and optimizing the network's generalization ability. Through self-training, the method extracts high-confidence target-domain pseudo-labels with the trained network, compensating for the lack of supervision in the target domain; compared with other methods, it enriches the information of target-domain data and strengthens the network's ability to learn from it. The method focuses on class-wise domain gaps: it measures class correlations for the predictions of the source and target domains and constrains the class correlations of the two domains to be consistent, reducing the class-level domain gap and improving generalization. The performance of the invention surpasses other unsupervised domain adaptation semantic segmentation methods.
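
The high-confidence pseudo-label step can be sketched in a few lines (the 0.9 threshold and the ignore index of 255 are assumptions; the patent's class-correlation consistency constraint is not shown):

    import torch
    import torch.nn.functional as F

    def pseudo_labels(logits, threshold=0.9, ignore_index=255):
        """logits: (N, C, H, W) target-domain predictions from the trained
        network; pixels below the confidence threshold are ignored when
        retraining, so only high-confidence pseudo-labels supervise."""
        prob = F.softmax(logits, dim=1)
        conf, label = prob.max(dim=1)          # (N, H, W) confidence and class
        label[conf < threshold] = ignore_index
        return label

The returned map can be fed to a standard cross-entropy loss with ignore_index=255, so the retraining round only learns from the confident target-domain pixels.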

Cite:


GB/T 7714: 王立春, 高宁, 王少帆, et al. 一种无监督领域自适应语义分割方法: CN202110026447.6 [P]. 2021-01-08.
MLA: 王立春, et al. "一种无监督领域自适应语义分割方法": CN202110026447.6. 2021-01-08.
APA: 王立春, 高宁, 王少帆, 孔德慧, 李敬华, & 尹宝才. 一种无监督领域自适应语义分割方法: CN202110026447.6 [P]. 2021-01-08.