
Query:

Scholar name: 尹宝才 (Yin, Baocai)

Domain-Aware Prototype Network for Generalized Zero-Shot Learning SCIE
Journal article | 2024, 34 (5), 3180-3191 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Generalized zero-shot learning (GZSL) aims to recognize images from seen and unseen classes with side information, such as manually annotated attribute vectors. Traditional methods focus on mapping images and semantics into a common latent space, thus achieving visual-semantic alignment. Since unseen classes are unavailable during training, there is a serious recognition bias: unseen classes tend to be recognized as seen classes. To solve this problem, we propose a Domain-aware Prototype Network (DPN), which splits the GZSL problem into seen-class recognition and unseen-class recognition. For the seen classes, we design a domain-aware prototype learning branch with a dual attention feature encoder to capture the essential visual information, which aims to recognize the seen classes and discriminate novel categories. To further recognize the fine-grained unseen classes, a visual-semantic embedding branch is designed to align visual and semantic information for unseen-class recognition. Through multi-task learning of the prototype learning branch and the visual-semantic embedding branch, our model achieves excellent performance on three popular GZSL datasets.
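
The abstract describes a two-branch design: learnable prototypes handle the seen classes while a visual-semantic embedding matches attributes of the unseen ones. Below is a minimal PyTorch sketch of that split; the dimensions, module layout, and routing threshold are illustrative assumptions, not the authors' DPN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchGZSL(nn.Module):
    """Toy two-branch GZSL head: seen-class prototypes + visual-semantic embedding."""
    def __init__(self, feat_dim, attr_dim, num_seen):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_seen, feat_dim))  # seen-class prototypes
        self.to_semantic = nn.Linear(feat_dim, attr_dim)                 # visual -> attribute space

    def forward(self, feats, unseen_attrs, tau=0.1):
        seen_logits = -torch.cdist(feats, self.prototypes)               # distance to each prototype
        emb = F.normalize(self.to_semantic(feats), dim=-1)
        attrs = F.normalize(unseen_attrs, dim=-1)
        unseen_logits = emb @ attrs.t() / tau                            # cosine match to unseen attributes
        return seen_logits, unseen_logits

model = TwoBranchGZSL(feat_dim=512, attr_dim=85, num_seen=40)
seen_logits, unseen_logits = model(torch.randn(4, 512), torch.randn(10, 85))
# Illustrative routing: a weak best prototype match suggests an unseen-class sample.
is_unseen = seen_logits.max(dim=1).values < -20.0
print(is_unseen.shape, seen_logits.shape, unseen_logits.shape)
```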

Keywords:

transformer-based dual attention; Semantics; domain detection; Generalized zero-shot learning; Visualization; Task analysis; Prototypes; Feature extraction; Image recognition; Transformers

Cite:

GB/T 7714 Hu, Yongli, Feng, Lincong, Jiang, Huajie, et al. Domain-Aware Prototype Network for Generalized Zero-Shot Learning [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (5): 3180-3191.
MLA Hu, Yongli, et al. "Domain-Aware Prototype Network for Generalized Zero-Shot Learning". | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.5 (2024): 3180-3191.
APA Hu, Yongli, Feng, Lincong, Jiang, Huajie, Liu, Mengting, Yin, Baocai. Domain-Aware Prototype Network for Generalized Zero-Shot Learning. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (5), 3180-3191.
Generating Graph-Based Rules for Enhancing Logical Reasoning CPCI-S
Journal article | 2024, 14873, 143-156 | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024

Abstract:

Inductive Knowledge Graph Completion (KGC) poses challenges due to the absence of emerging entities during training. Current methods utilize Graph Neural Networks (GNNs) to learn and propagate entity representations, achieving notable performance. However, these approaches primarily focus on chain-based logical rules, limiting their ability to capture the rich semantics of knowledge graphs. To address this challenge, we propose Generating Graph-based Rules for Enhancing Logical Reasoning (GRELR), a novel framework that leverages graph-based rules for enhanced reasoning. GRELR formulates graph-based rules by extracting relevant subgraphs and fuses them to construct comprehensive relation representations. This approach, combined with subgraph reasoning, significantly improves inference capabilities and showcases the potential of graph-based rules in inductive KGC. To demonstrate the effectiveness of the GRELR framework, we conduct experiments on three benchmark datasets, and our approach achieves state-of-the-art performance.
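
Graph-based rules start from subgraphs around a candidate entity pair rather than from single relation chains. As a rough illustration of that first step, the sketch below extracts the enclosing subgraph of a pair from a toy knowledge graph; the hop count, undirected expansion, and toy triples are assumptions made for illustration, not the GRELR pipeline.

```python
from collections import defaultdict

def build_adj(triples):
    """Undirected adjacency for neighbourhood search over (head, relation, tail) triples."""
    adj = defaultdict(set)
    for h, r, t in triples:
        adj[h].add((r, t))
        adj[t].add((r, h))
    return adj

def k_hop_nodes(adj, seed, k):
    frontier, seen = {seed}, {seed}
    for _ in range(k):
        frontier = {n for node in frontier for _, n in adj[node]} - seen
        seen |= frontier
    return seen

def enclosing_subgraph(triples, head, tail, k=2):
    """Nodes within k hops of BOTH endpoints, plus the edges they induce."""
    adj = build_adj(triples)
    nodes = (k_hop_nodes(adj, head, k) & k_hop_nodes(adj, tail, k)) | {head, tail}
    edges = [(h, r, t) for h, r, t in triples if h in nodes and t in nodes]
    return nodes, edges

kg = [("alice", "works_at", "acme"), ("bob", "works_at", "acme"),
      ("alice", "knows", "bob"), ("acme", "located_in", "paris")]
print(enclosing_subgraph(kg, "alice", "bob", k=1))
```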

Keywords:

Graph-based Rules; Knowledge Graphs; Inductive Knowledge Graph Completion; Subgraph reasoning

Cite:

GB/T 7714 Sun, Kai, Jiang, Huajie, Hu, Yongli, et al. Generating Graph-Based Rules for Enhancing Logical Reasoning [J]. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873: 143-156.
MLA Sun, Kai, et al. "Generating Graph-Based Rules for Enhancing Logical Reasoning". | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024 14873 (2024): 143-156.
APA Sun, Kai, Jiang, Huajie, Hu, Yongli, Yin, Baocai. Generating Graph-Based Rules for Enhancing Logical Reasoning. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873, 143-156.
VQA-PDF: Purifying Debiased Features for Robust Visual Question Answering Task CPCI-S
Journal article | 2024, 14873, 264-277 | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024

Abstract:

With the widespread adoption of deep learning, the performance of Visual Question Answering (VQA) tasks has seen significant improvements. Nonetheless, this progress has unveiled significant challenges concerning their credibility, primarily due to susceptibility to linguistic biases. Such biases can result in considerable declines in performance when faced with out-of-distribution scenarios. Therefore, various debiasing methods have been developed to reduce the impact of linguistic biases, among which causal theory-based methods have attracted great attention due to their theoretical underpinnings and superior performance. However, traditional debiased causal strategies typically remove biases through simple subtraction, which neglects fine-grained bias information and results in incomplete debiasing. To tackle this issue, we propose a fine-grained debiasing method named VQA-PDF, which utilizes the features of the base model to guide the identification of biased features, purifying the debiased features and aiding the base learning process. This method shows significant improvements on the VQA-CP v2, VQA v2, and VQA-CE datasets.
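
The key contrast in the abstract is between removing a bias branch by plain subtraction and removing it under the guidance of the base model's own features. The sketch below expresses that idea with a learned per-answer gate on a question-only branch; the dimensions and the gating rule are illustrative assumptions, not the VQA-PDF architecture.

```python
import torch
import torch.nn as nn

class GatedDebiasHead(nn.Module):
    """Bias branch subtracted per answer, gated by the fused (image+question) features."""
    def __init__(self, fused_dim, q_dim, num_answers):
        super().__init__()
        self.base_head = nn.Linear(fused_dim, num_answers)   # full multimodal model head
        self.bias_head = nn.Linear(q_dim, num_answers)        # question-only (bias) branch
        self.gate = nn.Sequential(nn.Linear(fused_dim, num_answers), nn.Sigmoid())

    def forward(self, fused_feat, q_feat):
        base = self.base_head(fused_feat)
        bias = self.bias_head(q_feat)
        g = self.gate(fused_feat)        # fine-grained, feature-guided gate in [0, 1]
        return base - g * bias           # gated removal instead of plain subtraction

head = GatedDebiasHead(fused_dim=1024, q_dim=768, num_answers=3000)
logits = head(torch.randn(2, 1024), torch.randn(2, 768))
print(logits.shape)                      # torch.Size([2, 3000])
```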

Keywords:

Visual Question Answering; Language Bias; Causal Strategy

Cite:

GB/T 7714 Bi, Yandong, Jiang, Huajie, Liu, Jing, et al. VQA-PDF: Purifying Debiased Features for Robust Visual Question Answering Task [J]. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873: 264-277.
MLA Bi, Yandong, et al. "VQA-PDF: Purifying Debiased Features for Robust Visual Question Answering Task". | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024 14873 (2024): 264-277.
APA Bi, Yandong, Jiang, Huajie, Liu, Jing, Liu, Mengting, Hu, Yongli, Yin, Baocai. VQA-PDF: Purifying Debiased Features for Robust Visual Question Answering Task. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873, 264-277.
Referring Image Segmentation Without Text Annotations CPCI-S
Journal article | 2024, 14873, 278-293 | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024

Abstract:

Referring Image Segmentation (RIS) is an essential topic in visual language understanding that aims to segment the target instance in an image referred to by a language description. Conventional RIS methods have relied on expensive manual annotations involving the triplet (image-text-mask), with the acquisition of text annotations posing the most formidable challenge. To eliminate the heavy dependence on human annotations, we propose a novel RIS method, Referring Image Segmentation without Text Annotations (WoTA), which substitutes textual annotations by generating pseudo-queries from visual information. Specifically, we design a novel training-testing scheme that introduces a Pseudo-Query Generation Scheme (PQGS) in the training phase, which relies on the pre-trained cross-modal knowledge in CLIP to generate pseudo-queries related to global and local visual information. In the testing phase, the CLIP text encoder is applied directly to the test statements to generate real query language features. Extensive experiments on several benchmark datasets demonstrate the advantage of the proposed WoTA over several zero-shot baselines for the task and even over a weakly supervised referring image segmentation method.
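
The pseudo-query idea rests on CLIP's shared image-text space: features of the whole image and of a local region can be matched against candidate text to stand in for a missing annotation. Below is a minimal sketch of that matching step using the Hugging Face checkpoint "openai/clip-vit-base-patch32"; the phrase bank, crop choice, and score fusion are illustrative assumptions, not the PQGS used in WoTA.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["a dog on the left", "a person riding a bike", "a red car", "a cat on a sofa"]

def pseudo_query(image, box):
    crop = image.crop(box)                                   # local region of interest
    inputs = processor(text=phrases, images=[image, crop],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    sim = (img @ txt.t()).mean(dim=0)                        # fuse global + local scores
    return phrases[sim.argmax().item()]

img = Image.new("RGB", (224, 224), color="white")            # placeholder image
print(pseudo_query(img, box=(0, 0, 112, 112)))
```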

Keywords:

Without Text Annotation; Pseudo-Query; Referring Image Segmentation

Cite:

GB/T 7714 Liu, Jing, Jiang, Huajie, Bi, Yandong, et al. Referring Image Segmentation Without Text Annotations [J]. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873: 278-293.
MLA Liu, Jing, et al. "Referring Image Segmentation Without Text Annotations". | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024 14873 (2024): 278-293.
APA Liu, Jing, Jiang, Huajie, Bi, Yandong, Hu, Yongli, Yin, Baocai. Referring Image Segmentation Without Text Annotations. | ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XII, ICIC 2024, 2024, 14873, 278-293.
Cross-modal Multiple Granularity Interactive Fusion Network for Long Document Classification SCIE
Journal article | 2024, 18 (4) | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA

Abstract:

Long Document Classification (LDC) has attracted great attention in Natural Language Processing and achieved considerable progress owing to large-scale pre-trained language models. Nevertheless, as a problem distinct from traditional text classification, LDC is far from settled. Long documents, such as news and articles, generally contain thousands of words with complex structures. Moreover, compared with flat text, long documents usually contain multi-modal content such as images, which provides rich information that has not yet been utilized for classification. In this article, we propose a novel cross-modal method for long document classification, in which multiple granularity feature shifting networks are proposed to adaptively integrate the multi-scale text and visual features of long documents. Additionally, a multi-modal collaborative pooling block is proposed to eliminate redundant fine-grained text features and simultaneously reduce the computational complexity. To verify the effectiveness of the proposed model, we conduct experiments on the Food101 dataset and two constructed multi-modal long document datasets. The experimental results show that the proposed cross-modal method outperforms the single-modal text methods and the state-of-the-art multi-modal baselines.
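
The two ingredients named above are cross-modal fusion of text and image features and a pooling step that discards redundant fine-grained text features. The sketch below lets image tokens attend over sentence features and keeps only the most-attended sentences before classification; the top-k selection and dimensions are illustrative assumptions, not the paper's collaborative pooling block.

```python
import torch
import torch.nn as nn

class CrossModalPoolClassifier(nn.Module):
    def __init__(self, dim=256, num_classes=10, keep=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cls = nn.Linear(2 * dim, num_classes)
        self.keep = keep

    def forward(self, sent_feats, img_feats):
        # Image tokens query the (much longer) sentence sequence.
        fused, weights = self.attn(img_feats, sent_feats, sent_feats)
        scores = weights.mean(dim=1)                          # attention received per sentence
        idx = scores.topk(self.keep, dim=-1).indices          # keep the k most useful sentences
        kept = torch.gather(sent_feats, 1,
                            idx.unsqueeze(-1).expand(-1, -1, sent_feats.size(-1)))
        doc = torch.cat([kept.mean(dim=1), fused.mean(dim=1)], dim=-1)
        return self.cls(doc)

model = CrossModalPoolClassifier()
logits = model(torch.randn(2, 200, 256), torch.randn(2, 8, 256))
print(logits.shape)                                           # torch.Size([2, 10])
```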

Keywords:

Long document classification; multi-modal collaborative pooling; cross-modal multi-granularity interactive fusion

Cite:

GB/T 7714 Liu, Tengfei, Hu, Yongli, Gao, Junbin, et al. Cross-modal Multiple Granularity Interactive Fusion Network for Long Document Classification [J]. | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (4).
MLA Liu, Tengfei, et al. "Cross-modal Multiple Granularity Interactive Fusion Network for Long Document Classification". | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 18.4 (2024).
APA Liu, Tengfei, Hu, Yongli, Gao, Junbin, Sun, Yanfeng, Yin, Baocai. Cross-modal Multiple Granularity Interactive Fusion Network for Long Document Classification. | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (4).
Image-Based Structured Vehicle Behavior Analysis Inspired by Interactive Cognition SCIE
Journal article | 2024, 26, 9121-9134 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract:

Vehicle behavior analysis has gradually developed by utilizing trajectories and motion features to characterize on-road behavior. However, existing methods analyze the behavior of each vehicle individually, ignoring the interaction between vehicles. According to the theory of interactive cognition, vehicle-to-vehicle interaction is an indispensable feature of future autonomous driving, just as interaction is universally required in traditional driving. Therefore, we place vehicle behavior analysis in the context of a vehicle interaction scene, where the self-vehicle observes the behavior category and degree of the other vehicle about to interact with it, predicts whether that vehicle will pass through the intersection first or later, and then decides to pass through or wait. Inspired by interactive cognition, we develop a general framework for Structured Vehicle Behavior Analysis (StruVBA) and derive a new model of Structured Fully Convolutional Networks (StruFCN). Moreover, both Intersection over Union (IoU) and False Negative Rate (FNR) are adopted to measure the similarity between the predicted behavior degree and the ground truth. Experimental results illustrate that the proposed method achieves higher prediction accuracy than most existing methods, while predicting vehicle behavior with richer visual meaning. In addition, it also provides an example of modeling the interaction between vehicles and a verification of interactive cognition theory.
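
Since predicted behavior degree is compared with the ground truth via IoU and FNR, the snippet below shows how those two standard measures are computed on binary masks; the mask encoding is only a stand-in, as the structured-label format itself is not spelled out in the abstract.

```python
import numpy as np

def iou_and_fnr(pred, gt, eps=1e-8):
    """Intersection over Union and False Negative Rate for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()      # ground-truth pixels that were missed
    return inter / (union + eps), fn / (gt.sum() + eps)

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 3:7] = 1
print(iou_and_fnr(pred, gt))                  # moderate IoU, some missed pixels
```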

Keywords:

Cognition; vehicle-to-vehicle interaction; Structured vehicle behavior analysis; Analytical models; Roads; Junctions; Vehicular ad hoc networks; structured fully convolutional networks; structured label; Trajectory; interactive cognition; Turning

Cite:

GB/T 7714 Mou, Luntian, Xie, Haitao, Mao, Shasha, et al. Image-Based Structured Vehicle Behavior Analysis Inspired by Interactive Cognition [J]. | IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 9121-9134.
MLA Mou, Luntian, et al. "Image-Based Structured Vehicle Behavior Analysis Inspired by Interactive Cognition". | IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 9121-9134.
APA Mou, Luntian, Xie, Haitao, Mao, Shasha, Yan, Dandan, Ma, Nan, Yin, Baocai, et al. Image-Based Structured Vehicle Behavior Analysis Inspired by Interactive Cognition. | IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26, 9121-9134.
Mixed-Modality Clustering via Generative Graph Structure Matching SCIE
Journal article | 2024, 36 (12), 8773-8786 | IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING

Abstract:

The goal of mixed-modality clustering, which differs from typical multi-modality/view clustering, is to divide samples derived from various modalities into several clusters. This task has to solve two critical semantic gap problems: i) how to generate the missing modalities without pairwise-modality data; and ii) how to align the representations of heterogeneous modalities. To tackle these problems, this paper proposes a novel mixed-modality clustering model, which integrates missing-modality generation and heterogeneous modality alignment into a unified framework. During the missing-modality generation process, a bidirectional mapping is established between different modalities, enabling the generation of preliminary representations for the missing modality using information from the other modality. Intra-modality bipartite graphs are then constructed to help generate better missing-modality representations by weighted aggregation of existing intra-modality neighbors. In this way, a pairwise-modality representation can be obtained for each sample. In the heterogeneous modality alignment process, each modality is modelled as a graph to capture the global structure among intra-modality samples and is aligned against the heterogeneous modality representations through an adaptive heterogeneous graph matching module. Experimental results on three public datasets show the effectiveness of the proposed model compared to multiple state-of-the-art multi-modality/view clustering methods.
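
The neighbour-aggregation step can be pictured as borrowing a missing modality from intra-modality neighbours that do have it. The sketch below does this with a distance-weighted kNN average; the weighting scheme and toy data are illustrative assumptions, not the paper's bipartite-graph construction or bidirectional mapping.

```python
import numpy as np

def impute_missing_modality(feat_a, feat_b, has_b, k=3):
    """feat_a observed for all samples; feat_b valid only where has_b is True."""
    donors = np.where(has_b)[0]
    out = feat_b.copy()
    for i in np.where(~has_b)[0]:
        d = np.linalg.norm(feat_a[donors] - feat_a[i], axis=1)
        order = np.argsort(d)[:k]
        nbrs, w = donors[order], 1.0 / (d[order] + 1e-8)
        out[i] = (w[:, None] * feat_b[nbrs]).sum(0) / w.sum()   # weighted neighbour average
    return out

rng = np.random.default_rng(0)
feat_a = rng.normal(size=(6, 4))                 # modality A, observed for every sample
feat_b = rng.normal(size=(6, 3))                 # modality B, missing for the last two samples
has_b = np.array([True, True, True, True, False, False])
print(impute_missing_modality(feat_a, feat_b, has_b).shape)      # (6, 3)
```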

Keywords:

multi-view clustering; Web sites; adaptive graph structure learning; Data models; Bipartite graph; Semantics; Correlation; heterogeneous graph matching; Task analysis; Feature extraction; Mixed-modality clustering

Cite:

GB/T 7714 He, Xiaxia, Wang, Boyue, Gao, Junbin, et al. Mixed-Modality Clustering via Generative Graph Structure Matching [J]. | IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12): 8773-8786.
MLA He, Xiaxia, et al. "Mixed-Modality Clustering via Generative Graph Structure Matching". | IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 36.12 (2024): 8773-8786.
APA He, Xiaxia, Wang, Boyue, Gao, Junbin, Wang, Qianqian, Hu, Yongli, Yin, Baocai. Mixed-Modality Clustering via Generative Graph Structure Matching. | IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12), 8773-8786.
VIG: Visual Information-Guided Knowledge-Based Visual Question Answering CPCI-S
Journal article | 2024, 1086-1091 | PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024

Abstract:

An effective knowledge-based visual question answering (KBVQA) model needs to rely on visual features, question features, and related external knowledge to solve an open visual question answering task. Although existing knowledge-based visual question answering works have made progress, the following challenges remain: 1) Visual feature information is seriously underused. An image is worth a thousand words, and relying only on converted salient text makes it difficult to express the rich information of the original image. 2) The acquired external knowledge is not comprehensive enough, lacking knowledge retrieved directly from visual feature information. To address these challenges, we propose a Visual Information-Guided knowledge-based visual question answering (VIG) model, which fully exploits visual feature information. Specifically: 1) we introduce multi-granularity visual information that comprehensively characterizes the visual features; 2) we consider not only the knowledge retrieved through text information but also the knowledge retrieved directly from visual feature information. Finally, we feed the visual features and the retrieved textual knowledge into an encoder-decoder module to generate an answer. We perform extensive experiments on the OKVQA dataset and achieve state-of-the-art performance of 60.27% accuracy.
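
The second point amounts to running retrieval twice, once from the question embedding and once from the image embedding, and merging the results before answer generation. The toy sketch below shows that union step with stand-in embeddings and a three-entry knowledge bank; everything in it is an illustrative assumption rather than the VIG retrieval module.

```python
import torch
import torch.nn.functional as F

knowledge = ["zebras eat grass", "the eiffel tower is in paris", "cats are mammals"]
kb_emb = F.normalize(torch.randn(len(knowledge), 64), dim=-1)    # stand-in passage embeddings

def retrieve(query_emb, k=2):
    sim = F.normalize(query_emb, dim=-1) @ kb_emb.t()
    return [knowledge[i] for i in sim.topk(k, dim=-1).indices[0].tolist()]

q_emb = torch.randn(1, 64)   # question embedding (e.g., from a text encoder)
v_emb = torch.randn(1, 64)   # visual embedding (e.g., from an image encoder)
candidates = list(dict.fromkeys(retrieve(q_emb) + retrieve(v_emb)))   # de-duplicated union
print(candidates)            # passages that would be fed to the encoder-decoder
```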

Keywords:

Visual Information-Guided; External Knowledge; Knowledge-Based VQA

Cite:

GB/T 7714 Liu, Heng, Wang, Boyue, Sun, Yanfeng, et al. VIG: Visual Information-Guided Knowledge-Based Visual Question Answering [J]. | PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024: 1086-1091.
MLA Liu, Heng, et al. "VIG: Visual Information-Guided Knowledge-Based Visual Question Answering". | PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024 (2024): 1086-1091.
APA Liu, Heng, Wang, Boyue, Sun, Yanfeng, Li, Xiaoyan, Hu, Yongli, Yin, Baocai. VIG: Visual Information-Guided Knowledge-Based Visual Question Answering. | PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, 1086-1091.
Multi-graph Fusion and Virtual Node Enhanced Graph Neural Networks CPCI-S
Journal article | 2024, 15020, 190-201 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V

Abstract:

Graph Neural Networks (GNNs) have emerged as a dominant tool for effectively learning from graph data, leveraging their remarkable learning capabilities. However, many GNN-based techniques assume complete and accurate graph relations. Unfortunately, this assumption often diverges from reality, as real-world scenarios frequently exhibit missing and erroneous edges. Consequently, GNNs that rely solely on the original graph structure inevitably yield suboptimal results. To address this challenge, we propose a novel approach known as Multi-graph fusion and Virtual node enhanced Graph Neural Networks (MVGNN). Initially, we introduce an adaptive graph that complements the original and feature graphs; it serves to bridge gaps in those graphs, capturing missing edges and refining the graph structure. Subsequently, we merge the original, feature, and adaptive graphs by applying attention mechanisms. In addition, MVGNN strategically designs virtual nodes, which act as auxiliary elements, changing the propagation mode across low-weighted edges and further enhancing the robustness of the model. The proposed MVGNN is evaluated on six benchmark datasets, demonstrating its superiority over existing state-of-the-art classification methods.
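
The fusion step combines three adjacency matrices (original, feature-derived, adaptive) with learned weights, and virtual nodes give every real node an extra propagation path. The sketch below builds those pieces with stand-in tensors; the kNN feature graph, softmax fusion weights, and single virtual node are illustrative assumptions, not the MVGNN propagation scheme.

```python
import torch
import torch.nn.functional as F

def knn_graph(x, k=2):
    sim = F.normalize(x, dim=-1) @ F.normalize(x, dim=-1).t()
    idx = sim.topk(k + 1, dim=-1).indices[:, 1:]              # drop self-similarity
    adj = torch.zeros_like(sim)
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()

n, d = 5, 8
x = torch.randn(n, d)
a_orig = (torch.rand(n, n) > 0.6).float()                     # given (possibly noisy) graph
a_feat = knn_graph(x)                                          # graph built from node features
a_adapt = torch.sigmoid(torch.randn(n, n))                     # stands in for a learnable adaptive graph

alpha = F.softmax(torch.randn(3), dim=0)                       # stands in for attention fusion weights
a_fused = alpha[0] * a_orig + alpha[1] * a_feat + alpha[2] * a_adapt

a_full = torch.ones(n + 1, n + 1)                              # one virtual node linked to all real nodes
a_full[:n, :n] = a_fused
print(a_full.shape)                                            # torch.Size([6, 6])
```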

Keywords:

Graph Convolutional Networks; Robustness; Virtual nodes; Classification

Cite:

GB/T 7714 Yang, Yachao, Sun, Yanfeng, Guo, Jipeng, et al. Multi-graph Fusion and Virtual Node Enhanced Graph Neural Networks [J]. | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V, 2024, 15020: 190-201.
MLA Yang, Yachao, et al. "Multi-graph Fusion and Virtual Node Enhanced Graph Neural Networks". | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V 15020 (2024): 190-201.
APA Yang, Yachao, Sun, Yanfeng, Guo, Jipeng, Wang, Shaofan, Yin, Baocai. Multi-graph Fusion and Virtual Node Enhanced Graph Neural Networks. | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V, 2024, 15020, 190-201.
Hierarchical Multi-Granularity Interaction Graph Convolutional Network for Long Document Classification SCIE
Journal article | 2024, 32, 1762-1775 | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING

Abstract:

With the growing demand for text analytics, long document classification (LDC) has received extensive attention, and great progress has been made. To reveal the complex structure and extract intrinsic features, current approaches focus on modeling long sequences with sparse attention or on partially representing word-sentence or word-section relations. However, the full hierarchical structure of long documents, from words and sentences to sections, remains relatively unexplored. For this purpose, we propose a novel Hierarchical Multi-granularity Interaction Graph Convolutional Network (HMIGCN) for long document classification, in which three graphs of different granularity, i.e., a section graph, a sentence graph and a word graph, are constructed hierarchically. The section graph encapsulates the macrostructure of a long document, while the sentence and word graphs delve into the document's microstructure. Notably, within the sentence graph, we introduce a Global-Local Graph Convolutional (GLGC) block to adaptively capture both global and local dependency structures among sentence nodes. Additionally, to integrate the three graph networks as a whole, two well-designed techniques, namely a section-guided pooling block and a transfer fusion block, are proposed to train the model jointly, with each promoting the other. Extensive experiments on five long document datasets show that our model outperforms the existing state-of-the-art LDC models.
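
The GLGC block operates on two kinds of sentence-level structure: local links between neighbouring sentences and global links between semantically similar ones. The sketch below builds both adjacency matrices from sentence embeddings; the window size, similarity threshold, and random embeddings are illustrative assumptions, not the GLGC block itself.

```python
import torch
import torch.nn.functional as F

def sentence_graphs(sent_emb, window=1, sim_thresh=0.5):
    n = sent_emb.size(0)
    local = torch.zeros(n, n)
    for i in range(n):                                        # connect sentences within a window
        lo, hi = max(0, i - window), min(n, i + window + 1)
        local[i, lo:hi] = 1.0
    sim = F.normalize(sent_emb, dim=-1) @ F.normalize(sent_emb, dim=-1).t()
    global_adj = (sim > sim_thresh).float()                   # connect semantically similar sentences
    return local, global_adj

sent_emb = torch.randn(6, 32)                                 # one embedding per sentence
local_adj, global_adj = sentence_graphs(sent_emb)
print(local_adj.shape, global_adj.sum().item())
```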

Keywords:

Speech processing; Computational modeling; Long document classification; Convolutional neural networks; hierarchical multi-granularity interaction graph convolutional network; Context modeling; Adaptation models; Task analysis; hierarchical graph pooling; global-local graph convolution; Transformers

Cite:

GB/T 7714 Liu, Tengfei, Hu, Yongli, Gao, Junbin, et al. Hierarchical Multi-Granularity Interaction Graph Convolutional Network for Long Document Classification [J]. | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32: 1762-1775.
MLA Liu, Tengfei, et al. "Hierarchical Multi-Granularity Interaction Graph Convolutional Network for Long Document Classification". | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 32 (2024): 1762-1775.
APA Liu, Tengfei, Hu, Yongli, Gao, Junbin, Sun, Yanfeng, Yin, Baocai. Hierarchical Multi-Granularity Interaction Graph Convolutional Network for Long Document Classification. | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32, 1762-1775.