Abstract
Existing multi-hop knowledge graph question answering (KGQA) methods attempt to mitigate knowledge graph (KG) sparsity by introducing external text repositories rather than leveraging the question-answer information itself; as a result, they ignore both the semantic gap between the question modality and the knowledge graph modality and the role that neighboring entities play in selecting the best answer. To address these problems, we propose a Joint Reasoning-based Embedded Multi-hop KGQA (JREM-KGQA) method with three key innovations: 1) Early Joint Embedding. We construct a Question Answering-Knowledge Graph-Collaborative Work Diagram (QA-KG-CWD) and train it with a knowledge graph embedding (KGE) model, which not only alleviates KG sparsity but also strengthens the model's long-path reasoning ability. 2) Semantic Fusion Module. We narrow the semantic gap between the question modality and the knowledge graph modality through a semantic fusion module to enable more effective reasoning. 3) Node Relevance Scoring. We employ three node relevance scoring strategies to ensure that the best answer is selected from the large knowledge graph. We evaluate our model on the MetaQA and PQL datasets and compare it with existing methods. The results demonstrate that our model outperforms them in long-path reasoning ability, mitigation of KG sparsity, and overall performance. Our model's source code is available on GitHub: https://github.com/feixiongfeixiong/JREM-KGQA
