Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment

Yechun Tang, Xiaoxia Cheng, Weiming Lu
Dec 2022
Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions. However, there is a significant semantic and structural gap between natural language and action sequences, which makes this conversion difficult. In this paper, we introduce an alignment-enhanced complex question answering framework, called ALCQA, which mitigates this gap through question-to-action alignment and question-to-question alignment. We train a question rewriting model to align the question and each action, and utilize a pretrained language model to implicitly align the question and KG artifacts. Moreover, considering that similar questions correspond to similar action sequences, we retrieve the top-k similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided action sequence selection strategy to choose among candidate action sequences. We conduct experiments on the CQA and WQSP datasets, and the results show that our approach outperforms state-of-the-art methods, obtaining a 9.88% improvement in the F1 metric on the CQA dataset. Our source code is available at
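The retrieve-then-select idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the bag-of-words cosine similarity stands in for whatever question encoder ALCQA uses, and `reward_fn` is a hypothetical placeholder for its reward-guided scoring of candidate action sequences.

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity between two questions
    (a stand-in for a learned question encoder)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(va[t] * vb[t] for t in set(va) & set(vb))
    den = (math.sqrt(sum(v * v for v in va.values()))
           * math.sqrt(sum(v * v for v in vb.values())))
    return num / den if den else 0.0

def retrieve_top_k(query: str, qa_pairs, k: int = 2):
    """Question-to-question alignment at inference: return the k stored
    (question, action_sequence) pairs most similar to the query."""
    return sorted(qa_pairs, key=lambda p: cosine_sim(query, p[0]),
                  reverse=True)[:k]

def select_by_reward(candidates, reward_fn):
    """Reward-guided selection: keep the candidate action sequence
    that scores highest under a (here hypothetical) reward function."""
    return max(candidates, key=reward_fn)

# Toy usage with made-up data:
qa_pairs = [
    ("how many rivers flow through China", "Select(rivers) -> Count"),
    ("who directed the film Titanic", "Select(director)"),
]
top = retrieve_top_k("how many rivers flow through India", qa_pairs, k=1)
best = select_by_reward([p[1] for p in qa_pairs], reward_fn=len)
```

The similarity function and reward here are deliberately trivial; the point is only the two-stage shape (retrieve similar question-answer pairs, then pick one candidate action sequence by reward) that the abstract describes.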