A Survey of Knowledge-Enhanced Pre-trained Language Models

Linmei Hu, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, Juanzi Li
Nov 2022
Abstract
Pre-trained Language Models (PLMs), which are trained on large text corpora through self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations such as poor reasoning ability due to the lack of external knowledge. Incorporating knowledge into PLMs has been explored to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight the focus of these two kinds of tasks. For NLU, we take several types of knowledge into account and divide them into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions of KE-PLMs.