Towards Linguistically Informed Multi-Objective Pre-Training for Natural Language Inference

Maren Pielka, Svetlana Schmidt, Lisa Pucknat, Rafet Sifa
Dec 2022
Abstract
We introduce a linguistically enhanced combination of pre-training methods for transformers. The pre-training objectives include POS-tagging, synset prediction based on semantic knowledge graphs, and parent prediction based on dependency parse trees. Our approach achieves competitive results on the Natural Language Inference task, compared to the state of the art. Specifically for smaller models, the method results in a significant performance boost, emphasizing the fact that intelligent pre-training can make up for fewer parameters and help build more efficient models. Combining POS-tagging and synset prediction yields the overall best results.
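To make the multi-objective idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: a shared transformer encoder with one token-level classification head per linguistic objective (POS tag, synset, dependency parent index), trained on a weighted sum of cross-entropy losses. All hyperparameters, class counts, and loss weights are illustrative assumptions.

```python
# Hypothetical sketch of linguistically informed multi-objective pre-training.
# A shared encoder feeds three token-level heads; dimensions and class counts
# are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

class MultiObjectivePretrainer(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_pos_tags=17,
                 n_synsets=5000, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # One linear head per objective, all predicting per token.
        self.pos_head = nn.Linear(d_model, n_pos_tags)    # POS tagging
        self.synset_head = nn.Linear(d_model, n_synsets)  # synset prediction
        self.parent_head = nn.Linear(d_model, max_len)    # dependency parent index

    def forward(self, token_ids):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.encoder(self.embed(token_ids) + self.pos_embed(positions))
        return self.pos_head(h), self.synset_head(h), self.parent_head(h)

def multi_objective_loss(logits, labels, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of per-objective token-level cross-entropy losses
    (equal weights are an assumption)."""
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    return sum(w * ce(l.flatten(0, 1), y.flatten())
               for w, l, y in zip(weights, logits, labels))

# Toy usage: a batch of 2 sequences of length 16 with random labels.
model = MultiObjectivePretrainer()
tokens = torch.randint(0, 30522, (2, 16))
labels = (torch.randint(0, 17, (2, 16)),    # POS tag ids
          torch.randint(0, 5000, (2, 16)),  # synset ids
          torch.randint(0, 16, (2, 16)))    # parent positions
loss = multi_objective_loss(model(tokens), labels)
loss.backward()
```

Sharing one encoder across objectives is the standard multi-task pattern this kind of pre-training relies on: the linguistic supervision shapes the shared representations, which can then be fine-tuned for NLI.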