Question Answering

Question Answering is the task of answering questions, typically reading comprehension questions posed over a provided context, while abstaining when a question cannot be answered from that context. The task can be segmented into domain-specific variants such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include [SQuAD](/dataset/squad), [HotPotQA](/dataset/hotpotqa), [bAbI](/dataset/babi-1), [TriviaQA](/dataset/triviaqa), [WikiQA](/dataset/wikiqa), and many others. Question answering models are typically evaluated with exact match (EM) and F1 metrics. Some recent top-performing models are T5 and XLNet. Source: [SQuAD](https://rajpurkar.github.io/mlx/qa-and-squad/)
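To make the EM and F1 metrics concrete, here is a minimal Python sketch that mirrors the scoring logic of the official SQuAD evaluation script: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and gold answer. The helper names below are illustrative, not part of any particular library.

```python
# SQuAD-style EM and F1 scoring (minimal sketch; helper names are illustrative).
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> int:
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 over the normalized answers."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1 after normalization
print(round(f1_score("in Paris, France", "Paris"), 2))  # partial overlap -> 0.5
```

In benchmark practice, each prediction is scored against every reference answer and the maximum EM/F1 is taken, then averaged over the dataset.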
Related topics: BERT, Knowledge Graphs, NER, Machine Translation, Passage Retrieval, Reading Comprehension, Answer Selection, Text Summarization, Relation Extraction, Open-Domain Question Answering


Key Scholars

| Scholar | Citations | Papers |
| --- | --- | --- |
| Yoshua Bengio | 429,868 | 1,063 |
| Yi Chen | 267,689 | 4,684 |
| Ilya Sutskever | 165,856 | 113 |
| Ross Girshick | 150,810 | 165 |
| Michael I. Jordan | 150,356 | 1,056 |
| Lotfi A. Zadeh | 128,623 | 362 |
| Christopher D. Manning | 123,173 | 515 |
| Jiawei Han | 121,361 | 1,269 |
| Trevor Darrell | 121,211 | 688 |
| Jitendra Malik | 118,374 | 531 |