
Exploring the sequence length bottleneck in the Transformer for Image Captioning

Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi
Jul 2022
Abstract
Most recent state-of-the-art architectures rely on combinations and variations of three approaches: convolutional, recurrent and self-attentive methods. Our work attempts to lay the basis for a new research direction in sequence modeling based upon the idea of modifying the sequence length. To that end, we propose a new method called the "Expansion Mechanism", which transforms the input sequence, either dynamically or statically, into a new one featuring a different sequence length. Furthermore, we introduce a novel architecture that exploits this method and achieves competitive performance on the MS-COCO 2014 dataset, yielding 134.6 and 131.4 CIDEr-D on the Karpathy test split in the ensemble and single-model configurations respectively, and 130 CIDEr-D on the official online evaluation server, despite being neither recurrent nor fully attentive. At the same time, we address the efficiency aspect in our design and introduce a convenient training strategy suitable for most computational resources, in contrast to the standard one. Source code is available at https://github.com/jchenghu/ExpansionNet
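To give a concrete feel for the core idea, the minimal sketch below shows one possible form of a static expansion: a layer that maps an input sequence of length L onto a fixed target length via learned "slot" vectors. All class and parameter names here (StaticExpansionBlock, expanded_len, etc.) are illustrative assumptions, not the paper's actual ExpansionNet implementation; see the linked repository for the authors' code.

```python
# Hypothetical sketch of a static expansion layer (illustrative only, not the
# paper's implementation): the input sequence of length L is mixed into a new
# sequence of a fixed, different length via learned slot queries.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StaticExpansionBlock(nn.Module):
    def __init__(self, d_model: int, expanded_len: int):
        super().__init__()
        # Learned slot queries define the new, fixed sequence length.
        self.slots = nn.Parameter(torch.randn(expanded_len, d_model) * d_model ** -0.5)
        self.key_proj = nn.Linear(d_model, d_model)
        self.value_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, d_model) -> (batch, expanded_len, d_model)
        keys = self.key_proj(x)                       # (B, L, D)
        values = self.value_proj(x)                   # (B, L, D)
        # Similarity between each learned slot and each input token.
        scores = torch.einsum('ed,bld->bel', self.slots, keys)
        weights = F.softmax(scores / keys.size(-1) ** 0.5, dim=-1)
        expanded = torch.einsum('bel,bld->bed', weights, values)
        return self.out_proj(expanded)


if __name__ == "__main__":
    block = StaticExpansionBlock(d_model=512, expanded_len=96)
    features = torch.randn(2, 49, 512)   # e.g. a 7x7 grid of image features
    print(block(features).shape)         # torch.Size([2, 96, 512])
```

A dynamic variant would instead derive the target length from the input itself rather than fixing it in advance; the sketch only illustrates the length-changing principle described in the abstract.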