Jump to Conclusions: Short-Cutting Transformers With Linear Transformations

Alexander Yom Din, Taelin Karidi, Leshem Choshen, Mor Geva
Mar 2023
Abstract
Transformer-based language models (LMs) create hidden representations of their inputs at every layer, but only use final-layer representations for prediction. This obscures the internal decision-making process of the model and the utility of its intermediate representations. One way to elucidate this is to cast the hidden representations as final representations, bypassing the transformer computation in-between. In this work, we suggest a simple method for such casting, by using linear transformations. We show that our approach produces more accurate approximations than the prevailing practice of inspecting hidden representations from all layers in the space of the final layer. Moreover, in the context of language modeling, our method allows "peeking" into early layer representations of GPT-2 and BERT, showing that often LMs already predict the final output in early layers. We then demonstrate the applicability of our method to recent early exit strategies, showing that when aiming, for example, at retention of 95% accuracy, our approach saves an additional 7.9% of layers for GPT-2 and 5.4% of layers for BERT, on top of the savings of the original approach. Last, we extend our method to linearly approximate sub-modules, finding that attention is most tolerant to this change.
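To make the casting idea concrete, here is a minimal sketch, not the authors' released code: it fits a linear map that casts layer-l hidden states of GPT-2 into final-layer space, then decodes through the LM head to "peek" at early-layer predictions. The demo corpus, the choice of layer 6, and the plain least-squares fit are assumptions made for illustration; the paper's training setup may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

texts = ["The capital of France is", "Machine learning is a field of"]
layer = 6  # hypothetical early layer to short-cut from (GPT-2 small has 12)

# Collect (hidden_l, hidden_final) pairs over all token positions.
# Note: in HF GPT-2, hidden_states[-1] already has the final LayerNorm applied.
X_rows, Y_rows = [], []
with torch.no_grad():
    for t in texts:
        ids = tok(t, return_tensors="pt").input_ids
        hs = model(ids, output_hidden_states=True).hidden_states
        X_rows.append(hs[layer].squeeze(0))
        Y_rows.append(hs[-1].squeeze(0))
X, Y = torch.cat(X_rows), torch.cat(Y_rows)  # each (N, 768)

# Least-squares fit of Y ≈ X @ A. A real fit needs N >> 768 token positions;
# this tiny corpus is only for shape checking.
A = torch.linalg.lstsq(X, Y).solution  # (768, 768)

# Cast a layer-l representation forward and decode it with the LM head,
# skipping blocks l+1..L. The identity map (A = I) recovers the "logit lens"
# baseline that the abstract compares against.
with torch.no_grad():
    ids = tok("The capital of France is", return_tensors="pt").input_ids
    hs = model(ids, output_hidden_states=True).hidden_states
    logits = model.lm_head(hs[layer] @ A)
    print(tok.decode(logits[0, -1].argmax().item()))
```

A per-layer matrix A can be fit once offline and reused at inference time, which is what makes the method cheap enough to combine with early-exit strategies.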