DAE-Former: Dual Attention-guided Efficient Transformer for Medical Image Segmentation

Reza Azad, René Arimond, Ehsan Khodapanah Aghdam, Amirhosein Kazerouni, Dorit Merhof
Dec 2022
Abstract
Transformers have recently gained attention in the computer vision domain due to their ability to model long-range dependencies. However, the self-attention mechanism, which is the core part of the Transformer model, usually suffers from quadratic computational complexity with respect to the number of tokens. Many architectures attempt to reduce model complexity by limiting the self-attention mechanism to local regions or by redesigning the tokenization process. In this paper, we propose DAE-Former, a novel method that seeks to provide an alternative perspective by efficiently designing the self-attention mechanism. More specifically, we reformulate the self-attention mechanism to capture both spatial and channel relations across the whole feature dimension while staying computationally efficient. Furthermore, we redesign the skip connection path by including the cross-attention module to ensure feature reusability and enhance the localization power. Our method outperforms state-of-the-art methods on multi-organ cardiac and skin lesion segmentation datasets without requiring pre-training weights. The code is publicly available at https://github.com/mindflow-institue/DAEFormer.
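For intuition, the sketch below illustrates the two kinds of attention the abstract describes: an efficient spatial attention whose cost is linear in the number of tokens, and a channel (transpose) attention that attends across the feature dimension. The module names, layer choices, and scaling factor are illustrative assumptions, not the official DAE-Former implementation; the actual model is in the linked repository.

```python
# Minimal sketch (not the official DAE-Former code) of linear-complexity
# spatial attention and channel (transpose) attention, as summarized in the
# abstract. All names here are hypothetical.
import torch
import torch.nn as nn


class EfficientSpatialAttention(nn.Module):
    """Linear-complexity attention: the (dim x dim) context softmax(K)^T V is
    aggregated first, so cost scales with tokens * dim^2, not tokens^2 * dim."""

    def __init__(self, dim):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q = q.softmax(dim=-1)                  # normalize queries over channels
        k = k.softmax(dim=-2)                  # normalize keys over tokens
        context = k.transpose(-2, -1) @ v      # (batch, dim, dim)
        out = q @ context                      # (batch, tokens, dim)
        return self.proj(out)


class ChannelAttention(nn.Module):
    """Transpose attention: a (dim x dim) affinity between channels replaces
    the usual (tokens x tokens) spatial affinity."""

    def __init__(self, dim):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = (q.transpose(-2, -1) @ k) / (x.shape[1] ** 0.5)
        attn = attn.softmax(dim=-1)            # (batch, dim, dim)
        out = v @ attn                          # mix channels of each token
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 196, 64)                # e.g. 14x14 patch tokens, 64 channels
    y = ChannelAttention(64)(EfficientSpatialAttention(64)(x))
    print(y.shape)                             # torch.Size([2, 196, 64])
```

Applying the two modules in sequence, as in the toy example above, is only one way to combine spatial and channel attention; how the paper actually fuses them (and how cross-attention is used along the skip connections) is specified in the full text and the repository.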
