Masked autoencoders is an effective solution to transformer data-hungry

Jiawei Mao, Honggu Zhou, Xuesong Yin, Yuanqi Chang, Binling Nie, Rui Xu
Dec 2022
Abstract
Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) in several vision tasks thanks to their global modeling capability. However, ViT lacks the inductive bias inherent to convolution, so it requires a large amount of training data. As a result, ViT does not perform as well as CNNs on small datasets, such as those in medicine and science. We experimentally found that masked autoencoders (MAE) can make the transformer focus more on the image itself, alleviating ViT's data-hungry issue to some extent. Yet the current MAE model is too complex, causing over-fitting on small datasets, so a gap remains between MAEs trained on small datasets and advanced CNN models. We therefore investigated how to reduce the decoder complexity in MAE and found an architectural configuration better suited to small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance characteristics into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows MAE's class token to be trained, something most MAE improvement efforts do not consider. Extensive experiments show that our method achieves state-of-the-art performance on standard small datasets as well as medical datasets with few samples, compared to popular masked image modeling (MIM) methods and vision transformers designed for small datasets. The code and models are available at https://github.com/Talented-Q/SDMAE.
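To make the abstract's ingredients concrete, here is a minimal PyTorch sketch of the general recipe it describes: MAE-style random patch masking, a deliberately shallow decoder, and a contrastive projection head on the class token. This is not the authors' released code (see the linked repo); the class `TinyMAE`, the layer sizes, and the head dimensions are all illustrative assumptions, and the location prediction task is omitted.

```python
# Hypothetical sketch, not the authors' implementation: MAE-style masking,
# a lightweight (1-block) decoder, and a contrastive head on the class token.
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_masking(tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens, MAE-style.

    tokens: (B, N, D) patch embeddings (class token excluded).
    Returns the visible tokens and the indices needed to restore order.
    """
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)   # one random score per patch
    ids_shuffle = noise.argsort(dim=1)               # random permutation
    ids_restore = ids_shuffle.argsort(dim=1)         # inverse permutation
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore


class TinyMAE(nn.Module):
    """Encoder plus an intentionally small decoder, following the abstract's
    point that decoder complexity should be reduced for small datasets."""

    def __init__(self, dim=192, enc_depth=12, dec_depth=1, n_heads=3):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim,
                                               batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, enc_depth)
        self.decoder = nn.TransformerEncoder(dec_layer, dec_depth)  # shallow
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.proj_head = nn.Linear(dim, 128)         # contrastive projection

    def forward(self, patches, mask_ratio=0.75):
        B, N, D = patches.shape
        visible, ids_restore = random_masking(patches, mask_ratio)
        cls = self.cls_token.expand(B, -1, -1)
        enc = self.encoder(torch.cat([cls, visible], dim=1))
        cls_out, enc_patches = enc[:, :1], enc[:, 1:]
        # Re-insert mask tokens and restore the original patch order.
        n_masked = N - enc_patches.shape[1]
        full = torch.cat(
            [enc_patches, self.mask_token.expand(B, n_masked, -1)], dim=1)
        full = torch.gather(full, 1,
                            ids_restore.unsqueeze(-1).expand(-1, -1, D))
        recon = self.decoder(full)   # features for a pixel-reconstruction head
        # The class token gets its own training signal via the contrastive head.
        z = F.normalize(self.proj_head(cls_out.squeeze(1)), dim=-1)
        return recon, z
```

In practice `patches` would come from a patch-embedding layer with positional embeddings; `recon` would feed an MSE loss on the masked patches and `z` an InfoNCE-style contrastive loss between augmented views, which is what gives the class token a training signal that plain MAE reconstruction does not provide.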