
Continual Learning with Invertible Generative Models

Jary Pomponi, Simone Scardapane, Aurelio Uncini
Abstract
Catastrophic forgetting (CF) happens whenever a neural network overwrites past knowledge while being trained on new tasks. Common techniques to handle CF include regularization of the weights (using, e.g., their importance on past tasks), and rehearsal strategies, where the network is constantly re-trained on past data. Generative models have also been applied for the latter, in order to have endless sources of data. In this paper, we propose a novel method that combines the strengths of regularization and generative-based rehearsal approaches. Our generative model consists of a normalizing flow (NF), a probabilistic and invertible neural network, trained on the internal embeddings of the network. By keeping a single NF throughout the training process, we show that our memory overhead remains constant. In addition, exploiting the invertibility of the NF, we propose a simple approach to regularize the network's embeddings with respect to past tasks. We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
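The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows, under assumptions, how an affine-coupling normalizing flow could be fitted to a backbone's embeddings by maximum likelihood and then inverted to sample pseudo-embeddings for rehearsal in embedding space. All names (`EmbeddingFlow`, `AffineCoupling`, `EMB_DIM`), layer counts, and the training loop are illustrative, not taken from the paper.

```python
# Minimal sketch (not the paper's code): an affine-coupling normalizing flow
# fitted to a backbone's embeddings, then inverted to replay pseudo-embeddings.
# Dimensions, architecture, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed embedding size of the backbone


class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: half of the dimensions condition an
    affine transform of the other half, keeping the layer exactly invertible."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                 # log|det J| of the coupling
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)


class EmbeddingFlow(nn.Module):
    """Stack of coupling layers with fixed permutations in between."""

    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]

    def log_prob(self, x):
        log_det = torch.zeros(x.size(0))
        for layer, perm in zip(self.layers, self.perms):
            x = x[:, perm]
            x, ld = layer(x)
            log_det = log_det + ld
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(x).sum(dim=1) + log_det

    @torch.no_grad()
    def sample(self, n):
        # Invert the flow: draw from the base Gaussian and map back to embeddings.
        z = torch.randn(n, EMB_DIM)
        for layer, perm in zip(reversed(self.layers), reversed(self.perms)):
            z = layer.inverse(z)
            z = z[:, torch.argsort(perm)]      # undo the permutation
        return z


# Usage sketch: fit the flow on current-task embeddings by maximum likelihood,
# then replay sampled pseudo-embeddings through the classifier head (and/or use
# them as regularization targets) while training on the next task.
flow = EmbeddingFlow(EMB_DIM)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
embeddings = torch.randn(256, EMB_DIM)         # stand-in for backbone outputs
for _ in range(10):
    loss = -flow.log_prob(embeddings).mean()   # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()
pseudo = flow.sample(32)                       # replayed embeddings for rehearsal
```

Modeling embeddings rather than raw inputs keeps the generative model small, which is consistent with the constant memory overhead claimed in the abstract; the exact regularization term the authors build on the NF's invertibility is described in the paper itself.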