Why neural networks find simple solutions: the many regularizers of geometric complexity

Benoit Dherin, Michael Munn, Mihaela C. Rosca, David G.T. Barrett
Sep 2022
Abstract
In many contexts, simpler models are preferable to more complex models and the control of this model complexity is the goal for many methods in machine learning such as regularization, hyperparameter tuning and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suitable for deep neural networks. Here we develop the notion of geometric complexity, which is a measure of the variability of the model function, computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization and the choice of parameter initialization all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
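The discrete Dirichlet energy referenced in the abstract amounts to averaging the squared Frobenius norm of the model's input Jacobian over a dataset. Below is a minimal JAX sketch of that quantity; it is an illustration under that reading, not the authors' code, and `model_fn` (a hypothetical function mapping `(params, x)` to the network output) is an assumed stand-in for an actual model.

```python
# Sketch of geometric complexity as a discrete Dirichlet energy:
# the mean squared Frobenius norm of d f_theta(x) / d x over a batch.
import jax
import jax.numpy as jnp

def geometric_complexity(model_fn, params, batch):
    """Mean over `batch` of ||Jacobian of model_fn w.r.t. x||_F^2.

    model_fn: hypothetical callable (params, x) -> output vector.
    batch:    array of shape (num_examples, input_dim).
    """
    def sq_frobenius(x):
        # Jacobian of the output w.r.t. the input x: shape (out_dim, in_dim).
        jac = jax.jacobian(model_fn, argnums=1)(params, x)
        return jnp.sum(jac ** 2)

    # Vectorize the per-example energy over the batch and average.
    return jnp.mean(jax.vmap(sq_frobenius)(batch))
```

A quantity of this form can be logged during training or added to the loss as an explicit regularizer; the paper's claim is that many standard heuristics already control it implicitly.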