Gaussian Pre-Activations in Neural Networks: Myth or Reality?

Pierre Wolinski, Julyan Arbel
May 2022
Abstract
The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. A very common assumption in the field is that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of work and build an exact Edge of Chaos analysis. We also propose a unified view on pre-activation propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
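To make the object of study concrete: in a fully connected network, each layer's pre-activations are a random weight matrix applied to the previous layer's activations, and the Gaussian hypothesis concerns the distribution of those pre-activations over random draws of the weights. The sketch below is illustrative and not the authors' code: it probes the hypothesis in a narrow tanh network with standard i.i.d. Gaussian initialization. The width, depth, sample count, and choice of normality test are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): test whether pre-activations
# of a narrow tanh network stay Gaussian across depth at initialization.
# Width, depth, sample count, and the normality test are assumed choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
width, depth, n_draws = 8, 10, 5000   # deliberately narrow network

x = rng.standard_normal(width)        # one fixed input, reused for all draws
x /= np.linalg.norm(x)

# Record the first pre-activation coordinate of every layer, across
# independent draws of the weights W_ij ~ N(0, 1/width).
samples = np.zeros((depth, n_draws))
for d in range(n_draws):
    a = x
    for l in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)
        z = W @ a                      # pre-activation of layer l+1
        samples[l, d] = z[0]
        a = np.tanh(z)                 # activation fed to the next layer

# Layer 1 is exactly Gaussian (a linear map of a fixed input); deeper
# layers are guaranteed Gaussian only in the infinite-width limit.
for l in range(depth):
    _, p = stats.normaltest(samples[l])
    print(f"layer {l + 1}: D'Agostino-Pearson p-value = {p:.3g}")
```

If the abstract's claim holds, the test should not reject at the first layer, while small p-values at deeper layers would indicate the finite-width departure from Gaussianity that the paper's constructed activation-initialization pairs are designed to eliminate.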