Generalization Bounds for Noisy Iterative Algorithms Using Properties of Additive Noise Channels

Hao Wang, Rui Gao, Flavio P. Calmon

Abstract
Machine learning models trained by different optimization algorithms under different data distributions can exhibit distinct generalization behaviors. In this paper, we analyze the generalization of models trained by noisy iterative algorithms. We derive distribution-dependent generalization bounds by connecting noisy iterative algorithms to additive noise channels found in communication and information theory. Our generalization bounds shed light on several applications, including differentially private stochastic gradient descent (DP-SGD), federated learning, and stochastic gradient Langevin dynamics (SGLD). We demonstrate our bounds through numerical experiments, showing that they can help understand recent empirical observations of the generalization phenomena of neural networks.
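
For orientation, the algorithms named in the abstract share a common update template: a data-dependent gradient step perturbed by additive Gaussian noise, which is what links them to additive noise channels. The sketch below illustrates that template as read from the abstract alone; the names noisy_iterative_update, grad_fn, and noise_std are hypothetical and not taken from the paper.

    import numpy as np

    def noisy_iterative_update(w, grad_fn, batch, step_size=0.01, noise_std=0.1, rng=None):
        # One step of the generic noisy iterative template:
        #   w_{t+1} = w_t - eta * g(w_t, batch) + xi_t,  xi_t ~ N(0, noise_std^2 * I)
        # SGLD fits this template directly; DP-SGD additionally clips
        # per-sample gradients before the Gaussian noise is added.
        rng = np.random.default_rng() if rng is None else rng
        grad = grad_fn(w, batch)                          # data-dependent update direction
        noise = rng.normal(0.0, noise_std, size=w.shape)  # additive Gaussian noise term
        return w - step_size * grad + noise

Intuitively, the Gaussian perturbation xi_t plays the role of channel noise in an additive noise channel, limiting how much information about the training sample can pass into the updated weights; this is the perspective the paper's bounds build on.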