Annealing Optimization for Progressive Learning with Stochastic Approximation

Christos Mavridis, John Baras
Sep 2022
Abstract
In this work, we introduce a learning model designed to meet the needs of applications in which computational resources are limited, and robustness and interpretability are prioritized. Learning problems can be formulated as constrained stochastic optimization problems, with the constraints originating mainly from model assumptions that define a trade-off between complexity and performance. This trade-off is closely related to over-fitting, generalization capacity, and robustness to noise and adversarial attacks, and depends on both the structure and complexity of the model, as well as the properties of the optimization methods used. We develop an online prototype-based learning algorithm based on annealing optimization that is formulated as an online gradient-free stochastic approximation algorithm. The learning model can be viewed as an interpretable and progressively growing competitive-learning neural network model to be used for supervised, unsupervised, and reinforcement learning. The annealing nature of the algorithm contributes to minimal hyper-parameter tuning requirements, the prevention of poor local minima, and robustness with respect to the initial conditions. At the same time, it provides online control over the performance-complexity trade-off by progressively increasing the complexity of the learning model as needed, through an intuitive bifurcation phenomenon. Finally, the use of stochastic approximation enables the study of the convergence of the learning algorithm through mathematical tools from dynamical systems and control, and allows for its integration with reinforcement learning algorithms, constructing an adaptive state-action aggregation scheme.
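The abstract gives no pseudocode, so the following is a minimal, self-contained sketch of the kind of prototype-based annealing recursion it describes: prototypes carry Gibbs association probabilities at a temperature T, their parameters are updated by a gradient-free Robbins-Monro stochastic approximation, and the prototype set grows by bifurcation as T is lowered. All names (oda_fit, T_schedule, perturb), the particular temperature schedule, step sizes, and the duplicate-and-merge splitting rule are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_association(x, mu, rho, T):
    """Soft membership of sample x to each prototype at temperature T."""
    d = np.sum((mu - x) ** 2, axis=1)        # squared-error distortion (an assumption)
    logits = np.log(rho) - d / T
    logits -= logits.max()                   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def oda_fit(samples, T_schedule=(16.0, 8.0, 4.0, 2.0, 1.0),
            perturb=1e-2, tol=1e-2):
    """Hypothetical sketch: anneal T, growing the prototype set by bifurcation."""
    mu = samples[:1].astype(float).copy()    # start with a single prototype
    rho = np.ones(1)                         # prototype prior masses
    for T in T_schedule:
        # Bifurcation probe: duplicate every prototype with a small
        # perturbation; below a critical temperature the copies drift
        # apart and the model grows, otherwise they are merged back.
        mu = np.repeat(mu, 2, axis=0)
        mu = mu + perturb * rng.standard_normal(mu.shape)
        rho = np.repeat(rho, 2) / 2.0
        sigma = rho[:, None] * mu
        for n, x in enumerate(samples, start=1):
            lr = 1.0 / (n + 10)              # Robbins-Monro step sizes
            p = gibbs_association(x, mu, rho, T)
            rho = rho + lr * (p - rho)                       # gradient-free
            sigma = sigma + lr * (p[:, None] * x - sigma)    # stochastic approximation
            mu = sigma / rho[:, None]
        # Merge copies that did not split, keeping the model minimal.
        keep = [0]
        for i in range(1, len(mu)):
            if all(np.linalg.norm(mu[i] - mu[j]) > tol for j in keep):
                keep.append(i)
        mu, rho = mu[keep], rho[keep] / rho[keep].sum()
    return mu

if __name__ == "__main__":
    data = np.concatenate([rng.normal(-2.0, 0.3, (200, 2)),
                           rng.normal(+2.0, 0.3, (200, 2))])
    rng.shuffle(data)
    print(oda_fit(data))                     # expect prototypes near the two cluster means

Under these assumptions, lowering T acts as the online complexity control described in the abstract: each temperature step either leaves the model size unchanged or splits prototypes, mirroring the bifurcation phenomenon, while the averaged recursions on rho and sigma are the gradient-free stochastic approximation component.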