Efficient Exploration
Efficient exploration is one of the main obstacles in scaling up modern deep reinforcement learning algorithms. The central challenge is balancing the exploitation of current value estimates against gaining information about poorly understood states and actions. Source: [Randomized Value Functions via Multiplicative Normalizing Flows](https://arxiv.org/abs/1806.02315)
Related topics: Atari Games, Meta Reinforcement Learning, DQN, Distributional Reinforcement Learning, Continuous Control, Multi-Agent Reinforcement Learning, StarCraft II, Entropy Regularization, Soft Actor Critic, Hierarchical Reinforcement Learning
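The exploration–exploitation tradeoff described above is classically illustrated by an ε-greedy policy on a multi-armed bandit: with probability ε the agent gathers information by acting randomly, otherwise it exploits its current reward estimates. This is a minimal sketch of that baseline idea, not the randomized-value-function method from the cited paper; all function names, parameters, and reward distributions here are hypothetical.

```python
import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1, seed=0):
    """Hypothetical sketch: epsilon-greedy on a Gaussian-reward bandit.

    With probability epsilon, explore a random arm; otherwise exploit
    the arm with the highest running mean-reward estimate.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)  # noisy reward (assumed)
        counts[arm] += 1
        # incremental mean update
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the estimates converge toward the true arm means and the greedy choice concentrates on the best arm, while the ε fraction of random actions keeps every arm's estimate from going stale.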
Key Scholars
Bernhard Schölkopf: 117502 citations, 1231 papers
Jürgen Schmidhuber: 101619 citations, 563 papers
William L. Jorgensen: 89420 citations, 601 papers
Ruslan Salakhutdinov: 89393 citations, 413 papers
Pietro Perona: 79041 citations, 423 papers
Sebastian Thrun: 75432 citations, 407 papers
Peter Stone: 69897 citations, 1396 papers
Ik Siong Heng: 66447 citations, 476 papers
Alessandra Romero: 64362 citations, 1445 papers
Alex Graves: 61500 citations, 98 papers