SHIRO: Soft Hierarchical Reinforcement Learning

Kandai Watanabe, Mathew Strong, Omer Eldar
Dec 2022
Abstract
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision-making and robotic control tasks. However, because they solely optimize for rewards, the agent tends to search the same space redundantly. This problem reduces both the speed of learning and the achieved reward. In this work, we present an off-policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high level. The novelty of this work is the theoretical motivation for adding entropy to the RL objective in the HRL setting. We empirically show that entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on the hierarchy, in which adding entropy to the high level emerged as the most desirable configuration. Furthermore, a higher temperature in the low level leads to Q-value overestimation and increases the stochasticity of the environment that the high level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
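For context, the entropy-regularized ("soft") objective that this line of work builds on augments the expected return with a policy-entropy bonus. The following is a minimal sketch in standard maximum-entropy RL notation (as in Soft Actor-Critic); the temperature $\alpha$, the state-action marginal $\rho_\pi$, and the per-level decomposition are conventional notation, not taken verbatim from the paper:

$$J(\pi) = \sum_{t} \mathbb{E}_{(s_t,\, a_t) \sim \rho_\pi}\!\left[\, r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\right]$$

In a two-level hierarchy, an analogous bonus can be attached to either level. For example, for a high-level policy $\pi^{\mathrm{hi}}$ that emits goals $g_t$, one would add $\alpha^{\mathrm{hi}}\, \mathcal{H}\big(\pi^{\mathrm{hi}}(\cdot \mid s_t)\big)$ to the high-level reward, which is the configuration the abstract reports as most desirable.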