Multi-Armed Bandits
Multi-armed bandits refer to a task in which a fixed amount of resources must be allocated among competing alternatives in a way that maximizes expected gain. Typically these problems involve an exploration/exploitation trade-off. Source: [Microsoft Research](http://research.microsoft.com/en-us/projects/bandits/)
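A minimal sketch of the exploration/exploitation trade-off, using an ε-greedy strategy on Bernoulli arms; the function name, arm means, and parameters below are illustrative, not taken from the source.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on Bernoulli arms with the given means."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # number of pulls per arm
    estimates = [0.0] * n_arms     # running estimate of each arm's mean reward
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:                      # explore: pick a random arm
            arm = rng.randrange(n_arms)
        else:                                           # exploit: pick the best estimate so far
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the sample mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return total_reward, estimates

# Example: three arms with unknown (to the agent) success probabilities
reward, est = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(f"total reward: {reward:.0f}, estimated means: {[round(e, 2) for e in est]}")
```

With a small ε the agent spends most pulls on the arm it currently believes is best, while still sampling the other arms often enough to correct mistaken estimates.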
Related topics: Monte-Carlo Tree Search, Decision Making Under Uncertainty, Active Search, Dueling Bandits, Real-Time Strategy Games, Recommendation Systems, Entropy Regularization, Efficient Exploration, Generalization Bounds, Selection Bias
Important Scholars
Michael I. Jordan: 150356 citations, 1056 papers
Jay Hauser: 117962 citations, 2529 papers
Ion Stoica: 78984 citations, 507 papers
Robert E. Schapire: 76454 citations, 259 papers
Sebastian Thrun: 75432 citations, 407 papers
S. Shankar Sastry: 70326 citations, 894 papers
Peter Stone: 69897 citations, 1396 papers
Qian Wang: 60972 citations, 2391 papers
Georgios B. Giannakis: 59112 citations, 1336 papers
Dacheng Tao: 57097 citations, 1414 papers