
Reinforcement Learning from Diverse Human Preferences

Wanqi Xue, Bo An, Shuicheng Yan, Zhongwen Xu

IJCAI 2024 Conference

August 2024

Keywords: Reinforcement Learning, Human Preferences, Human Feedback, Rewards

Abstract:

The complexity of designing reward functions has been a major obstacle to the wide application of deep reinforcement learning (RL) techniques. Describing an agent's desired behaviors and properties can be difficult, even for experts. A new paradigm called reinforcement learning from human preferences (or preference-based RL) has emerged as a promising solution, in which reward functions are learned from human preference labels over behavior trajectories. However, existing methods for preference-based RL are limited by the need for accurate oracle preference labels. This paper addresses this limitation by developing a method for crowd-sourcing preference labels and learning from diverse human preferences. The key idea is to stabilize reward learning through regularization and correction in a latent space. To ensure temporal consistency, a strong constraint is imposed on the reward model that forces its latent space to be close to the prior distribution. Additionally, a confidence-based reward model ensembling method is designed to generate more stable and reliable predictions. The proposed method is tested on a variety of tasks in DMControl and Meta-world and shows consistent and significant improvements over existing preference-based RL algorithms when learning from diverse feedback, paving the way for real-world applications of RL methods.
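The two ideas the abstract highlights, regularizing the reward model's latent space toward a prior and weighting an ensemble of reward models by confidence, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the network sizes, the Gaussian N(0, I) prior, and the use of the latent KL term as a per-model confidence proxy are all assumptions made here for concreteness.

```python
import torch
import torch.nn as nn

class LatentRewardModel(nn.Module):
    """Reward model with a stochastic latent bottleneck.

    The encoder maps a state-action pair to a Gaussian latent; a KL
    penalty to a standard normal prior (returned alongside the reward)
    constrains the latent space, in the spirit of the paper's
    latent-space regularization. Architecture is illustrative.
    """

    def __init__(self, obs_dim, act_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.head = nn.Linear(latent_dim, 1)

    def forward(self, obs, act):
        h = self.encoder(torch.cat([obs, act], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        # Reparameterized sample from the latent Gaussian.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # KL( q(z|s,a) || N(0, I) ), the latent-space regularizer.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
        return self.head(z).squeeze(-1), kl

def ensemble_reward(models, obs, act):
    """Confidence-weighted ensemble prediction.

    Each model's reward is weighted by softmax(-KL): models whose
    latents sit closer to the prior get more weight. Using the KL as
    the confidence signal is a hypothetical choice for this sketch.
    """
    rewards, kls = zip(*[m(obs, act) for m in models])
    rewards = torch.stack(rewards)              # (n_models, batch)
    weights = torch.softmax(-torch.stack(kls), dim=0)
    return (weights * rewards).sum(0)           # (batch,)
```

In training, the per-model KL term would be added to the preference (e.g. Bradley-Terry) loss as a regularizer, while `ensemble_reward` would supply the reward signal to the downstream RL algorithm.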
