Protecting Reward Function of Reinforcement Learning via Minimal and Non-catastrophic Adversarial Trajectory

Title: Protecting Reward Function of Reinforcement Learning via Minimal and Non-catastrophic Adversarial Trajectory
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Chen, Tong; Xiang, Yingxiao; Li, Yike; Tian, Yunzhe; Tong, Endong; Niu, Wenjia; Liu, Jiqiang; Li, Gang; Alfred Chen, Qi
Conference Name: 2021 40th International Symposium on Reliable Distributed Systems (SRDS)
Date Published: Sep
Keywords: adversarial attack, Clustering algorithms, Costs, expert systems, expert trajectory, Human Behavior, human factors, Measurement, non-catastrophic, Perturbation methods, Prediction algorithms, Predictive models, privacy, pubcrawl, reinforcement learning, reward function, Scalability
Abstract: Reward functions are critical hyperparameters with commercial value for individual or distributed reinforcement learning (RL), as slightly different reward functions result in significantly different performance. However, existing inverse reinforcement learning (IRL) methods can approximate reward functions based solely on expert trajectories collected through observation. Thus, in a real RL process, how to generate a polluted trajectory and mount an adversarial attack on IRL to protect reward functions has become a key issue. Meanwhile, considering the actual RL cost, the generated adversarial trajectories should be minimal and non-catastrophic so that normal RL performance is preserved. In this work, we propose a novel approach to craft adversarial trajectories disguised as expert ones, decreasing IRL performance and realizing anti-IRL capability. First, we design a reward clustering-based metric that integrates the advantages of both fine- and coarse-grained IRL assessment, namely expected value difference (EVD) and mean reward loss (MRL). Based on this metric, we then explore an adversarial attack built on agglomerative nesting (AGNES) clustering and determine targeted states as starting states for reward perturbation. Finally, we employ the intrinsic fear model to predict the probability of imminent catastrophe, which supports generating non-catastrophic adversarial trajectories. Extensive experiments with 7 state-of-the-art IRL algorithms on the Object World benchmark demonstrate the capability of our approach to (a) decrease IRL performance and (b) keep the adversarial trajectories minimal and non-catastrophic.
Citation Key: chen_protecting_2021
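The AGNES step described in the abstract (clustering per-state rewards to pick targeted states for perturbation) can be illustrated with a minimal sketch. This is not the authors' code: it assumes a scalar per-state reward signal, and the reward values and cluster count are made-up example data. Bottom-up single-linkage merging is one common AGNES variant.

```python
# Illustrative sketch (not the paper's implementation): AGNES-style
# bottom-up clustering of scalar per-state rewards. States whose
# rewards fall into small, outlying clusters become candidate
# starting states for reward perturbation.

def agnes_1d(values, k):
    """Agglomerative (AGNES) clustering of scalar values into k
    clusters, merging the closest pair under single linkage."""
    clusters = [[i] for i in range(len(values))]
    while len(clusters) > k:
        best = None  # (distance, index_a, index_b) of closest pair
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(abs(values[i] - values[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))  # merge b into a
    return clusters

# Made-up rewards for 8 states; the two high-reward states (indices
# 3 and 5) and the mid-reward state (index 7) separate from the rest.
rewards = [0.1, 0.12, 0.09, 0.95, 0.11, 0.93, 0.10, 0.5]
groups = agnes_1d(rewards, 3)
```

With `k = 3`, the low-reward states form one cluster while the outlying states separate into their own clusters; the attack would then perturb rewards starting from states in the targeted clusters.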