Kansas State University

AI Safety Research Initiative

Software

RLAttack – Crafting Adversarial Example Attacks on Policy Learners

A framework for the experimental analysis of adversarial example attacks on policy learning in deep RL. The attack methodologies are detailed in our paper “Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger” (Behzadan & Munir, 2017, https://arxiv.org/abs/1712.09344). Examples based on this code are included in the CleverHans project.
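To illustrate the general idea behind such attacks (not the RLAttack API itself), the following is a minimal sketch of a Fast Gradient Sign Method (FGSM) style perturbation applied to the input of a policy. All names here (`fgsm_perturbation`, the toy linear policy `W`) are hypothetical illustrations; for a linear policy the gradient of an action's logit with respect to the state is simply the corresponding weight row, so no autodiff library is needed:

```python
import numpy as np

def fgsm_perturbation(state, weights, eps):
    """FGSM-style sketch for a toy linear policy (hypothetical helper).

    Perturbs the input state against the gradient of the logit of the
    action the clean policy would select (untargeted attack).
    """
    logits = weights @ state
    a = int(np.argmax(logits))  # action the clean policy would take
    # For a linear policy, d(logits[a])/d(state) == weights[a];
    # step against its sign to suppress that action.
    return state - eps * np.sign(weights[a])

# Toy policy: 2 actions over a 3-dimensional state.
W = np.array([[1.0, -0.5, 0.2],
              [0.2,  0.8, -0.1]])
s = np.array([0.5, 0.1, 0.3])

adv = fgsm_perturbation(s, W, eps=0.4)
clean_action = int(np.argmax(W @ s))  # 0
adv_action = int(np.argmax(W @ adv))  # 1 — the perturbation flips the action
```

In a deep RL setting the same perturbation would be computed by backpropagating through the policy network at each observation, which is the class of attack the paper analyzes.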

GitHub Repository