Kansas State University


AI Safety Research Initiative

January 2018

RLAttack: Crafting Adversarial Example Attacks on Policy Learners

A framework for experimental analysis of adversarial example attacks on policy learning in deep RL. The attack methodologies are detailed in our paper "Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger" (Behzadan & Munir, 2017, https://arxiv.org/abs/1712.09344).

This project provides an interface between @openai/baselines and @tensorflow/cleverhans to facilitate the crafting and implementation of adversarial example attacks on deep RL algorithms. We would also like to thank @andrewliao11/NoisyNet-DQN, whose work inspired our implementation of the NoisyNet algorithm for DQN.
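To give a sense of how such a bridge fits together, the sketch below points a CleverHans attack at an agent's Q-network and perturbs observations before they reach the policy. It assumes the CleverHans v2 API of that era, and the names here (`q_func`, the six-action output) are hypothetical placeholders rather than RLAttack's actual interface; see the repository for the real implementation.

```python
import tensorflow as tf
from cleverhans.model import CallableModelWrapper
from cleverhans.attacks import FastGradientMethod

sess = tf.Session()
# Atari-style observation batch: 84x84 frames, 4-frame stack.
obs_ph = tf.placeholder(tf.float32, shape=(None, 84, 84, 4))

def q_func(x):
    # Stand-in for a baselines DQN Q-network; its per-action
    # Q-values play the role of logits for CleverHans.
    return tf.layers.dense(tf.layers.flatten(x), units=6)

# Wrap the Q-network so CleverHans can query it, then build an
# FGSM graph that perturbs observations within an eps ball.
model = CallableModelWrapper(q_func, 'logits')
fgsm = FastGradientMethod(model, sess=sess)
adv_obs = fgsm.generate(obs_ph, eps=0.01, clip_min=0.0, clip_max=1.0)

sess.run(tf.global_variables_initializer())
# At each environment step, feed the true observation and let the
# agent act on the perturbed one:
#   sess.run(adv_obs, feed_dict={obs_ph: obs})
```

Because the perturbation is computed per observation against the policy's own greedy action, this kind of attack can be applied at every step of training or evaluation, which is the setting the paper studies.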

GitHub Repository