Kansas State University


AI Safety Research Initiative

New Paper: A Psychopathological Approach to Safety Engineering in AI and AGI

The pre-print of our new paper, written by Vahid Behzadan and Dr. Arslan Munir in collaboration with Prof. Roman Yampolskiy of the University of Louisville, has been made available to the public. The abstract of this paper, titled “A Psychopathological Approach to Safety Engineering in AI and AGI”, is as follows:

The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety. It follows that the envisioned instances of Artificial General Intelligence (AGI) will also suffer from challenges of complexity. To tackle such issues, we propose the modeling of deleterious behaviors in AI and AGI as psychological disorders, thereby enabling the employment of psychopathological approaches to analysis and control of misbehaviors. Accordingly, we present a discussion on the feasibility of the psychopathological approaches to AI safety, and propose general directions for research on modeling, diagnosis, and treatment of psychological disorders in AGI.

The full text of this paper is available here.

Undergraduate Project Defense

James Minton, an undergraduate affiliate of the AI Safety Research Initiative, defended his senior project today. He has been working alongside Vahid Behzadan and Dr. Munir on developing a platform for experiments on ethical reinforcement learning in the context of autonomous navigation. James will be joining us this summer to further advance this project, which will be made available to the public for research on ethical decision making and the value alignment problem. Many congratulations to James, and a job well done : )

Discussion Panel on AI Safety

Last Friday, Vahid Behzadan was invited to host a discussion on AI safety for the KDD research group at K-State. In this session, Vahid touched upon various topics, including AGI, safety issues in current and future AI, the economics of emergent catastrophe, the value alignment problem, game theory, counterfactual reasoning, and an overview of good resources to kickstart research in AI safety. You can view a video of this discussion via the following YouTube link:

RLAttack: Crafting Adversarial Example Attacks on Policy Learners

A framework for experimental analysis of adversarial example attacks on policy learning in deep RL. The attack methodologies are detailed in our paper “Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger” (Behzadan & Munir, 2017 – https://arxiv.org/abs/1712.09344).

This project provides an interface between @openai/baselines and @tensorflow/cleverhans to facilitate the crafting and implementation of adversarial example attacks on deep RL algorithms. We would also like to thank @andrewliao11, whose NoisyNet-DQN repository inspired our implementation of the NoisyNet algorithm for DQN.
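To give a flavor of what such attacks look like, the following is a minimal, self-contained sketch of an FGSM-style perturbation against a toy linear Q-function. It is only an illustration of the general idea, not the RLAttack API: the linear Q-function, `fgsm_perturb`, and all parameter values here are hypothetical stand-ins for the real DQN and the CleverHans crafting routines.

```python
import numpy as np

# Toy linear Q-function: Q(s) = W @ s, one row of W per action.
# A real attack would instead take gradients through a trained DQN.
rng = np.random.default_rng(0)
n_actions, state_dim = 4, 8
W = rng.standard_normal((n_actions, state_dim))

def q_values(state):
    return W @ state

def fgsm_perturb(state, epsilon=0.5):
    """FGSM-style perturbation: move the state in the direction that
    decreases the Q-value of the currently greedy action. For a linear
    Q-function, the gradient dQ_a/ds is simply the row W[a]."""
    a = int(np.argmax(q_values(state)))
    grad = W[a]
    return state - epsilon * np.sign(grad)

state = rng.standard_normal(state_dim)
adv_state = fgsm_perturb(state)
print("clean greedy action:", np.argmax(q_values(state)))
print("perturbed greedy action:", np.argmax(q_values(adv_state)))
```

The perturbation provably lowers the greedy action's Q-value (by ε·Σ|W[a]|), which is what drives the policy manipulation studied in the paper; in the full framework the same idea is applied to the observations fed to a deep Q-network during training or evaluation.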

GitHub Repository

New Paper: Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger

Abstract: Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations. In this paper, we investigate the robustness and resilience of deep RL to training-time and test-time attacks. Through experimental results, we demonstrate that under noncontiguous training-time attacks, Deep Q-Network (DQN) agents can recover and adapt to the adversarial conditions by reactively adjusting the policy. Our results also show that policies learned under adversarial perturbations are more robust to test-time attacks. Furthermore, we compare the performance of ϵ-greedy and parameter-space noise exploration methods in terms of robustness and resilience against adversarial perturbations.
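For readers unfamiliar with the two exploration schemes compared in the abstract, here is a minimal sketch of the difference between ϵ-greedy (noise in the action space) and parameter-space noise (noise in the network weights). The toy linear Q-function and all names here are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Toy linear Q-function standing in for a DQN: Q(s) = W @ s.
rng = np.random.default_rng(1)
n_actions, state_dim = 4, 8
W = rng.standard_normal((n_actions, state_dim))

def epsilon_greedy(state, epsilon=0.1):
    """Action-space noise: with probability epsilon, act uniformly at
    random; otherwise act greedily w.r.t. the Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(W @ state))

def param_noise_action(state, sigma=0.1):
    """Parameter-space noise: perturb the weights once, then act
    greedily under the perturbed parameters, yielding state-dependent
    but temporally consistent exploration."""
    W_noisy = W + sigma * rng.standard_normal(W.shape)
    return int(np.argmax(W_noisy @ state))

state = rng.standard_normal(state_dim)
print(epsilon_greedy(state), param_noise_action(state))
```

The distinction matters for the paper's robustness comparison: parameter-space noise perturbs the policy itself rather than individual actions, so its exploration behavior interacts differently with adversarial perturbations of the observations.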

Read the preprint draft here.