Towards Safe Reinforcement Learning in the Real World
Controlling mobile robots at high speed over slippery, rough terrain is difficult. One approach to designing controllers for the complex, non-uniform dynamics of unstructured environments is to use model-free learning-based methods. However, these methods often lack the notion of safety needed to deploy controllers without danger, and hence have rarely been tested successfully on real-world tasks. In this work, we present methods and techniques that allow model-free learning-based approaches to learn low-level controllers for mobile robot navigation while minimizing violations of user-defined safety constraints. We show these learned controllers working robustly both in simulation and in the real world, on a 1:10-scale RC car as well as a full-size vehicle, the MRZR.
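The abstract speaks of minimizing violations of user-defined safety constraints during model-free learning. The paper's actual mechanism is not stated here; one common, illustrative instantiation is penalty-based reward shaping, where the learner's reward is reduced in proportion to how far a safety quantity (e.g., slip angle or speed) exceeds its limit. The sketch below is a hypothetical example under that assumption, not the authors' method; all names (`shaped_reward`, `penalty_weight`) are invented for illustration.

```python
def shaped_reward(task_reward: float, constraint_value: float,
                  limit: float, penalty_weight: float = 10.0) -> float:
    """Illustrative constraint-penalized reward (hypothetical, not the paper's method).

    Subtracts a penalty proportional to the amount by which a user-defined
    safety quantity (e.g., measured slip angle) exceeds its allowed limit.
    When the constraint is satisfied, the task reward passes through unchanged.
    """
    violation = max(0.0, constraint_value - limit)
    return task_reward - penalty_weight * violation


# Constraint satisfied: reward is unchanged.
print(shaped_reward(1.0, constraint_value=0.5, limit=1.0))   # 1.0
# Constraint violated by 0.5: reward is reduced by 10.0 * 0.5.
print(shaped_reward(1.0, constraint_value=1.5, limit=1.0))   # -4.0
```

A soft penalty like this only discourages violations statistically; hard guarantees typically require additional machinery such as constrained policy optimization or a safety filter on the controller's outputs.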