Seminar details

Room 206 (2nd floor, badged access)
21 November 2024 - 14h00
Safe Reinforcement Learning with Verification in the Loss
by Chao Huang, School of Electronics & Computer Science, University of Southampton
invited by Thao DANG


Abstract: Reinforcement learning (RL) is one of the main areas of machine learning; its basic idea is that an agent interacts with the environment and uses the feedback to update its control policy. Compared with classical control algorithms, RL can be used in complex or even unknown environments. However, because safety is insufficiently considered, existing RL-based methods can hardly be deployed in safety-critical scenarios. In this presentation, I will introduce our recent work on safe RL that brings verification into the learning loop, more specifically into the loss function. We started with a simple attempt that reduces the Lipschitz constant of a neural-network controller to limit its variability. Verification techniques, e.g., reachable set computation and barrier certificates, are then encoded into the loss function of a policy-gradient RL framework to update the control policy parameters. Empirical results show that the proposed RL algorithms can significantly outperform existing techniques in terms of safety under different environment settings.
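
To give a rough flavour of the idea described in the abstract (encoding a verification-style safety term into the policy-gradient loss), here is a minimal sketch only, not the speaker's actual algorithm: a REINFORCE-style update on a hypothetical 1-D toy system, where a barrier-like penalty for leaving an assumed safe set |s| <= 1 is added to the loss. The toy dynamics, the safe bound, and the weight SAFETY_WEIGHT are illustrative assumptions.

    # Sketch (assumptions throughout): policy-gradient loss + safety penalty term.
    import torch

    torch.manual_seed(0)

    def step(state, action):
        # Hypothetical toy dynamics, chosen only for illustration: s' = s + 0.1 * a
        return state + 0.1 * action

    policy_mean = torch.nn.Linear(1, 1)           # learnable Gaussian policy mean
    log_std = torch.nn.Parameter(torch.zeros(1))  # learnable log standard deviation
    optimiser = torch.optim.Adam(list(policy_mean.parameters()) + [log_std], lr=1e-2)

    SAFE_BOUND = 1.0      # assumed safe set: |s| <= 1
    SAFETY_WEIGHT = 10.0  # assumed weight of the safety term in the loss

    for episode in range(200):
        state = torch.randn(1) * 0.5
        log_probs, rewards, violations = [], [], []
        for t in range(20):
            mean = policy_mean(state)
            dist = torch.distributions.Normal(mean, log_std.exp())
            action = dist.rsample()                       # reparameterised sample
            log_probs.append(dist.log_prob(action.detach()).sum())
            next_state = step(state, action)              # differentiable w.r.t. policy
            rewards.append(-(next_state ** 2).sum().detach())   # drive state to 0
            # Barrier-style penalty: positive only when the state leaves the safe set.
            violations.append(torch.relu(next_state.abs() - SAFE_BOUND).sum())
            state = next_state.detach()
        ret = torch.stack(rewards).sum()
        pg_loss = -(torch.stack(log_probs).sum() * ret)         # REINFORCE term
        safety_loss = SAFETY_WEIGHT * torch.stack(violations).sum()
        loss = pg_loss + safety_loss                            # combined objective
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

The point of the sketch is only the shape of the objective: a standard policy-gradient term plus a differentiable penalty derived from a safety condition, so that minimising the loss also pushes the policy toward satisfying the safety constraint.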

Dr. Chao Huang is an Associate Professor in Cyber-Physical Systems (CPS) at the University of Southampton, UK. Prior to Southampton, he was a Lecturer (Assistant Professor) in the Department of Computer Science at the University of Liverpool. His research interests include safety-oriented learning and the verification of machine learning algorithms, in particular reinforcement learning, as well as their applications in robotics, transportation, etc.
