Room 206 (2nd floor, badged access)
30 June 2023 - 14h00
Assuring Safety of Learning-Enabled Systems with Perception Contracts.
by Sayan Mitra from University of Illinois Urbana-Champaign
Abstract: Formal verification of deep learning models remains challenging, yet they are becoming integral to many safety-critical autonomous systems. We present a framework for certifying end-to-end safety of learning-enabled autonomous systems using perception contracts. The method stems from the observation that learning-based perception is often used for state estimation, and that the error characteristics of such estimators can be succinct, testable, and amenable to effective formal analysis. Our framework constructs approximations of perception models using system-level safety requirements and program analysis. Mathematically proving that a given perception model conforms to a contract remains a challenge, but empirical measures of conformance can provide confidence levels for safety claims. We will discuss recent applications of this framework in creating low-dimensional, intelligible contracts and end-to-end safety certificates for vision-based lane keeping and auto-landing systems, as well as a number of future research directions.
This is joint work with several coauthors led by Chiao Hsieh.
Bio. Sayan Mitra is a Professor and John Bardeen Faculty Scholar of ECE at UIUC. He received his PhD from MIT, and his research focuses on formal verification and safe autonomy. His group is well known for developing algorithms for data-driven verification and synthesis, some of which are being commercialized. His textbook on verification of cyber-physical systems was published in 2021. Former PhD students from his group are now professors at Vanderbilt, UNC Chapel Hill, MIT, and WashU. His group's work has been recognized with the ACM Doctoral Dissertation Award, an NSF CAREER Award, an AFOSR YIP, and several best paper awards.