
FETLAS

Foundations for Engineering Trustworthy Learning-Enabled Autonomous Systems

  Scientific Context

The convergence of scientific and technological developments in computing, networking, the physical world, and Artificial Intelligence (AI) will affect many system aspects and disciplines in the coming years. Learning-enabled autonomous systems are a prime example of that convergence, spanning both engineering methods and technological products. More specifically, learning-enabled system technologies are expected to bring large-scale improvements through new products and services across a myriad of applications, from healthcare and logistics to manufacturing, transport, and more. Software is inarguably the enabling factor for realizing such systems. Unfortunately, deployment in safety-critical applications (transportation, healthcare, etc.) is still limited, due to lack of trust, behavioural uncertainty, and poor compatibility of the technology with safe and secure system development methods. In particular, for Urban Autonomous Driving, widely considered the hardest problem in autonomy, the development of autonomous vehicles naturally includes software that realizes the AI part (e.g., machine learning for perception) as well as software that governs the Cyber-Physical Systems (CPS) part (e.g., vehicle control or decision making via infrastructure support). Another field where higher levels of autonomy are required is that of Intelligent Medical Devices used in modern healthcare units.

  Challenges

There are significant challenges in ensuring the quality of the overall system:

  1. Machine learning (ML) has traditionally been applied following the classical paradigm of learning from data, where trustworthiness (safety, security) and transparency were rarely an issue. As a consequence, there is no disciplined approach for building, verifying, and certifying ML software components that are safety- and security-aware and unbiased, and that make the underlying decision-making transparent.
  2. Autonomous systems are inherently sophisticated and operate in complex, unpredictable, and non-deterministic environments. To test such systems, which involve millions of lines of software, practitioners rely on slowly accumulated experience, such as the number of miles driven in automated driving, hoping to encounter unexpected events along the way. There is no intelligent approach for deriving unexpected or even risky scenarios in a substantially faster manner.
  3. Last, the created product may require continuous revision (e.g., parameter adjustment for medical devices, or updates due to newly encountered driving scenarios). When applying the continuous engineering (DevOps) principle, each such revision currently entails costly re-verification and re-validation of the complete system, while new safety and cyber-security risks must also be addressed.

  Research Directions

Our vision is to introduce a mixed approach for engineering trustworthy learning-enabled autonomous systems that combines the advantages of data-based techniques (performance) and model-based techniques (guarantees). Leveraging this mixture, we pursue the following research strands to tackle these challenges:

  1. Develop a technical framework that fosters the convergence of ML and model-based engineering (MBE): This convergence will (i) allow disciplined model-based safety and security engineering methods to formally verify or synthesize ML components, (ii) allow ML-enabled methods to intelligently find test scenarios that assure safety, security, and performance properties while avoiding today’s costly validation practices, and (iii) open new avenues for applying the continuous engineering (DevOps) principle over the system’s evolution, so that complete re-verification and re-validation can be avoided while new safety and cyber-security risks are addressed.
  2. Develop a component-based modelling and simulation approach for learning-enabled autonomous systems: The ambition is to advance the current state of the art in component-based modelling towards a modular autonomy modelling approach that allows mixing data-driven and model-based decision components.
  3. Develop scalable automatic verification and testing techniques for learning-enabled autonomous systems: Create scalable formal verification techniques that can be applied to self-learning components, including global optimization-based methods, counterexample-guided abstraction refinement, layer-wise reasoning, and statistical model checking techniques.
  4. Integrate learning from examples (performance) with synthesis from specifications (correctness and safety) to create a new method of implementing ML components with runtime monitoring/enforcement techniques: Create a new paradigm for implementing ML components that yields high-quality implementations guaranteeing safety while achieving high performance. This is achieved either by carefully integrating formal verification into the learning procedure, or by equipping the system with a runtime monitor that anticipates potential safety violations and brings the autonomous system back to the safe region (a minimal illustration follows this list). We will also turn the monitoring of neural networks into a disciplined approach for scientifically quantifying the confidence of decisions made by the network.
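
To make the runtime-enforcement idea in strand 4 concrete, the following minimal Python sketch shows a safety shield that forwards the action proposed by a learned controller only if a one-step model predicts that the successor state stays in the safe region, and otherwise falls back to a verified safe action. The controller, dynamics, safe set, and fallback below are hypothetical placeholders, not components of the project.

    # Minimal runtime-shield sketch; all dynamics and policies are toy stand-ins.

    def ml_controller(state: float) -> float:
        """Stand-in for a learned controller (hypothetical policy)."""
        return 2.0 * state

    def safe_fallback(state: float) -> float:
        """Verified fallback action, e.g., maximal braking."""
        return -1.0 if state > 0 else 1.0

    def next_state(state: float, action: float) -> float:
        """One-step plant model (hypothetical linear dynamics)."""
        return state + 0.1 * action

    def is_safe(state: float) -> bool:
        """Hypothetical safe region: |state| <= 1."""
        return abs(state) <= 1.0

    def shielded_action(state: float) -> float:
        """Runtime enforcement: accept the ML action only if the model
        predicts that the successor state stays in the safe region."""
        proposed = ml_controller(state)
        if is_safe(next_state(state, proposed)):
            return proposed
        return safe_fallback(state)

    s = 0.9
    for step in range(5):
        a = shielded_action(s)
        s = next_state(s, a)
        print(f"step {step}: action={a:+.2f}, state={s:+.3f}, safe={is_safe(s)}")

In a deployed system, the one-step prediction would come from the verified model-based (CPS) part, which is exactly where combining data-based and model-based techniques pays off.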

  Activities

  • Component-based modelling of autonomous systems for continued development/operation (DevOps): A component-based modelling and semantic reasoning approach is essential for training learning-enabled autonomous systems, for robustly implementing their safety and performance requirements, and for validating them through simulation or formal analysis. In this activity we introduce a generic decomposition of the autonomy function into elementary functions (components) and an approach for mixing model-based and data-based techniques within a continued development and operation process framework (a component-interface sketch follows this list).
  • Systematic testing of learning components: Develop a hierarchy of coverage metrics to be used in applications with different criticality levels (an example metric is sketched after this list).
  • Formal verification of learning components: Combine the following methods to break the scalability barrier:
    1. Use anytime verification methods, which compute both upper and lower bounds for a given quantity and ensure that the bounds can be gradually improved until they converge.
    2. Exploit static analysis techniques combined with counterexample-guided abstraction refinement.
    3. Study layer-wise reasoning to reduce verification complexity.
    4. Use statistical model checking techniques for probabilistic correctness guarantees (a sketch follows this list).
  • Runtime enforcement for ensuring performance while guaranteeing safety and security: To enhance the robustness and transparency of the decision-making unit, we develop methods for risk-sensitive control synthesis that minimize the risk of violating the safety and security requirements, taking the uncertainty of the environment model into account, while maximizing nominal control performance.
  • Runtime monitoring of the decision faithfulness of ML components: Define distance metrics between operational data and the training data, balancing fast computation against memory consumption (a sketch follows).
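
The following Python sketch illustrates, for the first activity, what a component-based pipeline mixing a data-driven and a model-based component could look like behind a common interface. All component names, the interface, and the toy logic are illustrative assumptions, not the project's actual framework.

    # Illustrative component-based autonomy pipeline (all names hypothetical).
    from abc import ABC, abstractmethod

    class Component(ABC):
        """Common interface so components can be composed, swapped, and
        analysed (simulated or formally verified) independently."""
        @abstractmethod
        def step(self, inputs: dict) -> dict: ...

    class MLPerception(Component):
        """Data-driven component: stands in for a learned perception model."""
        def step(self, inputs: dict) -> dict:
            # A real implementation would run a neural network here.
            return {"obstacle_distance": inputs["raw_sensor"] * 0.95}

    class ModelBasedPlanner(Component):
        """Model-based component: a simple rule whose safety can be proven."""
        def step(self, inputs: dict) -> dict:
            speed = 10.0 if inputs["obstacle_distance"] > 5.0 else 0.0
            return {"target_speed": speed}

    class Pipeline(Component):
        """Sequential composition of components."""
        def __init__(self, *components: Component):
            self.components = components
        def step(self, inputs: dict) -> dict:
            for c in self.components:
                inputs = {**inputs, **c.step(inputs)}
            return inputs

    autonomy = Pipeline(MLPerception(), ModelBasedPlanner())
    print(autonomy.step({"raw_sensor": 4.0}))   # close obstacle -> stop
    print(autonomy.step({"raw_sensor": 20.0}))  # clear road -> drive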
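
For the systematic-testing activity, one coverage metric proposed in the ML-testing literature is neuron coverage: the fraction of units driven above an activation threshold by a test suite. The tiny random network and the threshold below are illustrative assumptions; a hierarchy of metrics would refine this basic idea for different criticality levels.

    # Illustrative neuron-coverage metric over a toy one-hidden-layer network.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))  # hypothetical weights: 4 inputs, 8 neurons

    def activations(x: np.ndarray) -> np.ndarray:
        """ReLU activations of the hidden layer."""
        return np.maximum(0.0, x @ W1)

    def neuron_coverage(test_suite: np.ndarray, threshold: float = 0.5) -> float:
        """Fraction of hidden neurons driven above `threshold` by at least
        one test input; a crude proxy for how much of the network's
        internal behaviour the suite exercises."""
        covered = np.zeros(W1.shape[1], dtype=bool)
        for x in test_suite:
            covered |= activations(x) > threshold
        return float(covered.mean())

    tests = rng.normal(size=(20, 4))
    print(f"neuron coverage: {neuron_coverage(tests):.2f}")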
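
For the statistical model checking technique listed under formal verification, the sketch below estimates the probability that a stochastic system satisfies a safety property by simulation, using the Chernoff-Hoeffding bound to fix the number of runs needed for a given precision and confidence. The random-walk "system" is a toy illustration, not one of the project's case studies.

    # Illustrative statistical model checking by Monte Carlo simulation.
    import math
    import random

    def simulate_run(steps: int = 50) -> bool:
        """One random walk; property: never leave the interval [-10, 10]."""
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, 1.0)
            if abs(x) > 10.0:
                return False
        return True

    def smc_estimate(epsilon: float = 0.05, delta: float = 0.01) -> float:
        """Estimate P(property) within +/- epsilon, with confidence 1 - delta,
        using the Chernoff-Hoeffding sample-size bound."""
        n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
        return sum(simulate_run() for _ in range(n)) / n

    print(f"estimated P(safe) ~= {smc_estimate():.3f}")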
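
Finally, the last activity can be illustrated by a minimal out-of-distribution monitor that scores an operational input by its distance to the nearest training point; the feature data and the alarm threshold below are illustrative assumptions. Balancing computation and memory, as the activity demands, would in practice mean replacing the exhaustive scan with an index structure (e.g., a k-d tree) or a compact abstraction of the training set.

    # Illustrative distance-based monitor for decision faithfulness.
    import numpy as np

    rng = np.random.default_rng(1)
    train_features = rng.normal(size=(1000, 8))  # stand-in training features

    def ood_score(x: np.ndarray) -> float:
        """Distance from x to its nearest training point; a large value
        suggests the ML component is operating outside its experience."""
        return float(np.min(np.linalg.norm(train_features - x, axis=1)))

    def monitor(x: np.ndarray, threshold: float = 3.0) -> bool:
        """Raise an alarm when operational data is far from training data."""
        return ood_score(x) > threshold

    in_dist = rng.normal(size=8)
    far_out = in_dist + 10.0
    print(monitor(in_dist), monitor(far_out))  # expected (w.h.p.): False True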