The very first Lustre compiler takes a parallel program and produces single-loop sequential code, intended to run on a bare machine. The code reflects the static scheduling decisions taken at compile time. The resulting code is both efficient and safe, because the sources of non-determinism are reduced to a minimum. However, some classes of embedded systems cannot be implemented efficiently with such a compilation scheme. The group now studies distributed and/or multi-threaded implementations of synchronous models.
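The shape of such single-loop sequential code can be sketched as follows. This is a hypothetical hand-written illustration (a compiled node computing a running sum), not the output of an actual Lustre compiler; all names are ours:

```c
/* Hypothetical result of compiling a small Lustre node (y = running sum
 * of its inputs): all memories ("pre" values) live in one struct, and a
 * single step function computes one whole synchronous reaction, with the
 * equation order fixed at compile time. */
typedef struct {
    int sum;            /* memory: sum of the inputs seen so far */
} node_state;

void node_init(node_state *s) { s->sum = 0; }

int node_step(node_state *s, int x) {
    s->sum += x;        /* statically scheduled equation */
    return s->sum;
}

/* On a bare machine, the whole program is then one sequential loop:
 * read inputs, call node_step once per instant, write outputs. */
```

The schedule being fixed at compile time is what removes run-time non-determinism: every reaction executes the same equations in the same order.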
The work on these distributed or multi-threaded implementations of synchronous languages has been performed in the context of the RISE project and several academic collaborations. In particular, P. Caspi is recognized as an expert on embedded-system development in the avionics and ground-transportation domains.
Multi-threaded implementations
Multi-threaded implementations are needed for systems whose subcomponents should run at various speeds; this is also the case for systems that have to deal with urgent events. In these cases, it is necessary to define an implementation on top of an operating-system kernel. The approach of the group is to carefully select the OS kernel functions that are indispensable (threads, priorities, scheduling algorithm), and to define implementations for which determinism can be guaranteed to some extent. The idea is to be able to compare the behavior of the implementation with the semantics of the original (centralized) model (for instance, a Lustre program). The various results have been applied in a complete correct-by-construction chain from Matlab/Simulink to embedded code, based on several elements:
- a front-end that translates Simulink into Lustre;
- several back-ends that implement the Lustre model on several types of execution platforms.
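To give an idea of the determinism requirement on such back-ends, the following deliberately simplified sketch (periods and values are hypothetical, ours) simulates a fast task and a slow task communicating through a unit delay at the slow rate; the slow task's outputs depend only on period boundaries, never on the interleaving of the two tasks inside a period:

```c
#define SLOW_PERIOD 3   /* hypothetical: slow task runs every 3 ticks */

/* Fast task: runs at every tick; here it simply outputs the tick count. */
int fast_step(int tick) { return tick; }

/* Communication buffer: the slow task reads the value latched at the
 * previous slow-period boundary (a unit delay at the slow rate), so its
 * output is independent of the interleaving inside a period. */
typedef struct { int delayed; } channel;

void channel_init(channel *c) { c->delayed = 0; }

int slow_step(channel *c, int fast_value) {
    int v = c->delayed;        /* consume the delayed value */
    c->delayed = fast_value;   /* latch the fresh one for the next period */
    return v;
}

/* Simulate n_ticks of the two-task system; collect the slow outputs. */
int simulate(int n_ticks, int *slow_out, int max_out) {
    channel c;
    int fast_last = 0, n = 0;
    channel_init(&c);
    for (int t = 0; t < n_ticks; t++) {
        fast_last = fast_step(t);                 /* fast task, every tick */
        if (t % SLOW_PERIOD == 0 && n < max_out)  /* slow-task boundary */
            slow_out[n++] = slow_step(&c, fast_last);
    }
    return n;
}
```

Comparing such a multi-rate execution against the centralized synchronous semantics is exactly the kind of check the chain is meant to support.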
The main results are the following:
- A memory-optimal buffering protocol for preservation of synchronous semantics under preemptive scheduling [SYN46]
- A semantics-preserving multitask implementation of synchronous programs [SYN47]
- A semantics-preserving and memory-efficient implementation of inter-task communication on static-priority or EDF schedulers [SYN48]
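The common idea behind such buffering protocols can be illustrated with a deliberately simplified, single-core two-slot buffer: a low-priority writer prepares a value in the non-current slot and publishes it with a single index flip (the atomic commit point), so a preempting high-priority reader always observes the last complete value, never a torn one. This sketch is ours and is not the exact protocol of [SYN46]–[SYN48]:

```c
/* Two-slot buffer. The writer fills the non-current slot, possibly being
 * preempted mid-update, then publishes with one store to `current`; a
 * reader therefore always sees the last fully written value. */
typedef struct {
    int slot[2];
    int current;       /* index of the last published slot */
} dbuf;

void dbuf_init(dbuf *b, int init) {
    b->slot[0] = init;
    b->slot[1] = init;
    b->current = 0;
}

void dbuf_write_begin(dbuf *b, int value) {
    b->slot[1 - b->current] = value;   /* writer may be preempted here */
}

void dbuf_write_commit(dbuf *b) {
    b->current = 1 - b->current;       /* single-store commit point */
}

int dbuf_read(const dbuf *b) {
    return b->slot[b->current];
}
```

The memory-optimality results cited above are precisely about how few such slots are needed while still preserving the synchronous semantics.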
In the PhD thesis of M. Alras, the complete chain from Simulink to code has been generalized, introducing Lustre++ as an intermediate formalism: an extension of Lustre that makes it possible to specify constraints coming from the peculiarities of the execution platform (OS + HW). Experiments are conducted on the Lego Mindstorm platform.
Quasi-synchronous implementations of synchronous programs
It might be necessary to distribute the code of a program on a parallel architecture, in particular for fault-tolerance purposes.
The approach is the same: going from a high-level formal model to an implementation while checking that the semantics is respected.
This is an important subject because many safety-critical systems — Airbus flight control systems for instance
— are specified synchronously (SCADE, Simulink) and implemented quasi-synchronously: the activation clocks
of subsystems implemented on distinct computers are slightly different from what their specification requires.
In a PhD thesis sponsored by Airbus, we have studied this problem and addressed it in terms of approximation topologies: the implementation should not be too far from the specification and should not diverge from it [1].
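A minimal sketch of the underlying phenomenon (the periods below are hypothetical, not Airbus data): when a reader's activation clock drifts slightly with respect to the writer's, the reader occasionally samples the same value twice (or, symmetrically, skips one), which is exactly the deviation from the synchronous specification that has to be bounded:

```c
/* Quasi-synchronous sampling: writer and reader have the same nominal
 * period but a small relative drift. The reader samples the writer's
 * latest value; with a slightly fast reader, some value is read twice
 * (a slightly slow reader would symmetrically skip one). */
int quasi_sync_sample(double w_period, double r_period,
                      int *reads, int n_reads) {
    int duplicates = 0;
    for (int i = 0; i < n_reads; i++) {
        double t = i * r_period;           /* i-th reader activation */
        reads[i] = (int)(t / w_period);    /* index of the latest write */
        if (i > 0 && reads[i] == reads[i - 1])
            duplicates++;                  /* same value sampled twice */
    }
    return duplicates;
}
```

Bounding how often such duplications and losses occur, as a function of the clock drift, is what makes it possible to compare implementation and specification formally.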
We have continued this activity with several co-authors, searching for logical protocols that ensure functional equivalence between specifications and implementations, in two directions:
- protocols based on the temporal properties of quasi-synchronous clocks [4].