SYRF Project
Task 4/5.1: "Integrating synchrony
& asynchrony: fundamentals"
Abstract of deliverable
4/5.1.2
Foreword:
Integrating synchrony & asynchrony is a major issue for the practical
use of synchronous technology, which is why a WP is devoted to
it. This WP and the present task summary are presented according to the
new organization of the former WPs 4 and 5. Accordingly, a task
called fundamentals was introduced in this reorganization, and we
report on its progress now.
Approach:
There are several issues that need to be considered in fundamental
studies related to synchrony & asynchrony, namely:
- How can a synchronous program be correctly distributed on a given,
possibly asynchronous, architecture? What theoretical results are
available? What assumptions are required of the supporting execution
architecture?
- To what extent does a property proved for the original synchronous program
remain valid for the same program, running on a distributed, possibly asynchronous,
architecture?
- What support can be provided (with theoretical justifications) for
the reuse of pre-existing components? How can encapsulation be guaranteed
correct?
These are the kinds of questions for which we want to provide theoretical
support. The difficulties we face are the following:
- In the synchronous programming paradigm, the following are available
as byproducts of the notion of a reaction:
1. a global notion of linear time, and
2. a global notion of state.
Unfortunately, neither of them is generally available in
distributed asynchronous architectures, unless specific protocols are implemented
to guarantee them. Except for ``synchronous'' architectures for
which these are reasonable requirements (e.g., hardware), it may be costly
to guarantee requirements 1 and 2 above.
- Storing components as compiled object code hides the scheduling of
internal actions, and thus can result in deadlocks at reuse,
unless a drastic programming discipline is enforced.
Achieved Results:
- In a synchronous program, a run is a sequence
of tuples of values (including the status ``absent''), one tuple component
per flow. Desynchronising a run amounts to (a) mapping this sequence
of tuples into a tuple of sequences, one sequence per flow,
and then (b) erasing the ``absence'' information in each sequence. Thus
desynchronised runs are asynchronous tuples of sequences, a program is thus
a set of such tuples, and program composition is again just intersection.
This model complies with the constraint that requirements 1 and 2, of global
time and state, may not be provided by an asynchronous distributed supporting
architecture. It does, however, require that the communication layer
not lose messages, and preserve message ordering for each flow,
separately.
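The two steps (a) and (b) above can be sketched as follows; this is an illustrative toy model (the `ABSENT` marker and the function name are our own, not project code):

```python
# Illustrative sketch: a synchronous run is a sequence of tuples,
# one component per flow, with ABSENT marking the ``absent'' status.
ABSENT = object()

def desynchronise(run, flows):
    """Desynchronise a run: (a) map the sequence of tuples into a
    tuple of sequences, one per flow; (b) erase the absence info."""
    # (a) transpose the run: one sequence per flow
    per_flow = {f: [t[i] for t in run] for i, f in enumerate(flows)}
    # (b) erase the ``absence'' information, in each flow separately
    return {f: [v for v in seq if v is not ABSENT]
            for f, seq in per_flow.items()}

run = [(1, ABSENT), (2, 10), (ABSENT, 20)]
print(desynchronise(run, ("x", "y")))  # {'x': [1, 2], 'y': [10, 20]}
```

Note that the reaction boundaries are lost in step (b): this is precisely the information an asynchronous architecture does not provide.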
- Endochrony has been introduced
as a new notion for synchronous programs. Endochrony means that the program
is able to reconstruct its timing from its ``interior'', without
using the environment as an oracle for it. An endochronous program is
robust to desynchronisation, in the sense that each synchronous run of
the original program can be uniquely reconstructed from its desynchronised
version. Endochrony is checkable on synchronous programs; of course, it
is not decidable in general, as the corresponding proof obligation involves
arbitrary data types, but approximations of it are model-checkable or even
easily guaranteed by certain properties of the clock structure of the synchronous
program.
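A minimal sketch of this unique reconstruction, for a hypothetical endochronous program of our own devising: a boolean flow `c` is present at every reaction, and a flow `x` is present exactly when `c` is true, so the timing can be rebuilt from the desynchronised flows alone:

```python
# Hypothetical endochronous program: flow c is present at every
# reaction; flow x is present exactly when c is True. No external
# oracle is needed: c itself drives the clock of x.
ABSENT = None

def resynchronise(cs, xs):
    """Uniquely rebuild the synchronous run (a list of (c, x) tuples)
    from the two desynchronised flows."""
    xs = iter(xs)
    run = []
    for c in cs:
        # c determines, from the program's ``interior'', whether
        # the next reaction carries a value of x or not
        run.append((c, next(xs) if c else ABSENT))
    return run

print(resynchronise([True, False, True], [10, 20]))
# [(True, 10), (False, None), (True, 20)]
```

Had `x`'s presence depended on an input invisible in the flows themselves, the reconstruction would not be unique, and the program would not be endochronous.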
- Isochrony has been introduced
as a new notion for pairs of synchronous programs. Roughly speaking, a
pair of programs is isochronous if it only needs to monitor present
messages in order to reconstruct the whole sequence of successive communication
reactions. The key result is that an isochronous pair of programs is robust
to the desynchronisation of its communication reactions. In practice, tightly
synchronised programs form an isochronous pair; but, on the other hand,
a pair of non-interacting programs running asynchronously in parallel
also forms an isochronous pair; and appropriate mixtures of these two situations
do as well.
- Endochrony and isochrony
together form the theoretical basis for (a) program desynchronisation,
and (b) property preservation under desynchronisation.
- A particular form of ``partial compilation'' has been provided,
in which the object code has the following form, suitable for subsequent
reuse:
- the code is structured into tasks; tasks are compiled into sequential
code and are executed in a read_inputs/compute/emit_outputs mode; tasks
can be freely suspended and resumed by the supporting real-time kernel.
- tasks are scheduled, and the scheduler is kept as source synchronous
code.
- the structuring into tasks is performed so as to allow maximal compilation
while avoiding the risk of subsequent deadlock at reuse, due to the unknown
future environment.
This architecture serves as a guideline for the practical design of
libraries of synchronous components.
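The task structure above can be sketched as follows; all names here are illustrative assumptions of ours (the actual object code is compiled sequential code, not Python), intended only to show the read_inputs/compute/emit_outputs mode and the scheduler kept as source code:

```python
# Sketch of the partial-compilation architecture (names assumed,
# not the actual object code).

def make_scaler(factor):
    """A compiled task: runs in read_inputs/compute/emit_outputs mode."""
    def task(inputs, outputs):
        x = inputs.pop(0)   # read_inputs
        y = factor * x      # compute (sequential compiled code)
        outputs.append(y)   # emit_outputs
    return task

def scheduler(tasks, inputs, outputs, steps):
    """The scheduler is kept as source code; here a fixed round-robin,
    standing in for the supporting real-time kernel's choices."""
    for i in range(steps):
        tasks[i % len(tasks)](inputs, outputs)

inp, out = [1, 2, 3, 4], []
scheduler([make_scaler(10), make_scaler(100)], inp, out, steps=4)
print(out)  # [10, 200, 30, 400]
```

Because each task is a self-contained read/compute/emit unit, the kernel may suspend and resume tasks freely between steps, and only the (source-level) scheduler fixes their interleaving.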
Documents: we have prepared a deliverable focussing on the results
regarding desynchronisation; it is essentially extracted from reference 2
below. This reference also includes additional material related to causality
analysis and reasoning on scheduling, with separate compilation as an objective.
Publications:
- A. Benveniste, P. Le Guernic, and P. Aubry.
Compositionality in dataflow synchronous languages: specification and code
generation.
Malente Workshop on Compositionality, W.P. de Roever and A. Pnueli,
Eds., September 1997; to appear in LNCS, late 1998 or early 1999.
- A. Benveniste, P. Le Guernic, B. Caillaud, and P. Aubry.
Compositionality in dataflow synchronous languages:
specification and code generation
(extended version of the former, with a detailed study of desynchronisation).
Submitted to I&C.
- A. Benveniste.
Safety Critical Embedded Systems Design: the SACRES approach.
3-hour course on the whole SACRES project, given at the FTRTFT'98 school;
manuscript available.