The functional testing of reactive systems raises specific problems. Lurette adopts the usual point of view of synchronous programming: the behavior of such a system is a sequence of atomic reactions, which can be time-triggered, event-triggered, or both. Each reaction consists of reading inputs, computing outputs, and updating the internal state of the system. A tester therefore has to provide test sequences, i.e., sequences of input vectors. Moreover, since a reactive system is generally designed to control its environment, the input vector at a given reaction may depend on the previous outputs. Input sequences therefore cannot be produced off-line; their elaboration must be intertwined with the execution of the system under test (SUT). Finally, to decide whether a given test succeeds or fails, the sequence of (input-vector, output-vector) pairs can be fed to an observer that acts as an oracle at each reaction.
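This view of a reaction, and of the oracle as an observer of (input-vector, output-vector) pairs, can be sketched as follows. This is a minimal illustration in Python, not Lurette's API: the step function, the counter system, and the oracle property are all invented for the example.

```python
def step(state, inputs):
    """One atomic reaction: read inputs, compute outputs, update state.
    Here the (toy) system accumulates an increment into a counter."""
    count = state + inputs["incr"]
    outputs = {"count": count}
    return count, outputs

def oracle(inputs, outputs):
    """Observer acting as an oracle: checks one (input-vector,
    output-vector) pair; the property here is purely illustrative."""
    return outputs["count"] >= 0

# A test sequence is a sequence of input vectors; the oracle is
# evaluated at each reaction.
state = 0
trace = []
for incr in [1, 2, 3]:
    inputs = {"incr": incr}
    state, outputs = step(state, inputs)
    assert oracle(inputs, outputs), "oracle violated"
    trace.append((inputs, outputs))
```

The key point is that `step` is stateful across reactions while each oracle check is local to one (inputs, outputs) pair.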
In the past, we proposed languages and tools to automate this testing process. The system under test is a black box; it can be any executable code able to perform a reaction on demand: read inputs, do a step, provide outputs. The environment is modeled by dynamically changing constraints on the inputs, described in Lucky.
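The idea of an environment as dynamically changing constraints on inputs can be sketched as follows; this is a hypothetical Python stand-in for a Lucky/Lutin-style environment, where the example constraint (keep the commanded value within ±5 of the previous SUT output) is invented for illustration.

```python
import random

def env_step(prev_output, rng):
    """One environment reaction: draw an input vector satisfying a
    constraint that depends on the previous SUT output. Here the
    (hypothetical) constraint is: command within +/-5 of the last
    measured value."""
    lo, hi = prev_output - 5.0, prev_output + 5.0
    return rng.uniform(lo, hi)

rng = random.Random(42)
measured = 0.0
for _ in range(10):
    cmd = env_step(measured, rng)
    # The drawn input always satisfies the current constraint.
    assert measured - 5.0 <= cmd <= measured + 5.0
    measured = cmd  # pretend the SUT tracks the command exactly
```

Because the constraint is re-evaluated at each reaction against the previous outputs, the admissible input set changes dynamically as the test runs, which is why such sequences cannot be generated off-line.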
We now use a higher-level stochastic language named Lutin. The oracle is provided as an observer written in Lustre. The tool Lurette is then able to run automatically any number of arbitrarily long test sequences. Each step consists of (1) executing one (stochastic) reaction of the environment, which provides inputs to the SUT; (2) executing one reaction of the SUT with the chosen inputs; (3) executing one reaction of the oracle observer with the SUT inputs and outputs, and stopping the test if the checked property is violated; and (4) looping back to (1), using the SUT outputs as environment inputs.
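The four-step loop above can be sketched as a small runnable program. The environment, SUT, and oracle below are toy stand-ins (in Lurette the environment would be a Lutin program and the oracle a Lustre observer); all names and the tracking property are assumptions made for the example.

```python
import random

def env_react(sut_outputs, rng):
    # (1) Stochastic environment reaction: choose a setpoint near the
    # current SUT output (an invented constraint, for illustration).
    return {"setpoint": sut_outputs["level"] + rng.uniform(-1.0, 1.0)}

def sut_react(state, inputs):
    # (2) SUT reaction (black box from Lurette's point of view):
    # here, a first-order tracker of the setpoint.
    level = state + 0.5 * (inputs["setpoint"] - state)
    return level, {"level": level}

def oracle_react(inputs, outputs):
    # (3) Oracle observer over the SUT inputs and outputs:
    # the level must stay within 10.0 of the setpoint.
    return abs(outputs["level"] - inputs["setpoint"]) <= 10.0

def run_test(n_steps, seed):
    rng = random.Random(seed)
    state, sut_outputs = 0.0, {"level": 0.0}
    for _ in range(n_steps):
        inputs = env_react(sut_outputs, rng)            # (1)
        state, sut_outputs = sut_react(state, inputs)   # (2)
        if not oracle_react(inputs, sut_outputs):       # (3) stop on violation
            return False
    return True  # (4) each iteration feeds SUT outputs back to the environment

ok = run_test(100, seed=0)
```

Note how step (4) is realized by carrying `sut_outputs` around the loop: the environment's next reaction is constrained by the SUT's previous outputs.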
Testing with Lurette is an iterative process. The oracles and the environments of the SUT are extracted from heterogeneous specifications. When an oracle is invalidated, the cause may be a design error, a coding error, or a wrong or imprecise specification. Once the system runs without invalidating any oracle, the tester needs to refine the test scenarios in order to improve the coverage rate.