Virtual Prototyping

The Synchrone Group has been studying various kinds of models of embedded systems for more than 20 years. A model serves several purposes: it can be further refined into an implementation, used to validate the functional properties of a system specification, or used to evaluate the performance of a system before it is actually built. We call virtual prototyping all the approaches in which an executable, system-wide model of a computer system is built and run for various observations. In recent years, the group has applied this approach to the various levels of abstraction used to model embedded control systems (see Embedded Software Implementation), systems-on-a-chip (see VERIMAG + STMicroelectronics common projects [2002...[) and sensor networks. More generally, working on abstract models is key to understanding complex computer systems, and helps find correct-by-construction implementation methods.

We also work on several analysis methods, for functional or non-functional properties. In the latter case, we look for a compromise between fully detailed computational models (which are difficult to analyze) and very abstract analytical models (whose faithfulness can be seriously questioned). Finally, abstract models also serve for developing algorithms; since S. Devismes joined the group in September 2008, some work has been devoted to distributed-systems problems.

A system-wide model may include:

  • A simplified model of the physical environment of the computer system (useful for embedded control systems)
  • A model of the execution platform (the hardware, the OS or middleware)
  • A model of the embedded software (which may be extracted automatically from programs written in languages like C, or obtained by compiling a language like Lustre).

The rest of this text describes several approaches for the definition of formal models. Of course this is not independent of the actual uses of models for the implementation and validation of embedded systems. See also Embedded Software Implementation for more details.

 Wireless Sensor Networks

One of the main challenges for Wireless Sensor Networks (WSNs) is to perform energy-aware design. The problem is difficult because all the elements of a sensor network have an influence on energy consumption: the hardware of a node, the sensors, the medium-access-control and routing protocols, the initial self-organization phase, the application itself, the quality of the radio channel, and even the physical environment that stimulates the sensors. The design of an energy-optimal solution is out of reach because of all the interacting criteria. Since a sensor network includes dedicated hardware, it may be long and costly to build a complete solution before evaluating it. The usual approach is to build a virtual prototype of a sensor network, and then to perform simulations or mathematical analyses in order to evaluate the energy consumption.

One of our approaches to the virtual prototyping of sensor networks is to use a synchronous language to build a global and executable model of a sensor network, including all the elements that influence energy consumption: the details of the hardware of a node, the code of the protocols and the application, an executable model of the physical environment that stimulates the sensors, and also a model of the physical medium in which the radio communication occurs. Synchronous languages are appropriate for building such a detailed model, especially when it comes to describing the detailed energy consumption of the hardware and its relationship with time.

This work started in 2004 with FT R&D and the DCS team of Verimag, in the context of the CIFRE PhD of Ludovic Samper. It continued with the ANR project ARESA, and now ARESA2.

One of the main benefits of executability is that it allows comparing the model with an implementation, and does not restrict a priori the amount of detail that can be put in the model. This helps build confidence in the faithfulness of the model. For instance, our models mimic the behavior of the hardware parts with respect to energy consumption: the energy cost of a message sent from one node and reaching the sink is computed according to the detailed energy modes of the radio components, driven by the detailed protocols, and connected to the application code. Conversely, in more abstract models of sensor networks, a fixed energy cost is associated with the transmission of one bit of information. This means ignoring the consumption of all devices other than the radio, as well as the periods of overhearing, i.e., when a node is in receive mode while it could be in the less-consuming sleep mode. The abstract model is easier to analyze; the model that mimics the behavior of the hardware is more likely to be faithful.
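The contrast between the two modeling styles can be sketched in a few lines of code. The sketch below is purely illustrative (the names, current draws, and per-bit cost are invented for the example, not taken from the group's actual models): a state-based model integrates power over the time spent in each radio mode, and therefore accounts for overhearing, while a per-bit model charges only for transmitted bits.

```python
from enum import Enum

class RadioMode(Enum):
    # Illustrative current draws in mA; real values would come from a radio datasheet.
    SLEEP = 0.001
    RECEIVE = 19.7
    TRANSMIT = 17.4

class RadioEnergyModel:
    """State-based model: energy is integrated over the time spent in each mode."""
    def __init__(self, voltage=3.0):
        self.voltage = voltage      # supply voltage (V)
        self.energy_mj = 0.0        # accumulated energy (mJ, since mA * V * s = mJ)
        self.mode = RadioMode.SLEEP
        self.since = 0.0            # time of the last mode change (s)

    def set_mode(self, mode, now):
        # Charge the interval spent in the previous mode, then switch.
        self.energy_mj += self.mode.value * self.voltage * (now - self.since)
        self.mode, self.since = mode, now

def per_bit_model(bits, cost_per_bit_mj=0.0005):
    """Abstract model: a fixed energy cost per transmitted bit, nothing else."""
    return bits * cost_per_bit_mj

# A node that overhears for 2 s, transmits a short frame, then sleeps:
radio = RadioEnergyModel()
radio.set_mode(RadioMode.RECEIVE, now=0.0)    # overhearing starts
radio.set_mode(RadioMode.TRANSMIT, now=2.0)   # send a frame
radio.set_mode(RadioMode.SLEEP, now=2.01)     # back to sleep
radio.set_mode(RadioMode.SLEEP, now=10.0)     # flush the remaining sleep interval
```

Here the state-based total is dominated by the 2 s of overhearing, which the per-bit model (charging only the frame's bits) misses entirely.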

The main results of this activity are:

  • Results on the faithfulness of such computational models, obtained by comparing the performance measured on an already deployed sensor network with the predictions of our model;
  • Results on the discrepancies between our detailed computational models and the more abstract models used in the networking community, in particular abstract models that do not take the physical environment into account;
  • The convergence between the energy models developed in the project and the non-functional models studied for TLM design of SoCs, which allowed the definition of a joint project with STM, DOCEA Power, and other academic partners (HELP, accepted June 2009).

 Space Systems

Within the European integrated project ASSERT, we proposed a technique to help the simulation and early validation of complex systems. Starting from software components described in Scade or Lustre, and from a hardware and software architecture described in the architecture description language AADL, we developed an automatic translator from the AADL model into an executable Lustre model, which activates the software components and simulates in a realistic way the various possible executions of these components on the architecture. Such a model can be used for simulation, but also for verifying properties. The method has been applied to a case study proposed by EADS-Astrium: the collision-avoidance mechanism of the ATV (Automated Transfer Vehicle), a spacecraft supplying the International Space Station (ISS).


 Virtual Prototyping for Systems-on-a-Chip

The Register Transfer Level (RTL) used to be the entry point of the design flow of hardware systems, including systems-on-a-chip (SoCs). However, the simulation environments for such models do not scale up well, and developing and debugging embedded software on these low-level models before getting the physical chip from the factory is no longer possible at a reasonable cost. New abstraction levels, such as Transaction-Level Modeling (TLM), have emerged. TLM uses a component-based approach, in which hardware blocks are modules communicating with so-called transactions. TLM models are used for early development of the embedded software, because the high level of abstraction allows fast simulation. This new abstraction level requires that SoCs be described in a non-deterministic, asynchronous way, with new synchronization mechanisms quite different from the implicit synchronization of synchronous circuit descriptions. SystemC is a C++ library used for the description of SoCs at different levels of abstraction, from cycle-accurate to purely functional models. It comes with a simulation environment, and has become a de facto standard.
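The key idea of transaction-level modeling can be illustrated without SystemC itself (which is a C++ library): a whole bus access becomes a single method call instead of many clocked signal changes. The Python sketch below is a deliberately simplified analogue; the class names and the `b_transport` method are modeled loosely on TLM's blocking-transport style, and everything here is an illustrative assumption, not the actual SystemC API.

```python
# Simplified, hypothetical TLM-style model: components exchange whole
# transactions instead of driving clocked signals cycle by cycle.

class Memory:
    """Target component: serves read/write transactions, no per-cycle detail."""
    def __init__(self, size):
        self.data = bytearray(size)

    def b_transport(self, cmd, addr, payload=None):
        # One call = one complete bus transaction (loosely-timed style).
        if cmd == "write":
            self.data[addr:addr + len(payload)] = payload
            return None
        return bytes(self.data[addr:addr + 4])   # read a 32-bit word

class Cpu:
    """Initiator component: the embedded software issues transactions."""
    def __init__(self, bus):
        self.bus = bus

    def run(self):
        self.bus.b_transport("write", 0x10, b"\x2a\x00\x00\x00")
        return self.bus.b_transport("read", 0x10)

mem = Memory(256)
word = Cpu(mem).run()   # the CPU reads back the word it just wrote
```

Because each transaction is a single function call, such a model simulates orders of magnitude faster than an RTL description of the same access, which is what makes early software development on the virtual platform practical.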

See also VERIMAG + STMicroelectronics common projects [2002...[.

 Computational vs Analytical Models

This research direction aims at bridging the gap between computational (or executable) models and analytical models. The work was initiated by discussions with Lothar Thiele's group at ETH Zürich, in the context of the COMBEST European project.

If we draw a two-axis picture with analyzability on one axis and faithfulness on the other, the interesting and feasible models lie on a diagonal: unanalyzable and imprecise models are useless, whereas very detailed yet easily analyzable models are hopeless. On this diagonal, an important point is the boundary between models that have states and models that do not. Detailed computational models (like the ones we use for virtual prototyping) clearly have states. Abstract analytical models like the Real-Time Calculus, which were first defined for performance analysis of networks, do not.

In many of the embedded systems we would like to model and analyze, the notion of state is important, because it makes it possible to model the various modes of the system. For instance, a multimedia system-on-a-chip relies on a component called the power manager, which can turn other components on and off. A precise estimation of the energy consumption has to take this behavior into account; otherwise it is too abstract. One of the main reasons to try and bridge the gap between analytical and computational models is the hope of finding models in which such mode behaviors can be expressed, while retaining at least part of the efficient analysis of purely analytical models. We experimented with connecting the Real-Time Calculus to two classes of computational models: discrete-time models, using the programming language Lustre for specifying the modules, and continuous-time models, using Uppaal timed automata for this task. In the long term, this approach should lead to a generalization of the MPA (Modular Performance Analysis) framework, in which both module specifications and stream specifications could be written in different formalisms.

 42: A Model of Components

The notion of a component for embedded systems has been discussed for some years now, and there are many proposals. Reusing parts of a previous system requires that these parts be properly defined as components, equipped with some form of specification (informal or formal). The specification groups all the information needed for using the component, without knowing in detail how it is built. This includes both functional and non-functional aspects, like timing performance or energy consumption. There seems to be wide agreement that the main difficulty is due to the intrinsic heterogeneity of embedded systems, which has numerous causes (components may be hardware, software, or OS; synchronous or asynchronous; deterministic or not; detailed w.r.t. time or not; detailed w.r.t. data or not; etc.). Many approaches, following Ptolemy, propose to define several Models of Computation and Communication (MoCCs) to deal with heterogeneity, together with a framework in which they can be combined hierarchically.

Our approach is largely inspired by Ptolemy, and by our previous expertise in component-based modeling and programming of embedded systems. Our component model is called 42. We aim at expressing fine-grain timing aspects and several types of concurrency as MoCCs, but we require that all the MoCCs be programmed in terms of more basic primitives. Our point of view is that, if we need to model all possible types of structured interactions, we should not rely on a catalogue of predefined interactions, but should be able to program new interactions in the model. In 42, MoCCs are implemented by controllers, which are small programs expressing several types of concurrency and structured interactions in terms of basic primitives.

The second important point is that such a liberal framework for the modeling of heterogeneous systems requires some safeguards: we also defined rich control contracts (as opposed to "data contracts") for the components in 42. 42 is meant to be an abstract description level, to help design and specify components, not yet another parallel programming language.
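The "program the MoCC, don't pick it from a catalogue" idea can be sketched as follows. This toy code is an interpretation of the principle only (the class names, the single `step` primitive, and the wiring scheme are invented for the example; 42's actual primitives and contracts are richer): components expose a basic activation primitive, and a controller is an ordinary program that decides activation order and data transfer, here implementing a simple synchronous data-flow MoCC.

```python
class Component:
    """A component seen through one basic primitive: `step`
    (consume current inputs, produce outputs)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.inputs, self.outputs = {}, {}

    def step(self):
        self.outputs = self.fn(self.inputs)

def synchronous_controller(components, wires, n_steps):
    """A MoCC written as a small program over the primitives: at each global
    step, activate every component once, then propagate values along wires.
    A different MoCC (asynchronous, priority-driven, ...) would simply be
    a different controller program over the same primitives."""
    trace = []
    for _ in range(n_steps):
        for c in components:
            c.step()
        for src, port, dst, dport in wires:
            dst.inputs[dport] = src.outputs.get(port)
        trace.append({c.name: dict(c.outputs) for c in components})
    return trace

# Two toy components: a counter feeding itself, and a doubler reading it.
counter = Component("counter", lambda ins: {"n": ins.get("n", -1) + 1})
doubler = Component("doubler", lambda ins: {"d": 2 * (ins.get("n") or 0)})
wires = [(counter, "n", counter, "n"),   # counter's own feedback loop
         (counter, "n", doubler, "n")]
trace = synchronous_controller([counter, doubler], wires, 3)
```

The point of the sketch is that the synchronization policy lives in `synchronous_controller`, a user-written program, rather than in a fixed library of interaction patterns.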


 Distributed Algorithms

Since S. Devismes joined the group in September 2008, he has mainly worked on three topics related to distributed computing: crash-tolerant protocols, self-stabilization, and multi-agent protocols. For crash-tolerant protocols, he showed some necessary and sufficient conditions for designing crash-tolerant protocols in distributed systems whose processes have small memories.

Concerning self-stabilization, four problems were addressed:

  • snap-stabilizing protocols (snap-stabilization is a specialization of self-stabilization)
  • self-stabilization in unidirectional networks
  • self-stabilizing resource allocation protocols
  • reducing the communication cost of silent self-stabilizing tasks.
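As background for readers unfamiliar with self-stabilization, the classic example of the concept (not one of the group's own protocols) is Dijkstra's K-state token ring: started from an arbitrary, possibly corrupted configuration, the ring converges to configurations with exactly one privileged machine, and then stays there. A minimal simulation, assuming K >= n and a central daemon that activates one privileged machine at a time:

```python
def privileged_set(states, K):
    """Machine 0 is privileged when it equals its left neighbor (the last
    machine); machine i > 0 is privileged when it differs from machine i-1."""
    n = len(states)
    return [i for i in range(n)
            if (i == 0 and states[0] == states[n - 1])
            or (i > 0 and states[i] != states[i - 1])]

def fire(states, i, K):
    """Execute the privileged move of machine i (pass the token along)."""
    if i == 0:
        states[0] = (states[-1] + 1) % K
    else:
        states[i] = states[i - 1]

def run(states, K, steps):
    """Central daemon: at each step, record how many machines are privileged,
    then activate one of them (here, the lowest-indexed one)."""
    history = []
    for _ in range(steps):
        privs = privileged_set(states, K)
        history.append(len(privs))
        fire(states, privs[0], K)
    return history

states = [3, 1, 4, 1, 5]              # arbitrary (corrupted) initial state
history = run(states, K=5, steps=40)  # converges to one privilege, then stays
```

Self-stabilization is exactly this closure-and-convergence property: transient faults may scatter several "tokens" through the ring, but without any external intervention the system returns to legitimate configurations in finitely many steps.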

Concerning multi-agent problems, two points were addressed:

  • Rendezvous of oblivious agents in an arbitrary anonymous network;
  • Exploration of an anonymous ring by a team of anonymous, oblivious robots.

These competencies in distributed systems are an important addition to the group's expertise, enabling it to attack problems in networks of embedded systems.
