
Shared Resources

Design and implementation with guaranteed constraints on shared resources

In this group, we focus on the development of embedded systems with shared-resource constraints. Shared resources may be computing units or communication media, but also non-functional resources such as time or energy. The main activity is the design (analysis, development, and implementation) of embedded systems with guarantees on shared-resource constraints, including guarantees on safety and performance.

What kind of shared resources?

  • Communication medium (shared memory, network, bus)
  • Hardware resources (computation unit, core(s), pipeline, registers)
  • Time
  • Energy

What kind of guarantees?

  • Timing
  • Memory consumption
  • Energy consumption
  • Fault-tolerance (e.g., self-stabilization)
  • Deterministic executions, Safety
  • Hardware or software isolation

What kind of design activities?

  • Orchestration: operating systems, distributed algorithms
  • Code generation and compilation
  • Simulation
  • Testing, Safety validation

Research Topics

The SharedResources group studies the development of embedded systems with shared-resource constraints in the following areas:

Certified Micro-Architecture Aware Optimized Compilation

We design efficient instruction schedulers within the CompCert certified compiler. Our optimizations first invoke an untrusted oracle that finds a fast instruction schedule with respect to the latency constraints and the resource constraints (on computation units) of the (multiple-issue) processor pipeline. A certified checker then verifies that this schedule preserves the semantics (by certified symbolic execution and other static analyses). We apply such scheduling both before and after register allocation. We are currently targeting Kalray VLIW, ARM, and RISC-V cores.
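The oracle/checker split described above can be illustrated with a minimal sketch: an untrusted heuristic reorders instructions, and a small trusted checker independently verifies that every data dependence of the original program is preserved. Instruction tuples, the greedy heuristic, and the dependence test are all hypothetical simplifications, not CompCert's actual representations.

```python
# Minimal sketch of the "untrusted oracle + certified checker" pattern.
# Instructions are hypothetical tuples (dest, sources, latency).

def dependences(prog):
    """Set of (i, j) pairs: instruction j must stay after instruction i."""
    deps = set()
    for j, (dst_j, srcs_j, _) in enumerate(prog):
        for i in range(j):
            dst_i, srcs_i, _ = prog[i]
            if dst_i in srcs_j or dst_i == dst_j or dst_j in srcs_i:
                deps.add((i, j))  # RAW, WAW, or WAR dependence
    return deps

def oracle_schedule(prog):
    """Untrusted heuristic: greedy list scheduling, preferring long
    latencies, among instructions whose predecessors are emitted."""
    deps = dependences(prog)
    done, order = set(), []
    while len(order) < len(prog):
        ready = [k for k in range(len(prog)) if k not in done
                 and all(i in done for (i, j) in deps if j == k)]
        nxt = max(ready, key=lambda k: prog[k][2])
        order.append(nxt)
        done.add(nxt)
    return order

def checker(prog, order):
    """Trusted part: accept only a permutation of the program that
    preserves all dependences (a stand-in for symbolic execution)."""
    if sorted(order) != list(range(len(prog))):
        return False
    pos = {k: t for t, k in enumerate(order)}
    return all(pos[i] < pos[j] for (i, j) in dependences(prog))

prog = [("a", ("x",), 3),      # a = load x  (latency 3)
        ("b", ("y",), 3),      # b = load y  (latency 3)
        ("c", ("a", "b"), 1)]  # c = a + b, depends on both loads
order = oracle_schedule(prog)
assert checker(prog, order)    # a bad oracle output would be rejected
```

The design point is that only `checker` needs to be proved correct: a buggy oracle can at worst make the compiler reject a schedule, never emit wrong code.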

Distributed fault-tolerant algorithms

Distributed computing relates to systems made of autonomous communicating entities. The main differences between such distributed systems and centralized ones are the absence of common time, asynchrony, and the fact that each computing entity has only partial access to the global state of the system. Distributed systems make it possible to model a vast variety of networks, from LANs (Local Area Networks) to large-scale networks such as the Internet. Modern distributed systems can be large-scale, dynamic, and/or resource-constrained. These characteristics make them more vulnerable to faults, while the scale of these systems and the adversarial environments where they may be deployed limit the possibility of human intervention to repair them. Fault tolerance, i.e., the ability of a distributed algorithm to withstand faults, is therefore mandatory. In this context, our research focuses on self-stabilization, a versatile lightweight technique for withstanding transient faults in a distributed system.
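Self-stabilization can be illustrated with the classic example of Dijkstra's K-state token ring: starting from an arbitrary (possibly corrupted) configuration, the ring converges by itself to configurations with exactly one privileged machine, without any external repair. The simulation below is a toy sketch, not code from the group's work.

```python
# Simulation of Dijkstra's first self-stabilizing token-ring algorithm
# (K-state machines). From an arbitrary transient-fault state, the ring
# converges to having exactly one privilege (the "token").

import random

def privileged(x, K):
    """Indices of machines allowed to move in configuration x."""
    n = len(x)
    toks = [0] if x[0] == x[n - 1] else []
    toks += [i for i in range(1, n) if x[i] != x[i - 1]]
    return toks  # provably never empty in this algorithm

def step(x, K):
    """One move by a randomly chosen privileged machine; the random
    choice models an asynchronous, adversarial scheduler."""
    i = random.choice(privileged(x, K))
    if i == 0:
        x[0] = (x[0] + 1) % K   # machine 0 increments modulo K
    else:
        x[i] = x[i - 1]         # others copy their left neighbour

n, K = 5, 7                      # K > n guarantees convergence
random.seed(0)
x = [random.randrange(K) for _ in range(n)]  # arbitrary corrupted state
for _ in range(300):             # ample steps for a ring of 5 machines
    step(x, K)
assert len(privileged(x, K)) == 1  # stabilized: exactly one token
```

Legitimate configurations are closed under the algorithm's moves, so once a single token remains, it simply circulates forever: convergence plus closure is exactly the self-stabilization property.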

Executable High-Level Hardware Models for Software Development and Evaluation

We have been working with STMicroelectronics since 2002 on the notion of Transaction-Level Models (TLM) for hardware platforms. This is a level of detail which is: 1) sufficiently detailed to enable software development, long before the actual hardware is available; 2) sufficiently abstract and simple to allow for efficient simulation. TLM has been used for various purposes, ranging from functional validation to early energy-consumption estimation. The current topic is to use TLM to help programmers take full advantage of a hardware platform.
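The core TLM idea can be sketched in a few lines: components exchange whole read/write transactions as function calls, instead of simulating bus signals cycle by cycle, which is what makes simulation fast enough for software development. The component names and latency figures below are hypothetical, not taken from any SystemC or STMicroelectronics model.

```python
# Toy transaction-level model: a bus routes whole transactions to a
# memory target; each call returns a result and an estimated latency.

class Memory:
    def __init__(self, size):
        self.data = bytearray(size)
    def transport(self, cmd, addr, value=None):
        """One transaction; returns (result, latency_in_ns)."""
        if cmd == "read":
            return self.data[addr], 10
        self.data[addr] = value
        return None, 10

class Bus:
    """Routes a transaction to the target mapped at the address."""
    def __init__(self):
        self.targets = []                    # (base, size, component)
    def bind(self, base, size, comp):
        self.targets.append((base, size, comp))
    def transport(self, cmd, addr, value=None):
        for base, size, comp in self.targets:
            if base <= addr < base + size:
                result, lat = comp.transport(cmd, addr - base, value)
                return result, lat + 5       # + bus arbitration cost
        raise ValueError(f"no target at {addr:#x}")

# Software under development runs against the model, before silicon:
bus = Bus()
bus.bind(0x1000, 256, Memory(256))
elapsed = 0
_, lat = bus.transport("write", 0x1010, 0x2A); elapsed += lat
val, lat = bus.transport("read", 0x1010);      elapsed += lat
assert val == 0x2A and elapsed == 30           # two 15 ns transactions
```

Because timing is attached to whole transactions rather than individual wires, the same model structure supports both functional validation and coarse early estimates (e.g., of latency or energy), as described above.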

Timing analysis and implementation of critical applications on multi-core platforms

We focus on the implementation of time-critical applications. We work on the whole design workflow, from the design of time-critical applications to their timing analysis with guaranteed static bounds on execution times and delays. The main ongoing work focuses on:

  • Code generation and semantic analysis: code generation from Lustre programs; extraction of semantic properties at design level or intermediate-code level, and their expression for worst-case execution time estimation
  • Implementation on multi-core platforms with shared-resource interference: implementation of critical data-flow applications, generation of orchestration code to guarantee a semantics-preserving implementation, worst-case response time estimation including interference delays
  • Hardware models for worst-case execution time and interference delay estimation: static cache memory analysis, shared memory interference delays
  • Study of timing anomalies in the context of multi-core timing analysis
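The worst-case response-time estimation mentioned above can be sketched with the classic fixed-point response-time analysis for fixed-priority tasks, here extended with a constant per-job bound on shared-memory interference delay. The recurrence, the simple additive interference model, and the task parameters are illustrative assumptions, not the group's actual analysis.

```python
# Hedged sketch of worst-case response-time (WCRT) analysis with an
# interference term: R_i = C_i + I_i + sum over higher-priority tasks j
# of ceil(R_i / T_j) * (C_j + I_j), iterated to a fixed point.

import math

def wcrt(tasks, i):
    """tasks: list of (C, T, intf) = (WCET, period, interference bound),
    priority-ordered (index 0 = highest priority).
    Returns the WCRT of task i, or None if it exceeds its period."""
    C, T, intf = tasks[i]
    R = C + intf
    while True:
        nxt = C + intf + sum(math.ceil(R / Tj) * (Cj + Ij)
                             for Cj, Tj, Ij in tasks[:i])
        if nxt == R:
            return R if R <= T else None   # fixed point reached
        if nxt > T:
            return None                    # deadline (= period) missed
        R = nxt

tasks = [(1, 5, 1),    # highest priority: C=1, T=5, interference <= 1
         (2, 10, 1),
         (3, 20, 2)]   # lowest priority
assert wcrt(tasks, 2) == 19   # schedulable: 19 <= period 20
```

In a real multi-core analysis the interference term would itself depend on the co-runners' memory traffic and the hardware model (caches, shared memory), which is exactly what the items above address; the constant bound here only shows where such delays enter the recurrence.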

Online Efficient Scheduling for Multi-Core and Multi-Thread Systems


