Shared Resources

Guaranteed design and implementation under shared-resource constraints

In this theme, we study the development of systems constrained by shared resources. These resources may be structural, such as computing units or communication media, but they may also be time or energy. Designing (analyzing, developing, implementing) systems constrained by such resources while guaranteeing good functional and performance properties is at the heart of our theme's activities.

Which shared resources?

  • communication media (shared memory, network, bus)
  • hardware resources (computing units, core(s), pipeline, registers, ...)
  • time
  • energy

Which guarantees?

  • timing
  • memory consumption
  • energy consumption
  • fault tolerance, stabilization
  • execution determinism, safety
  • hardware or software isolation

Which kinds of design/implementation?

  • orchestration: system building blocks, distributed algorithms
  • code generation and compilation
  • simulation
  • verification and testing

Research Topics

The SharedResources group studies the development of embedded systems with shared-resource constraints in the following areas:

Certified Micro-Architecture Aware Optimized Compilation

We design efficient instruction schedulers in the CompCert certified compiler. Our optimizations first invoke an untrusted oracle that finds a fast instruction schedule with respect to the latency constraints and the resource constraints (on computation units) of the processor's (multiple-issue) pipeline. A certified checker then verifies that this schedule preserves the semantics (by certified symbolic execution and other static analyses). We apply such scheduling both before and after register allocation. We are currently targeting Kalray VLIW, ARM, and RISC-V cores.
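To make the oracle-plus-checker pattern concrete, here is a minimal C++ sketch (an illustration only, not CompCert's actual Coq development; all names are hypothetical). The untrusted oracle may reorder a basic block however it likes; the trusted checker accepts the result only if it is a permutation that preserves every read-after-write, write-after-read, and write-after-write dependence, a deliberately simplified stand-in for certified symbolic execution.

    #include <cstdio>
    #include <set>
    #include <vector>

    // Hypothetical summary of one instruction: registers it reads and writes.
    struct Instr { int id; std::set<int> reads, writes; };

    // Instructions a (earlier) and b (later) conflict if reordering them
    // could change the semantics: RAW, WAW, or WAR on some register.
    static bool conflict(const Instr& a, const Instr& b) {
        for (int r : b.reads)  if (a.writes.count(r)) return true;  // RAW
        for (int w : b.writes) if (a.writes.count(w)) return true;  // WAW
        for (int w : b.writes) if (a.reads.count(w))  return true;  // WAR
        return false;
    }

    // Trusted checker: accept the oracle's schedule only if it is a
    // permutation of the block that keeps every conflicting pair in order.
    static bool check(const std::vector<Instr>& block,
                      const std::vector<int>& sched) {
        if (sched.size() != block.size()) return false;
        std::vector<int> pos(block.size(), -1);
        for (size_t s = 0; s < sched.size(); ++s) {
            if (sched[s] < 0 || sched[s] >= (int)block.size()
                || pos[sched[s]] != -1)
                return false;                       // not a permutation
            pos[sched[s]] = (int)s;
        }
        for (size_t i = 0; i < block.size(); ++i)
            for (size_t j = i + 1; j < block.size(); ++j)
                if (conflict(block[i], block[j]) && pos[i] > pos[j])
                    return false;                   // dependence violated
        return true;
    }

    int main() {
        // r2 := r1 ; r3 := r1 ; r4 := r2 + r3   (ids 0, 1, 2)
        std::vector<Instr> block = {
            {0, {1}, {2}}, {1, {1}, {3}}, {2, {2, 3}, {4}} };
        std::vector<int> sched = {1, 0, 2};  // oracle swapped independent ops
        std::printf("schedule %s\n", check(block, sched) ? "accepted"
                                                         : "rejected");
    }

In CompCert the checker, not the oracle, carries the correctness proof: the oracle can be arbitrarily clever (or buggy) because any wrong schedule is simply rejected.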

Distributed fault-tolerant algorithms

Distributed computing relates to systems made of autonomous communicating entities. The main differences between such distributed systems and centralized ones are the absence of a common time, asynchronism, and each computing entity's only partial access to the global state of the system. Distributed systems model a vast variety of networks, from LANs (Local Area Networks) to large-scale networks such as the Internet. Modern distributed systems can be large-scale, dynamic, and/or resource-constrained. These characteristics make them more vulnerable to faults. However, the scale of these systems, as well as the adversarial environments in which they may be deployed, limits the possibility of human intervention to repair them. Fault tolerance, i.e., the ability of a distributed algorithm to withstand faults, is therefore mandatory. In this context, our research focuses on self-stabilization, a versatile lightweight technique for withstanding transient faults in a distributed system.
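To illustrate the concept, here is a small C++ simulation of Dijkstra's K-state token ring, the seminal self-stabilizing algorithm (an illustration of self-stabilization in general, not one of the group's specific algorithms). Started from an arbitrary, possibly corrupted configuration, the ring converges to a legitimate configuration with exactly one privilege (token) circulating, and stays there.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Dijkstra's K-state token ring: n processes, states in 0..K-1, K > n.
    // From ANY initial state the ring converges to exactly one "privilege"
    // (token) -- the defining property of self-stabilization.
    int main() {
        const int n = 5, K = 6;            // K > n guarantees convergence
        std::mt19937 rng(42);
        std::vector<int> x(n);
        for (int& v : x) v = rng() % K;    // arbitrary (corrupted) state

        for (int step = 0; step < 40; ++step) {
            // Collect the privileged processes in this configuration.
            std::vector<int> priv;
            if (x[0] == x[n - 1]) priv.push_back(0);       // root rule
            for (int i = 1; i < n; ++i)
                if (x[i] != x[i - 1]) priv.push_back(i);   // other rule
            std::printf("step %2d: %zu token(s)\n", step, priv.size());
            // Central daemon: fire one privileged process at random.
            int i = priv[rng() % priv.size()];
            if (i == 0) x[0] = (x[0] + 1) % K;  // root advances its state
            else        x[i] = x[i - 1];        // others copy predecessor
        }
    }

Running it shows the token count dropping to 1 after a few steps and never rising again, whatever the initial state; that convergence-plus-closure argument is what a self-stabilization proof establishes formally.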

Executable High-Level Hardware Models for Software Development and Evaluation

We have been working with STMicroelectronics since 2002 on the notion of Transaction-Level Models (TLM) for hardware platforms. This is a level of detail which is: 1) sufficiently detailed to enable software development, long before the actual hardware is available; 2) sufficiently abstract and simple to allow for efficient simulation. TLM has been used for various purposes, ranging from functional validation to early energy-consumption estimation. Our current focus is using TLM to help programmers take full advantage of a hardware platform.
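To give the flavor of TLM, here is a minimal sketch in plain C++ (real TLM models are written against SystemC/TLM-2.0; the memory map and class names here are hypothetical). The key idea is that a whole read or write transaction is a single function call rather than a cycle-accurate signal protocol, which is what makes simulation fast enough for early software development.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Transaction-level interface: one read/write is one function call,
    // not a cycle-by-cycle signal exchange -- this is what makes TLM fast.
    struct TlmTarget {
        virtual uint32_t read(uint32_t addr) = 0;
        virtual void write(uint32_t addr, uint32_t data) = 0;
        virtual ~TlmTarget() = default;
    };

    struct Memory : TlmTarget {
        std::vector<uint32_t> words;
        explicit Memory(size_t n) : words(n, 0) {}
        uint32_t read(uint32_t a) override { return words[a / 4]; }
        void write(uint32_t a, uint32_t d) override { words[a / 4] = d; }
    };

    // A trivial bus decoding one address range (hypothetical memory map:
    // RAM mapped at 0x1000).
    struct Bus : TlmTarget {
        Memory& mem;
        explicit Bus(Memory& m) : mem(m) {}
        uint32_t read(uint32_t a) override { return mem.read(a - 0x1000); }
        void write(uint32_t a, uint32_t d) override {
            mem.write(a - 0x1000, d);
        }
    };

    int main() {
        Memory mem(256);
        Bus bus(mem);
        // "Embedded software" can be developed and tested against this
        // model long before the actual hardware exists.
        bus.write(0x1000, 42);
        std::printf("read back: %u\n", bus.read(0x1000));
    }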

Timing analysis and implementation of critical applications on multi-core platforms

We focus on the implementation of time-critical applications. We work on the whole design workflow, from the design of time-critical applications to their timing analysis with guaranteed static bounds on execution times and delays. The main ongoing work focuses on the following topics (a small worked example of response-time analysis follows the list):

  • Code generation and semantic analysis: code generation from Lustre programs; extraction of semantic properties at the design level or the intermediate-code level, and their expression for worst-case execution time estimation
  • Implementation on multi-core platforms with shared-resource interference: implementation of critical data-flow applications, generation of orchestration code to guarantee a semantics-preserving implementation, worst-case response time estimation including interference delays
  • Hardware models for worst-case execution time and interference delay estimation: static cache analysis, shared-memory interference delays
  • Study of timing anomalies in the context of multi-core timing analysis
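As announced above, here is a minimal C++ sketch of classical fixed-priority response-time analysis, extended with a crude additive interference term per job (the task set and the constant interference bounds are hypothetical; real multi-core analyses derive much tighter, context-sensitive bounds). The response time R_i is the least fixed point of R_i = C_i + I_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated until it stabilizes or exceeds the deadline.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // WCET C, period T, deadline D, and a crude constant interference
    // bound "interf" per job (hypothetical shared-memory contention).
    struct Task { double C, T, D, interf; };

    // Tasks sorted by decreasing priority; hp(i) = tasks 0..i-1.
    static bool responseTimes(const std::vector<Task>& ts) {
        for (size_t i = 0; i < ts.size(); ++i) {
            double R = ts[i].C + ts[i].interf, prev = 0;
            while (R != prev && R <= ts[i].D) {   // fixed-point iteration
                prev = R;
                R = ts[i].C + ts[i].interf;
                for (size_t j = 0; j < i; ++j)    // preemptions by hp(i)
                    R += std::ceil(prev / ts[j].T) * ts[j].C;
            }
            std::printf("task %zu: R = %.1f (deadline %.1f)\n",
                        i, R, ts[i].D);
            if (R > ts[i].D) return false;        // deadline miss possible
        }
        return true;
    }

    int main() {
        // Hypothetical task set (time units arbitrary).
        std::vector<Task> ts = { {1, 5, 5, 0.5},
                                 {2, 10, 10, 0.5},
                                 {3, 20, 20, 1.0} };
        std::puts(responseTimes(ts) ? "schedulable" : "NOT schedulable");
    }

On this task set the iteration converges to R = 1.5, 3.5, and 8.0 respectively, all within their deadlines; inflating the interference terms shows how shared-resource contention can destroy schedulability even when the cores are far from fully loaded.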

Online Efficient Scheduling for Multi-Core and Multi-Thread Systems

