David Monniaux

[1] David Monniaux. Réduction de réseaux et factorisation. First-year magistère internship report, École normale supérieure de Lyon, 1996. In French. [ bib | .ps.gz ]
[2] David Monniaux. Méthodes formelles et cryptographie. Second-year magistère internship report, École normale supérieure de Lyon, 1997. Introduction in French; scientific content in English. [ bib | .dvi.gz | .pdf ]
[3] David Monniaux. Réalisation mécanisée d'interpréteurs abstraits. DEA report, Université Paris VII, 1998. In French. [ bib | .dvi.gz | .pdf ]
[4] David Monniaux. Decision procedures for the analysis of cryptographic protocols by logics of belief. In 12th Computer Security Foundations Workshop, pages 44--54. IEEE, 1999. [ bib | DOI | .ps.gz | .pdf ]
Belief-logic deductions are used in the analysis of cryptographic protocols. We show a new method to decide such logics. In addition to the familiar BAN logic, it is also applicable to more advanced protocol security logics, GNY in particular; it employs an efficient forward-chaining algorithm whose completeness and termination are proved. Theoretical proofs, implementation decisions and results are discussed.
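The deduction engine described above can be illustrated by a minimal forward-chaining fixpoint over Horn-style rules. This is only a sketch of the general saturation idea: the rule set below is a hypothetical, BAN-flavoured toy written as plain strings, not the actual structured BAN/GNY rules used in the paper.

```python
def forward_chain(facts, rules):
    """Saturate a fact set under Horn-style rules (premises, conclusion).

    Terminates because each pass either adds a new fact (of which there
    are finitely many conclusions) or leaves the set unchanged.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# Hypothetical, simplified rules in the spirit of BAN's message-meaning
# and nonce-verification rules (the real logic manipulates structured
# formulas, not strings).
rules = [
    ({"A sees {X}K", "A believes A<-K->B"}, "A believes B said X"),
    ({"A believes B said X", "A believes fresh(X)"}, "A believes B believes X"),
]
facts = {"A sees {X}K", "A believes A<-K->B", "A believes fresh(X)"}
result = forward_chain(facts, rules)
```

Saturation reaches "A believes B believes X" in two rule applications; the paper's contribution includes proving completeness and termination of a comparable (but far richer) procedure.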
[5] David Monniaux. Abstracting cryptographic protocols with tree automata. In Sixth International Static Analysis Symposium (SAS'99), number 1694 in Lecture Notes in Computer Science, pages 149--163. Springer Verlag, 1999. [ bib | .ps.gz | .pdf ]
Cryptographic protocols have so far been analyzed for the most part by means of testing (which does not yield proofs of secrecy) and theorem proving (costly). We propose a new approach, based on abstract interpretation and using regular tree languages. The abstraction we use seems fine-grained enough to be able to certify some protocols. Both the concrete and abstract semantics of the protocol description language and implementation issues are discussed in the paper.
[6] David Monniaux. Abstract interpretation of probabilistic semantics. In Seventh International Static Analysis Symposium (SAS'00), number 1824 in Lecture Notes in Computer Science, pages 322--339. Springer Verlag, 2000. [ bib | DOI | .ps.gz | .pdf ]
Following earlier models, we lift standard deterministic and nondeterministic semantics of imperative programs to probabilistic semantics. This semantics allows for random external inputs of known or unknown probability and random number generators. We then propose a method of analysis of programs according to this semantics, in the general framework of abstract interpretation. This method lifts an “ordinary” abstract lattice, for non-probabilistic programs, to one suitable for probabilistic programs. Our construction is highly generic. We discuss the influence of certain parameters on the precision of the analysis, basing ourselves on experimental results.
[7] David Monniaux. An abstract Monte-Carlo method for the analysis of probabilistic programs (extended abstract). In 28th Symposium on Principles of Programming Languages (POPL '01), pages 93--101. Association for Computing Machinery, 2001. [ bib | DOI | arXiv | .ps.gz | .pdf ]
We introduce a new method, a combination of random testing and abstract interpretation, for the analysis of programs featuring both probabilistic and non-probabilistic nondeterminism. After introducing “ordinary” testing, we show how to combine testing and abstract interpretation and give formulas linking the precision of the results to the number of iterations. We then discuss complexity and optimization issues and end with some experimental results.
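The “ordinary” testing half of this method can be sketched with a standard Hoeffding-style sample-size bound linking precision to the number of iterations; the probabilistic “program” below is made up for illustration, and the paper's actual contribution is combining such estimation with abstract interpretation to handle nondeterminism soundly.

```python
import math
import random

def samples_needed(eps, delta):
    """Hoeffding bound: n samples estimate a probability within +/- eps
    with confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def monte_carlo(prog, n, seed=0):
    """Estimate P(prog returns 1) by plain Monte-Carlo sampling."""
    rng = random.Random(seed)
    return sum(prog(rng) for _ in range(n)) / n

# Hypothetical probabilistic program: 1 iff a "failure" condition holds
# (true probability 0.25 * 0.5 = 0.125).
def prog(rng):
    return 1 if rng.random() < 0.25 and rng.random() < 0.5 else 0

n = samples_needed(0.01, 0.05)      # precision 0.01, confidence 95%
p_hat = monte_carlo(prog, n)        # close to 0.125
```

The bound gives n = ceil(ln(2/δ) / (2ε²)) samples, independent of the program analyzed.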
[8] David Monniaux. Backwards abstract interpretation of probabilistic programs. In European Symposium on Programming Languages and Systems (ESOP '01), number 2028 in Lecture Notes in Computer Science, pages 367--382. Springer Verlag, 2001. [ bib | DOI | .ps.gz | .pdf ]
We introduce an extension of the framework of backwards abstract interpretation to probabilistic programs. This extension is proved to be sound with respect to a semantics that we introduce and prove to be equivalent to the standard semantics of probabilistic programs. We then propose a generic construction lifting ordinary abstract interpretation lattices to probabilistic programs.
[9] David Monniaux. An abstract analysis of the probabilistic termination of programs. In 8th International Static Analysis Symposium (SAS'01), number 2126 in Lecture Notes in Computer Science, pages 111--126. Springer Verlag, 2001. [ bib | DOI | .ps.gz | .pdf ]
It is often useful to introduce probabilistic behavior in programs, either because of the use of internal random generators (probabilistic algorithms) or because of external devices (networks, physical sensors) with known statistical behavior. Previous works on probabilistic abstract interpretation have addressed safety properties but somewhat neglected probabilistic termination. In this paper, we propose a method to automatically prove the probabilistic termination of programs using exponential bounds on the tail of the distribution. We apply this method to an example and give some directions as to how to implement it. We also show that this method can be used to make otherwise unsound statistical estimates of average running times sound.
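The role of an exponential tail bound can be seen on the simplest possible instance, worked out exactly below. A loop that exits each iteration with probability p has a geometric running time T with tail P(T > n) = (1-p)^n; a bound P(T > n) ≤ C·α^n with α < 1 yields E[T] = Σ_{n≥0} P(T > n) ≤ C/(1-α), hence almost-sure termination with finite expected time. The exit probability is made up; the paper's method infers such bounds automatically by abstract interpretation.

```python
from fractions import Fraction

p = Fraction(1, 3)                          # hypothetical exit probability
tail = lambda n: (1 - p) ** n               # exact geometric tail P(T > n)
partial = sum(tail(n) for n in range(200))  # partial sum of E[T]
bound = 1 / p                               # C/(1-alpha) with C=1, alpha=1-p
assert partial < bound                      # E[T] <= 3: finite expectation
```

Exact rational arithmetic makes the comparison a proof rather than a numerical observation.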
[10] David Monniaux. Analyse de programmes probabilistes par interprétation abstraite. Thèse de doctorat, Université Paris IX Dauphine, 2001. Extended abstract in French; contents in English. [ bib | arXiv | .pdf ]
The study of probabilistic programs is of considerable interest for the validation of networking protocols, embedded systems, or simply for compiler optimizations. It is also a difficult matter, due to the undecidability of properties on infinite-state deterministic programs, as well as the difficulties arising from probabilistic aspects.

In this thesis, we propose a formal language for the specification of trace properties of probabilistic, nondeterministic transition systems, encompassing those that can be specified using deterministic Büchi automata. Those properties are in general undecidable on infinite processes.

This language has both a concrete semantics in terms of sets of traces, as well as an abstract semantics in terms of measurable functions. We then apply abstract interpretation-based techniques to give upper bounds on the worst-case probability of the studied property. We propose an enhancement of this technique when the state space is partitioned --- for instance along the program points ---, allowing the use of faster iteration methods. We propose two abstract domains suitable for this analysis, one parameterized by an abstract domain suitable for nondeterministic (but not probabilistic) abstract interpretation, one modeling extended normal distributions.

An alternative method to obtain such upper bounds is to apply forward abstract interpretation on measures. We propose two abstract domains suitable for this analysis, one parameterized by an abstract domain suitable for nondeterministic abstract interpretation, one modeling sub-exponential queues. This latter domain allows proving probabilistic termination of programs.

The methods described so far are symbolic and do not make use of the statistical properties of probabilities. On the other hand, a well-known way to obtain information on probability distributions is the Monte-Carlo method. We propose an abstract Monte-Carlo method featuring randomized abstract interpreters.

[11] David Monniaux. Analysis of cryptographic protocols using logics of belief: an overview. Journal of Telecommunications and Information Technology, 4:57--67, 2002. [ bib | .pdf ]
When designing a cryptographic protocol or explaining it, one often uses arguments such as “since this message was signed by machine B, machine A can be sure it came from B” in informal proofs justifying how the protocol works. Since, in such informal proofs, it is often easy to overlook an essential assumption, such as a trust relation or the belief that a message is not a replay from a previous session, it seems desirable to write such proofs in a formal system. While such logics do not replace the recent techniques of automatic proofs of safety properties, they help in pointing out the weaknesses of the system. In this paper, we briefly present the BAN (Burrows -- Abadi -- Needham) formal system [?] as well as some derivatives. We show how to prove some properties of a simple protocol, as well as how to detect undesirable assumptions. We then explain how the manual search for proofs can be made automatic. Finally, we explain why the lack of proper semantics for such logics is a cause for concern.
[12] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival. Design and implementation of a special-purpose static program analyzer for safety-critical real-time embedded software. In T. Mogensen, D.A. Schmidt, and I.H. Sudborough, editors, The Essence of Computation: Complexity, Analysis, Transformation, number 2566 in Lecture Notes in Computer Science, pages 85--108. Springer Verlag, 2002. [ bib | .ps.gz | .pdf ]
We report on an experience in the design and implementation of a special-purpose static program analyzer for the verification of critical embedded real-time software.
[13] David Monniaux. Abstraction of expectation functions using Gaussian distributions. In Lenore D. Zuck, Paul C. Attie, Agostino Cortesi, and Supratik Mukhopadhyay, editors, Verification, Model Checking, and Abstract Interpretation (VMCAI), number 2575 in Lecture Notes in Computer Science, pages 161--173. Springer Verlag, 2003. [ bib | DOI | .ps.gz | .pdf ]
We consider semantics of infinite-state programs, both probabilistic and nondeterministic, as expectation functions: for any set of states A, we associate to each program point a function mapping each state to its expectation of starting a trace reaching A. We then compute a safe upper approximation of these functions using abstract interpretation. This computation takes place in an abstract domain of extended Gaussian (normal) distributions.
[14] Bruno Blanchet, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. A static analyzer for large safety-critical software. In ACM SIGPLAN Conference on Programming language design and implementation (PLDI), pages 196--207. ACM, 2003. [ bib | DOI | arXiv | .ps.gz | .pdf ]
We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general-purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety-critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization; the symbolic manipulation of expressions to improve the precision of abstract transfer functions; the ellipsoid and decision-tree abstract domains, all with sound handling of rounding errors in floating-point computations; widening strategies (with thresholds, delayed); and the automatic determination of the parameters (parametrized packing).
[15] David Monniaux. Abstracting cryptographic protocols with tree automata. Science of Computer Programming, 47(2--3):177--202, 2003. [ bib | http | .ps.gz | .pdf ]
[16] David Monniaux. Abstract interpretation of programs as Markov decision processes. In Static Analysis Symposium (SAS '03), number 2694 in Lecture Notes in Computer Science, pages 237--254. Springer Verlag, 2003. [ bib | DOI | .ps.gz | .pdf ]
We propose a formal language for the specification of trace properties of probabilistic, nondeterministic transition systems, encompassing the properties expressible in Linear Time Logic. Those formulas are in general undecidable on infinite deterministic transition systems and thus on infinite Markov decision processes. This language has both a semantics in terms of sets of traces, as well as another semantics in terms of measurable functions; we give and prove theorems linking the two semantics. We then apply abstract interpretation-based techniques to give upper bounds on the worst-case probability of the studied property. We propose an enhancement of this technique when the state space is partitioned --- for instance along the program points ---, allowing the use of faster iteration methods.
[17] David Monniaux. Abstract interpretation of programs as Markov decision processes. Science of Computer Programming, 58(1--2):179--205, October 2005. [ bib | http | .ps.gz | .pdf ]
We propose a formal language for the specification of trace properties of probabilistic, nondeterministic transition systems, encompassing the properties expressible in Linear Time Logic. Those formulas are in general undecidable on infinite deterministic transition systems and thus on infinite Markov decision processes. This language has both a semantics in terms of sets of traces, as well as another semantics in terms of measurable functions; we give and prove theorems linking the two semantics. We then apply abstract interpretation-based techniques to give upper bounds on the worst-case probability of the studied property. We propose an enhancement of this technique when the state space is partitioned --- for instance along the program points ---, allowing the use of faster iteration methods.
[18] David Monniaux. Compositional analysis of floating-point linear numerical filters. In Computer-aided verification (CAV), number 3576 in Lecture Notes in Computer Science, pages 199--212. Springer Verlag, 2005. [ bib | DOI | .pdf ]
Digital linear filters are used in a variety of applications (audio processing, control/command, etc.) and implemented in software, in hardware, or a combination thereof. For safety-critical applications, it is necessary to bound all variables and outputs of all filters. We give a compositional, effective abstraction for digital linear filters expressed as block diagrams, yielding sound, precise bounds for fixed-point or floating-point implementations of the filters.
[19] Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. The ASTRÉE analyzer. In European symposium on programming (ESOP), number 3444 in Lecture Notes in Computer Science, pages 21--30, 2005. [ bib | DOI | .ps.gz | .pdf ]
Astrée is an abstract interpretation-based static program analyzer aiming at proving automatically the absence of run time errors in programs written in the C programming language. It has been applied with success to large embedded control-command safety critical real-time software generated automatically from synchronous specifications, producing a correctness proof for complex software without any false alarm in a few hours of computation.
[20] David Monniaux. The parallel implementation of the Astrée static analyzer. In Kwangkeun Yi, editor, Programming Languages and Systems (APLAS), number 3780 in Lecture Notes in Computer Science. Springer Verlag, 2005. [ bib | DOI | arXiv | .ps.gz | .pdf ]
The Astrée static analyzer is a specialized tool that can prove the absence of runtime errors, including arithmetic overflows, in large critical programs. Keeping analysis times reasonable for industrial use is one of the design objectives. In this paper, we discuss the parallel implementation of the analysis.
[21] David Monniaux and Jean-Baptiste Soufron. DRM as a dangerous alternative to copyright licences. Upgrade, 7(3), 2006. [ bib | .pdf ]
[22] David Monniaux. Optimal abstraction on real-valued programs. In Gilberto Filé and Hanne Riis Nielson, editors, Static analysis (SAS '07), number 4634 in Lecture Notes in Computer Science, pages 104--120. Springer Verlag, 2007. [ bib | DOI | .ps.gz | .pdf ]
In this paper, we show that it is possible to abstract program fragments using real variables using formulas in the theory of real closed fields. This abstraction is compositional and modular. We first propose an exact abstraction for progr ams without loops. Given an abstract domain (in a wide class including intervals and octagons), we then show how to obtain an optimal abstraction of program fra gments with respect to that domain. This abstraction allows computing optimal fixed points inside that abstract domain, without the need for a widening operator .
[23] David Monniaux. The pitfalls of verifying floating-point computations. TOPLAS, 30(3):12, May 2008. [ bib | DOI | arXiv | http | .pdf ]
Current critical systems commonly use a lot of floating-point computations, and thus the testing or static analysis of programs containing floating-point operators has become a priority. However, correctly defining the semantics of common implementations of floating-point arithmetic is tricky, because the semantics may change with many factors beyond the source code, such as choices made by compilers. We here give concrete examples of problems that can appear and solutions to implement in analysis software.
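Two of the underlying IEEE-754 effects can be shown in a few lines. The paper's examples concern C compilers, x87 extended precision and similar platform issues; these snippets, run on Python's binary64 floats, only demonstrate the basic phenomena an analyzer's semantics must capture.

```python
# 1. Decimal literals are not exactly representable in binary floating
#    point: 0.1 + 0.2 rounds to 0.30000000000000004.
assert 0.1 + 0.2 != 0.3

# 2. Addition is not associative: reordering a sum (as an optimizing
#    compiler might) changes the result, because 1.0 is below half an
#    ulp of 1e16 and is absorbed.
big, small = 1e16, 1.0
assert (big + small) + small != big + (small + small)
```

Both facts are consequences of round-to-nearest on a 53-bit significand, not quirks of any particular language.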
[24] Patrick Cousot, Radhia Cousot, Jérôme Feret, Antoine Miné, Laurent Mauborgne, David Monniaux, and Xavier Rival. Varieties of static analyzers: A comparison with ASTRÉE. In Theoretical Aspects of Software Engineering (TASE '07). IEEE, 2007. [ bib | DOI ]
We discuss the characteristic properties of ASTRÉE, an automatic static analyzer for proving the absence of runtime errors in safety-critical real-time synchronous control/command C programs, and compare it with a variety of other program analysis tools.
[25] David Monniaux. Verification of device drivers and intelligent controllers: a case study. In Christoph Kirsch and Reinhard Wilhelm, editors, EMSOFT, pages 30--36. ACM & IEEE, 2007. [ bib | DOI | .ps.gz | .pdf ]
The soundness of device drivers generally cannot be verified in isolation; one has to take into account the reactions of the hardware devices. In critical embedded systems, interfaces were often simple “volatile” variables, and the interface specification was typically a list of bounds on these variables. Some newer systems use “intelligent” controllers that handle dynamic worklists in shared memory and perform direct memory accesses, all asynchronously from the main processor. Thus, it is impossible to truly verify the device driver without taking the intelligent device into account, because incorrect programming of the device can lead to dire consequences, such as memory zones being erased. We have successfully verified a device driver extracted from a critical industrial system, asynchronously combined with a model of a USB OHCI controller. This paper studies this case and introduces a model and analysis techniques for such asynchronous composition.
[26] Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. Combination of abstractions in the Astrée static analyzer. In Advances in Computer Science --- ASIAN 2006. Secure Software and Related Issues, number 4435 in Lecture Notes in Computer Science, pages 272--300. Springer Verlag, 2008. [ bib | DOI ]
[27] David Monniaux. A quantifier elimination algorithm for linear real arithmetic. In LPAR (Logic for Programming Artificial Intelligence and Reasoning), number 5330 in Lecture Notes in Computer Science, pages 243--257. Springer Verlag, 2008. [ bib | DOI | arXiv | .pdf ]
We propose a new quantifier elimination algorithm for the theory of linear real arithmetic. This algorithm uses as subroutines satisfiability modulo this theory and polyhedral projection; there are good algorithms and implementations for both. The quantifier elimination algorithm presented in the paper is compared, on examples arising from program analysis problems and on random examples, to several other implementations, each of which fails on some of the examples that our algorithm solves easily.
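Polyhedral projection, one of the subroutines mentioned above, can be sketched with the textbook Fourier-Motzkin method: to eliminate a variable from a conjunction of linear inequalities, combine every lower bound with every upper bound. This naive version is only a sketch (it can blow up quadratically per eliminated variable and generates redundant rows), not the algorithm of the paper.

```python
from fractions import Fraction

def eliminate(ineqs, j):
    """Fourier-Motzkin: project out variable j from inequalities given
    as (coeffs, b) pairs meaning sum_i coeffs[i]*x_i <= b."""
    lower, upper, rest = [], [], []
    for a, b in ineqs:
        (upper if a[j] > 0 else lower if a[j] < 0 else rest).append((a, b))
    out = list(rest)
    for al, bl in lower:
        for au, bu in upper:
            # Positive multipliers chosen so that x_j cancels exactly.
            m_l, m_u = au[j], -al[j]
            a = [m_l * p + m_u * q for p, q in zip(al, au)]
            out.append((a, m_l * bl + m_u * bu))
    return out

# Example: from x - y <= 0 and y <= 3, eliminating y yields x <= 3.
F = Fraction
ineqs = [([F(1), F(-1)], F(0)), ([F(0), F(1)], F(3))]
res = eliminate(ineqs, 1)   # res encodes the single constraint x <= 3
```

Exact rationals avoid the rounding issues that make floating-point projection unsound.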
[28] David Monniaux. Automatic modular abstractions for linear constraints. In POPL (Principles of programming languages), pages 140--151. ACM, 2009. [ bib | DOI | arXiv | .pdf ]
We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests.

In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques.

Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation.

The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.

[29] David Monniaux. On using floating-point computations to help an exact linear arithmetic decision procedure. In Computer-aided verification (CAV), number 5643 in Lecture Notes in Computer Science, pages 570--583. Springer Verlag, 2009. [ bib | DOI | .pdf ]
We consider the decision problem for quantifier-free formulas whose atoms are linear inequalities interpreted over the reals or rationals. This problem may be decided using satisfiability modulo theory (SMT), using a mixture of a SAT solver and a simplex-based decision procedure for conjunctions. State-of-the-art SMT solvers use simplex implementations over rational numbers, which perform well for typical problems arising from model-checking and program analysis (sparse inequalities, small coefficients) but are slow for other applications (denser problems, larger coefficients).

We propose a simple preprocessing phase that can be adapted to existing SMT solvers and that may be optionally triggered. Despite using floating-point computations, our method is sound and complete --- it merely affects efficiency. We implemented the method and provide benchmarks showing that this change brings a naive and slow decision procedure (“textbook simplex” with rational numbers) up to the efficiency of recent SMT solvers, over test cases arising from model-checking, and makes it definitely faster than state-of-the-art SMT solvers on dense examples.
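The guess-with-floats, certify-exactly principle behind this preprocessing can be shown in miniature: compute an inexact answer quickly, snap it to small rationals, and accept it only if it verifies in exact arithmetic, so floating-point error can cost efficiency but never soundness. The 2x2 linear system below is a made-up stand-in for the simplex computations the paper actually targets.

```python
from fractions import Fraction

# Hypothetical system: 3x + y = 5, x - 2y = -3 (solution x = 1, y = 2).
A = [[3, 1], [1, -2]]
b = [5, -3]

# Fast, inexact guess by Cramer's rule in floating point.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
xf = (b[0] * A[1][1] - A[0][1] * b[1]) / det
yf = (A[0][0] * b[1] - b[0] * A[1][0]) / det

# Snap the guess to small rationals, then certify it exactly.
x = Fraction(xf).limit_denominator(10**6)
y = Fraction(yf).limit_denominator(10**6)
certified = all(Fraction(A[i][0]) * x + Fraction(A[i][1]) * y == b[i]
                for i in range(2))
```

If certification fails, a sound implementation falls back to the exact rational procedure, exactly as the optional preprocessing phase described above does.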

[30] David Monniaux. Analyse statique : de la théorie à la pratique. Habilitation à diriger les recherches, Université Joseph Fourier, Grenoble, France, July 2009. [ bib | arXiv | .pdf ]
Software operating critical systems (aircraft, nuclear power plants) should not fail --- whereas most computerised systems of daily life (personal computer, ticket vending machines, cell phone) fail from time to time. This is not a simple engineering problem: it is known, since the works of Turing and Cook, that proving that programs work correctly is intrinsically hard.

In order to solve this problem, one needs methods that are, at the same time, efficient (moderate costs in time and memory), safe (all possible failures should be found), and precise (few warnings about nonexistent failures). In order to reach a satisfactory compromise between these goals, one draws on fields as diverse as formal logic, numerical analysis, and “classical” algorithmics.

From 2002 to 2007 I participated in the development of the Astrée static analyser. This suggested to me a number of side projects, both theoretical and practical (use of formal proof techniques, analysis of numerical filters...). More recently, I became interested in the modular analysis of numerical properties and in applications of constraint solving techniques (semidefinite programming, SAT and SAT modulo theory) to program analysis.

[31] David Monniaux. A minimalistic look at widening operators. Higher-Order and Symbolic Computation, 22(2):145--154, December 2009. [ bib | DOI | .pdf ]
We consider the problem of formalizing in higher-order logic the familiar notion of widening from abstract interpretation. It turns out that many axioms of widening (e.g. widening sequences are ascending) are not useful for proving correctness. After keeping only useful axioms, we give an equivalent characterization of widening as a lazily constructed well-founded tree. In type systems supporting dependent products and sums, this tree can be made to reflect the condition of correct termination of the widening sequence.
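The running example for such formalizations is the standard widening on intervals, sketched below: any bound that is unstable between two iterates jumps to infinity, so every ascending chain of widened intervals stabilizes after finitely many steps.

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def widen(old, new):
    """Standard interval widening: keep stable bounds, push unstable
    ones to infinity so the iteration cannot ascend forever."""
    lo = old[0] if old[0] <= new[0] else NEG_INF
    hi = old[1] if old[1] >= new[1] else POS_INF
    return (lo, hi)

# Abstract iteration of `i = 0; while ...: i += 1` with widening:
x = (0, 0)
x = widen(x, (0, 1))   # upper bound unstable -> jumps to +inf
```

The paper's point is that of the usual axioms on such operators, few are actually needed to prove the correctness of the stabilized invariant.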
[32] David Monniaux. Quantifier elimination by lazy model enumeration. In Computer-aided verification (CAV), number 6174 in Lecture Notes in Computer Science, pages 585--599. Springer Verlag, 2010. [ bib | DOI | .pdf ]
We propose a quantifier elimination scheme based on nested lazy model enumeration through SMT-solving, and projections. This scheme may be applied to any logic that fulfills certain conditions; we illustrate it for linear real arithmetic. The quantifier elimination problem for linear real arithmetic is doubly exponential in the worst case, and so is our method. We have implemented it and benchmarked it against other methods from the literature.
[33] David Monniaux. Automatic modular abstractions for template numerical constraints. Logical Methods in Computer Science, June 2010. [ bib | DOI | arXiv | .pdf ]
We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation.

In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions.

The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.

Our algorithms are based on quantifier elimination and symbolic manipulation techniques over linear arithmetic formulas. We also give less general results for nonlinear constraints and nonlinear program constructs.

[34] Thomas Gawlitza and David Monniaux. Improving strategies via SMT solving. In Gilles Barthe, editor, ESOP, number 6602 in Lecture Notes in Computer Science, pages 236--255. Springer Verlag, 2011. [ bib | DOI | arXiv | .pdf ]
We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. To do so, we extend the strategy improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). To evaluate strategies we use linear programming. Our algorithm has low polynomial space complexity and performs, in the worst case on contrived examples, exponentially many strategy improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Pi-p-2-complete.
[35] David Monniaux and Pierre Corbineau. On the generation of Positivstellensatz witnesses in degenerate cases. In Marko Van Eekelen, Herman Geuvers, Julien Schmaltz, and Freek Wiedijk, editors, Interactive Theorem Proving (ITP), volume 6898 of Lecture Notes in Computer Science, pages 249--264. Springer Verlag, August 2011. [ bib | DOI | arXiv | .pdf ]
One can reduce the problem of proving that a polynomial is nonnegative, or more generally of proving that a system of polynomial inequalities has no solutions, to finding polynomials that are sums of squares of polynomials and satisfy some linear equality (Positivstellensatz). This produces a witness for the desired property, from which it is reasonably easy to obtain a formal proof of the property suitable for a proof assistant such as Coq.

The problem of finding a witness reduces to a feasibility problem in semidefinite programming, for which there exist numerical solvers. Unfortunately, this problem is in general not strictly feasible, meaning the solution can be a convex set with empty interior, in which case the numerical optimization method fails. Previously published methods thus assumed strict feasibility; we propose a workaround for this difficulty.

We implemented our method and illustrate its use with examples, including extractions of proofs to Coq.
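The asymmetry that makes this approach attractive is that witnesses are hard to find but easy to check. The sketch below checks (not finds) a toy sum-of-squares witness with exact arithmetic: to prove p(x) = x² - 2x + 2 ≥ 0, exhibit p = (x-1)² + 1². Finding such decompositions is the semidefinite-programming step the paper makes robust in degenerate cases; the example polynomial here is made up.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply dense coefficient lists (index = degree)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

target = [Fraction(2), Fraction(-2), Fraction(1)]     # 2 - 2x + x^2
s1 = [Fraction(-1), Fraction(1)]                      # x - 1
witness = poly_add(poly_mul(s1, s1), [Fraction(1)])   # (x-1)^2 + 1^2
assert witness == target                              # identity certified
```

Because the check is a finite exact computation, it is the part that can realistically be replayed inside a proof assistant such as Coq.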

[36] David Monniaux and Laure Gonnord. Using bounded model checking to focus fixpoint iterations. In Eran Yahav, editor, Static analysis (SAS), volume 6887 of Lecture Notes in Computer Science, pages 369--385. Springer Verlag, 2011. [ bib | DOI | arXiv | .pdf ]
Two classical sources of imprecision in static analysis by abstract interpretation are widening and merge operations. Merge operations can be done away with by distinguishing paths, as in trace partitioning, at the expense of enumerating an exponential number of paths. In this article, we describe how to avoid such systematic exploration by focusing on a single path at a time, designated by SMT-solving. Our method combines well with acceleration techniques, thus doing away with widenings as well in some cases. We illustrate it over the well-known domain of convex polyhedra.
[37] David Monniaux and Julien Le Guen. Stratified static analysis based on variable dependencies. In Third International Workshop on Numerical and Symbolic Abstract Domains, 2011. [ bib | DOI | arXiv | .pdf ]
In static analysis by abstract interpretation, one often uses widening operators in order to enforce convergence within finite time to an inductive invariant. Certain widening operators, including the classical one over finite polyhedra, exhibit an unintuitive behavior: analyzing the program over a subset of its variables may lead to a more precise result than analyzing the original program! In this article, we present simple workarounds for such behavior.
[38] David Monniaux and Martin Bodin. Modular abstractions of reactive nodes using disjunctive invariants. In Programming Languages and Systems (APLAS), pages 19--33, 2011. [ bib | DOI | arXiv | .pdf ]
We wish to abstract nodes in a reactive programming language, such as Lustre, into nodes with a simpler control structure, with a bound on the number of control states. In order to do so, we compute disjunctive invariants in predicate abstraction, with a bounded number of disjuncts, then we abstract the node, each disjunct representing an abstract state. The computation of the disjunctive invariant is performed by a form of quantifier elimination expressed using SMT-solving.

The same method can also be used to obtain disjunctive loop invariants.

[39] Diego Caminha Barbosa de Oliveira and David Monniaux. Experiments on the feasibility of using a floating-point simplex in an SMT solver. In Pascal Fontaine, Renate A. Schmidt, and Stephan Schulz, editors, Workshop on Practical Aspects of Automated Reasoning (PAAR), volume 21 of EPiC Series. Easychair, 2012. [ bib | http | .pdf ]
SMT solvers use simplex-based decision procedures to solve decision problems whose formulas are quantifier-free and atoms are linear constraints over the rationals. State-of-the-art SMT solvers use rational (exact) simplex implementations, which have shown good performance for typical software, hardware or protocol verification problems over the years. Yet, most other scientific and technical fields use (inexact) floating-point computations, which are deemed far more efficient than exact ones. It is therefore tempting to use a floating-point simplex implementation inside an SMT solver, though special precautions must be taken to avoid unsoundness.

In this work, we describe experimental results, over common benchmarks (SMT-LIB) of the integration of a mature floating-point implementation of the simplex algorithm (GLPK) into an existing SMT solver (OpenSMT). We investigate whether commonly cited reasons for and against the use of floating-point truly apply to real cases from verification problems.

[40] Anh-Dung Phan, Nikolaj Bjørner, and David Monniaux. Anatomy of alternating quantifier satisfiability (work in progress). In Pascal Fontaine and Amit Goel, editors, 10th International Workshop on Satisfiability Modulo Theories (SMT), 2012. [ bib | .pdf ]
We report on work in progress to generalize a recently introduced algorithm for checking the satisfiability of formulas with quantifier alternation. The algorithm uses two auxiliary procedures: one for producing a candidate formula for quantifier elimination and one for eliminating or partially eliminating quantifiers. We also apply the algorithm to Presburger arithmetic formulas and evaluate it on formulas from a model checker for Duration Calculus. We report on experiments with different variants of the auxiliary procedures. So far, applying SMT-TEST, proposed in [Monniaux, CAV2010], has an edge, though we found that a simpler approach that just eliminates quantified variables round by round is almost as good. Both approaches offer drastic improvements over default quantifier elimination.
[41] Julien Henry, David Monniaux, and Matthieu Moy. Succinct representations for abstract interpretation. In Static analysis (SAS), volume 7460 of Lecture Notes in Computer Science, pages 283--299. Springer Verlag, 2012. [ bib | DOI | arXiv | .pdf ]
Abstract interpretation techniques can be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. SMT-solving techniques and sparse representations of paths and sets of paths avoid this pitfall.

We improve previously proposed techniques for guided static analysis and the generation of disjunctive invariants by combining them with techniques for succinct representations of paths and symbolic representations for transitions based on static single assignment.

Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages.

[42] Julien Henry, David Monniaux, and Matthieu Moy. PAGAI: a path-sensitive static analyzer. In Bertrand Jeannet, editor, Tools for Automatic Program Analysis (TAPAS), 2012. [ bib | DOI | arXiv | .pdf ]
We describe the design and the implementation of PAGAI, a new static analyzer working over the LLVM compiler infrastructure, which computes inductive invariants on the numerical variables of the analyzed program.

PAGAI implements various state-of-the-art algorithms combining abstract interpretation and decision procedures (SMT-solving), focusing on distinguishing paths inside the control flow graph while avoiding systematic exponential enumeration. It is parametric in the abstract domain in use, the iteration algorithm, and the decision procedure.

We compared the time and precision of various combinations of analysis algorithms and abstract domains, with extensive experiments both on personal benchmarks and widely available GNU programs.

[43] Thomas Gawlitza and David Monniaux. Invariant generation through strategy iteration in succinctly represented control flow graphs. Logical Methods in Computer Science, September 2012. [ bib | DOI | arXiv ]
We consider the problem of computing numerical invariants of programs, for instance bounds on the values of numerical program variables. More specifically, we study the problem of performing static analysis by abstract interpretation using template linear constraint domains. Such invariants can be obtained by Kleene iterations that are, in order to guarantee termination, accelerated by widening operators. In many cases, however, applying this form of extrapolation leads to invariants that are weaker than the strongest inductive invariant that can be expressed within the abstract domain in use. Another well-known source of imprecision of traditional abstract interpretation techniques stems from their use of join operators at merge nodes in the control flow graph. The mentioned weaknesses may prevent these methods from proving safety properties.

The technique we develop in this article addresses both of these issues: contrary to Kleene iterations accelerated by widening operators, it is guaranteed to yield the strongest inductive invariant that can be expressed within the template linear constraint domain in use. It also eschews join operators by distinguishing all paths of loop-free code segments. Formally speaking, our technique computes the least fixpoint within a given template linear constraint domain of a transition relation that is succinctly expressed as an existentially quantified linear real arithmetic formula.

In contrast to previously published techniques that rely on quantifier elimination, our algorithm is proved to have optimal complexity: we prove that the decision problem associated with our fixpoint problem is Π^p_2-complete. Our procedure mimics a Π^p_2 search.
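The widening imprecision that this line of work avoids can be seen on a toy example. Below is a minimal Python sketch (not the authors' algorithm; all names are illustrative) of Kleene iteration over the interval domain with the classical widening, applied to the loop `x = 0; while x < 100: x += 1`:

```python
# Kleene iteration with the classical interval widening on:
#   x = 0; while x < 100: x += 1
# Widening extrapolates the unstable upper bound to +inf; one decreasing
# (narrowing) step recovers the bound here, but in general the least
# fixpoint expressible in the domain may be missed.

INF = float("inf")

def widen(a, b):
    # keep stable bounds, extrapolate unstable ones to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def f(x):
    # abstract transformer: join of the entry state [0, 0]
    # with one iteration of the body under the guard x < 100
    lo, hi = x
    if lo > 99:
        return (0, 0)
    body = (min(lo, 99) + 1, min(hi, 99) + 1)  # guard, then x := x + 1
    return (min(0, body[0]), max(0, body[1]))

x = (0, 0)
while True:
    nxt = f(x)
    if nxt[0] >= x[0] and nxt[1] <= x[1]:
        break  # post-fixpoint reached
    x = widen(x, nxt)
# widening gave x = (0, inf); one narrowing step refines it
x = f(x)
print(x)  # (0, 100)
```

The strategy-iteration approach of the paper would instead compute the least fixpoint in the template domain directly, without this extrapolate-then-repair dance.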

[44] David Monniaux. Applying the Z-transform for the static analysis of floating-point numerical filters. Draft of a possible extended version of CAV 2005, 2005. [ bib | arXiv ]
[45] David Monniaux. Introduction à la calculabilité. Quadrature, 86:17--28, 2012. [ bib | .pdf ]
Summary: Whether or not diophantine equations (that is, polynomial equations with integer coefficients and solutions) have solutions does not seem, a priori, to be related to bugs in computer programs. Computability theory establishes such a link: a classical result is that the problem of finding out whether a system of diophantine equations has a solution is equivalent to determining whether the execution of a computer program will eventually stop. This article introduces computability theory: the difficulty of defining the class of computable functions (primitive recursive functions are insufficient), Turing machines, Church's thesis, and classical results (the halting problem, Rice's theorem, etc.).
Keywords: computable functions; Turing; diophantine equations; Turing machines
[46] Alexis Fouilhé, David Monniaux, and Michaël Périn. Efficient generation of correctness certificates for the abstract domain of polyhedra. In Static analysis (SAS), 2013. [ bib | DOI | arXiv | .pdf ]
Polyhedra form an established abstract domain for inferring runtime properties of programs using abstract interpretation. Computations on them need to be certified for the whole static analysis results to be trusted. In this work, we look at how far we can get down the road of a posteriori verification to lower the overhead of certification of the abstract domain of polyhedra. We demonstrate methods for making the cost of inclusion certificate generation negligible. From a performance point of view, our single-representation, constraints-based implementation compares with state-of-the-art implementations.
[47] Thomas Braibant, Jacques-Henri Jourdan, and David Monniaux. Implementing hash-consed data structures in Coq. In Interactive theorem proving (ITP), volume 7998 of Lecture Notes in Computer Science. Springer Verlag, 2013. [ bib | DOI | arXiv ]
We report on three different approaches to use hash-consing in programs certified with the Coq system, using binary decision diagrams (BDD) as running example. The use cases include execution inside Coq, or execution of the extracted OCaml code. There are different trade-offs between faithful use of pristine extracted code, and code that is fine-tuned to make use of OCaml programming constructs not available in Coq. We discuss the possible consequences in terms of performances and guarantees.
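The hash-consing technique these papers certify can be sketched in a few lines. The following Python toy (the papers' code is in Coq and extracted OCaml; `Node` and its fields are hypothetical names) memoizes node construction in a table so that structural equality of shared nodes reduces to pointer identity:

```python
# Minimal hash-consing sketch: BDD-like nodes are interned in a table,
# so two structurally equal nodes are the very same object and equality
# tests become identity tests. (Illustrative only; a real implementation
# would use weak references so unreachable nodes can be collected.)

class Node:
    __slots__ = ("var", "low", "high")
    _table = {}  # interning table: structural key -> unique node

    def __new__(cls, var, low, high):
        # children are themselves hash-consed, so their identity
        # (id) is a sound stand-in for structural equality
        key = (var, id(low), id(high))
        node = cls._table.get(key)
        if node is None:
            node = object.__new__(cls)
            node.var, node.low, node.high = var, low, high
            cls._table[key] = node
        return node

a = Node("x", True, False)
b = Node("x", True, False)
print(a is b)  # True: both names denote the same shared node
```

In Coq, as the abstract notes, no such stateful table is directly expressible, hence the trade-offs between pristine extraction and OCaml-side tuning that the paper explores.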
[48] Julien Henry, Mihail Asavoae, David Monniaux, and Claire Maiza. How to compute worst-case execution time by optimization modulo theory and a clever encoding of program semantics. In Youtao Zhang and Prasad Kulkarni, editors, LCTES, pages 43--52. ACM, 2014. [ bib | DOI | arXiv | http ]
In systems with hard real-time constraints, it is necessary to compute upper bounds on the worst-case execution time (WCET) of programs; the closer the bound to the real WCET, the better. This is especially the case of synchronous reactive control loops with a fixed clock; the WCET of the loop body must not exceed the clock period. We compute the WCET (or at least a close upper bound thereof) as the solution of an optimization modulo theory problem that takes into account the semantics of the program, in contrast to other methods that compute the longest path whether or not it is feasible according to these semantics. Optimization modulo theory extends satisfiability modulo theory (SMT) to maximization problems. Immediate encodings of WCET problems into SMT yield formulas intractable for all current production-grade solvers; this is inherent to the DPLL(T) approach to SMT implemented in these solvers. By conjoining some appropriate "cuts" to these formulas, we considerably reduce the computation time of the SMT-solver. We experimented with our approach on a variety of control programs, using the OTAWA analyzer both as baseline and as underlying microarchitectural analysis for our analysis, and show notable improvement on the WCET bound on a variety of benchmarks and control programs.
[49] Thomas Braibant, Jacques-Henri Jourdan, and David Monniaux. Implementing and reasoning about hash-consed data structures in Coq. Journal of Automated Reasoning, pages 1--34, June 2014. [ bib | DOI | arXiv | http ]
We report on four different approaches to implementing hash-consing in Coq programs. The use cases include execution inside Coq, or execution of the extracted OCaml code. We explore the different trade-offs between faithful use of pristine extracted code, and code that is fine-tuned to make use of OCaml programming constructs not available in Coq. We discuss the possible consequences in terms of performances and guarantees. We use the running example of binary decision diagrams and then demonstrate the generality of our solutions by applying them to other examples of hash-consed data structures.
Keywords: Coq; Hash-consing; Binary decision diagrams
[50] David Monniaux and Peter Schrammel. Speeding up logico-numerical strategy iteration. In Markus Müller-Olm and Helmut Seidl, editors, Static Analysis - 21st International Symposium, SAS 2014, Munich, Germany, September 11-13, 2014. Proceedings, volume 8723 of Lecture Notes in Computer Science, pages 253--267. Springer, 2014. [ bib | DOI | arXiv | http ]
We introduce an efficient combination of polyhedral analysis and predicate partitioning. Template polyhedral analysis abstracts numerical variables inside a program by one polyhedron per control location, with a priori fixed directions for the faces. The strongest inductive invariant in such an abstract domain may be computed by upward strategy iteration. If the transition relation includes disjunctions and existential quantifiers (a succinct representation for an exponential set of paths), this invariant can be computed by a combination of strategy iteration and satisfiability modulo theory (SMT) solving. Unfortunately, the above approaches lead to unacceptable space and time costs if applied to a program whose control states have been partitioned according to predicates. We therefore propose a modification of the strategy iteration algorithm where the strategies are stored succinctly, and the linear programs to be solved at each iteration step are simplified according to an equivalence relation. We have implemented the technique in a prototype tool and we demonstrate on a series of examples that the approach performs significantly better than previous strategy iteration techniques.
[51] Francesco Alberti and David Monniaux. Polyhedra to the rescue of array interpolants. In ACM Symposium on Applied Computing, software verification and testing track, pages 1745--1750. ACM, 2015. [ bib | DOI | arXiv ]
We propose a new approach to the automated verification of the correctness of programs handling arrays. An abstract interpreter supplies auxiliary numeric invariants to an interpolation-based refinement procedure suited to array programs. Experiments show that this combination approach, implemented in an enhanced version of the Booster software model-checker, performs better than the pure interpolation-based approach, at no additional cost.
[52] Laure Gonnord, David Monniaux, and Gabriel Radanne. Synthesis of ranking functions using extremal counterexamples. In Programming Language Design and Implementation (PLDI), pages 608--618. ACM, 2015. [ bib | DOI | arXiv | http ]
We present a complete method for synthesizing lexicographic linear ranking functions (and thus proving termination), supported by inductive invariants, in the case where the transition relation of the program includes disjunctions and existentials (large block encoding of control flow). Previous work would either synthesize a ranking function at every basic block head, not just loop headers, which reduces the scope of programs that may be proved to be terminating, or expand large block transitions including tests into (exponentially many) elementary transitions, prior to computing the ranking function, resulting in a very large global constraint system. In contrast, our algorithm incrementally refines a global linear constraint system according to extremal counterexamples: only constraints that exclude spurious solutions are included. Experiments with our tool Termite show marked performance and scalability improvements compared to other systems.
[53] David Monniaux and Francesco Alberti. A simple abstraction of arrays and maps by program translation. In Static analysis (SAS), volume 9291 of Lecture Notes in Computer Science, pages 217--234. Springer Verlag, 2015. [ bib | DOI | arXiv | http ]
We present an approach for the static analysis of programs handling arrays, with a Galois connection between the semantics of the array program and the semantics of purely scalar operations. The simplest way to implement it is by automatic, syntactic transformation of the array program into a scalar program, followed by analysis of the scalar program with any static analysis technique (abstract interpretation, acceleration, predicate abstraction, ...). The scalar invariants thus obtained are translated back onto the original program as universally quantified array invariants. We illustrate our approach on a variety of examples, leading to the “Dutch flag” algorithm.
[54] Egor George Karpenkov, David Monniaux, and Philipp Wendler. Program analysis with local policy iteration. In Barbara Jobstmann and K. Rustan M. Leino, editors, Verification, Model Checking, and Abstract Interpretation (VMCAI), volume 9583 of Lecture Notes in Computer Science, pages 127--146. Springer Verlag, January 2016. [ bib | DOI | arXiv ]
We present a new algorithm for deriving numerical invariants that combines the precision of max-policy iteration with the flexibility and scalability of conventional Kleene iterations. It is defined in the Configurable Program Analysis (CPA) framework, thus allowing inter-analysis communication. It uses adjustable-block encoding in order to traverse loop-free program sections, possibly containing branching, without introducing extra abstraction. Our technique operates over any template linear constraint domain, including the interval and octagon domains; templates can also be derived from the program source. The implementation is evaluated on a set of benchmarks from the Software Verification Competition (SV-Comp). It competes favorably with state-of-the-art analyzers.
[55] Alexandre Maréchal, Alexis Fouilhé, Tim King, David Monniaux, and Michaël Périn. Polyhedral approximation of multivariate polynomials using Handelman's theorem. In Barbara Jobstmann and K. Rustan M. Leino, editors, Verification, Model Checking, and Abstract Interpretation (VMCAI), volume 9583 of Lecture Notes in Computer Science, pages 166--184. Springer Verlag, January 2016. [ bib | DOI | arXiv ]
Convex polyhedra are commonly used in the static analysis of programs to represent over-approximations of sets of reachable states of numerical program variables. When the analyzed programs contain nonlinear instructions, they do not directly map to standard polyhedral operations: some kind of linearization is needed. Convex polyhedra are also used in satisfiability modulo theory solvers which combine a propositional satisfiability solver with a fast emptiness check for polyhedra. Existing decision procedures become expensive when nonlinear constraints are involved: a fast procedure to ensure emptiness of systems of nonlinear constraints is needed. We present a new linearization algorithm based on Handelman's representation of positive polynomials. Given a polyhedron and a polynomial (in)equality, we compute a polyhedron enclosing their intersection as the solution of a parametric linear programming problem. To get a scalable algorithm, we provide several heuristics that guide the construction of Handelman's representation. To ensure the correctness of our polyhedral approximation, our OCaml implementation generates certificates verified by a checker certified in Coq.
[56] David Monniaux. A survey of satisfiability modulo theory. In Computer Algebra in Scientific Computing (CASC), volume 9890 of Lecture Notes in Computer Science. Springer Verlag, September 2016. [ bib | DOI | arXiv ]
Satisfiability modulo theory (SMT) consists in testing the satisfiability of first-order formulas over linear integer or real arithmetic, or other theories. In this survey, we explain the combination of propositional satisfiability and decision procedures for conjunctions known as DPLL(T), and the alternative “natural domain” approaches. We also cover quantifiers, Craig interpolants, polynomial arithmetic, and how SMT solvers are used in automated software analysis.
[57] David Monniaux and Laure Gonnord. Cell morphing: From array programs to array-free Horn clauses. In Static analysis, volume 9837 of Lecture Notes in Computer Science, pages 361--382. Springer Verlag, September 2016. [ bib | arXiv ]
Automatically verifying safety properties of programs is hard. Many approaches exist for verifying programs operating on Boolean and integer values (e.g. abstract interpretation, counterexample-guided abstraction refinement using interpolants), but transposing them to array properties has been fraught with difficulties. Our work addresses that issue with a powerful and flexible abstraction that morphs concrete array cells into a finite set of abstract ones. This abstraction is parametric both in precision and in the back-end analysis used. From our programs with arrays, we generate nonlinear Horn clauses over scalar variables only, in a common format with clear and unambiguous logical semantics, for which there exist several solvers. We thus avoid the use of solvers operating over arrays, which are still very immature. Experiments with our prototype VAPHOR show that this approach can prove automatically and without user annotations the functional correctness of several classical examples, including selection sort, bubble sort, insertion sort, as well as examples from the literature on array analysis.
[58] Egor George Karpenkov and David Monniaux. Formula slicing: Inductive invariants from preconditions. In Roderick Bloem and Eli Arbel, editors, Hardware and Software: Verification and Testing (Haifa Verification Conference), volume 10028 of Lecture Notes in Computer Science. Springer Verlag, November 2016. [ bib | http ]
We propose a “formula slicing” method for finding inductive invariants. It is based on the observation that many loops in the program affect only a small part of the memory, and many invariants which were valid before a loop are still valid after.

Given a precondition of the loop, obtained from the preceding program fragment, we weaken it until it becomes inductive. The weakening procedure is guided by counterexamples-to-induction given by an SMT solver. Our algorithm applies to programs with arbitrary loop structure, and it computes the strongest invariant in an abstract domain of weakenings of preconditions. We call this algorithm “formula slicing”, as it effectively performs “slicing” on formulas derived from symbolic execution.

We evaluate our algorithm on the device driver benchmarks from the International Competition on Software Verification (SV-COMP), and we show that it is competitive with the state-of-the-art verification techniques.
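The weakening loop described in this abstract resembles a Houdini-style fixpoint computation. Here is a toy Python rendition (all names hypothetical), where brute-force enumeration over a tiny state space stands in for the SMT solver's counterexamples to induction:

```python
# Weaken a candidate invariant (a set of named conjuncts) by dropping
# every conjunct falsified by a counterexample to induction, until the
# remaining conjunction is inductive for the loop.

# Loop: (x, y) -> (x + 1, y) while x < 10; precondition: x == 0 and y == 5.
def step(s):
    x, y = s
    return (x + 1, y) if x < 10 else None

conjuncts = {
    "x == 0": lambda s: s[0] == 0,   # holds initially, but not inductive
    "y == 5": lambda s: s[1] == 5,   # inductive: the loop never writes y
}

def counterexample_to_induction(inv):
    # find a state satisfying inv whose successor violates inv
    # (an SMT query in the real algorithm; brute force here)
    for x in range(-1, 12):
        for y in range(0, 10):
            s = (x, y)
            if all(p(s) for p in inv.values()):
                t = step(s)
                if t is not None and not all(p(t) for p in inv.values()):
                    return t
    return None

while True:
    cti = counterexample_to_induction(conjuncts)
    if cti is None:
        break  # remaining conjunction is inductive
    conjuncts = {n: p for n, p in conjuncts.items() if p(cti)}

print(sorted(conjuncts))  # ['y == 5']
```

The surviving conjunct `y == 5` is exactly the part of the precondition that the loop leaves untouched, matching the abstract's observation that invariants valid before a loop often remain valid after it.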

[59] Valentin Touzeau, Claire Maïza, and David Monniaux. Model checking of cache for WCET analysis refinement. In 10th Junior Researcher Workshop on Real-Time Computing, 2016. [ bib | .pdf ]
[60] Virginie Gautron and David Monniaux. De la surveillance secrète à la prédiction des risques : les dérives du fichage dans le champ de la lutte contre le terrorisme. Archives de politique criminelle, 38:123--135, 2016. [ bib | arXiv | http ]
[61] Ahmed Bouajjani and David Monniaux, editors. Verification, Model Checking, and Abstract Interpretation (VMCAI), volume 10145 of Lecture Notes in Computer Science. Springer Verlag, January 2017. [ bib | DOI ]
[62] Valentin Touzeau, Claire Maïza, David Monniaux, and Jan Reineke. Ascertaining uncertainty for efficient exact cache analysis. In Viktor Kuncak and Rupak Majumdar, editors, Computer-aided verification (CAV). Springer Verlag, 2017. [ bib | arXiv ]
[63] Alexey Bakhirkin and David Monniaux. Combining forward and backward abstract interpretation of Horn clauses. In Francesco Ranzato, editor, Static analysis (SAS), volume 10422 of Lecture Notes in Computer Science. Springer Verlag, 2017. [ bib | arXiv ]
[64] Alexandre Maréchal, David Monniaux, and Michaël Périn. Scalable minimizing-operators on polyhedra via parametric linear programming. In Francesco Ranzato, editor, Static analysis (SAS), volume 10422 of Lecture Notes in Computer Science. Springer Verlag, 2017. [ bib | arXiv ]
[65] David Monniaux. On the decidability of the existence of polyhedral invariants in transition systems. Acta Informatica, 56(4):385--389, 2019. [ bib | DOI | arXiv | http | .pdf ]
[66] Alexey Bakhirkin and David Monniaux. Extending constraint-only representation of polyhedra with boolean constraints. In Static analysis (SAS), pages 127--145, 2018. [ bib | arXiv ]
[67] Valentin Touzeau, Claire Maiza, David Monniaux, and Jan Reineke. Fast and exact analysis for LRU caches. Proc. ACM Program. Lang., 3:54:1--54:29, January 2019. [ bib | DOI | http ]
Keywords: Abstract Interpretation, Cache Analysis, LRU
[68] Sylvain Boulmé, Alexandre Maréchal, David Monniaux, Michael Périn, and Hang Yu. The verified polyhedron library: an overview. In Erika Ábrahám, Viorel Negru, Dana Petcu, Daniela Zaharie, Tetsuo Ida, Tudor Jebelean, and Stephen M. Watt, editors, 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2018), pages 9--17. IEEE Computer Society, 2019. [ bib ]
[69] Camille Coti, David Monniaux, and Hang Yu. Parallel parametric linear programming solving, and application to polyhedral computations. In Computational Science – ICCS 2019, volume 11540 of Lecture Notes in Computer Science, pages 566--572, June 2019. [ bib | DOI | arXiv ]
[70] Hang Yu and David Monniaux. An efficient parametric linear programming solver and application to polyhedral projection. In Bor-Yuh Evan Chang, editor, Static analysis (SAS), volume 11822 of Lecture Notes in Computer Science, pages 203--224. Springer Verlag, 2019. [ bib | DOI ]
[71] David Monniaux and Valentin Touzeau. On the complexity of cache analysis for different replacement policies. Journal of the ACM, 66(6):41:1--41:22, November 2019. [ bib | DOI | arXiv ]
Keywords: Caches, NP-completeness, PSPACE-completeness, cache analysis, complexity, static analysis
[72] Camille Coti, David Monniaux, and Hang Yu. A task-based approach to parallel parametric linear programming solving, and application to polyhedral computations. Concurrency and Computation: Practice and Experience, 33:e6050, 2021. [ bib | DOI | arXiv ]
[73] Cyril Six, Sylvain Boulmé, and David Monniaux. Certified and efficient instruction scheduling: application to interlocked VLIW processors. Proc. ACM Program. Lang., 4(OOPSLA):129:1--129:29, 2020. [ bib | DOI | arXiv ]
[74] Julien Braine, Laure Gonnord, and David Monniaux. Data abstraction: A general framework to handle program verification of data structures. In Static analysis (SAS), volume 12913 of Lecture Notes in Computer Science, pages 215--235. Springer, 2021. [ bib | DOI | arXiv ]
[75] David Monniaux and Cyril Six. Simple, light, yet formally verified, global common subexpression elimination and loop-invariant code motion. In Jörg Henkel and Xu Liu, editors, LCTES '21: 22nd ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems, Virtual Event, Canada, 22 June, 2021, pages 85--96. ACM, 2021. [ bib | DOI ]
[76] David Monniaux and Sylvain Boulmé. The trusted computing base of the CompCert verified compiler. In European Symposium on Programming Languages and Systems (ESOP '22), volume 13240 of Lecture Notes in Computer Science, pages 204--233. Springer, 2022. [ bib | DOI | arXiv ]
[77] David Monniaux. The complexity gap in the static analysis of cache accesses grows if procedure calls are added. Formal Methods in System Design, 2022. [ bib | DOI | arXiv ]
[78] David Monniaux and Alice Pain. Formally verified 32- and 64-bit integer division using double-precision floating-point arithmetic. In 29th IEEE Symposium on Computer Arithmetic, ARITH 2022, Lyon, France, September 12-14, 2022, pages 128--132. IEEE, 2022. [ bib | DOI | arXiv ]
[79] Julien Braine, Laure Gonnord, and David Monniaux. Verifying programs with arrays and lists. Internship report, ENS Lyon, June 2016. [ bib | arXiv | .pdf ]
Keywords: Abstract Interpretation ; Program Verification
[80] David Monniaux and Cyril Six. Formally verified loop-invariant code motion and assorted optimizations. ACM Trans. Embed. Comput. Syst., 22(1), December 2022. [ bib | DOI | arXiv ]
We present an approach for implementing a formally certified loop-invariant code motion optimization by composing an unrolling pass and a formally certified yet efficient global common subexpression elimination. This approach is lightweight: each pass comes with a simple and independent proof of correctness. Experiments show the approach significantly narrows the performance gap between the CompCert certified compiler and state-of-the-art optimizing compilers. Our static analysis employs an efficient yet verified hashed set structure, resulting in fast compilation.
Keywords: CompCert, common subexpression elimination, Verified compilation
[81] Thomas Vigouroux, Cristian Ene, David Monniaux, Laurent Mounier, and Marie-Laure Potet. BaxMC: a CEGAR approach to Max#SAT. In Alberto Griggio and Neha Rungta, editors, 22nd Formal Methods in Computer-Aided Design, FMCAD 2022, Trento, Italy, October 17-21, 2022, pages 170--178. IEEE, 2022. [ bib | DOI ]
[82] David Monniaux, Léo Gourdin, Sylvain Boulmé, and Olivier Lebeltel. Testing a formally verified compiler. In Virgile Prevosto and Cristina Seceleanu, editors, Tests and Proofs - 17th International Conference, TAP 2023, Leicester, UK, July 18-19, 2023, Proceedings, volume 14066 of Lecture Notes in Computer Science, pages 40--48. Springer, 2023. [ bib | DOI | arXiv ]
[83] Léo Gourdin, Benjamin Bonneau, Sylvain Boulmé, David Monniaux, and Alexandre Bérard. Formally verifying optimizations with block simulations. Proc. ACM Program. Lang., 7(OOPSLA2), October 2023. [ bib | DOI | arXiv | http ]
CompCert (ACM Software System Award 2021) is the first industrial-strength compiler with a mechanically checked proof of correctness. Yet, CompCert remains a moderately optimizing C compiler. Indeed, some optimizations of “gcc -O1” such as Lazy Code Motion (LCM) or Strength Reduction (SR) were still missing: developing these efficient optimizations together with their formal proofs remained a challenge. Cyril Six et al. have developed efficient formally verified translation validators for certifying the results of superblock schedulers and peephole optimizations. We revisit and generalize their approach into a framework (integrated into CompCert) able to validate many more optimizations: an enhanced superblock scheduler, but also Dead Code Elimination (DCE), Constant Propagation (CP), and more noticeably, LCM and SR. In contrast to other approaches to translation validation, we co-design our untrusted optimizations and their validators. Our optimizations provide hints, in the forms of invariants or CFG morphisms, that help keep the formally verified validators both simple and efficient. Such designs seem applicable beyond CompCert.
Keywords: Translation validation, Symbolic execution, Formal verification of compiler optimizations, the Coq proof assistant
[84] David Monniaux. Completeness in static analysis by abstract interpretation, a personal point of view. In Vincenzo Arceri, Agostino Cortesi, Pietro Ferrara, and Martina Olliaro, editors, Challenges of Software Verification. Springer Singapore, July 2023. [ bib | DOI ]

This file was generated by bibtex2html 1.99.