# Formal Methods for Computer Security

The research group is interested in developing a theoretical and formal grounding for computer security. From a practical point of view, the goal of this research is to develop efficient and sound tools for the analysis of security properties. Currently, the work carried out covers the following main areas:

### Semantics, specification language and verification of security protocols:

Security protocols are almost everywhere. They have the particular feature of being concise and yet subtle and error-prone: the Needham-Schroeder protocol had been around for 17 years before an attack was found. Today, we still lack semantically clean and widely accepted languages for describing security protocols. The situation concerning their properties is worse. Confidentiality and authentication are more or less well understood (this is actually not completely true of the latter), but anonymity, opacity, fairness, denial of service, etc. are far from being well studied.
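The attack mentioned above is Lowe's man-in-the-middle attack on the Needham-Schroeder public-key protocol. A minimal symbolic sketch in Python (the encoding of messages and the agent names are illustrative, not taken from the group's tools) shows how an intruder `I`, with whom `A` willingly starts a session, can relay the messages to `B` and learn `B`'s secret nonce:

```python
# Symbolic sketch of Lowe's attack on Needham-Schroeder public key.
# Ciphertexts are opaque terms: only the owner of the matching private
# key can open them (the "perfect encryption" assumption).

def enc(agent, payload):
    """Encrypt payload under agent's public key (symbolic)."""
    return ("enc", agent, payload)

def dec(agent, ct):
    """Decrypt; succeeds only with the matching private key."""
    tag, owner, payload = ct
    assert tag == "enc" and owner == agent, "cannot decrypt"
    return payload

Na, Nb = "Na", "Nb"           # fresh nonces (symbolic constants)

# 1.  A -> I         : {Na, A}_pk(I)   (A honestly talks to I)
m1 = enc("I", (Na, "A"))
# 1'. I(A) -> B      : {Na, A}_pk(B)   (I re-encrypts, posing as A)
na, claimed_sender = dec("I", m1)
m1b = enc("B", (na, claimed_sender))
# 2'. B -> "A"       : {Na, Nb}_pk(A)  (intercepted by I)
na_b, _ = dec("B", m1b)
m2 = enc("A", (na_b, Nb))
# 2.  I -> A         : forwards m2 unchanged (I cannot read it)
na_a, nb_a = dec("A", m2)
# 3.  A -> I         : {Nb}_pk(I)      (A thinks Nb came from I)
m3 = enc("I", nb_a)
nb_i = dec("I", m3)           # the intruder now knows B's nonce
# 3'. I(A) -> B      : {Nb}_pk(B)
m3b = enc("B", nb_i)

assert dec("B", m3b) == Nb    # B accepts, believing it talked to A
print("intruder learned B's nonce:", nb_i)
```

Note that the intruder never breaks any encryption: the attack is purely a flaw in the protocol logic, which is exactly the class of errors that symbolic verification targets.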

Main topics of this area are:

- Verification of security protocols within the Dolev-Yao model: the group has developed a general verification method for security protocols that can handle unbounded sessions. The following papers underpin its theoretical background: [TACAS'03], [TR-2004-1]. This work culminates in a powerful and efficient tool, Hermes, which is described in [CAV03] and can be tested at the following URL: Hermes. The method is further refined in [FOSSACS04], where the authors provide a sound and complete method for the verification of a rich class of security properties in the case of a bounded number of sessions.
- Beyond secrecy and authentication: the group has also studied anonymity properties and developed new results. In [WITS04] it is proved that opacity (the inability of an intruder to decide, for an execution of a security protocol, whether a given property is satisfied) is decidable in the case of a passive intruder. This work is further extended in [FAST04], where the hypothesis of encryption with only atomic keys is dropped, and in [ARSPA04], where it is proved that a restricted version of opacity is still decidable for active intruders. In [TR-2004-25], opacity is extended to labelled transition systems and generalized in order to better represent concepts from the work on information flow. In particular, this work establishes links between opacity and the information-flow concepts of anonymity and non-interference. Abstraction-based methods for verifying opacity over Petri nets are also studied.
- More realistic models for security protocols: most work on the verification of security protocols uses a formal model called the Dolev-Yao model. Its important hypotheses are: 1) encryption is perfect (the intruder must know the inverse key to obtain the plaintext from a ciphertext), and 2) nonces (random numbers used to ensure the freshness of a message) are ideally generated (they do not collide and cannot be "guessed"). One direction of the group's research is the development of more realistic models. In [CONCUR04] the hypothesis of "perfect nonces" is weakened, and the authors extend the method of [FOSSACS04] to provide a sound and complete method for the verification of security protocols that use timestamps.
- Soundness of the symbolic model: [TR-2004-19] represents a significant step towards reconciling the computational model (where cryptographic functions operate on strings of bits and can be attacked using Turing machines) and the symbolic, also called Dolev-Yao, model. The main result of the paper is that the formal (Dolev-Yao) model is a sound abstraction of the computational model, provided that the encryption schemes are IND-CCA. This work extends previous results in several ways: 1) it generalizes to multi-party protocols, 2) it allows symmetric and asymmetric encryption as well as digital signatures, 3) encoding of secret keys is allowed, and 4) it applies to secrecy properties as well. In [TR-2005-3], this result is extended to symmetric and asymmetric encryption, signatures and hashing, and the same result is extended to include Diffie-Hellman exponentiation in [TR-2005-7]. The soundness of the symbolic model when probabilistic opacity is considered is proved in [TR-2005-4]. A predicate is opaque for a given system if an adversary can never establish the truth or falsehood of the predicate for any observed computation. In the Dolev-Yao model, even if an adversary is 99% sure of the truth of the predicate, it remains opaque, as the adversary cannot conclude for sure. This paper introduces a computational version of opacity for passive adversaries, called cryptographic opacity. The main result is a composition theorem: if a system is secure in an abstract formalism and the cryptographic primitives used to implement it are secure, then the system is secure in a computational formalism. Security of the abstract system is the usual opacity, and security of the cryptographic primitives is IND-CPA security. Two applications illustrate the results: a short and elegant proof of the classical Abadi-Rogaway result, and the first computational proof of Chaum's visual electronic voting scheme.
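The notion of opacity recurring in the items above can be sketched concretely. In the following toy Python model (the names and the trace encoding are illustrative assumptions, not the formalism of the cited papers), a predicate is opaque for a passive observer if every observation of a run satisfying the predicate is also produced by some run that does not satisfy it, so the observer can never be sure:

```python
# Toy model of opacity for a passive observer over a finite set of runs.

def is_opaque(runs, obs, pred):
    """pred is opaque over `runs` under observation function `obs`."""
    for r in runs:
        if pred(r):
            # look for an observationally equivalent "cover" run
            covered = any(obs(r2) == obs(r) and not pred(r2) for r2 in runs)
            if not covered:
                return False   # the observer can deduce that pred holds
    return True

# Runs are sequences of (action, visible) pairs; the observer sees only
# the projection onto visible actions.
def obs(run):
    return tuple(a for a, visible in run if visible)

# The secret action "s" is hidden; "a" and "b" are visible.
r1 = [("a", True), ("s", False), ("b", True)]   # secret happened
r2 = [("a", True), ("b", True)]                 # secret did not happen
r3 = [("a", True), ("s", False)]                # secret, with no cover

secret_happened = lambda run: any(a == "s" for a, _ in run)

print(is_opaque([r1, r2], obs, secret_happened))      # True: r2 covers r1
print(is_opaque([r1, r2, r3], obs, secret_happened))  # False: r3 is exposed
```

This is the all-or-nothing character mentioned above: even when almost all covering runs are improbable, symbolic opacity still holds, which is what motivates the probabilistic and cryptographic refinements of [TR-2005-4].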

This work is conducted in tight connection with the following projects:

- EVA: Explication et Vérification Automatique de protocoles cryptographiques (automatic explanation and verification of cryptographic protocols).
- ACI Rossignol: verification of cryptographic protocols.
- RNTL PROUVE: verification of cryptographic protocols.
- AS Sécurité logicielle: models and verification.

### Certification of security properties with respect to the Common Criteria:

The Common Criteria is an international standard for IT security evaluation. The criteria define seven evaluation levels, EAL 1 to EAL 7; the last two, EAL 6 and EAL 7, require formal descriptions and proofs. More generally, the standard advocates a top-down development through a number of description levels such as the SPM (security policy model), FSP (functional specification) and HLD (high-level design). Currently, a quick look at the site of the DCSSI (Direction centrale de la sécurité des systèmes d'information), the French official body in charge of certification, shows that evaluation at the highest levels has yet to be achieved. This is due to the lack of an efficient tool-assisted methodology. We are participating in the national project EDEN, whose aim is to develop such a methodology.