Seminar details

Room A. Turing, CE4

19 February 2015 - 14h00
Verimag PhD seminars 3
by the Verimag PhD students (Séminaires doctorants 3)

Abstract: 14:00-14:35 Egor Karpenkov
Policy Iteration: a Scalable Approach
(supervision David Monniaux)
Max-policy iteration is a technique for obtaining inductive
invariants of a program, based on the ideas of
abstract interpretation. One of the major pitfalls of abstract
interpretation is widening: if the invariant value does not converge
after a few iterations, the invariant is widened to the largest
value of the abstract domain. Abstract interpretation addresses
the introduced imprecision problem with a narrowing operator,
which keeps performing fixpoint iterations after the widening,
hoping to tighten the invariant. However, the narrowing approach
is fragile and, depending on the structure of the program, often
fails to tighten the invariant. In contrast, the policy iteration
approach is guaranteed to find the tightest possible invariant in
the given abstract domain, by applying ideas from convex optimization.
In our work we address one of the main shortcomings of the max-policy
iteration approach: the algorithm presented in the original paper
is not scalable and only works for the smallest of examples, as
it requires solving an equation system the size of the entire program
at each fixpoint iteration. We develop a new algorithm based on
the max-policy iteration which can scale to considerably larger
programs. Furthermore, our implementation fits into the CPA framework
and lays the groundwork for extensive cross-analysis communication.
In addition, we develop a formula slicing approach, based on
finding slices of a formula that are invariant under the loop. We use
the formula slicing approach to improve the precision of pointer
analysis in our implementation. We evaluate our implementation
on the SV-COMP benchmarks and obtain a score competitive with
state-of-the-art analyzers.
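The contrast between widening/narrowing and policy iteration can be seen on a toy loop. The sketch below is illustrative only (the loop, the interval domain, and the transfer function are our own example, not the speaker's implementation): interval analysis of `x = 0; while (x < 100) x++` widens the unstable upper bound to infinity, and narrowing happens to recover the bound that policy iteration would compute directly as the least fixpoint.

```python
# Illustrative sketch: interval analysis of `x = 0; while (x < 100) x++`.
INF = float("inf")

def transfer(lo, hi):
    """One abstract step: join the entry state [0,0] with the
    post-state of the loop body under the guard x < 100."""
    glo, ghi = lo, min(hi, 99)            # apply the guard x < 100
    if glo > ghi:                          # guard infeasible: entry state only
        return (0, 0)
    return (min(0, glo + 1), max(0, ghi + 1))  # join [0,0] with [glo+1, ghi+1]

# Kleene iteration: the upper bound keeps growing (0, 1, 2, 3, ...).
lo, hi = 0, 0
for _ in range(3):
    lo, hi = transfer(lo, hi)

hi = INF  # widening: the unstable bound jumps to +infinity

# Narrowing: keep iterating from the widened value to tighten it.
for _ in range(2):
    lo, hi = transfer(lo, hi)

print((lo, hi))  # -> (0, 100), the least inductive interval invariant
```

On this simple loop narrowing succeeds; the abstract's point is that on less well-structured programs it may not, whereas policy iteration reaches (0, 100) by construction.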

14:45-15:20 Yuliia Romenska
High-level executable component based models for
functional and non-functional properties of SoCs
(supervision Florence Maraninchi)
Applications of systems-on-chip (SoCs) include most of the electronic
devices used nowadays. An SoC is an electronic system embedded within
a larger electronic device, in which the hardware components necessary
to the system’s operation are grouped onto a single integrated
circuit. It consists of both hardware and software parts.
The aim of this thesis is a general method for specifying
component-based systems and simulating them at the very early
stages of the design flow. Simulation helps to detect bugs in
a system’s design. Bugs detected earlier require less time
and effort to correct, which improves design
productivity (faster time-to-market). The method is based on the
reuse of pre-integrated and pre-verified subsystems, which speeds
up development and avoids re-designs. To specify
and simulate component-based systems, it is necessary to have
a formalism that would support both functional requirements and
non-functional properties such as power and temperature. It must
be hierarchical, suitable for both top-down and bottom-up design
of SoCs, executable and interoperable with existing open standards
(e.g. SystemC/TLM). To avoid overspecification issues and to raise
the abstraction level, the formal structure must support functional
and non-functional non-determinism.
Most of the modern “verification” approaches for models of
SoCs are testing approaches. They are based on the generation of
stimuli for a DUT (design under test). There are languages that offer
constraint-based generation for that purpose. Instead of testing
the design with stimuli, this work proposes to look inside the DUT
and to replace each underlying component by its non-deterministic
version. The whole resulting system should remain executable, which
means that constrained random values must be generated
somewhere. This approach integrates gracefully with the Assume
part and is more general than just testing a given DUT.
The talk will present intermediate results of the thesis and
enumerate ongoing tasks. A short overview of a full case-study
used for identifying basic ingredients of the formalism will be
given. We will see an example of a non-deterministic functional
specification and consider how appropriate defensive programming
and constraint-based random generation can be written.
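As a flavour of what a non-deterministic executable component might look like, here is a hypothetical sketch in Python rather than SystemC/TLM (the memory component and its latency range are our own example, not from the thesis): instead of one concrete behaviour, constrained random generation picks one legal behaviour per run.

```python
# Hypothetical non-deterministic component: a memory whose read latency
# is only specified as a range, not a concrete value. Constrained random
# generation makes the specification executable: each run exhibits one
# of the legal behaviours.
import random

class NonDetMemory:
    # Assumed non-functional constraint: latency between 2 and 10 cycles.
    MIN_LAT, MAX_LAT = 2, 10

    def __init__(self):
        self.store = {}

    def write(self, addr, value):
        self.store[addr] = value

    def read(self, addr):
        # Any latency in the allowed range is a legal behaviour.
        latency = random.randint(self.MIN_LAT, self.MAX_LAT)
        return self.store.get(addr, 0), latency

mem = NonDetMemory()
mem.write(0x10, 42)
value, latency = mem.read(0x10)  # value is 42; latency is 2..10 cycles
```

Leaving the latency non-deterministic avoids over-specification: any client that works against this model works against every compliant implementation.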

15:30-16:05 Hosein Nazarpour
Monitoring of concurrent component-based systems in the BIP framework
(supervision Saddek Bensalem and Yliès Falcone)
We address the problem of monitoring concurrent component-based
systems in the BIP framework. Such component-based systems execute
following a partial-state semantics where concurrency between
interactions is possible and the global state is not available at
any moment during the execution. Given a concurrent BIP system,
our method reconstructs a sequence of global states from the
partial states obtained at runtime. The sequence of global states
can then be used for runtime monitoring. We provide experimental
results illustrating the application of the theory on a prototype
implementation.
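The reconstruction idea can be sketched as follows (an illustrative toy, not the actual BIP instrumentation; the `BUSY` marker and the trace format are our assumptions): each component reports either a new local state or the fact that a transition is still in progress, and a global state is witnessed whenever no component is busy.

```python
# Toy sketch of global-state reconstruction from partial-state updates.
BUSY = object()  # marker: a component's transition is still running

def reconstruct(updates, initial):
    """updates: list of (component_index, local_state-or-BUSY) events."""
    current = list(initial)
    globals_seen = [tuple(current)]
    for comp, state in updates:
        current[comp] = state
        if BUSY not in current:        # every component is stable:
            globals_seen.append(tuple(current))  # a global state is witnessed
    return globals_seen

# Two components starting in ("s0", "t0"); their updates interleave.
trace = [(0, BUSY), (1, BUSY), (0, "s1"), (1, "t1"), (1, BUSY), (1, "t2")]
print(reconstruct(trace, ("s0", "t0")))
# -> [('s0', 't0'), ('s1', 't1'), ('s1', 't2')]
```

The reconstructed sequence can then be fed to an ordinary monitor that expects a trace of global states.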

16:15-16:50 Amrit Kumar
(Un)Safe Browsing
(supervision Pascal Lafourcade)
Users often accidentally or inadvertently click malicious phishing
or malware website links, and in doing so they sacrifice secret
information and sometimes even fully compromise their devices. The
purveyors of these sites are highly motivated and hence construct
intelligently scripted URLs to remain inconspicuous over the
Internet. Furthermore, they quickly adapt to defensive technologies.
In light of the ever increasing number of such URLs, new ingenious
strategies have been invented to detect them and inform the end
user when they are tempted to access such a link. The most notable
of all is the Safe Browsing technique, which provides an exemplary
service to identify unsafe websites and notify users and webmasters
allowing them to protect themselves from harm.
In this work, we show how to turn Google Safe Browsing service
against itself and its users. We propose several Distributed
Denial-of-Service attacks that simultaneously affect both the Google
Safe Browsing server and the end users. Our attacks leverage the
false positive probability of the data structure used for malicious
URL detection. This probability exists because a trade-off was made
between Google server load and the client's memory consumption. The
presented attacks are based on the forgery of malicious URLs
to increase the false positive probability. The increased false
positive probability forces the client to make more requests than
necessary to the Google server and, symmetrically, resources on the
Google side are unnecessarily consumed.
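The collision mechanism behind the attacks can be sketched as follows (simplified parameters and our own example URLs; the deployed service stores 32-bit SHA-256 prefixes, shortened here to 16 bits so the brute force is instant): any URL whose hash prefix collides with a blacklisted one is a local false positive and forces a full-hash request to the server.

```python
# Sketch of a hash-prefix collision. The client keeps only short prefixes
# of the full hashes of blacklisted URLs; a prefix match on a benign URL
# is a false positive that triggers a query to the server.
import hashlib

PREFIX_BITS = 16  # simplified; the real service uses 32-bit prefixes

def prefix(url):
    digest = hashlib.sha256(url.encode()).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PREFIX_BITS)

blacklist_prefixes = {prefix("http://malicious.example/")}

# Forge a benign-looking URL whose prefix collides with the blacklist.
i = 0
while True:
    candidate = f"http://benign.example/page{i}"
    if prefix(candidate) in blacklist_prefixes:
        break
    i += 1

# The candidate is harmless, yet its prefix matches: the client must now
# query the server, which is exactly the load the attack amplifies.
print(candidate)
```

With 16-bit prefixes a collision is found after roughly 2^16 tries; at 32 bits the search is about 2^32 hashes, which remains cheap for an attacker forging many such URLs.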
