Monday, February 6 at 2 p.m.: Youssef Mesri (MINES ParisTech, CEMEF), room P.2.14.

« Meshes, algorithms and fluids »

This talk deals with adaptive, parallel numerical methods for solving fluid dynamics problems. The computational domain is discretized with anisotropic unstructured meshes, well suited to capturing physical heterogeneities and singularities. Because these features are dynamic and localized, they require a very fine mesh resolution, making the computation intractable even on today's most powerful supercomputers.
In recent years we have developed an alternative approach based on anisotropic mesh-adaptation methods. The mesh undergoes local modifications driven by physical criteria, using a posteriori error estimators that link the accuracy of the mesh to that of the solution. The work and advances achieved in this framework will be presented, with various illustrations from the world of fluid mechanics.

Monday, January 9 at 2 p.m.: Claude Tadonki (CRI), room P.2.14.

« NUMA-Aware Wilson-Dirac on Supercomputers »

We revisit the Wilson-Dirac operator, also referred to as Dslash, on NUMA many-core vector machines, seeking an efficient supercomputing implementation. The Wilson-Dirac operator is the major computing kernel in Lattice Quantum ChromoDynamics (LQCD), which is the canonical discrete formalism for Quantum ChromoDynamics (QCD) investigations. QCD is the theory of sub-nuclear particle physics, aiming to model the strong nuclear force, which is responsible for the interactions of nuclear particles. Based on the LQCD formalism, intensive simulations are performed following the Monte-Carlo paradigm. Physicists claim that informative observations should be derived from large-scale LQCD scenarios, most of them numerically sensitive. The corresponding computing demand is tremendous at various levels, from storage to floating-point operations, hence the essential need for powerful supercomputers. Designing efficient LQCD codes on modern (mostly hybrid) supercomputers requires efficiently exploiting all available levels of parallelism, including accelerators. Since the Wilson-Dirac operator is a coarse-grain stencil computation performed on huge volumes of data, any performance- and scalability-related investigation should skillfully address memory accesses and interprocessor communication overheads. To lower the latter, an explicit shared-memory implementation should be considered at the node level, since this leads to a less complex data-communication graph and thus (at least intuitively) a lower overall communication latency.
We focus on this aspect and propose a novel, efficient NUMA-aware scheduling, together with a combination of the major HPC strategies for large-scale LQCD. We reach nearly optimal performance on a single core and a significant scalability improvement across several NUMA nodes. Then, using a classical domain-decomposition approach, we extend our work to a large cluster of many-core nodes, showing the global efficiency of our implementation and its benefit over pure multi-process schemes.
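
The node-level versus cluster-level trade-off hinges on halo (ghost-site) exchanges between subdomains. Here is a hypothetical 1-D miniature of the domain-decomposition idea, with plain floats standing in for the 4-D SU(3) spinor fields of the real operator; it is purely illustrative, not the talk's implementation:

```python
# 1-D toy: split a periodic lattice into subdomains, pad each with a
# one-site "halo" copied from its neighbours, and check that the
# decomposed nearest-neighbour stencil matches the global one.

def stencil_global(u):
    n = len(u)
    return [u[(i - 1) % n] + u[(i + 1) % n] - 2 * u[i] for i in range(n)]

def stencil_decomposed(u, parts):
    n = len(u)
    size = n // parts
    out = []
    for p in range(parts):
        lo, hi = p * size, (p + 1) * size
        # halo exchange: one ghost site from each neighbouring subdomain
        local = [u[(lo - 1) % n]] + u[lo:hi] + [u[hi % n]]
        out.extend(local[i - 1] + local[i + 1] - 2 * local[i]
                   for i in range(1, size + 1))
    return out

u = [float(i * i % 7) for i in range(12)]
assert stencil_decomposed(u, 3) == stencil_global(u)
```

The point of the shared-memory node-level scheme in the talk is precisely that intra-node halos need not be copied at all, only halos crossing node boundaries.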

Monday, December 5 at 2 p.m.: Bruno Sguerra (CRI), room P.2.14.

« Classification Using Sum-Product Networks for Autonomous Flight of Micro Aerial Vehicles »

Sum-product networks (SPNs) are a deep probabilistic model proposed by Poon and Domingos in 2011. Compared with classical probabilistic graphical models such as Bayesian networks and Markov networks, this model has the advantage of tractable inference.
In this presentation, we will show the results of using the sum-product network model as an image classifier to guide micro aerial vehicles through indoor environments.
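
For readers unfamiliar with the model, here is a toy SPN over two Boolean variables (with invented weights, not the classifier from the talk). A single bottom-up pass evaluates the network, and the same pass computes the partition function or any marginal, which is exactly the tractability claim:

```python
import math

def leaf(var, val):
    # indicator leaf: 1 unless the evidence fixes `var` to the other value
    return lambda ev: 1.0 if ev.get(var, val) == val else 0.0

def weighted_sum(weights, kids):
    return lambda ev: sum(w * k(ev) for w, k in zip(weights, kids))

def product(kids):
    return lambda ev: math.prod(k(ev) for k in kids)

# P(A, B) as a small SPN: a mixture of two independent product terms
spn = weighted_sum(
    [0.6, 0.4],
    [product([weighted_sum([0.9, 0.1], [leaf("A", 1), leaf("A", 0)]),
              weighted_sum([0.3, 0.7], [leaf("B", 1), leaf("B", 0)])]),
     product([weighted_sum([0.2, 0.8], [leaf("A", 1), leaf("A", 0)]),
              weighted_sum([0.5, 0.5], [leaf("B", 1), leaf("B", 0)])])])

assert abs(spn({}) - 1.0) < 1e-12   # summing out all variables gives 1
p_a1 = spn({"A": 1})                # marginal P(A=1) in one linear pass
assert abs(p_a1 - (0.6 * 0.9 + 0.4 * 0.2)) < 1e-12
```

In a Bayesian or Markov network, the same marginal would in general require summing over exponentially many joint states.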

Monday, November 7 at 2 p.m.: Keryan Didier (Inria), room P.2.14.

« Towards formally verified real-time parallel implementation of avionics applications »

We advance on our goal of defining a compilation process going all the way from high-level functional and non-functional specifications to running real-time implementations. When compared to classical compilation problems, our proposal involves three novel features:
- ensuring a safe and precise time accounting throughout the compilation process;
- explicitly considering back-end transformations such as linking and loading, whose control is necessary to ensure the respect of real-time guarantees;
- introducing language constructs to represent mapping and real-time requirements.
We formalize such a real-time compilation problem for simple dataflow synchronous specifications that are statically compiled to shared memory execution platforms. We also show a prototype working on a subset of this formal model on a large-scale avionics application.

Monday, September 26 at 2 p.m.: Adilla Susungi (CRI), room P.2.14.

« On the Road of Designing a Parallel Intermediate Language »

This talk is about the current state of design and implementation of a parallel intermediate language. In this language, arrays are first-class data structures for which operations such as data placement and layout transformations can be expressed through dedicated primitives, and higher-order functions are used to describe array accesses in parallel or sequential loops. We present an example of compilation flow involving the language and discuss limitations and future directions.

Monday, September 5 at 2 p.m.: Pierre Jouvelot (CRI), room P.2.14.

« Signal Rate Inference for Multi-Dimensional Faust »

We introduce a new signal-level, type- and rate-based semantic framework for describing a multi-rate version of the functional, domain-specific Faust language, dedicated to audio signal processing, and here extended to support array-valued samples. While Faust is usually viewed as a formalism for combining "signal processors", which are expressions mapping input signals to output signals, we provide here the first formal, lower-level semantics for Faust based on "signals" instead. In addition to its interest in understanding the inner workings of the Faust compiler, which uses symbolic evaluation of signal expressions, this approach turns out to be useful when introducing a language extension targeting multi-rate and multi-dimensional (array-valued) processing.
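
A rough sketch of the "signals rather than signal processors" viewpoint (a deliberately naive toy model, not the paper's formal semantics): a signal is a function from a time index to a sample, and processors are functions on signals. The delay is the one primitive that manipulates time explicitly, which is the kind of operator a multi-rate extension must reinterpret:

```python
# Toy signal-level semantics: signal = function from n >= 0 to a sample.

def const(c):
    return lambda n: c

def add(s1, s2):                 # pointwise lifting of +
    return lambda n: s1(n) + s2(n)

def delay(s, d):                 # d-sample delay, zero-initialized
    return lambda n: s(n - d) if n >= d else 0

ramp = lambda n: n               # the signal 0, 1, 2, 3, ...
echo = add(ramp, delay(ramp, 2)) # mix the ramp with a 2-sample delayed copy

assert [echo(n) for n in range(5)] == [0, 1, 2, 4, 6]
```

Symbolic evaluation in the Faust compiler manipulates exactly such signal expressions, rather than the processor-level block diagrams the programmer writes.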

Monday, June 27 at 2 p.m.: Pierre Guillou (CRI) and Florian Gouin (CRI and Safran), room R.0.8

1) Pierre Guillou : « A Dynamic to Static DSL Compiler for Image Processing Applications »

Computer vision is a thriving field of research, and Python is an instrument of choice for developing image processing software applications. Used in conjunction with specialized libraries written in C or C++, performance can be enhanced to match native code. The SMIL library is a new C++ image processing library offering ease of programming with a Python wrapper. However, SMIL applications also have to be executed on embedded platforms such as FPGAs, on which a Python interpreter is not available. The generic answer to such an issue is to re-code the original Python applications in C or C++, which will then be optimized for every hardware target, or to try to compile Python into native code using tools such as Cython.
The approach taken by the FREIA project is to ease portability of applications written in a DSL embedded in C (the FREIA API) by using specific optimizations such as image expressions evaluation, removal of temporary variables or image tiling.
Is it possible for SMIL Python applications to benefit from the FREIA compilation toolchain in order to increase their portability onto specialized hardware targets?
We present in this paper (1) a methodology to convert a dynamic DSL into a static one that preserves programmability, (2) a working implementation which takes care of types, memory allocation, polymorphism and API adaptation between SMIL and FREIA, (3) and experimental results on portability and performance.

2) Florian Gouin: « A local-variance computation method tailored to graphics processors »

Kernel computations are used in image processing to apply an algorithm to the neighborhood of every element of an image. This frequently used class of algorithms is characterized by particularly high memory-bandwidth consumption.
In this presentation, we focus on computing local-variance kernels on GPUs. We will review the state of the art in variance computation methods and detail the optimization choices that allowed us to reduce the amount of memory communication from O(N²) complexity to O(log(N)).
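
For intuition, the local variance over a window can be computed from two running sums, of x and of x², via Var = E[x²] − E[x]², so each output needs O(1) lookups instead of re-reading the whole window. This 1-D sequential sketch is the cousin of the parallel-reduction tricks used on GPUs, not the authors' kernel itself:

```python
# Local variance over sliding windows from two prefix sums.

def prefix(xs):
    out = [0.0]
    for x in xs:
        out.append(out[-1] + x)
    return out

def local_variance(xs, w):
    s1, s2 = prefix(xs), prefix([x * x for x in xs])
    res = []
    for i in range(len(xs) - w + 1):
        m = (s1[i + w] - s1[i]) / w     # window mean E[x]
        q = (s2[i + w] - s2[i]) / w     # window E[x^2]
        res.append(q - m * m)           # Var = E[x^2] - E[x]^2
    return res

# cross-check against the naive definition
xs = [1.0, 2.0, 4.0, 4.0, 5.0]
naive = []
for i in range(len(xs) - 2):
    win = xs[i:i + 3]
    mu = sum(win) / 3
    naive.append(sum((x - mu) ** 2 for x in win) / 3)
assert all(abs(a - b) < 1e-9 for a, b in zip(local_variance(xs, 3), naive))
```

Note that the E[x²] − E[x]² form can be numerically delicate in low precision, which is one reason the choice of method matters on GPUs.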

Monday, June 6 at 2 p.m.: Claude Tadonki (CRI), room P.2.14.

« LQCD revisited on multicore vector machines »

We revisit the Wilson-Dirac operator on multicore vector machines. The Wilson-Dirac operator is the major computing kernel in Lattice Quantum ChromoDynamics (LQCD), which is the canonical discrete formalism for Quantum ChromoDynamics (QCD) investigations. QCD is the theory of sub-nuclear physics, aiming to model the strong nuclear force, which is responsible for the interactions of nuclear particles. Based on the LQCD formalism, intensive simulations are performed following the Monte Carlo paradigm. Informative observations are expected from large-scale, numerically sensitive LQCD simulations. The corresponding computing demand is therefore tremendous, hence the serious interest in powerful supercomputers.
Designing LQCD codes that scale well on large (mostly hybrid) supercomputers requires efficiently exploiting many levels of parallelism, including accelerators. Since the Wilson-Dirac operator is a coarse-grain stencil computation performed on huge volumes of data, any performance-related investigation should skillfully address memory accesses and interprocessor communication overheads. To lower the latter, a shared-memory approach should be considered at the node level. Because of thread-scalability issues, among other reasons, the message-passing paradigm seems to be widely preferred, even on shared-memory systems.
Although this pragmatic approach is understandable in general, we think that hybrid approaches are worth considering. Indeed, even if message passing implementations can run as such on a shared memory node, explicit data exchanges have a significant cost and might trigger noisy synchronizations. For such exchanges, as long as all communicating processes reside on the same compute node, we are limited by the memory bandwidth available. Once more nodes become involved, the network bandwidth, usually much lower than the memory bandwidth, becomes the limiting factor.
Applications with heavy data exchanges, like LQCD, are particularly sensitive to this aspect. This is the main focus of our work, in which we provide, explain, and discuss a multi-threaded vector implementation whose experimental results on the newly released Intel Broadwell processor show competitive absolute efficiency and scalability on one of its NUMA nodes.

Monday, May 23 at 2 p.m.: Romain Michon (PhD student at CCRMA, Stanford University's computer music laboratory, USA), room P.2.14.

« Faust-Based Research, Teaching and Applications at Stanford University »

Slides available:

Monday, May 2, 2 to 3 p.m.: Pierre Wargnier (CRI), room P.2.14.

1) « Field Evaluation with Cognitively-Impaired Older Adults of Attention Management in the Embodied Conversational Agent Louise »

We present the first experiment we conducted to evaluate the attention monitoring performance of Louise, following a Wizard of Oz method, during the interactions with a cohort of 8 elderly users in a day hospital environment. Louise is a new, semi-automatic prototype of an Embodied Conversational Agent (ECA), a virtual character interacting with users through social-like communication, adapted to the special needs of older adults with cognitive impairment; it is intended to ultimately provide assistance in their activities of daily living. We recorded and analyzed both videos of the conversation-like interactions and Louise's tracking data. In our experiment, Louise's attention estimation algorithm achieved about 80% accuracy; moreover, in almost all cases, the user's attention was successfully recaptured by Louise after a planned, experimenter-induced distraction. These results are in line with what was observed in previous experiments involving only younger adults, thus suggesting that attention measurement tools embedded in cognitive prostheses will not need to be adapted to elderly patients. Finally, to gain further insights on conversation management and provide evidence-based suggestions for future work, we performed an anthropological analysis of the whole experiment.

2) « Virtual Promenade: A New Serious Game for the Rehabilitation of Older Adults with Post-fall Syndrome »

We introduce a novel rehabilitation tool to treat Post-Fall Syndrome (PFS) in older adults: a serious game, called Virtual Promenade, combined with a haptic chair imitating the hip movements of human walking. We report on the user-centered design of our prototype, following "living lab" principles, which was well received by our test participants. This system aims at addressing the psycho-motor consequences of older adults' falls, which are often neglected in current post-fall care practices. We first checked the feasibility and tolerability of such interventions. We then applied a living lab participatory design approach, involving health care professionals and older adults, to build the Virtual Promenade prototype. We found that patients with PFS tolerated the system well and that there were no major obstacles to feasibility. We also report that the aesthetics of the virtual environment is an important motivational factor for older adults, and we discuss our results in searching for the most suitable game controller for this type of patient and game. Finally, we observed that the chair's movements improved immersion in the game.

Monday, April 4, 2016 at 2 p.m.: Emilio Gallego Arias (CRI), room P.2.14.

« Experiments in Certified Digital Audio Processing »

We will present recent developments in Wagner, a functional programming language geared towards real-time digital audio processing with formal, machine-checkable semantics.

The system currently has three main components: a Coq library with some facts about the Discrete Fourier Transform and unitary transforms, a stream-oriented lambda calculus enabling easy coding of filters and digital waveguide oscillators, and a call-by-value abstract machine providing an efficient execution model.
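
As a numeric illustration of the kind of statement the Coq library makes machine-checkable (this check is of course not a proof), the normalized DFT is a unitary transform and therefore preserves the l2 norm of a signal, i.e. Parseval's identity:

```python
import cmath

def dft(xs):
    # DFT with 1/sqrt(n) normalization, so the transform is unitary
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(xs)) / n ** 0.5
            for k in range(n)]

norm = lambda v: sum(abs(z) ** 2 for z in v)

xs = [1.0, -2.0, 3.0, 0.5]
X = dft(xs)
assert abs(norm(xs) - norm(X)) < 1e-9   # Parseval: energy is preserved
```

In the Coq development such equalities hold exactly and are verified by the proof checker, rather than up to floating-point tolerance.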

Monday, February 22, 2016 at 2 p.m.: Patryk Kiepas (CRI), room P.2.14.

« Gentle glimpse on machine learning and even more gentle look on meta-learning and stuff »

The talk will be divided into three uneven parts.
We will start with a really gentle introduction to machine learning (ML) from the perspective of the ML user. We won't talk about WHAT machine learning IS; instead, we will talk about WHAT IT IS ABOUT. We will describe the ingredients of machine learning: training data, data features as a way of describing data, different tasks to solve, models that capture relations in data, and means of evaluating the models we build. We will also look at the large number of available machine learning algorithms and the problems this raises.

The second part of the talk will be about meta-learning. We will look at it as a solution to a few of the problems raised in the first part. We will talk about algorithm selection, parameter selection and tuning, hyperparameter optimization, and other means of automating classic machine learning methods.

The last part of the talk is the shortest. It will cover the work I have done in the meta-learning field as part of my master's thesis (an algorithm selection system).

Monday, February 1, 2016 at 2 p.m.: Dumitru Potop Butucaru (Inria), room P.2.14.

« Real-time systems compilation »

My presentation is structured in two main parts. In the first, I will introduce and advocate for the concept of real-time systems compilation. By analogy with classical compilation, real-time systems compilation consists in the fully automatic construction of running, correct-by-construction implementations from functional and non-functional specifications of embedded control systems. Like a classical compiler, the whole process must be fast (thus enabling a trial-and-error design style) and produce reasonably efficient code. This requires the use of fast heuristics and of fine-grain platform and application models. Unlike a classical compiler, a real-time systems compiler must take into account the non-functional properties of a system and ensure that non-functional requirements are respected (in addition to functional correctness). In the second part of my presentation, I will present Lopht, a real-time systems compiler we built by combining techniques and concepts from real-time scheduling, compilation, and synchronous language design.

Monday, January 25, 2016 at 2 p.m.: Emmanuelle Encrenaz-Tiphene (LIP6), room P.2.14.

« A formal framework for the protection of assembly code against fault attacks »

Fault attacks against embedded circuits aim at disrupting the normal behavior of an application in order to gain benefits (confidential data, for example). To be successful, a fault injection must follow an attack path, which defines when and where the fault must be injected. Many attack paths against secure circuits are possible. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. Fault models corresponding to skipping an assembly instruction or replacing one instruction with another have been obtained using several fault-injection means; these fault models are powerful and practical for an attacker.

To counter this threat, several countermeasure schemes have been proposed, and secure-circuit designers implement them to protect their assets. However, the robustness of these countermeasures is typically assessed by security experts, and they may still have undisclosed weaknesses.

We present a formal verification framework and tool named Robusta that makes it possible to evaluate the robustness of pieces of assembly code against fault-injection attacks. By modeling a reference assembly code and its protected variant as automata, the framework can generate a set of equations for an SMT solver, whose solutions, when they exist, represent possible attack paths. Using this approach, we defined a protection scheme against instruction skips at the assembly-instruction level. We also evaluated the robustness of state-of-the-art countermeasures against fault-injection attacks. Based on insights gathered from this evaluation, we analyze the remaining weaknesses and propose enhancements that make these countermeasures more robust.
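
The flavor of the analysis can be conveyed with a toy: enumerate every single-instruction skip in a three-instruction PIN check and collect the skips that defeat it, i.e. the attack paths. (The actual framework works on automata over real assembly code and delegates the search to an SMT solver; the program below is purely illustrative.)

```python
# A PIN check as a tiny instruction list; a fault skips one instruction.
PROGRAM = ["cmp", "bne", "grant"]

def run(skip=None, guess=0, secret=1234):
    granted, ok = False, False
    for pc, instr in enumerate(PROGRAM):
        if pc == skip:
            continue                 # the injected fault: skip this one
        if instr == "cmp":
            ok = (guess == secret)   # ok = (guess == secret)
        elif instr == "bne":
            if not ok:
                break                # branch to reject
        elif instr == "grant":
            granted = True
    return granted

# exhaustively try every single-skip fault with a wrong guess
attack_paths = [s for s in range(len(PROGRAM)) if run(skip=s)]
assert attack_paths == [1]           # skipping the branch defeats the check
```

Countermeasures such as instruction duplication aim to make this set of successful skips empty, which is exactly the property the SMT encoding checks at scale.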

Monday, December 7, 2015 at 2 p.m.: Pierre Wargnier (CRI)

« Designing the conversational agent Louise for the needs of older adults with cognitive impairment »

The term "dementia" refers to the symptoms of several neurodegenerative diseases of the elderly, the most common of which is Alzheimer's disease. To provide compensatory assistance for the symptoms of memory loss and of executive and attentional disorders, we are developing the conversational agent Louise. Louise aims to study the conditions under which conversational agents can be made usable by this population. In this presentation, I will discuss the results of the first series of experiments, focusing in particular on managing the user's attention. I will then present the design of a more advanced prototype, the proposed model for managing utilitarian conversations, the representation of this model in a computer program, and the dedicated language, based on (elementary) XML, used to describe such conversations.

Wednesday, November 4, 2015 at 11 a.m.: Pierre Jouvelot (CRI)

« FEEVER at the halfway point »

The FEEVER project, coordinated by CRI, is a collaboration that also involves GRAME (Lyon), the University of Saint-Étienne and Inria/Irisa (Rennes). We will present the current state of the project and the remaining challenges on the way to its initial goal: making the FAUST digital audio processing language an indispensable solution on all platforms and for all music-related applications.

October 5, 2015 at 2 p.m.: Laurent Daverio (CRI) and Benoît Pin (CRI)

« Aligning legivoc Legal Vocabularies by Crowdsourcing »

legivoc is the first Internet-based platform dedicated to the dissemination, editing and alignment of legal vocabularies across countries. Funded in part by the European Commission and administered by the French Ministry of Justice, legivoc offers a seamless path for governments to disseminate their legal foundations and specify semantic bridges between them. We describe the general principles behind the legivoc framework and provide some ideas about its implementation, with a particular focus on the state-of-the-art tools it includes to help crowdsource the alignment of legal corpora.

September 7, 2015 at 2 p.m.: Gaëtan Gilbert (CRI)

« Term Inference and Unification in Lambda Pi Calculus Modulo »

I implemented a term inference algorithm for λΠ modulo cleanly separating constraint generation, constraint manipulation through a safe interface, and heuristics for solving higher order unification problems.

July 6, 2015, 3 p.m.:

3 p.m.: Arnaud Spiwack / Pierre Guillou (CRI) « Rust: systems programming with guarantees »

Rust is a new programming language developed by Mozilla, which saw its first stable release in May 2015. Besides quickly achieving surprising popularity and an active community, Rust is a truly innovative language, at the intersection of functional programming languages such as OCaml and memory-conscious languages such as C.

We will present the main novelties Rust brings to the world of practical programming languages, in particular its ownership and lifetime systems, which Rust uses to enforce memory safety at very low cost. We will also give an overview of the Rust environment and of the day-to-day life of the Rust programmer.

4 p.m.: Ronan Keryell (Khronos OpenCL SYCL Committee) « Modern C++, heterogeneous computing and OpenCL SYCL »

SYCL is a single source programming environment that exposes OpenCL devices to C++ developers in a language-integrated way. SYCL relies on standard C++ syntax and semantics, using API calls and an optional multi-pass compilation technology to target an underlying OpenCL runtime. We will introduce SYCL and give an overview of the design goals and decisions behind SYCL, as well as its status within the Khronos standardization framework.

During this talk, we will briefly introduce C++14 and some alternative ways to program heterogeneous accelerators in C++, such as Boost.Compute, VexCL, ViennaCL, OpenMP4, C++AMP... or of course the lower-level legacy CL.hpp and the newer OpenCL 2.1 C++ kernel language, and compare them with SYCL.

Defining a new specification for a new parallel language is difficult without the ability to experiment with the concepts and the syntax. Thus, to help the discussions on the new OpenCL SYCL C++11 standard inside the Khronos committee, we started an open-source implementation based on modern C++1z templated classes and OpenMP: triSYCL. The use of metaprogramming, with some useful libraries such as Boost.MultiArray and Boost.Operators, helps keep the implementation concise. Since this implementation does not yet include a compiler, we cannot target OpenCL, but we hope it will evolve into a full-fledged open-source implementation.

Bio: Ronan Keryell, Khronos OpenCL SYCL Committee. Ronan is involved in the development and standardization of SYCL, and is the author of an open-source implementation, triSYCL. Ronan is a co-founder of SILKAN, where part of his role was to lead the development of the automatic parallelizer Par4All; he was previously an assistant professor in France. He graduated from the École Normale Supérieure in Paris, where he also earned his PhD.

June 8, 2015, 3 p.m.: Vu Hoang Thai (CRI and University of Science and Technology, Hanoi)

« Simulation of memory behavior on multi-core computers »

With the advent of multi-core processors and multi-level memories, concurrent accesses are becoming more complex to profile and predict. Given that most common applications are memory-bound, it becomes crucial to investigate the overhead of memory activities in depth. For this purpose, the current proposal aims at designing a parser that can instrument a given C code in order to extract its memory access patterns. From this matrix of accessed indices, a target architecture and a given memory-management protocol, we wish to simulate the behavior of the memory and thus estimate the associated costs. Such estimates could be used to design static or dynamic optimizations that improve the memory performance of computer programs.
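
A minimal sketch of the kind of simulation the proposal describes: replay a trace of accessed addresses against a cache model and count hits and misses. The cache here is direct-mapped, and the parameters and trace are made up for illustration:

```python
# Direct-mapped cache simulator: one tag per line, evict on conflict.

def simulate(trace, n_lines=4, line_size=32):
    lines = [None] * n_lines          # tag (block number) stored per line
    hits = misses = 0
    for addr in trace:
        block = addr // line_size     # which memory block this byte is in
        idx = block % n_lines         # which cache line it maps to
        if lines[idx] == block:
            hits += 1
        else:
            misses += 1
            lines[idx] = block        # evict whatever was there
    return hits, misses

# sequential reads of 16 contiguous 8-byte doubles
trace = [8 * i for i in range(16)]
hits, misses = simulate(trace, n_lines=4, line_size=32)
assert (hits, misses) == (12, 4)      # one miss per 32-byte line, then hits
```

Feeding such a simulator the access patterns extracted by the proposed parser, instead of a synthetic trace, is exactly the envisioned pipeline.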

May 4, 2015, 2 p.m.: Florian Gouin (Sagem and CRI)

« The Mill split-stream encoding »

To address the pressure on the L1 instruction cache, the Mill split-stream encoding provides a way to double the size of the L1 cache without degrading its performance. This presentation will detail various instruction-decoding methods as well as the distinctive features of the Mill architecture.

April 13, 2015, 3 p.m.: Karel de Vogeleer (Télécom ParisTech and CRI)

« The Energy/Frequency Convexity Rule »

Energy efficiency is an important aspect for battery-driven devices, whether autonomous or designed for human interaction.

In this talk we look at the trade-off between the energy consumption and the performance of a microprocessor, which we refer to as the Energy/Frequency Convexity Rule. We discuss under what circumstances this convexity rule can be exploited, and when other methods are more effective, to improve the energy efficiency of the microprocessor. Among the parameters of interest are the microprocessor's thermal behavior, the overhead of the system, and the background power requirements of the system.
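
A common analytical form behind such a convexity rule, with invented constants (the speaker's actual model may differ): static power integrated over a runtime that shrinks with frequency, plus dynamic energy that grows with it, gives an energy curve with an interior minimum:

```python
# E(f) = (P_static + c * f^3) * (W / f): static power times runtime,
# plus cubic dynamic power times runtime. The two terms pull in
# opposite directions, so E is convex in f with an interior minimum.

def energy(f, p_static=0.5, c=1.0, work=1.0):
    return (p_static + c * f ** 3) * (work / f)

freqs = [0.1 * k for k in range(1, 31)]      # sweep f from 0.1 to 3.0
energies = [energy(f) for f in freqs]
best = freqs[energies.index(min(energies))]

# analytic optimum: dE/df = 0 gives f* = (p_static / (2 c)) ** (1/3)
assert abs(best - (0.5 / 2) ** (1 / 3)) < 0.05
```

Running slower than f* wastes static (background) energy; running faster wastes dynamic energy, which is the trade-off the talk examines together with thermal effects.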

In addition, the microprocessor's thermal behavior is developed in more detail, as it has a significant impact on its power requirements.

March 10, 2015, 2 p.m.: Jim Lipton (Wesleyan University, Connecticut, USA)

« On a Kripke semantics for Intuitionistic Higher-Order Logic with constraints »

We define a Kripke semantics for Intuitionistic Higher-Order Logic with constraints formulated within Church's Theory of Types via the addition of a new constraint base type.

We then define an executable fragment, hohh(C), of the logic: a higher-order constraint logic programming language with typed λ-abstraction, implication and universal quantification in goals, and give a modified model theory for this fragment. Both formal systems are sound and complete for their respective semantics.

We will sketch a simple semantics-based proof that the language hohh(C) satisfies the uniformity property of Miller et al.

March 2, 2015, 2 p.m.: Arnaud Spiwack (CRI)

« An introduction to applied topos theory »

In this talk I will introduce the notion of topos and constructive mathematics (no prerequisites!). Toposes arose from concerns that could hardly be more theoretical: the generalization of constructions from algebraic geometry.

Nevertheless, I will not wear (too much of) a theoretician's hat: I will show how toposes can be used to talk about the semantics of synchronous languages, in particular circuit languages such as Faust, which is the motivation for the work I will present.

The general idea is that toposes let us internalize time implicitly, so as to treat uniformly the combinational gates (such as the Boolean "or") that only process data, and the gates, such as the delay, that control the temporal aspects of the circuit.

February 2, 2015, 2 p.m.: Imré Frotier de la Messelière (CRI)

« Semantic foundations of program analysis: application to the case of Faust (after Patrick Cousot's original article) »

To refine the interval provided by Faust's typing rules in the case of a loop operator, we perform a fixpoint computation using abstract interpretation. Starting from a Faust program containing at least one loop operator, we build a system corresponding to the one in Patrick Cousot's article "Fondations sémantiques de l'analyse de programmes" (semantic foundations of program analysis).

As a first step, we represent the point where a loop is used in Faust as a program diagram whose invariants we compute. Based on these invariants, we approximate the fixpoint computation in the program diagram, then use widening and narrowing operations to accelerate the process.
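
The widening/narrowing schema can be demonstrated on the classic toy loop "i = 0; while i < 100: i = i + 1", using plain intervals (the lattice choices suited to Faust are a separate question discussed in the talk):

```python
# Interval analysis with widening to force convergence of the
# ascending iteration, then one narrowing (decreasing) step to
# recover the precise bound.

INF = float("inf")

def join(a, b):        # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):   # jump any unstable bound to +/- infinity
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def step(iv):          # abstract effect of one loop iteration
    guard = (iv[0], min(iv[1], 99))            # filter by i < 100
    return join((0, 0), (guard[0] + 1, guard[1] + 1))

x = (0, 0)
while True:                                    # ascending phase
    nxt = widen(x, step(x))
    if nxt == x:
        break
    x = nxt
assert x == (0, INF)   # widening converged, but over-approximates

x = step(x)            # one narrowing step restores precision
assert x == (0, 100)
```

Without widening, the ascending iteration would take 100 steps (or diverge on an unbounded loop); without the narrowing step, the result would stay at (0, +∞).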

Finally, several further lines of study going beyond a direct application of the algorithm presented here will be discussed. We will notably consider various choices of lattices and a possible automation of the invariant computation in the case of Faust. The integration of elements of abstract interpretation into Faust's semantic rules will also be addressed.

January 5, 2015:

2 p.m.: Claude Tadonki (CRI) « A Fine-grained Approach for Power Consumption Analysis and Prediction »

Power consumption has become a critical concern in modern computing systems, for reasons including financial savings and environmental protection. With battery-powered devices, we need to care about the available amount of energy, since it is limited. In the case of supercomputers, which aggregate a large number of heavily loaded CPUs, we are exposed to a risk of overheating. As the design of current and future hardware becomes more and more complex, energy prediction or estimation is as elusive as that of time performance.

However, a good prediction of power consumption remains an important demand from the computer science community. Indeed, power consumption might become a common performance and cost metric in the near future. A good methodology for energy prediction could have a great impact on power-aware programming, compilation, and runtime monitoring.

In this paper, we try to understand from measurements where and how power is consumed at the level of a computing node. We focus on a set of basic programming instructions, more precisely those related to CPU and memory.

We propose an analytical prediction model based on the hypothesis that each basic instruction has an average energy cost that can be estimated on a given architecture through a series of micro-benchmarks. The considered energy cost per operation includes all of the overhead due to the context of the loop in which it is executed. Using these pre-calculated values, we derive a linear extrapolation model to predict the energy of a given algorithm expressed in terms of atomic instructions. We then use three selected applications to check the accuracy of our prediction method by comparing our estimates with the corresponding measurements obtained using a multimeter. We report a 9.48% energy prediction error on sorting.
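
The extrapolation model reduces to a dot product between instruction counts and calibrated per-instruction costs. A sketch of that shape, with invented cost numbers standing in for the micro-benchmark calibration:

```python
# Hypothetical per-operation energy costs (nanojoules), as would be
# measured once per architecture by micro-benchmarks.
cost = {"flop": 1.2, "load": 3.5, "store": 4.0}

def predict(counts):
    # linear extrapolation: total energy = sum of count * unit cost
    return sum(counts[op] * cost[op] for op in counts)

# a kernel doing n multiply-adds over two input arrays of length n
n = 1000
kernel = {"flop": 2 * n, "load": 2 * n, "store": n}
assert abs(predict(kernel) - (2 * n * 1.2 + 2 * n * 3.5 + n * 4.0)) < 1e-9
```

The modeling question the talk addresses is whether such per-operation averages, calibrated in loop context, stay accurate when extrapolated to whole applications.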

3 p.m.: Pierre Guillou / Nelson Lossing (CRI) « From Data to Effects Dependence Graphs: Source-to-Source Transformations for C »

Program optimizations, transformations and analyses are applied to intermediate representations, which usually do not include explicit variable declarations. This description level is fine for middle-ends and for optimizers of simple languages such as Fortran77. However, the C language, especially its C99 standard, is much more flexible. Variable and type declarations can appear almost anywhere in source code, and they cannot become implicit in the output code of a C compiler.

We show that declaration statements can be handled like the other statements and with the same algorithms if new effect information is defined and handled by the compiler, such as writing the environment when a variable is declared and reading it when it is accessed. Our solution is interesting because no legal transformation is hindered by our new effects and because existing algorithms are either not modified or only slightly modified by filtering upon the effect kind.

This extension has been used for several years in our PIPS framework and has remained compatible with the new developments such as offloading compilers for GPUs and coprocessors.
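A toy sketch of this idea, with hypothetical statement and effect representations (the actual PIPS data structures differ): declarations write an abstract environment resource and uses read it, so ordinary dependence tests keep declarations ordered before uses with no special-casing.

```python
# Toy model of declaration effects: each statement carries read/write
# effect sets; a declaration writes the special ENV resource of its
# variable, and every use of that variable reads it. Standard
# dependence tests then order declarations before uses for free.
# Illustrative only -- not the actual PIPS representation.

ENV = "env:"  # prefix marking the environment resource of a variable

def effects(stmt):
    """Return (reads, writes) for a toy statement description."""
    kind, var, used = stmt
    if kind == "decl":                 # e.g. "int x;"
        return (set(), {ENV + var})
    reads = {ENV + v for v in used} | set(used)
    return (reads, {var})              # e.g. "y = x + 1;"

def depends(s1, s2):
    """True if s2 must stay after s1 (read/write conflict)."""
    r1, w1 = effects(s1)
    r2, w2 = effects(s2)
    return bool(w1 & (r2 | w2) or r1 & w2)

decl_x = ("decl", "x", ())
use_x = ("assign", "y", ("x",))
```

With these effects, `depends(decl_x, use_x)` holds, so no legal transformation can hoist the use of `x` above its declaration, while unrelated declarations remain freely reorderable.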

1 décembre 2014 à 14h : Pierre Wargnier (CRI)

« De l'image 3D aux agents conversationnels animés : représentation, usages et applications »

Cette présentation portera sur l'animation graphique 3D interactive.

Dans un premier temps, je présenterai quelques généralités sur les modes de représentation des objets virtuels tridimensionnels et sur l'animation de ces objets. Ensuite, nous verrons comment cela est réalisé en pratique par les artistes et animateurs. Puis je parlerai des moteurs de jeux et des environnements intégrés de développement de jeux. Enfin, je présenterai un cas concret d'application utilisant ces technologies pour la conception d'un agent conversationnel animé.

6 octobre 2014 :

14h : Jesus Gallago Arias (CRI, travail réalisé à UPenn) « Approximate Relational Refinement Types with Applications to Verification of Differential Privacy and Mechanism Design »

In software verification, properties over two arbitrary runs of a program are pervasive. The class of "2-properties" or "relational properties" is known to capture many definitions of interest.

I will present a recent draft [1] introducing a relational refinement type system (HOARe2) for the verification of probabilistic relational properties. We have applied the system to two case studies, Differential Privacy and Mechanism Design, automatically verifying over 12 state-of-the-art algorithms.

A (probabilistic) algorithm is differentially private if, given two adjacent inputs, the distance between the output probability distributions is bounded. Differential Privacy has gathered wide interest in the privacy, theory, and data-analysis communities.

A common reading of it is as a privacy promise: "Whether we use your data or not, the result of a DP analysis will be approximately the same". Mechanism Design is the study of algorithm design in which the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. A particular property of interest is truthfulness: an agent's revenue is maximized if and only if she reports the truth to the mechanism.
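As a concrete illustration of the bounded-distance definition (a textbook sketch, not code from the paper), the classic Laplace mechanism releases a numeric query with noise scaled to sensitivity/epsilon:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, seed=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    This is the textbook mechanism achieving epsilon-differential
    privacy for a numeric query; illustrative, not from the paper.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse transform sampling.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_value + noise

# A counting query has sensitivity 1: two adjacent databases
# (differing in one record) change the count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5, seed=7)
```

Smaller epsilon means larger noise, i.e. a tighter bound on how much the output distributions of adjacent inputs may differ.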


15h : Soufiane Baghadi (Télécom Sud-Paris) « Améliorer le recouvrement des communications et des calculs par le compilateur »

Les communications dans les superordinateurs actuels deviennent un goulet d'étranglement, et la moindre optimisation de ces dernières peut engendrer des gains considérables lors de l'exécution des applications distribuées.

Le recouvrement des communications par les calculs est une technique qui s'est révélée très efficace pour diminuer le temps d'inactivité du processeur pendant l'envoi ou la réception des données. L'optimisation manuelle à l'aide de cette technique n'est pas évidente, ce qui explique le nombre très limité de programmes qui l'utilisent.

Nous présenterons l'état de l'art ainsi qu'une passe de compilation qui transforme automatiquement les communications bloquantes en communications non bloquantes, ce qui augmente le recouvrement et relâche les contraintes de communication. Nous montrerons la puissance de notre approche en appliquant ces optimisations manuellement sur un benchmark réel.
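Une esquisse (hypothétique, en Python, avec un thread jouant le rôle d'une communication non bloquante) du schéma que produit une telle passe : démarrer la communication, calculer pendant qu'elle est en vol, puis attendre sa terminaison.

```python
import threading
import time

def envoi_bloquant(data):
    """Tient lieu d'une communication bloquante (p. ex. MPI_Send)."""
    time.sleep(0.05)  # latence réseau simulée
    return sum(data)

def calcul(n):
    """Calcul indépendant, recouvrable avec la communication."""
    return sum(i * i for i in range(n))

# Version bloquante : communication puis calcul (aucun recouvrement).
envoye = envoi_bloquant([1, 2, 3])
resultat = calcul(1000)

# Version non bloquante : on démarre la communication, on calcule
# pendant qu'elle est en vol, puis on attend sa terminaison -- le
# schéma obtenu en remplaçant MPI_Send/Recv par MPI_Isend/Irecv + Wait.
etat = {}
t = threading.Thread(target=lambda: etat.update(envoye=envoi_bloquant([1, 2, 3])))
t.start()                  # « MPI_Isend » : la communication démarre
resultat = calcul(1000)    # calcul recouvert
t.join()                   # « MPI_Wait » : point de terminaison
```

La transformation n'est légale que si le calcul inséré entre le démarrage et l'attente ne touche pas au tampon en cours de transfert, d'où l'intérêt d'une analyse de dépendances dans le compilateur.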

1 septembre 2014 à 14h : David Delahaye (CNAM)

« The BWare Project: Building a Proof Platform for the Automated Verification of B Proof Obligations »

We introduce BWare, an industrial research project that aims to provide a mechanized framework to support the automated verification of proof obligations coming from the development of industrial applications using the B method and requiring high integrity. The adopted methodology consists in building a generic verification platform relying on different automated theorem provers, such as first order provers and SMT (Satisfiability Modulo Theories) solvers.

Beyond the multi-tool aspect of our methodology, the originality of this project also resides in the requirement for the verification tools to produce proof objects, which are to be checked independently.

In this talk, we present some preliminary results of BWare, as well as some current major lines of work.

This work is supported by the BWare project (ANR-12-INSE-0010), funded for 4 years by the INS programme of the French National Research Agency (ANR) and started in September 2012. For more details, see:

3 juin 2014 à 14h : Laurent Daverio

« Les bases de données NoSQL »

  • Introduction : un domaine en forte demande ; une alternative au modèle relationnel
  • NoSQL : présentation (des points communs, mais de nombreuses variantes) ; pourquoi une alternative au modèle relationnel ?
  • Le théorème CAP (Consistency, Availability, Partition Tolerance)
  • Les principaux moteurs NoSQL à l'heure actuelle : Riak, HBase, MongoDB, CouchDB, Neo4j, Redis. Chaque moteur sera présenté de façon synthétique, avec ses caractéristiques, contraintes, points forts, propriétés CAP, etc.
  • Récapitulatif (sous forme de tableau)
  • Conclusion

2 juin 2014 à 14h : Olivier Hermant

« Double dose de double négation »

La double négation est utilisée en logique pour traduire des preuves faisant appel au principe du tiers-exclu ("soit A, soit non A") en des preuves qui ne font pas appel à ce principe - l'un des plus controversés en logique moderne. Nous commencerons par une introduction aux systèmes syntaxiques de preuves en logique classique et intuitionniste, puis nous détaillerons quelques résultats sur les traductions par double négation.

Double negation is used in logic to translate proofs that appeal to the excluded-middle principle ("either A, or not A") into proofs that do not appeal to this principle - one of the most controversial in modern logic. We will begin with an introduction to syntactic proof systems in classical and intuitionistic logic, and we will then detail some results on double-negation translations.
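As a pointer to what such translations look like, here is the Gödel-Gentzen negative translation, a standard example (the talk may cover other variants):

```latex
% Gödel-Gentzen negative translation A -> A^N (standard definition):
\begin{align*}
  P^N &= \lnot\lnot P \quad \text{($P$ atomic)} \\
  (A \land B)^N &= A^N \land B^N \\
  (A \lor B)^N &= \lnot(\lnot A^N \land \lnot B^N) \\
  (A \to B)^N &= A^N \to B^N \\
  (\lnot A)^N &= \lnot A^N \\
  (\forall x.\,A)^N &= \forall x.\,A^N \\
  (\exists x.\,A)^N &= \lnot\forall x.\,\lnot A^N
\end{align*}
% Soundness: A is provable classically if and only if A^N is
% provable intuitionistically.
```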

5 mai 2014 à 14h : Imré Frotier de la Messelière

« Les types liquides (Logically Qualified Data Types) »

Les types liquides (Logically Qualified Data Types) sont un cas particulier de types dépendants, c'est-à-dire de types qui expriment une dépendance sur la valeur de l'instance concernée. L'article présente l'inférence de types liquides, un système qui combine l'inférence de types de Hindley-Milner avec une abstraction par prédicats afin de déduire automatiquement des types dépendants. Les auteurs ont implémenté l'inférence de types liquides, qui prend en entrée un programme OCaml et un ensemble de qualificateurs logiques, et déduit les types dépendants des expressions du programme OCaml. Nous étudierons en détail cet algorithme.
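Pour fixer les idées, voici un petit croquis (illustratif et hypothétique, en Python plutôt qu'en OCaml) de l'idée de filtrage de qualificateurs : partir d'un ensemble de qualificateurs candidats et ne conserver que ceux qu'aucune observation ne contredit. La véritable inférence est statique, via l'abstraction par prédicats, et non dynamique comme ici.

```python
# Croquis (illustratif) de l'idée des types liquides : le type raffiné
# d'une expression est la conjonction des qualificateurs candidats qui
# ne sont contredits par aucune observation. La vraie inférence est
# statique (Hindley-Milner + abstraction par prédicats).

QUALIFIERS = {
    "0 <= v": lambda env, v: 0 <= v,
    "x <= v": lambda env, v: env["x"] <= v,
    "y <= v": lambda env, v: env["y"] <= v,
}

def infer_qualifiers(fn, samples):
    """Garde les qualificateurs vrais sur toutes les exécutions observées."""
    kept = set(QUALIFIERS)
    for env in samples:
        v = fn(**env)
        kept = {q for q in kept if QUALIFIERS[q](env, v)}
    return kept

def vmax(x, y):
    return x if x >= y else y

samples = [{"x": 0, "y": 3}, {"x": 5, "y": 2}, {"x": -4, "y": -2}]
quals = infer_qualifiers(vmax, samples)
```

Sur cet exemple, les qualificateurs restants correspondent au type liquide attendu pour `vmax` : {v | x <= v et y <= v}.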

7 avril 2014 à 14h : Pierre Guillou

« Solutions logicielles pour l'amélioration du poste de travail informatique »

Au cours des dernières décennies, l'emploi de l'outil informatique s'est démocratisé au travail et dans les foyers.

Pourtant, l'utilisateur moyen, par méconnaissance ou par restriction de sa hiérarchie, peine toujours à adapter l'outil à ses besoins, ainsi qu'à questionner certains standards de fait. Qui, par exemple, irait aujourd'hui remettre en cause l'agencement AZERTY de nos claviers ?

Pour autant, quelques améliorations simples permettent de radicalement transformer l'expérience utilisateur.

Cette présentation se focalisera sur plusieurs d'entre elles.

  • apprentissage de la dactylographie et agencement BÉPO,
  • réduction de la fatigue visuelle sur écran,
  • compilation automatique de documents LaTeX et gestion de références,
  • zsh et terminator, des outils dédiés à la ligne de commande.

10 mars 2014 à 14h : Farhani Wajdi (Ecole Polytechnique de Tunis)

« Data representation synthesis »

3 mars 2014 à 14h : Pierre Jouvelot

« La musique à l'ère des nouvelles technologies »

CD, MP3, synthétiseur, Dolby AC3, DSP voire MIDI, "streaming" ou "sampling"…, ces termes récents, apportés par la révolution numérique et informatique du siècle dernier dans le domaine de la musique et de l'audionumérique, sont devenus presque familiers, même à ceux d'entre nous qui n'affichent qu'un intérêt passager pour le monde du son.

Mais que cachent ces idiomes souvent ésotériques ? Quels sont les mécanismes qui résident au cœur de ces merveilleuses machines informatiques que sont par exemple l'iPod et autres baladeurs MP3 ? Sur quels fondamentaux des mécanismes physiques et physiologiques de l'audition humaine s'appuient-ils pour nous permettre de profiter d'un Wagner ou d'un Daft Punk où et quand nous le souhaitons ?

C'est d'une plongée au croisement de la musique et des nouvelles technologies informatiques qu'il sera question dans ce large survol d'un monde parfois mystérieux.

3 février 2014 à 14h : José Afonso L. Sanches (Universidade Federal Fluminense, Niteroi, Brazil)

« Resource Allocation in Large-scale Distributed Systems »

This thesis proposal is about creating and evaluating a new resource allocation method for large-scale computing environments (such as grids and clouds), taking into consideration the complexity and particularities of these structures. For example, this could be done by analyzing the traffic generated at the grid sites, investigating and modelling packet-level metrics. Since computing grids normally reflect the panorama of a complex network, and the same can probably be said about cloud systems, we plan to apply complex-networks concepts to improve the efficiency of the method. We also intend to combine the aforementioned method with the JobPilot technique, currently used in the CMS/LHC grid system, for which improvements are expected from both the algorithm and the architecture viewpoints, as part of the overall goal. This talk presents what we have already done and what we plan to do next, along with more general information on the context of my PhD.

28 octobre 2013 à 14h : Karim Barkati (IRCAM)

« Faustine, un interprète pour le langage Faust »