Monday, February 18 at 3pm: Emilio Gallego Arias (CRI), in the Master's room (ground floor of building P)

« Towards verified modelling of Digital Signal Processing algorithms  »

We study the formal verification of some aspects of Digital Signal Processing algorithms. In particular, we are interested in the high-level theory of Linear-Time-Invariant (LTI) systems, including typical algorithms such as filters and resonators in their different implementations.
Our goal is to develop a setting for reasoning about DSP algorithms such that a) they are written in a convenient, high-level language, b) operational semantics are real-time and reasonably efficient, and c) the overhead of mechanized verification remains low.
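To make the LTI setting concrete, here is a minimal pure-Python sketch (not code from the talk): a one-pole lowpass filter, a classic LTI building block, together with a direct check of the linearity property. The coefficient `a` and the test signals are illustrative assumptions.

```python
def one_pole(xs, a=0.5):
    """y[n] = (1 - a) * x[n] + a * y[n-1], a one-pole IIR lowpass filter."""
    y, ys = 0.0, []
    for x in xs:
        y = (1.0 - a) * x + a * y
        ys.append(y)
    return ys

# Linearity (the "L" in LTI): filtering (u + 2*v) equals filter(u) + 2*filter(v).
u = [1.0, 0.0, 0.0, 0.0]                  # impulse
v = [0.0, 1.0, 1.0, 0.0]
lhs = one_pole([x + 2 * y for x, y in zip(u, v)])
rhs = [p + 2 * q for p, q in zip(one_pole(u), one_pole(v))]
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```

Time-invariance can be checked the same way by delaying the input and comparing against the delayed output.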

Monday, January 21 at 2pm: Claude Tadonki (CRI), in P.2.14, building P

« Focus on Parallel Combinatorial Optimization  »

HPC has made great strides, and its impact keeps growing; the next exciting horizon is the Exascale milestone. In this perspective, it is worth investigating how challenging applications, such as those coming from combinatorial optimization, will benefit from the massive computing power available on supercomputers. The reference branch-and-bound technique, for instance, needs a special focus at the macroscopic level. Since its main ingredient at the node level is how the kernel problem is solved, an efficient shared-memory parallelization is worth considering. The multicore has become the standard architecture model, the trend being to increase the number of cores and to improve memory system performance. Providing an efficient multicore implementation of an important algorithmic kernel is thus a genuine contribution, and from a methodological standpoint this should be done at the level of the underlying paradigm, if any. We study the cases of dynamic programming and greedy algorithms, two major combinatorial algorithmic paradigms. We exclusively consider directive-based loop parallelization through OpenMP and investigate the transformations necessary to reach a regular parallel form. We evaluate our methodology on a selection of well-known combinatorial optimization problems on an Intel Broadwell processor. Key points for scalability are discussed before and after the experimental results, the NUMA case being our next focus.
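As a sketch of the "regular parallel form" idea for a dynamic programming kernel (illustrative Python, not the talk's OpenMP/C code): the classic 0/1 knapsack recurrence, written with separate previous/current rows so that the capacity loop has no loop-carried dependence and could be annotated with a `parallel for` directive in a C version.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack dynamic program.

    Using separate prev/cur rows removes the loop-carried dependence on the
    dp array, leaving a capacity loop whose iterations are fully independent:
    the kind of regular parallel form a directive-based approach (e.g.
    OpenMP `parallel for` in C) can exploit.
    """
    prev = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        cur = prev[:]
        for c in range(w, capacity + 1):        # independent iterations
            cur[c] = max(prev[c], prev[c - w] + v)
        prev = cur
    return prev[capacity]

# Classic instance: best value is 220 (take the 100 and 120 items).
assert knapsack([60, 100, 120], [10, 20, 30], 50) == 220
```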

Monday, December 17 at 2pm: Maksim Berezov (CRI), in the Master's room (ground floor of building P)

« Attribute transferring using deep generative networks  »

In my presentation, I will consider the problem of attribute transfer between images using deep generative models, especially Generative Adversarial Networks. For example, given an image of a person who is not smiling, we may want to add a smile. Popular models such as DCGAN, DNA-GAN and VAE-GAN will serve as baselines. I will also talk about disentangled and entangled data representations, how they are used in these models, and how they can be exploited to transfer attributes.

Monday, November 19 at 2pm: Adilla Susungi (CRI, ENS), in the Master's room (ground floor of building P)

« Analysis and Compilation of Parallel Programming Languages  »

Faced with the complexity of parallel architectures, meta-languages for program transformations are strong allies for building adaptive, portable and efficient compiler infrastructures. In this talk, with a focus on meta-languages for loop transformations, we discuss keys to increase their potential and formally specify them.
We particularly address four questions: (i) How do we introduce domain-specific expressiveness? (ii) How do we rethink their design to enhance their flexibility in composing optimization paths and generating multiple program variants? (iii) How far can we introduce NUMA (Non-Uniform Memory Access) awareness? (iv) As they form a new class of meta-languages, how do we formalize their semantics? We answer these questions through the design and semantics of TeML, a tensor optimization meta-language.

Monday, September 24 at 2pm: Maciek Kocot (AGH), in P.2.14, building P

« Predicting the benefits of code optimisation.  »

Automatic code optimisation (specifically, loop transformations), such as performed by the PIPS framework, can largely improve the performance of programs. However, there are cases where it does not prove beneficial.
The goal of the internship whose results will be presented today was to apply machine learning techniques to this problem, trying to predict the benefit of code optimisation beforehand. It also involved analysing which features of a program (for example, cache misses) are most affected by gcc compiler optimisations.

Monday, September 17 at 2pm: Arthur Aubertin (CRI, Stimshop), in P.2.14, building P

« Towards a multi-element system for low-frequency ultrasonic communication.  »

As part of a CIFRE PhD carried out in collaboration between Stimshop, MINES ParisTech and ESPCI Paris, the goal is to characterize the capabilities offered by ultrasonic communications at low frequencies, which remain little studied. To address this question, the research will rely on an experimental study and on modelling the application of an existing technology, multi-element transducers, to a domain in which it is not currently used. Among other things, the current state of the research carried out so far, including the characterization of the unit element of this future sensor, will be presented.

Monday, July 9 at 2pm: Emilio Gallego Arias (CRI), in the Master's room (ground floor of building P)

« Towards formally verified Solidity specifications  »

In the last few years, permissionless distributed execution platforms have gained enormous popularity, as they have become a computing environment of choice for decentralized asset management.
Concretely, the Ethereum project implements a distributed ledger featuring a Turing-complete execution engine with read/write capability to the underlying blockchain-based storage. Thus, users can immutably store and interact with other "smart contracts", which has allowed the development of rich trustless applications. However, lack of central control does come at a price: once a contract is deployed to the blockchain, its code is in principle immutable; bugs thus take on a very different dimension, as the same autonomy that makes the permissionless setting interesting works against the effective mitigation of vulnerabilities.
In the talk, we will review the basics of the Ethereum execution platform, the EVM, and of the Solidity programming language. We will discuss a few examples of smart contracts that make interesting targets for the verification of high-level properties.

Monday, June 18 at 2pm: Corinne Ancourt (CRI), in P.2.14, building P

« Automatic optimization of scientific programs: interfacing the NICE and PIPS tools  »

NICE is a framework for automating the optimization of scientific codes, developed by Thiago Teixeira at the University of Illinois at Urbana-Champaign (USA). PIPS is a source-to-source compiler, developed at CRI, which provides numerous program analyses and transformations.
This presentation will introduce the work carried out during my 3-month visit to UIUC on interfacing these two tools, orchestrating optimizations efficiently, minimizing the search space, and automatically generating efficient code.

Tuesday, May 15 at 2pm: Lucas Massoni Sguerra (CRI), in P.2.14, building P

« Ethereum, smart contracts and tokens  »

Ethereum is growing in popularity. With an estimated value of one billion dollars and a tight-knit community around it, the open-source, blockchain-based distributed computing platform is catching the attention of researchers and bankers alike.
In this seminar, I will explain how the Ethereum system and its smart contract feature work, and how they enable crowdfunding via ICOs (Initial Coin Offerings).

Monday, April 16 at 10am: Claude Tadonki (CRI), in the Master's room (ground floor of building P)

« Harris Corner Detection on a NUMA Manycore  »

Corner detection is a key kernel for many image processing procedures including pattern recognition and motion detection. The latter, for instance, mainly relies on the corner points for which spatial analyses are performed, typically on (probably live) videos or temporal flows of images. Thus, highly efficient corner detection is essential to meet the real-time requirement of associated applications.
In this paper, we consider the corner detection algorithm proposed by Harris, whose main workflow is a composition of basic operators represented by their approximations using 3 x 3 matrices. The corresponding data access patterns follow a stencil model, which is known to require careful memory organization and management. Cache misses and other hindering factors specific to NUMA architectures need to be skillfully addressed in order to reach an efficient, scalable implementation. In addition, with increasingly wide vector registers, an efficient SIMD version should be designed and explicitly implemented. In this paper, we study a direct and explicit implementation of common and novel optimization strategies, and provide a NUMA-aware parallelization. Experimental results on a dual-socket Intel Broadwell-E/EP show notably good scalability.
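To illustrate the kind of per-pixel stencil computation involved, here is a pure-Python sketch of the Harris corner response (the talk's optimized SIMD/NUMA implementation is a different, C-level artifact). The 3x3 window, the constant k = 0.04, and the synthetic test image are illustrative assumptions.

```python
def harris_response(img, y, x, k=0.04):
    """Harris response at (y, x): image gradients by central differences,
    structure tensor summed over a 3x3 window, R = det - k * trace**2.
    R > 0 flags a corner, R < 0 an edge, R ~ 0 a flat region."""
    sxx = syy = sxy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            syy += iy * iy
            sxy += ix * iy
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

# 10x10 synthetic image: a bright quadrant whose top-left corner is at (5, 5).
img = [[1.0 if j >= 5 and i >= 5 else 0.0 for i in range(10)] for j in range(10)]
corner = harris_response(img, 5, 5)   # on the corner
edge = harris_response(img, 7, 5)     # on the vertical edge
flat = harris_response(img, 2, 2)     # in the dark flat region
assert corner > 0 > edge and flat == 0.0
```

Note how every pixel reads a fixed neighborhood of its own position; this is exactly the stencil access pattern whose memory behavior the talk optimizes.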

Monday, March 19 at 2pm: Patryk Kiepas (CRI and AGH, Poland), in the Master's room (ground floor of building P)

« Performance Estimation of Array Operations through Coupling.  »

With the recent improvements in the major interpreters, the typical overhead of ordinary computation loops has been reduced. As a result, for a given (say, MATLAB) code section, the choice between explicit loops and the equivalent array version requires a more thorough decision criterion. Indeed, each of the two coding approaches has its pros and cons regarding space and time performance. Knowing ahead of time the key technical points related to native code performance helps to choose the right style for coding or refactoring.
This talk investigates performance estimation of array operations using the concept of coupling. Initially, the performance of common array operations is measured, followed by that of their combinations, the aim being to capture the performance loss or gain in the coupling context. The outcome is a machine-dependent model with a rather unique basis. In the last step, the input code to optimize is decomposed so as to fit its array operations within the model, whose result is a performance estimate. The intention is to use the model in the future to guide the optimization process, mainly to choose between explicit loops and array operations. Our model is tested on several case studies with major interpreters which implement array operations (e.g., Octave, MATLAB, Python+NumPy). Finally, we consider the limitations of our method and point out some potential perspectives.
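The loop-versus-array trade-off can be demonstrated with plain Python (an illustrative sketch, not the talk's benchmark suite): both versions compute the same value, but the "array-style" version executes its loop inside the interpreter's native code, which is the gap a performance model must predict.

```python
import timeit

data = list(range(100_000))

def explicit_loop():
    total = 0
    for x in data:           # one interpreted bytecode cycle per element
        total += x
    return total

def array_style():
    return sum(data)         # one call; the loop runs in the interpreter's C code

# Both styles compute the same result...
assert explicit_loop() == array_style() == 4_999_950_000
# ...but their costs differ, and the ratio is machine- and interpreter-
# dependent, which is why such a model must be calibrated per machine.
loop_time = timeit.timeit(explicit_loop, number=10)
array_time = timeit.timeit(array_style, number=10)
```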

Monday, February 19 at 2pm: Guillaume Genestier (CRI and ENS Paris-Saclay), in P.2.14, building P

« Using the Size-Change Principle for Termination Checking in Dedukti  »

Dedukti is a type-checker for the lambda-Pi calculus modulo theory, which has the particularity of letting the user declare rewrite rules, in particular to encode the logic they want to use. For type-checking to be decidable, Dedukti needs the system formed by the user-declared rewrite rules together with beta-reduction to be confluent and terminating.
The aim of the talk is to show how to use reducibility candidates to obtain a termination criterion, adapted from the "Size-Change Principle", for a subset of the lambda-Pi calculus modulo theory, along with a prototype implementation of it for Dedukti.

Monday, January 15 at 2pm: Anis Khlif (Deezer), in the Master's room (ground floor of building P)

« Music information retrieval research at Deezer  »

Building workable knowledge around a catalog as large as Deezer's is a challenge that precludes the exclusive use of human agents. In this presentation, I will touch on the necessity for companies like Deezer of working on more fundamental research questions like those of representation learning (embeddings), metric learning, classifier systems, and their application to Music Information Retrieval.
In doing so, I will give a thorough description of our missions in the R&D team, the kind of research we are conducting, and some of the systems we have built. We will see how this research draws on real-world data revolving around audio (raw audio, knowledge bases, structured/unstructured texts, semantic graphs, usage data, ...) and is fuelled by deep learning.

Monday, December 18 at 2pm: Fabien Coelho (CRI), in P.2.14, building P

« Itsuku: a Memory-Hardened Proof-of-Work Scheme  »

I will talk about Proof-of-Work (PoW) functions, in particular those designed to guarantee a high level of memory usage, so as to make hardware implementations harder for some of the technologies used in blockchain-style systems.

Proof-of-Work (PoW) schemes make it possible to limit access to resources or to share rewards in crypto-currency mining. The MTP-Argon2 PoW by Biryukov and Khovratovich is loosely based on the Argon2 memory-hard password hashing function. Several attacks have been published. We introduce a new transposed parallel implementation attack which achieves higher performance by circumventing apparent bandwidth requirements. We then present Itsuku, a new scheme that fixes known issues by changing the MTP-Argon2 parameters and adds new operations to improve memory hardness. Our scheme is built on a simple security criterion: any implementation which requires half the memory or less should incur at least a ×64 computation cost for difficulty d ≤ 100. The Itsuku proof size is typically 1/16th that of the initial scheme, while providing better memory hardness. We also describe high-end hardware designs for MTP-Argon2 and Itsuku.
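For readers new to PoW, the basic search/verify asymmetry can be sketched in a few lines of Python. This is a plain Hashcash-style scheme with SHA-256, chosen for brevity: unlike Itsuku or MTP-Argon2, it is CPU-bound and has no memory hardness at all; the challenge string and the 8-byte nonce encoding are illustrative assumptions.

```python
import hashlib

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(challenge || nonce) has at least
    `difficulty_bits` leading zero bits. Expected cost: 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, whatever the difficulty."""
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow(b"block-header", 12)    # costly to find...
assert verify_pow(b"block-header", nonce, 12)   # ...cheap to check
```

Memory-hard schemes like Itsuku keep this asymmetry but force the prover to also touch a large memory array, which is what defeats cheap ASIC implementations.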

Monday, October 16 at 2pm: Aurélie Rolland (ENSAD), in P.2.14, building P

« Service design  »

We have been living in a service economy for 40 years. Yet the rise of digital technology and the democratization of new technologies have brought a new vocabulary into use, including service design. What is service design? Why is this notion still relatively unknown in France?
The talk will be given by Aurélie Rolland, a recent graduate of the Arts Décoratifs de Paris and a service designer, who joined Pierre Wargnier's team, at the Broca hospital and at CRI, to design a nurse-call system for people with cognitive impairments.

Monday, September 18 at 2pm: Bruno Sguerra (CRI), in P.2.14, building P

« Towards a Markovian memory  »

A Markov Decision Process (MDP) is a mathematical framework for modelling a stochastic system in which an agent makes decisions and the outcome of its actions is partly random. MDPs formally describe an environment for reinforcement learning, and almost all reinforcement learning problems can be formalized as MDPs.
In this presentation, we will define an MDP, describe the usual ways of solving one, and show how an MDP can be used to simulate human working memory.
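One of the usual ways of solving an MDP is value iteration, sketched below in Python. The two-state toy MDP, its state/action names, and the discount factor are illustrative assumptions, not the working-memory model of the talk.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-8):
    """Solve a finite MDP by value iteration.

    transition[s][a]: list of (probability, next_state) pairs;
    reward[s][a]: expected immediate reward.
    Returns the optimal state values V and a greedy policy.
    """
    def q(s, a, V):
        return reward[s][a] + gamma * sum(p * V[t] for p, t in transition[s][a])

    V = {s: 0.0 for s in states}
    delta = eps
    while delta >= eps:                      # Bellman backups until convergence
        delta = 0.0
        for s in states:
            best = max(q(s, a, V) for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
    policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
    return V, policy

# Toy MDP: from "s", staying pays 1 forever; leaving pays 0 once but reaches
# "g", where every action pays 2 forever. Leaving is optimal (18 > 10).
states, actions = ["s", "g"], ["stay", "leave"]
transition = {"s": {"stay": [(1.0, "s")], "leave": [(1.0, "g")]},
              "g": {"stay": [(1.0, "g")], "leave": [(1.0, "g")]}}
reward = {"s": {"stay": 1.0, "leave": 0.0}, "g": {"stay": 2.0, "leave": 2.0}}
V, policy = value_iteration(states, actions, transition, reward)
assert policy["s"] == "leave" and abs(V["g"] - 20.0) < 1e-4
```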

Tuesday, June 26 at 10am: Olfa Haggui (Univ. of Sousse, Tunisia), in the Master's room (ground floor of building P)

« Optimizing memory accesses in parallel and accelerated applications  »

As the processing speed of modern processors increases, the cost associated with memory accesses is brought more and more to the fore. In many cases, this cost becomes the main obstacle on the way to satisfactory absolute performance. The growing number of parallel units has made this challenge even more critical and much harder to overcome. This concerns both more or less conventional multicores and the latest generation of accelerators (GPUs). Our thesis addresses this problem, aiming to provide a quantified description of the bottlenecks, to define a profiling methodology, and to propose solutions, either general or specific to image processing applications.
In this talk, after briefly presenting my background and academic interests, I will present the geoscience drilling simulation application and my first observations (CARNOT MINES PACA project); I will then discuss the efficient parallelization of the Harris algorithm on manycores.

Tuesday, May 2 at 11am: Maroua Maalej (ENS Lyon), in the Master's room (ground floor of building P)

« Low cost static pointer analyses for efficient compilers  »

To optimize code efficiently, compilers must detect dependencies between variables, and especially between pointers. This is what we call pointer or alias analysis. Given two pointers p and q, knowing that they are disjoint ensures that any write through p leaves the value pointed to by q unchanged. If p and q may point to the same region, or if this information is unknown, then q may be affected by any change made through p, and vice versa.
From Andersen's work [And94] to the more recent technique of Zhang et al. [ZXZ + 14], many alias analysis techniques have been proposed. Differences in flow/context/field sensitivity let these approaches trade precision for efficiency, efficiency for precision, or attempt to balance both. However, in spite of all the attention this topic has received, the current state-of-the-art approaches inside compilers still face challenges regarding precision and speed. In particular, pointer arithmetic, a key feature in C and C++, is far from being handled satisfactorily.
In this talk, I will explain the challenging problem of pointer arithmetic and present our contributions toward better pointer analyses in the LLVM compiler.

[And94] Lars Ole Andersen, "Program Analysis and Specialization for the C Programming Language", 1994.
[ZXZ + 14] Qirun Zhang, Xiao Xiao, Charles Zhang, Hao Yuan and Zhendong Su, "Efficient subcubic alias analysis for C", 2014.
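The flavor of Andersen-style [And94] inclusion-based analysis can be conveyed with a toy fixpoint solver (illustrative Python, nothing like a production analysis): it handles only `p = &a` and `p = q` constraints, whereas real analyses must also handle loads, stores and, as the talk stresses, pointer arithmetic.

```python
def andersen(bases, assigns):
    """Tiny Andersen-style inclusion-based points-to analysis.

    bases:   list of (p, a) for statements `p = &a`  (a ∈ pts(p))
    assigns: list of (p, q) for statements `p = q`   (pts(q) ⊆ pts(p))
    Returns pts, mapping each pointer to its may-point-to set.
    """
    pts = {}
    for p, a in bases:
        pts.setdefault(p, set()).add(a)
    changed = True
    while changed:                       # iterate to the least fixpoint
        changed = False
        for p, q in assigns:
            new = pts.get(q, set()) - pts.setdefault(p, set())
            if new:
                pts[p] |= new
                changed = True
    return pts

def may_alias(pts, p, q):
    """p and q may alias iff their points-to sets intersect."""
    return bool(pts.get(p, set()) & pts.get(q, set()))

# p = &a; q = &b; r = q  =>  r and q may alias, p and q do not.
pts = andersen(bases=[("p", "a"), ("q", "b")], assigns=[("r", "q")])
assert may_alias(pts, "r", "q") and not may_alias(pts, "p", "q")
```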

Monday, April 23 at 2pm: Kameswar Rao Vaddina (Telecom ParisTech) and Emilio Gallego Arias (CRI), in the Master's room (ground floor of building P)

1) Kameswar Rao Vaddina : « Energy profiling of embedded systems - experiences learnt so far »

Recent technological advances have greatly improved the performance of embedded systems. But many of these ubiquitous devices are energy-constrained and usually battery-powered. The energy consumption of such a system depends heavily on the application it is currently running. This limits large-scale deployment and greatly complicates the development of embedded software. Optimizing applications for energy consumption therefore remains a challenge and an interesting research direction.
In this talk we present ongoing work on energy profiling of embedded applications. It is driven by an online energy monitoring mechanism using an NI cDAQ and LabVIEW running on a host machine. We detail the experience gained in using and modding three different embedded boards: the Nvidia Jetson TX1, the Pandaboard and the TI AM572x evaluation module.
Bio: Dr. Kameswar Rao Vaddina received his PhD on thermal-aware networked many-core systems from the University of Turku, Finland. He is currently a postdoc at Telecom ParisTech, working on energy and temperature optimization of embedded systems.

2) Emilio Gallego Arias : « Towards Unified Verification Environments »

In this talk, we will present our journey from the development of jsCoq -- a thin layer on top of the Coq system aimed at educational purposes -- to SerAPI, a more ambitious protocol for the Coq proof assistant which aims to support large-scale proof development, and to provide a reliable, lightweight protocol that can ease adoption by other proof assistants.

Monday, March 20 at 2pm: Renaud Pawlak (Cinchéo SAS), in the Master's room (ground floor of building P)

« JSweet: an extensible Java-to-JavaScript transpiler »

In recent years, the dominance of JavaScript in Web and mobile environments has led to a proliferation of source-to-source compilers (transpilers) able to generate JavaScript from a higher-level language (such as Java, OCaml or Scala). The main goal of these transpilers is to add a typing layer to JavaScript, together with the execution semantics of the source language. Such transpilation raises many challenges with respect to interoperability with existing code. In particular, the question of accessing, and remaining compatible with, native JavaScript libraries arises, both in terms of typing and of runtime performance.
In this talk, we will report on our experience building and deploying the JSweet transpiler, which focuses on efficient interoperability with existing code through a lightweight, extensible transpilation.

Monday, February 6 at 2pm: Youssef Mesri (MINES ParisTech, CEMEF), in room P.2.14, building P

« Of meshes, algorithms and fluids »

This talk deals with adaptive, parallel numerical methods for solving fluid dynamics problems. The computational domain is discretized with anisotropic unstructured meshes suited to capturing physical heterogeneities and singularities. The dynamics and locality of the latter require a very fine mesh resolution, making the computation impossible even on today's most powerful supercomputers.
In recent years we have developed an alternative approach based on anisotropic mesh adaptation methods. The mesh undergoes local modifications driven by physical criteria, thanks to a posteriori error estimators linking the accuracy of the mesh to that of the solution. The work and advances achieved in this framework will be presented with various illustrations from the world of fluid mechanics.

Monday, January 9 at 2pm: Claude Tadonki (CRI), in room P.2.14, building P

« NUMA-Aware Wilson-Dirac on Supercomputers »

We revisit the Wilson-Dirac operator, also referred to as Dslash, on NUMA many-core vector machines, seeking an efficient supercomputing implementation. The Wilson-Dirac operator is the major computing kernel in Lattice Quantum ChromoDynamics (LQCD), the canonical discrete formalism for Quantum ChromoDynamics (QCD) investigations. QCD is the theory of sub-nuclear particle physics, aiming at modeling the strong nuclear force, which is responsible for the interactions of nuclear particles. Based on the LQCD formalism, intensive simulations are performed following the Monte-Carlo paradigm. Physicists expect informative observations to be derived from large-scale LQCD scenarios, most of them numerically sensitive. The corresponding computing demand is tremendous at various levels, from storage to floating-point operations, hence the essential need for powerful supercomputers. Designing efficient LQCD codes on modern (mostly hybrid) supercomputers requires efficiently exploiting all available levels of parallelism, including accelerators. Since the Wilson-Dirac operator is a coarse-grain stencil computation performed on huge volumes of data, any performance- and scalability-related investigation should skillfully address memory accesses and interprocessor communication overheads. In order to lower the latter, an explicit shared-memory implementation should be considered at the node level, since this leads to a less complex data communication graph and thus (at least intuitively) to a reduced overall communication latency.
We focus on this aspect and propose a novel, efficient NUMA-aware scheduling, together with a combination of the major HPC strategies for large-scale LQCD. We reach nearly optimal performance on a single core and a significant scalability improvement across several NUMA nodes. Then, using a classical domain decomposition approach, we extend our work to a large cluster of many-core nodes, showing the global efficiency of our implementation and its benefit over pure multi-process schemes.

Monday, December 5 at 2pm: Bruno Sguerra (CRI), in room P.2.14, building P

« Classification Using Sum-Product Networks for Autonomous Flight of Micro Aerial Vehicles »

Sum-product networks (SPNs) are a deep probabilistic model proposed by Poon and Domingos in 2011. Compared with classical probabilistic graphical models such as Bayesian networks and Markov networks, this model has the advantage of tractable inference.
In this presentation, we will show the results of using the sum-product network model as an image classifier to guide micro aerial vehicles through indoor environments.
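What "tractable inference" means can be seen in a toy SPN (an illustrative pure-Python sketch, not the classifier of the talk): any query is answered by a single bottom-up pass, and a complete, decomposable SPN with normalized weights defines a normalized distribution. The two-variable network below is an assumed example.

```python
import math

def eval_spn(node, assignment):
    """Evaluate an SPN bottom-up for a complete variable assignment.
    Nodes: ("leaf", var, value) indicator, ("sum", [(weight, child), ...]),
    or ("prod", [child, ...])."""
    kind = node[0]
    if kind == "leaf":
        _, var, value = node
        return 1.0 if assignment[var] == value else 0.0
    if kind == "sum":
        return sum(w * eval_spn(c, assignment) for w, c in node[1])
    return math.prod(eval_spn(c, assignment) for c in node[1])

def bernoulli(var, p):
    """A sum node mixing the two indicators of a binary variable."""
    return ("sum", [(p, ("leaf", var, 1)), (1 - p, ("leaf", var, 0))])

# A complete and decomposable SPN over binary x and y:
# a two-component mixture of fully factorized distributions.
spn = ("sum", [
    (0.7, ("prod", [bernoulli("x", 0.9), bernoulli("y", 0.2)])),
    (0.3, ("prod", [bernoulli("x", 0.1), bernoulli("y", 0.8)])),
])

# Tractable inference: one linear pass per query, and the SPN is normalized.
total = sum(eval_spn(spn, {"x": a, "y": b}) for a in (0, 1) for b in (0, 1))
assert abs(total - 1.0) < 1e-12
```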

Monday, November 7 at 2pm: Keryan Didier (Inria), in room P.2.14, building P

« Towards formally verified real-time parallel implementation of avionics applications »

We advance toward our goal of defining a compilation process going all the way from high-level functional and non-functional specifications to running real-time implementations. Compared to classical compilation problems, our proposal involves three novel features:
- ensuring a safe and precise time accounting throughout the compilation process;
- explicitly considering back-end transformations such as linking and loading, whose control is necessary to ensure the respect of real-time guarantees;
- introducing language constructs to represent mapping and real-time requirements.
We formalize such a real-time compilation problem for simple dataflow synchronous specifications that are statically compiled to shared memory execution platforms. We also show a prototype working on a subset of this formal model on a large-scale avionics application.

Monday, September 26 at 2pm: Adilla Susungi (CRI), in room P.2.14, building P

« On the Road of Designing a Parallel Intermediate Language »

This talk is about the current state of design and implementation of a parallel intermediate language. In this language, arrays are first-class data structures for which operations such as data placement and layout transformations can be expressed through dedicated primitives, and higher-order functions are used to describe array accesses in parallel or sequential loops. We present an example of compilation flow involving the language and discuss limitations and future directions.

Monday, September 5 at 2pm: Pierre Jouvelot (CRI), in room P.2.14, building P

« Signal Rate Inference for Multi-Dimensional Faust »

We introduce a new signal-level, type- and rate-based semantic framework for describing a multi-rate version of the functional, domain-specific Faust language, dedicated to audio signal processing, and here extended to support array-valued samples. If Faust is usually viewed as a formalism for combining "signal processors", which are expressions mapping input signals to output signals, we provide here the first formal, lower-level semantics for Faust based on "signals" instead. In addition to its interest in understanding the inner workings of the Faust compiler, which uses symbolic evaluation of signal expressions, this approach turns out to be useful when introducing a language extension targeting multi-rate and multi-dimensional (array-valued) processing.

Monday, June 27 at 2pm: Pierre Guillou (CRI) and Florian Gouin (CRI and Safran), in room R.0.8 (ground floor of building P)

1) Pierre Guillou : « A Dynamic to Static DSL Compiler for Image Processing Applications »

Computer vision is a thriving field of research, and Python is an instrument of choice for developing image processing software applications. Used in conjunction with specialized libraries written in C or C++, its performance can match that of native code. The SMIL library is a new C++ image processing library offering ease of programming through a Python wrapper. However, SMIL applications also have to be executed on embedded platforms, such as FPGAs, on which a Python interpreter is not available. The generic answer to such an issue is to re-code the original Python applications in C or C++, and then optimize them for every hardware target, or to try to compile Python into native code using tools such as Cython.
The approach taken by the FREIA project is to ease portability of applications written in a DSL embedded in C (the FREIA API) by using specific optimizations such as image expressions evaluation, removal of temporary variables or image tiling.
Is it possible for SMIL Python applications to benefit from the FREIA compilation toolchain in order to increase their portability onto specialized hardware targets?
We present in this paper (1) a methodology to convert a dynamic DSL into a static one that preserves programmability, (2) a working implementation which takes care of types, memory allocation, polymorphism and API adaptation between SMIL and FREIA, (3) and experimental results on portability and performance.

2) Florian Gouin : « A local variance computation method suited to graphics processors »

Kernel-based computation is used in image processing to apply an algorithm to the neighborhood of each element of an image. This frequently used class of algorithms is characterized by particularly heavy memory bandwidth consumption.
In this presentation, we focus more precisely on the computation of local variance kernels on GPUs. We will review the state of the art in variance computation methods and detail the optimization choices that allowed us to reduce the amount of memory communication from O(N²) down to O(log(N)) complexity.
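To give an idea of how local-variance kernels can avoid re-reading each window, here is one classic technique, sketched in pure Python: summed-area (integral) tables of x and x², which bring the per-window cost down to a constant number of lookups. This is an illustrative sketch only, not the talk's GPU reduction scheme; the window radius and test image are assumptions.

```python
def local_variance(img, radius):
    """Local variance over (2r+1)^2 windows via summed-area tables, so each
    window costs O(1) lookups instead of reading the whole neighborhood.
    var = E[x^2] - E[x]^2 over the window (interior pixels only)."""
    h, w = len(img), len(img[0])
    # Integral images of x and x^2, with a zero top row and left column.
    S = [[0.0] * (w + 1) for _ in range(h + 1)]
    S2 = [[0.0] * (w + 1) for _ in range(h + 1)]
    for j in range(h):
        for i in range(w):
            S[j+1][i+1] = img[j][i] + S[j][i+1] + S[j+1][i] - S[j][i]
            S2[j+1][i+1] = img[j][i] ** 2 + S2[j][i+1] + S2[j+1][i] - S2[j][i]

    def window_sum(T, j, i):
        j0, i0, j1, i1 = j - radius, i - radius, j + radius + 1, i + radius + 1
        return T[j1][i1] - T[j0][i1] - T[j1][i0] + T[j0][i0]

    n = (2 * radius + 1) ** 2
    out = {}
    for j in range(radius, h - radius):
        for i in range(radius, w - radius):
            m = window_sum(S, j, i) / n
            out[(j, i)] = window_sum(S2, j, i) / n - m * m
    return out

img = [[0.0] * 6 for _ in range(6)]
img[2][2] = 6.0                          # a single bright pixel
var = local_variance(img, 1)
assert abs(var[(2, 2)] - 32 / 9) < 1e-9  # window holds {6, 0 x 8}
assert var[(4, 4)] == 0.0                # flat window
```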

Monday, June 6 at 2pm: Claude Tadonki (CRI), in room P.2.14, building P

« LQCD revisited on multicore vector machines »

We revisit the Wilson-Dirac operator on multicore vector machines. The Wilson-Dirac operator is the major computing kernel in Lattice Quantum ChromoDynamics (LQCD), which is the canonical discrete formalism for Quantum ChromoDynamics (QCD) investigations. QCD is the theory of sub-nuclear physics, aiming at modeling the strong nuclear force, which is responsible for the interactions of nuclear particles. Based on LQCD formalism, intensive simulations are performed following the Monte Carlo paradigm. Informative observations are expected from large-scale and numerically sensitive LQCD simulations. The corresponding computing demand is therefore tremendous, thus the serious consideration for powerful supercomputers.
Designing LQCD codes that scale well on large (mostly hybrid) supercomputers requires to efficiently exploit many level of parallelism including accelerators. Since the Wilson-Dirac operator is a coarse-grain stencil computation performed on huge volume of data, any performance related investigation should skillfully address memory accesses and interprocessor communication overheads. In order to lower the later, a shared memory approach should be considered at the node level. Because of the threads scalability issue, among other reasons, the message passing paradigm seems to be widely considered, even with shared memory systems.
Although this pragmatic approach is understandable in general, we think that hybrid approaches are worth considering. Indeed, even if message passing implementations can run as such on a shared memory node, explicit data exchanges have a significant cost and might trigger noisy synchronizations. For such exchanges, as long as all communicating processes reside on the same compute node, we are limited by the memory bandwidth available. Once more nodes become involved, the network bandwidth, usually much lower than the memory bandwidth, becomes the limiting factor.
Applications that involve heavy data exchanges, like LQCD, are particularly sensitive to this aspect. This is the main focus of our work, in which we provide, explain, and discuss a multi-threaded vector implementation whose experimental results on the newly released Intel Broadwell processor show competitive absolute efficiency and scalability on one of its NUMA nodes.
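For reference, a commonly used presentation of the Wilson-Dirac operator (conventions and normalizations vary across codes, so this textbook form is not necessarily the exact one used in the talk) makes the stencil structure explicit: each lattice site combines its own spinor with its eight nearest neighbors,

```latex
D_W\,\psi(x) \;=\; \psi(x) \;-\; \kappa \sum_{\mu=1}^{4}
\Big[ (1-\gamma_\mu)\, U_\mu(x)\, \psi(x+\hat\mu)
   \;+\; (1+\gamma_\mu)\, U_\mu^\dagger(x-\hat\mu)\, \psi(x-\hat\mu) \Big].
```

The neighbor accesses to the spinor field ψ and the gauge links U are what make the kernel memory-bound and communication-heavy on distributed lattices.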

Lundi 23 mai à 14h : Romain Michon (étudiant en thèse au laboratoire d'informatique musicale CCRMA de Stanford, USA), en salle P.2.14.

« Faust-Based Research, Teaching and Applications at Stanford University »

Slides disponibles :

Lundi 2 mai de 14 à 15h : Pierre Wargnier (CRI), en salle P.2.14., batiment P.

1) « Field Evaluation with Cognitively-Impaired Older Adults of Attention Management in the Embodied Conversational Agent Louise »

We present the first experiment we conducted to evaluate the attention monitoring performance of Louise, following a Wizard of Oz method, during interactions with a cohort of 8 elderly users in a day hospital environment. Louise is a new, semi-automatic prototype of an Embodied Conversational Agent (ECA), a virtual character interacting with users through social-like communication, adapted to the special needs of older adults with cognitive impairment; it is intended to ultimately provide assistance in their activities of daily living. We recorded and analyzed both videos of the conversation-like interactions and Louise's tracking data. In our experiment, Louise's attention estimation algorithm achieved about 80% accuracy; moreover, in almost all cases, the user's attention was successfully recaptured by Louise after a planned, experimenter-induced distraction. These results are in line with what was observed in previous experiments involving only younger adults, thus suggesting that attention measurement tools embedded in cognitive prostheses will not need to be adapted to elderly patients. Finally, to gain further insights into conversation management and provide evidence-based suggestions for future work, we performed an anthropological analysis of the whole experiment.

2) « Virtual Promenade: A New Serious Game for the Rehabilitation of Older Adults with Post-fall Syndrome »

We introduce a novel rehabilitation tool to treat Post-Fall Syndrome (PFS) in older adults: a serious game, called Virtual Promenade, combined with a haptic chair imitating the hip movements of human walking. We report on the user-centered design of our prototype, following "living lab" principles, which was well received by our test participants. This system aims at addressing the psycho-motor consequences of older adults' falls, which are often neglected in current post-fall care practices. We first checked the feasibility and tolerability of such interventions. We then applied a living lab participatory design approach, involving health care professionals and older adults, to build the Virtual Promenade prototype. We found that patients with PFS tolerated the system well and that there were no major obstacles to feasibility. We also report that the aesthetics of the virtual environment is an important motivational factor for older adults, and discuss our results in searching for the most suitable game controller for this type of patient and game. Finally, we observed that the chair's movements improved immersion in the game.

Lundi 4 avril 2016 à 14h : Emilio Gallego Arias (CRI), en salle P.2.14., batiment P.

« Experiments in Certified Digital Audio Processing »

We will present recent developments in Wagner, a functional programming language geared towards real-time digital audio processing with formal, machine-checkable semantics.

The system currently has three main components: a Coq library with some facts about the Discrete Fourier Transform and unitary transforms, a stream-oriented lambda calculus enabling easy coding of filters and digital waveguide oscillators, and a call-by-value abstract machine providing an efficient execution model.
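As an illustration of the kind of fact such a Coq library formalizes (our example, not necessarily a lemma from the actual development): with the normalized DFT matrix, unitarity and energy preservation (Parseval's identity) read

```latex
(F_N)_{jk} \;=\; \frac{1}{\sqrt{N}}\,\omega_N^{\,jk},
\quad \omega_N = e^{-2\pi i/N},
\qquad
F_N^{*} F_N = I_N
\;\Longrightarrow\;
\lVert F_N x \rVert_2 = \lVert x \rVert_2 .
```

Such algebraic facts are what make it possible to reason about filters and transforms independently of any particular implementation.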

Lundi 22 février 2016 à 14h : Patryk Kiepas (CRI), en salle P.2.14., batiment P.

« Gentle glimpse on machine learning and even more gentle look on meta-learning and stuff »

The talk will be divided into three uneven parts.
We will start with a really gentle introduction to machine learning (ML) from the perspective of the ML user. We won't talk about WHAT machine learning IS; instead, we will talk about WHAT IT IS ABOUT. We will describe the parts of machine learning: learning data, data features as a way of describing data, the different tasks to solve, models to describe relations in data, and means to evaluate the models built. We will also look at the large number of available machine learning algorithms and the problems that arise.

The second part of the talk will be about meta-learning. We will look at it as a solution to a few problems from the first part of the talk. We will talk about algorithm selection, parameter selection and tuning, hyperparameter optimization, and other means to automate classic machine learning methods.

The last part of the talk is the smallest one. It will cover the work I have done in the meta-learning field as part of my master's thesis (an algorithm selection system).

Lundi 01 février 2016 à 14h : Dumitru Potop Butucaru (Inria), en salle P.2.14., batiment P.

« Real-time systems compilation »

My presentation is structured in two main parts. In the first, I will introduce and advocate the concept of Real-Time Systems Compilation. By analogy with classical compilation, real-time systems compilation consists in the fully automatic construction of running, correct-by-construction implementations from functional and non-functional specifications of embedded control systems. As in a classical compiler, the whole process must be fast (thus enabling a trial-and-error design style) and produce reasonably efficient code. This requires the use of fast heuristics and of fine-grain platform and application models. Unlike a classical compiler, a real-time systems compiler must take into account non-functional properties of a system and ensure that non-functional requirements are met (in addition to functional correctness). In the second part of my presentation I will present Lopht, a real-time systems compiler we built by combining techniques and concepts from real-time scheduling, compilation, and synchronous language design.

Lundi 25 janvier 2016 à 14h : Emmanuelle Encrenaz-Tiphene (LIP6), en salle P.2.14., batiment P.

« A formal framework for the protection of assembly code against fault attacks »

Fault attacks against embedded circuits aim at disrupting the normal behavior of an application in order to gain benefits (confidential data, for example). To be successful, a fault injection must follow an attack path, which defines when and where the fault must be injected. Many attack paths against secure circuits are possible. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. Fault models corresponding to the skip of an assembly instruction or to the replacement of an instruction by another one have been obtained using several fault injection means; these fault models are powerful and practical for an attacker.

To counter this threat, countermeasure schemes have been proposed, and secure circuit designers implement them to protect their assets. However, the robustness of these countermeasures is evaluated by security experts and may have undisclosed weaknesses.

We present a formal verification framework and tool named Robusta that makes it possible to evaluate the robustness of pieces of assembly code against fault-injection attacks. By modeling a reference assembly code and its protected variant as automata, the framework can generate a set of equations for an SMT solver, the solutions of which, when they exist, represent possible attack paths. Using this approach, we defined a protection scheme against instruction skips at the assembly instruction level. We also evaluated the robustness of state-of-the-art countermeasures against fault injection attacks. Based on insights gathered from this evaluation, we analyze the remaining weaknesses and propose more robust enhancements of these countermeasures.

Lundi 7 décembre 2015 à 14h : Pierre Wargnier (CRI)

« Design of the Conversational Agent Louise for the Needs of Older Adults with Cognitive Disorders »

The term "dementia" refers to the symptoms of several neurodegenerative diseases of older adults, the most common of which is Alzheimer's disease. To provide compensatory assistance for the symptoms of memory loss and of executive and attentional disorders, we are developing the conversational agent Louise. Louise aims to study the conditions that make conversational agents usable by this audience. In this presentation, I will discuss the results of the first series of experiments, focusing in particular on the management of the user's attention, and then present the design of a more advanced prototype, the envisioned model for managing utilitarian conversations, the representation of this model in a computer program, and the dedicated, (elementary) XML-based language for describing such conversations.

Mercredi 4 novembre 2015 à 11h : Pierre Jouvelot (CRI)

« FEEVER at the Halfway Point »

The FEEVER project, coordinated by CRI, is a collaboration that also involves GRAME (Lyon), the University of Saint-Étienne, and Inria/Irisa (Rennes). We will present the current state of the project and the challenges that remain in order to reach the initial objective: making the FAUST digital audio processing language an indispensable solution on all platforms and for all music-related applications.

5 octobre 2015 à 14h : Laurent Daverio (CRI) et Benoît Pin (CRI)

« Aligning legivoc Legal Vocabularies by Crowdsourcing »

legivoc is the first Internet-based platform dedicated to the dissemination, editing, and alignment of legal vocabularies across countries. Funded in part by the European Commission and administered by the French Ministry of Justice, legivoc offers a seamless path for governments to disseminate their legal foundations and specify semantic bridges between them. We describe the general principles behind the legivoc framework and provide some ideas about its implementation, with a particular focus on the state-of-the-art tools it includes to help crowdsource the alignment of legal corpora.

7 septembre 2015 à 14h : Gaëtan Gilbert (CRI)

« Term inference and Unification in Lambda Pi Calculus Modulo  »

I implemented a term inference algorithm for λΠ modulo, cleanly separating constraint generation, constraint manipulation through a safe interface, and heuristics for solving higher-order unification problems.

6 juillet 2015 à 15h :

15h : Arnaud Spiwack / Pierre Guillou (CRI) « Rust: system programming with guarantees »

Rust is a new programming language developed by Mozilla, which saw its first stable release in May 2015. Besides quickly reaching surprising popularity and an active community, Rust is a truly innovative language, at the intersection of functional programming languages such as OCaml and memory-conscious languages such as C.

We will present the main novelties brought by Rust to the world of practical programming languages, in particular its ownership and lifetime systems, which Rust uses to enforce memory safety at very low cost. We will also give an overview of the Rust environment and of the day-to-day life of the Rust programmer.
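A minimal sketch of the ownership and borrowing rules the talk covers (our own illustrative example, not material from the talk): values have a single owner, functions can either take ownership or merely borrow, and use-after-move is rejected at compile time, giving memory safety without a garbage collector.

```rust
// `consume` takes ownership of `v`; the vector is freed when it goes out of scope.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

// `sum` only borrows the data: the caller keeps ownership.
fn sum(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let total = sum(&v); // immutable borrow: `v` is still usable afterwards
    let n = consume(v);  // ownership moves into `consume`
    // println!("{:?}", v); // would not compile: `v` was moved above
    println!("sum={} len={}", total, n);
}
```

The commented-out line is exactly the kind of error the borrow checker reports statically, with no runtime cost.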

16h : Ronan Keryell (Khronos OpenCL SYCL Committee) « Modern C++, heterogeneous computing and OpenCL SYCL »

SYCL is a single source programming environment that exposes OpenCL devices to C++ developers in a language-integrated way. SYCL relies on standard C++ syntax and semantics, using API calls and an optional multi-pass compilation technology to target an underlying OpenCL runtime. We will introduce SYCL and give an overview of the design goals and decisions behind SYCL, as well as its status within the Khronos standardization framework.

During this talk, we will briefly introduce C++14 and some alternative ways to program heterogeneous accelerators in C++, such as Boost.Compute, VexCL, ViennaCL, OpenMP 4, C++AMP... or, of course, the lower-level legacy CL.hpp and the newer OpenCL 2.1 C++ kernel language, and compare them with SYCL.

Defining a new specification for a new parallel language is difficult without the ability to experiment with the concepts and the syntax. Thus, to help the discussions on the new OpenCL SYCL C++11 standard inside the Khronos committee, we started an open-source implementation based on modern C++1z templated classes and OpenMP: triSYCL. The use of metaprogramming, with useful libraries such as Boost.MultiArray and Boost.Operators, helps keep the implementation concise. Since this implementation does not include a compiler yet, we cannot target OpenCL, but we hope it will evolve into a full-fledged open-source implementation.

Bio: Ronan Keryell, Khronos OpenCL SYCL Committee. Ronan is involved in the development and standardization of SYCL, and is the author of an open-source implementation: triSYCL. Ronan is a co-founder of SILKAN, where part of his role was to lead the development of the automatic parallelizer Par4All, and he was previously an assistant professor in France. He graduated from the École Normale Supérieure in Paris, where he also obtained his PhD.

8 juin 2015 à 15h : Vu Hoang Thai (CRI et University of Science and Technology, Hanoi)

« Simulation of memory behavior on multi-core computers »

With the advent of multi-core processors, together with multi-level memories, concurrent accesses are becoming more complex to profile and predict. Taking into account the fact that most common applications are memory-bound, it becomes crucial to deeply investigate the overhead of memory activities. For this purpose, the current proposal aims at designing a parser that can instrument a given C code in order to extract its memory access patterns. From this matrix of accessed indices, a target architecture, and a given memory management protocol, we wish to simulate the behavior of the memory and thus estimate the associated costs. Such estimates could be used to design static or dynamic optimizations to improve the memory performance of computer programs.
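As a rough sketch of the kind of simulation envisioned (names, cache parameters, and the cost model are our own placeholders, not the project's), one can replay a trace of accessed addresses against a simple direct-mapped cache model and turn hit/miss counts into a cost estimate:

```rust
// Hypothetical sketch: replay a memory-access trace against a direct-mapped
// cache model, as the proposed simulator would do with access patterns
// extracted from instrumented C code.
struct DirectMappedCache {
    line_size: usize,         // bytes per cache line
    tags: Vec<Option<usize>>, // one tag slot per cache line
    hits: u64,
    misses: u64,
}

impl DirectMappedCache {
    fn new(num_lines: usize, line_size: usize) -> Self {
        DirectMappedCache { line_size, tags: vec![None; num_lines], hits: 0, misses: 0 }
    }

    fn access(&mut self, addr: usize) {
        let line = addr / self.line_size;
        let index = line % self.tags.len();
        if self.tags[index] == Some(line) {
            self.hits += 1;
        } else {
            self.tags[index] = Some(line); // fill the slot on a miss
            self.misses += 1;
        }
    }

    // Toy cost model: 1 cycle per hit, 100 cycles per miss.
    fn estimated_cycles(&self) -> u64 {
        self.hits + 100 * self.misses
    }
}

fn main() {
    let mut cache = DirectMappedCache::new(64, 64);
    // Sequential scan of a 4 KiB array of 4-byte elements:
    // one miss per 64-byte line, then 15 hits on that line.
    for i in 0..1024 {
        cache.access(0x1000 + 4 * i);
    }
    println!("hits={} misses={} cycles={}",
             cache.hits, cache.misses, cache.estimated_cycles());
}
```

A real simulator would add associativity, replacement policies, and multiple levels, but the structure, trace in, cost estimate out, stays the same.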

4 mai 2015 à 14h : Florian Gouin (Sagem et CRI)

« Mill Split-Stream Encoding »

To address the problems of L1 instruction-cache usage, Mill split-stream encoding provides a way to double the size of the L1 cache without degrading its performance. This presentation will detail several instruction-decoding methods as well as the specific features of the Mill architecture.

13 avril 2015 à 15h : Karel de Vogeleer (Télécom ParisTech et CRI)

« The Energy/Frequency Convexity Rule »

Energy efficiency is an important aspect for battery-driven devices, whether autonomous or designed for human interaction.

In this talk we look at the trade-off between energy consumption and performance of a microprocessor, which we refer to as the Energy/Frequency Convexity Rule. We discuss under what circumstances this convexity rule can be exploited, and when other methods are more effective, to improve the energy efficiency of the microprocessor. Amongst the parameters of interest are: the microprocessor's thermal behavior, the overhead of the system, and the background power requirements of the system.

In addition, the microprocessor's thermal behavior is treated in more detail, as it has a significant impact on its power requirements.
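A simple analytical sketch of why energy can be convex in frequency (our own illustration with hypothetical constants, not the speaker's exact model): if a task of W cycles runs at frequency f with dynamic power c f^α (α > 1) and constant background power P_static, then

```latex
E(f) \;=\; c\,W\,f^{\alpha-1} \;+\; \frac{P_{\mathrm{static}}\,W}{f},
\qquad
\frac{dE}{df} = 0
\;\Longrightarrow\;
f^{*} = \left(\frac{P_{\mathrm{static}}}{c\,(\alpha-1)}\right)^{1/\alpha}.
```

Running slower than f* wastes background energy (the task takes longer), while running faster wastes dynamic energy, hence the convex energy/frequency curve.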

10 mars 2015 à 14h : Jim Lipton (Université Wesleyan, Connecticut US)

« On a Kripke semantics for Intuitionistic Higher-Order Logic with constraints »

We define a Kripke semantics for Intuitionistic Higher-Order Logic with constraints formulated within Church's Theory of Types via the addition of a new constraint base type.

We then define an executable fragment, hohh(C) of the logic: a higher-order constraint logic programming language with typed $\lambda$-abstraction, implication and universal quantification in goals and give a modified model theory for this fragment. Both formal systems are sound and complete for their respective semantics.

We will sketch a simple semantics-based proof that the language hohh(C) satisfies the uniformity property of Miller et al.

02 mars 2015 à 14h : Arnaud Spiwack(CRI)

« An Introduction to Applied Topos Theory »

In this talk I will introduce the notion of topos and constructive mathematics (no prerequisites!). Topoi arise from concerns that could hardly be more theoretical: the generalization of constructions from algebraic geometry.

Nevertheless, I will not wear a theorist's hat (too much): I will show how topoi can be used to talk about the semantics of synchronous languages, in particular circuit languages such as Faust, which is the motivation for the work I will present.

The general idea is that topoi let us internalize time implicitly, so as to treat uniformly the combinational gates (such as the Boolean "or"), which only process data, and the gates, such as the delay, that control the temporal aspects of the circuit.

2 février 2015 à 14h : Imré Frotier de la Messelière (CRI)

« Semantic Foundations of Program Analysis: Application to the Case of Faust (original article by Patrick Cousot) »

In order to refine the interval produced by Faust's typing rules in the case of a loop operator, we perform a fixpoint computation using abstract interpretation. To do so, starting from a Faust program containing at least one loop operator, we build a system corresponding to the one in Patrick Cousot's article "Semantic foundations of program analysis."

Thus, as a first step, we represent the point of use of a loop in Faust as a program diagram whose invariants we compute. Based on these invariants, we then approximate the fixpoint computation in a program diagram, followed by the use of widening and then narrowing operations to accelerate this process.

Finally, several further lines of study going beyond a direct application of the algorithm presented here will be discussed. In particular, we will consider various choices of lattices and a possible automation of the invariant computation in the case of Faust. A study of the integration of elements of abstract interpretation into Faust's semantic rules will also be discussed.

5 janvier 2015 :

14h : Claude Tadonki (CRI) « A Fine-grained Approach for Power Consumption Analysis and Prediction »

Power consumption has become a critical concern in modern computing systems for various reasons, including financial savings and environmental protection. With battery-powered devices, we need to care about the amount of energy available, since it is limited. In the case of supercomputers, which involve a large aggregation of heavy CPU activity, we are exposed to a risk of overheating. As the design of current and future hardware becomes more and more complex, energy prediction or estimation is as elusive as that of time performance.

However, having a good prediction of power consumption is still an important request to the computer science community. Indeed, power consumption might become a common performance and cost metric in the near future. A good methodology for energy prediction could have a great impact on power-aware programming, compilation, or runtime monitoring.

In this paper, we try to understand from measurements where and how power is consumed at the level of a computing node. We focus on a set of basic programming instructions, more precisely those related to CPU and memory.

We propose an analytical prediction model based on the hypothesis that each basic instruction has an average energy cost that can be estimated on a given architecture through a series of micro-benchmarks. The considered energy cost per operation includes all of the overhead due to the context of the loop in which it is executed. Using these pre-calculated values, we derive a linear extrapolation model to predict the energy of a given algorithm expressed by means of atomic instructions. We then use three selected applications to check the accuracy of our prediction method by comparing our estimates with the corresponding measurements obtained using a multimeter. We show a 9.48% energy prediction error on sorting.
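The linear extrapolation step described above amounts to a dot product between instruction counts and per-instruction energy costs. A minimal sketch (the function name and all cost values are invented placeholders, not the paper's measured numbers):

```rust
// Sketch of a linear energy-extrapolation model: predicted energy is the sum,
// over instruction classes, of (instruction count) x (average per-op cost).
fn predicted_energy(counts: &[(&str, u64)], cost_nj: &[(&str, f64)]) -> f64 {
    counts
        .iter()
        .map(|(name, n)| {
            // Look up the per-operation cost for this instruction class;
            // unknown classes contribute nothing.
            let unit = cost_nj
                .iter()
                .find(|(c, _)| c == name)
                .map(|(_, e)| *e)
                .unwrap_or(0.0);
            *n as f64 * unit
        })
        .sum()
}

fn main() {
    // Hypothetical per-operation costs in nanojoules, as micro-benchmarks would produce.
    let cost_nj = [("add", 0.5), ("mul", 0.9), ("load", 2.0), ("store", 2.4)];
    // Instruction mix of some kernel, e.g. obtained by instrumentation.
    let counts = [("add", 1_000_000u64), ("mul", 500_000), ("load", 750_000), ("store", 250_000)];
    let energy = predicted_energy(&counts, &cost_nj); // in nJ
    println!("predicted energy: {:.1} uJ", energy / 1000.0);
}
```

The actual model additionally folds loop-context overheads into each per-operation cost, which is why the costs must come from micro-benchmarks run in a loop context.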

15h : Pierre Guillou / Nelson Lossing (CRI) « From Data to Effects Dependence Graphs: Source-to-Source Transformations for C »

Program optimizations, transformations and analyses are applied to intermediate representations, which usually do not include explicit variable declarations. This description level is fine for middle-ends and for optimizers of simple languages such as Fortran77. However, the C language, especially its C99 standard, is much more flexible. Variable and type declarations can appear almost anywhere in source code, and they cannot become implicit in the output code of a C compiler.

We show that declaration statements can be handled like the other statements and with the same algorithms if new effect information is defined and handled by the compiler, such as writing the environment when a variable is declared and reading it when it is accessed. Our solution is interesting because no legal transformation is hindered by our new effects and because existing algorithms are either not modified or only slightly modified by filtering upon the effect kind.

This extension has been used for several years in our PIPS framework and has remained compatible with the new developments such as offloading compilers for GPUs and coprocessors.

1 décembre 2014 à 14h : Pierre Wargnier (CRI)

« From 3D Images to Animated Conversational Agents: Representation, Uses and Applications »

This presentation will deal with interactive 3D graphics animation.

First, I will present some generalities about the modes of representation of three-dimensional virtual objects and their animation. Then we will see how this is done in practice by artists and animators. Next, I will talk about game engines and integrated game-development environments. Finally, I will present a concrete application using these technologies: the design of an animated conversational agent.

6 octobre 2014 :

14h : Jesús Gallego Arias (CRI, work carried out at UPenn) « Approximate Relational Refinement Types with Applications to Verification of Differential Privacy and Mechanism Design »

In software verification, properties over two arbitrary runs of a program are pervasive. The class of "2-properties" or "relational properties" is known to capture many definitions of interest.

I will present a recent draft [1] introducing a relational refinement type system (HOARe2) for the verification of probabilistic relational properties. We have applied the system to two case studies, Differential Privacy and Mechanism Design, automatically verifying over 12 state-of-the-art algorithms.

A (probabilistic) algorithm is differentially private if, given two adjacent inputs, the distance between the output probability distributions is bounded. Differential Privacy has gathered wide interest in the privacy, theory, and data-analysis communities.

A common reading of it is as a privacy promise: "Whether we use your data or not, a DP analysis will be approximately the same." Mechanism Design is the study of algorithm design in which the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. A particular property of interest is truthfulness: an agent's revenue is maximal if and only if she tells the truth to the mechanism.
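For reference, the bounded-distance condition mentioned above is standardly stated as follows: a randomized mechanism M is ε-differentially private when, for all adjacent inputs x, x' and every set S of outputs,

```latex
\Pr[\,M(x) \in S\,] \;\le\; e^{\varepsilon}\, \Pr[\,M(x') \in S\,].
```

Relational refinement types are a natural fit because this property quantifies over two runs of the same program on related inputs.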


15h : Soufiane Baghadi (Télécom Sud-Paris) « Improving Communication/Computation Overlap through the Compiler »

Communications in current supercomputers are becoming a bottleneck, and even small optimizations of them can yield considerable gains when running distributed applications.

Overlapping communications with computations is a technique proven very effective at reducing processor idle time while data is being sent or received. Manual optimization using this technique is not easy, which explains the very limited number of programs that use it.

We will present the state of the art for a compilation pass that automatically transforms blocking communications into non-blocking ones, which increases overlap and relaxes communication constraints. We will show the power of our approach by applying these optimizations manually to a real benchmark.

1 septembre 2014 à 14h : David Delahaye (CNAM)

« The BWare Project: Building a Proof Platform for the Automated Verification of B Proof Obligations »

We introduce BWare, an industrial research project that aims to provide a mechanized framework to support the automated verification of proof obligations coming from the development of industrial applications using the B method and requiring high integrity. The adopted methodology consists in building a generic verification platform relying on different automated theorem provers, such as first order provers and SMT (Satisfiability Modulo Theories) solvers.

Beyond the multi-tool aspect of our methodology, the originality of this project also resides in the requirement for the verification tools to produce proof objects, which are to be checked independently.

In this talk, we present some preliminary results of BWare, as well as some current major lines of work.

This work is supported by the BWare project (ANR-12-INSE-0010), funded for 4 years by the INS programme of the French National Research Agency (ANR) and started in September 2012. For more details, see:

3 juin 2014 à 14h : Laurent Daverio

« NoSQL Databases »

  • Introduction: a domain in high demand; an alternative to the relational model
  • NoSQL: an overview (common points, but many variants); why an alternative to the relational model?
  • The CAP theorem (Consistency, Availability, Partition tolerance)
  • The main NoSQL engines today: Riak, HBase, MongoDB, CouchDB, Neo4j, Redis. Each engine will be presented concisely, with its characteristics, constraints, strengths, CAP properties, etc.
  • Summary (as a table)
  • Conclusion

2 juin 2014 à 14h : Olivier Hermant

« A Double Dose of Double Negation »

Double negation is used in logic to translate proofs that appeal to the excluded-middle principle ("either A, or not A") into proofs that do not appeal to this principle, one of the most controversial in modern logic. We will begin with an introduction to syntactic proof systems in classical and intuitionistic logic, and we will then detail some results on double-negation translations.

5 mai 2014 à 14h : Imré Frotier de la Messelière

« Les types liquides (Logically Qualified Data Types) »

Liquid types (Logically Qualified Data Types) are a special case of dependent types, that is, types that express a dependency on the value of the instance concerned. The article presents liquid type inference, a system that combines Hindley-Milner type inference with predicate abstraction to automatically infer dependent types. The authors implemented liquid type inference, which takes as input an OCaml program and a set of logical qualifiers and infers dependent types for the expressions in the OCaml program. We will study this algorithm in detail.
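A standard textbook-style illustration of the idea (our example, in the usual liquid-types notation, not taken from the article): given qualifiers such as v ≥ x and v ≥ y, the inference algorithm can derive a refined signature like

```latex
\mathtt{max} \;::\; x{:}\mathtt{int} \rightarrow y{:}\mathtt{int} \rightarrow \{\,v{:}\mathtt{int} \mid v \ge x \;\wedge\; v \ge y\,\}
```

where the refinement on the result type is found automatically by combining Hindley-Milner inference with predicate abstraction over the supplied qualifier set.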

7 avril 2014 à 14h : Pierre Guillou

« Software Solutions for Improving the Computer Workstation »

Over the past decades, the use of computers has become widespread at work and at home.

Yet the average user, through lack of knowledge or restrictions imposed by management, still struggles to adapt the tool to his or her needs, and to question certain de facto standards. Who today, for example, would challenge the AZERTY layout of our keyboards?

Nevertheless, a few simple improvements can radically transform the user experience.

This presentation will focus on several of them:

  • learning touch typing and the BÉPO layout,
  • reducing visual fatigue on screen,
  • automatic compilation of LaTeX documents and reference management,
  • zsh and terminator, tools dedicated to the command line.

10 mars 2014 à 14h : Farhani Wajdi (Ecole Polytechnique de Tunis)

« Data representation synthesis »

3 mars 2014 à 14h : Pierre Jouvelot

« Music in the Era of New Technologies »

CDs, MP3, synthesizers, Dolby AC3, DSP, even MIDI, "streaming" or "sampling"…: these recent terms, brought into the world of music and digital audio by the digital and computing revolution of the last century, have become almost familiar, even to those of us with only a passing interest in the world of sound.

But what lies behind these often esoteric idioms? What mechanisms reside at the heart of those marvelous computing machines, such as the iPod and other MP3 players? On which fundamentals of the physical and physiological mechanisms of human hearing do they rely to let us enjoy Wagner or Daft Punk wherever and whenever we wish?

This broad overview of a sometimes mysterious world will take us on a dive into the intersection of music and new computing technologies.

3 février 2014 à 14h : José Afonso L. Sanches (Universidade Federal Fluminense, Niteroi, Brazil)

« Resource Allocation in Large-scale Distributed Systems »

This thesis proposal is about creating and evaluating a new resource allocation method in large-scale computing environments (such as grids and clouds), taking into consideration the complexity and particularities of these structures. For example, this could be done by analyzing the traffic generated at the grid sites, investigating and modeling packet-level metrics. Since computing grids normally reflect the panorama of a complex network, and the same can probably be said about cloud systems, we plan to apply complex networks concepts to improve the efficiency of the method. We also intend to combine the aforementioned method with the JobPilot technique, currently used in the CMS/LHC grid system, for which improvements are expected both from the algorithm and the architecture viewpoints, as part of the overall goal. This talk will present what we have already done and what we plan to do next, together with more general information on the context of my PhD.

28 octobre 2013 à 14h : Karim Barkati (IRCAM)

« Faustine, an Interpreter for the Faust Language »