Eddy's website

Need to know something about me?

Latest news

October, 2024
Teo. 10km Run'in Lyon. 00:41:55

October, 2023
Cléa. 10km Run'in Lyon. 00:58:11

June, 2022
Teo. 100m freestyle (50m pool). 01:02.42

December, 2021
Teo. 50m freestyle (25m pool). 00:28.97

December, 2021
Cléa. Long jump. 3.98m

November, 2021
Cléa. 50m sprint. 7.92s

Links:

ANR CE25: SkyData (2023-2027)

ANR-22-CE25-0008

A fundamental characteristic of our era is the deluge of data. Since the days of Grid environments, data management in distributed systems has been common practice, and Cloud systems provide many solutions for storing data. A data manager can be defined through its functionalities, which can be understood as services: security, replication strategy, green data transfer, synchronization, data migration, and so on. There are many ways to design those services, and each data manager embodies its own point of view on how to compose them. Usually, this point of view is centered on applications rather than on data, even when an autonomic solution is provided.

The SkyData project aims at breaking the existing rules and the way data management is done today. We propose a new paradigm with neither centralized control nor middleware. Imagine a world where data are controlled by themselves! Giving data an autonomic behavior is a real challenge. In this project, we will endow data with autonomous behaviors and thus create a new kind of entity, so-called Self-Managed Data (or SkyData). We plan to develop a distributed, autonomous environment in which data regulate themselves. This change of paradigm represents a huge and truly innovative challenge that can be split into three key challenges: the first is a strong theoretical study of autonomic computing; the second is to develop the algorithmic support for these concepts; and the third is the genesis of a prototype of this new generation of data, together with a significant use case. The latter will show how a SkyData environment can be used to create an autonomous data-management system with a reduced carbon footprint.
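
To make the paradigm concrete, here is a minimal, purely illustrative sketch in Python. All names and policies below are hypothetical, not the project's actual design: each datum runs its own autonomic loop and decides about replication from what it observes locally, with no central controller or middleware.

    import random

    class SelfManagedDatum:
        """Toy 'self-managed data': the datum, not a middleware, holds the policy."""

        def __init__(self, payload, host):
            self.payload = payload
            self.replicas = {host}  # nodes currently holding a copy

        def observe(self, node):
            # Stand-in for local sensing (load, energy mix, failures, ...).
            return {"load": random.random(), "carbon": random.random()}

        def step(self, candidate_nodes):
            """One autonomic iteration, driven by the datum itself."""
            for node in candidate_nodes:
                m = self.observe(node)
                # Replicate toward lightly loaded, low-carbon nodes ...
                if node not in self.replicas and m["carbon"] < 0.2 and m["load"] < 0.5:
                    self.replicas.add(node)
            # ... and withdraw from nodes that become unattractive.
            for node in list(self.replicas):
                if len(self.replicas) > 1 and self.observe(node)["load"] > 0.9:
                    self.replicas.remove(node)

    datum = SelfManagedDatum(b"payload", host="node-0")
    for _ in range(10):
        datum.step(["node-1", "node-2", "node-3"])
    print(sorted(datum.replicas))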

Web Site

PHC AURORA 2023 (2023-2024): Exploring Energy Monitoring and Leveraging Energy Efficiency on End-to-end Worst Edge Fog Cloud Continuum for Extreme Climate Environments Observatories

Monitoring energy usage is key to analyzing and understanding how continuum resources are used. This project is a first step toward energy-efficient infrastructures in terms of hardware and software services. But providing an end-to-end energy-monitoring solution remains a real challenge in terms of resource heterogeneity, monitoring frequency and precision, profiling, reporting, etc. A clear challenge arises here: how can we monitor, in an end-to-end manner, a system deployed in a scarce-resource environment? Exposing continuum-based solutions to extreme climate conditions pushes current solutions to their limits: they need to be explored under worst-case conditions in terms of network availability, energy, resilience, and data access. The challenges we want to address in this AURORA project are how to (i) provide the mechanisms needed to reproduce the characteristics of extreme environments in an in-lab testbed, (ii) provide end-to-end energy monitoring of the considered worst-case continuum infrastructure, (iii) discover the most impactful energy levers to sustain observations and monitoring, and (iv) deploy a proof of concept (both simulated and actually implemented) that validates the abstraction, architecture, design, and implementation choices. The aims of this collaboration are to (i) explore a best-effort end-to-end energy-monitoring system for a worst-case continuum, (ii) discuss a long-term continuum infrastructure for the edge, and (iii) design an experimental validation of usable energy levers at the edge.

This project is a collaboration with the Computer Science department at UiT The Arctic University of Norway, Tromsø.

Défi OVHCloud Inria: Frugal Cloud (2022-2025)

The energy and environmental impact of data centers (computing and hosting facilities) has been singled out for several years. While digital technology is estimated to consume 10% of the world's electricity and to generate around 4% of global greenhouse-gas emissions, data centers are estimated to have accounted for about 14% of the digital sector's carbon footprint in 2019. Apart from an ideal theoretical case in which consumption could remain contained, most studies on the energy consumption of data centers predict substantial increases.

Web Site

Interreg project: AiBle (2020-2023)

AiBle is a 3-year UK/France cross-border EU Interreg project that aims to improve the recovery of stroke patients, with better treatment effects and efficiency, by developing an upper-limb rehabilitation exoskeleton robot based on AI and cloud computing.

Web Site


FIL project: FaaSBench, an extensible framework for benchmarking a Function-as-a-Service system (2020-2022)

Pushing the decomposition of applications into atomic elements one step further, an application is now conceptualized as a set of stateless functions (until now, the decomposition stopped at the granularity of microservices). This trend recently gave rise to the cloud model called Function-as-a-Service (FaaS). In this model, the user is no longer responsible for managing the scalability of the application, a tedious task, as is the case in a traditional IaaS cloud. In FaaS, the provider instantiates a function only when it is invoked, which also minimizes the amount of under-used resources in the cloud. The user is billed only for the resources consumed while the function executes, a duration typically on the order of milliseconds (and generally capped at a few minutes).

However, recent research has shown that not every application workload profile suits FaaS: an application that is intensively used over a sufficiently long period ends up more expensive to host on FaaS than on a traditional IaaS model (the toy cost model below illustrates this crossover). In the FaaSBench project, we want to build a benchmarking system for FaaS-hosted applications, enabling the analysis of both their cost and their performance.
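
To make the cost trade-off concrete, here is a minimal cost-model sketch. All prices and workload parameters are illustrative assumptions, not figures from the project; the point is only that per-invocation FaaS billing undercuts a flat-rate VM at low request rates and overtakes it under sustained load.

    # Toy FaaS-vs-IaaS monthly cost model; all prices are assumed for illustration.
    FAAS_PRICE_PER_GB_S = 1.67e-5   # assumed price per GB-second of execution
    IAAS_PRICE_PER_HOUR = 0.05      # assumed price of one small always-on VM

    SECONDS_PER_MONTH = 30 * 24 * 3600

    def faas_monthly_cost(req_per_s, mem_gb=0.128, duration_s=0.1):
        """FaaS: pay only for the GB-seconds actually consumed by invocations."""
        gb_seconds = req_per_s * SECONDS_PER_MONTH * mem_gb * duration_s
        return gb_seconds * FAAS_PRICE_PER_GB_S

    def iaas_monthly_cost(n_vms=1):
        """IaaS: pay for the VM around the clock, busy or idle."""
        return n_vms * IAAS_PRICE_PER_HOUR * SECONDS_PER_MONTH / 3600

    for rps in (0.1, 1, 10, 100, 1000):
        print(f"{rps:7.1f} req/s   FaaS: ${faas_monthly_cost(rps):10.2f}"
              f"   IaaS: ${iaas_monthly_cost():8.2f} per month")

With these assumed prices, FaaS costs well under a dollar per month at 1 request/s, but overtakes the always-on VM somewhere between 10 and 100 requests/s, which is exactly the kind of crossover a FaaS benchmarking system needs to locate for real applications.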


FIL project: Stream Edge (2018-2019)

In this project, we propose to set up a distributed Fog/Edge-computing architecture for real-time processing of the large data streams of the Internet of Things (distributed stream processing). The goal is an architecture that processes Internet-of-Things data while running on devices at the edge of the network.


FIL project: LIP/LIRIS DD-AEVOL (2018-2019)

Set-up of a platform for defining and distributing campaigns of artificial-evolution experiments, in collaboration with the Inria Beagle team.


Associated team SUSTAM (2017-2019)

SUSTAM (Sustainable Ultra Scale compuTing, dAta and energy Management) aims at designing a framework for efficient multi-criteria orchestration. The associated team builds on a long-term collaboration between the Inria Avalon team and the RDI2 team (Rutgers University, New Jersey, USA).


Celtic+: Seed4C (2012-2015)

From security in the cloud to security of the cloud. The value of secure elements for protecting software execution on a personal computer or a server no longer needs to be demonstrated. Nowadays, the emergence of cloud computing has led to a growing number of scenarios where one deals not with a single computer but with a group of connected computers. The challenge is then not only to secure the software running on a single machine, but to manage and guarantee the security of a group of computers seen as a single entity. The main idea is to evolve from security in the cloud (with isolated points of enforcement for security, the state of the art) to security of the cloud (with cooperative points of enforcement for security, the innovation proposed by this project).

This value proposition of cooperative points of enforcement is embodied in the concept of Networks of Secure Elements (NoSEs). NoSEs are made of individual secure elements attached to computers, users, or network appliances, possibly pre-provisioned with initial secret keys. They can establish security associations, communicate with each other to set up a trusted network of computers, and propagate centrally defined security conditions to a group of machines. The range of use cases addressed by this concept is very broad: NoSEs can be used to lock the execution of software to a group of specific machines, one particular application being the tying of virtual-machine execution to specific servers. NoSEs can also improve the security of distributed computing, not only by ensuring that only trusted nodes take part in the computation, but also by certifying the integrity of the results each of them returns. Secure elements located in user appliances featuring a user interface (such as a mobile handset) can be part of a NoSE and help secure server-side operations through two-factor authentication.

The project will study the impact of NoSEs on the different layers of the architecture, from hardware to services, in order to define how trust can be propagated from the lower layers to the upper ones. At the lower level, the form factor and physical interfaces of secure elements to the host will be studied, as well as the management of their life cycle. At an upper level, the definition and implementation of security, access-control, and privacy policies involving the secure elements will be specified, along with the middleware solutions that interface with the corresponding functional blocks. Finally, an important part of the project will focus on specific use cases, including those mentioned above, where NoSEs can provide interesting solutions. One particular aspect will address privacy and identity management.

Web Site


ANR SEGI: SPADES (2009-2012)

08-ANR-SEGI-025

Today's emergence of petascale architectures and the evolution of both research grids and computational grids greatly increase the number of potential resources. However, existing infrastructures and access rules do not allow users to take full advantage of these resources.

One key idea of the SPADES project is to propose a non-intrusive but highly dynamic environment able to take advantage of available resources without disturbing their native use. In other words, the SPADES vision is to adapt the desktop-grid paradigm by replacing users at the edge of the Internet with volatile resources. These volatile resources are in fact obtained through batch schedulers, via reservation mechanisms that are limited in time or subject to preemption (best-effort mode).

One of the priorities of SPADES is to support platforms at a very large scale, so petascale environments are particularly targeted. Nevertheless, these next-generation architectures still suffer from a lack of expertise for accurate and relevant use.

One of the SPADES goals is to show how to harness the power of such architectures. Another challenge is to provide a software solution for a service-discovery system able to cope with a highly dynamic platform. This system will be deployed over volatile nodes and must therefore tolerate "failures". Implementing such an experimental development requires an interface with batch submission systems that can make reservations transparently for users, and that can communicate with these batch systems to obtain the information our schedulers require.

SPADES will propose solutions for managing distributed schedulers in desktop-computing environments, within a co-scheduling framework.

Web Site


ANR MDCA: GWENDIA (2007-2009)

ANR-06-MDCA-009

GWENDIA: Grid Workflow Efficient Enactment for Data Intensive Applications

Workflow management is a very active research area that has received special attention from the distributed-computing community over recent years. In many scientific areas, such as the application areas considered in this project, complex data-processing procedures are needed to analyse huge amounts of data. GWENDIA aims at providing efficient workflow-management systems to handle and process large amounts of scientific data on large-scale distributed infrastructures such as grids. It is a multi-disciplinary project gathering researchers in computer science (distributed systems, scheduling) and in the life sciences (medical image analysis, drug discovery). The project objectives are twofold. In computer science, GWENDIA aims at efficiently exploiting distributed infrastructures to deal with the huge and still increasing amount of scientific data acquired in radiology and biology centres. In particular, we will focus on representing and managing large data flows in acceptable time for operators using distributed resources. In the life sciences, GWENDIA aims at dealing with distributed, heterogeneous, and evolving large-scale databases, at representing complex data-analysis procedures that take the medical or biological context into account, and at exploiting computer-science tools to design, at low cost, scientifically challenging experiments with a real impact for the community. This study will be based on two very large scale grid infrastructures: the Grid'5000 French national research infrastructure and the EGEE European production infrastructure.

GWENDIA will provide a workflow-description framework including data-composition operators useful for describing application data flows. It includes the design of workflow-scheduling algorithms optimized for efficiently distributing computation over a grid infrastructure while taking data constraints into account. The scheduling strategies developed will be implemented by reusing existing software components such as the DIET middleware and the MOTEUR workflow manager. This research will be guided by the requirements of two application areas in the life sciences: medical image analysis and in silico drug discovery. Concrete use cases will be implemented and deployed on grid infrastructures in both areas. The GWENDIA project aims at enabling scientific production in both areas by providing transparent access to grid infrastructures for coherently and efficiently processing these data-intensive applications.
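
As a concrete reading of "data-composition operators", the sketch below implements the two classic iteration strategies found in this family of workflow languages: the pairwise "dot" and Cartesian "cross" operators (terminology popularized by Taverna and also used by MOTEUR). The API is a toy one, not GWENDIA's, and the file names are made up.

    # Toy data-composition operators for a data-driven workflow; illustrative only.
    from itertools import product

    def dot(xs, ys):
        """Pairwise composition: the i-th item of xs is processed with the i-th of ys."""
        return list(zip(xs, ys))

    def cross(xs, ys):
        """Cartesian composition: every item of xs is processed with every item of ys."""
        return list(product(xs, ys))

    images = ["scan1.dcm", "scan2.dcm"]   # hypothetical inputs
    methods = ["rigid", "affine"]

    print(dot(images, methods))    # 2 tasks: (scan1, rigid), (scan2, affine)
    print(cross(images, methods))  # 4 tasks: all image/method combinations

The choice of operator changes the number of tasks the engine must schedule, which is why scheduling and data composition are treated together in the project.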

This research project does not directly involve industry. Yet workflow management has been a very active industrial area over the past years, and with the industrial uptake of grid technologies there will probably be significant interest from industry in grid-enabled workflow managers. In particular, INRIA/GRAAL collaborates with IBM, one of the main developers of the BPEL workflow language. The two application areas considered also have concrete social and industrial benefits: automated medical-image analysis is increasingly needed in clinics, and in silico drug discovery is likely to have a huge economic impact, raising strong interest in the pharmaceutical industry.

Web Site


ANR CIGC: LEGO (2005-2009)

The aim of this French project is to provide algorithmic and software solutions for large-scale architectures; our focus is on performance issues. The software component model provides a flexible programming model in which resource-management issues and performance optimizations are handled by the implementation. However, current component technology does not provide the data-management facilities needed for large data on widely distributed platforms, and does not deal efficiently with dynamic behaviors. We chose three applications: ocean-atmosphere numerical simulation, cosmology simulation, and a sparse-matrix solver. We propose to study the following topics: parallel software-component programming; a data-sharing model; network-based data-migration solutions; co-scheduling of CPU, data movement, and I/O bandwidth; and high-performance network support. The Grid'5000 platform provides the ideal environment for testing and validating our approaches.

Web Site


ACI GRID: GridASP

The aim of this project is to validate a NES (Network Enabled Servers) architecture on the French grid with a set of applications in chemistry, physics, electronics, and geology. The project relies on the VTHD (and VTHD++) projects and on the Centre Charles-Hermite for the hardware technology, and on GASP for the software technology.

Web Site


RNTL GASP

The aim of this project is to develop software technology dedicated to applications used in an ASP (Application Service Provider) environment on the grid. The main idea of GASP is to integrate industrial applications in coordination with our other projects (ACI, DIET, etc.).

Web Site


ACI MD - GDS

The objective of the ACI Masse de Données GDS is to decouple computing applications from the management of their associated data, by proposing a data-sharing service for the grid adapted to the constraints of scientific computing. This service essentially aims at providing two properties (a minimal sketch follows the list).

  • Persistence: store data on the grid so that they can be shared and transferred efficiently, and so that computations can be better scheduled given where the data are located.
  • Transparency: relieve applications of the explicit management of data location and transfer.
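
Here is a minimal sketch of these two properties, with hypothetical names (this is not the actual GDS API): the application publishes and reads data through an identifier, while where the data live, and how they move, remain the service's business.

    # Toy stand-in for a grid data-sharing service; all names are hypothetical.
    class DataSharingService:
        def __init__(self):
            self._store = {}      # data_id -> payload   (persistence)
            self._location = {}   # data_id -> node name (hidden from clients)

        def publish(self, data_id, payload, node):
            # The data outlive any single computation: they stay in the service.
            self._store[data_id] = payload
            self._location[data_id] = node

        def read(self, data_id):
            # Transparency: the caller never names a node or issues a transfer.
            return self._store[data_id]

        def locate(self, data_id):
            # The scheduler (not the application) may use locality information.
            return self._location[data_id]

    svc = DataSharingService()
    svc.publish("matrix-A", [[1.0, 2.0], [3.0, 4.0]], node="grid-node-7")
    print(svc.read("matrix-A"))    # no location handling in client code
    print(svc.locate("matrix-A"))  # locality exposed to the scheduler only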

Web Site


ACI TLSE

This project aims at setting up a Web expert site for sparse matrices, including software and a database. Using the DIET approach, the project will allow users to submit expertise requests about the solution of sparse linear systems. A typical request could be "which sparse-matrix reordering heuristic leads to the smallest number of operations for my matrix?" or "which software is the most robust for this test problem?" The project members also include ENSEEIHT-IRIT (coordinator, Toulouse, France), CERFACS (Toulouse, France), and LaBRI (Bordeaux, France).
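
In the spirit of the first example request, here is a small SciPy sketch comparing the bandwidth of a sparse matrix before and after a Reverse Cuthill-McKee reordering, one of the heuristics such a site could evaluate. The matrix is a random stand-in, and bandwidth serves only as a simple proxy for the fill and operation count of a factorization.

    # Compare matrix bandwidth before/after an RCM reordering (illustrative input).
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def bandwidth(m):
        """Largest distance of a nonzero entry from the diagonal."""
        rows, cols = m.nonzero()
        return int(np.abs(rows - cols).max()) if rows.size else 0

    a = sparse_random(200, 200, density=0.02, format="csr", random_state=0)
    a = (a + a.T).tocsr()  # symmetrize: RCM targets symmetric patterns

    perm = reverse_cuthill_mckee(a, symmetric_mode=True)
    reordered = a[perm][:, perm]

    print("bandwidth before RCM:", bandwidth(a))
    print("bandwidth after  RCM:", bandwidth(reordered))

An expert site would run many such heuristics and solvers on the submitted matrix and report which one wins for the user's criterion.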

Web Site


STAR

S.T.A.R.: Science and Technology Amical Relationships. The objective of this cooperation programme is to facilitate and develop high-level scientific and technological cooperation between French and Korean research laboratories. The "GRID Experiments in Aerospace Engineering" project is a collaboration between SNU (Seoul National University) and INRIA.

Web Site


Grid'5000 - ENWEG

Enabling a Nation Wide Experimental Grid (ENWEG): a preparatory study for a national-scale experimental Grid platform (Action Spécifique of CNRS RTP 8).

The project aims at addressing the problems raised by operating a nation-wide grid, by specifying the characteristics and the operating and usage modalities of a purely experimental Grid connecting about ten geographically distributed sites linked by high-speed networks.

Web Site


Grid'5000

This project aims at building an experimental Grid platform gathering 8 sites geographically distributed in France. The main purpose of this platform is to serve as an experimental testbed for research in Grid computing. The project is one initiative of the French ACI Grid Incentive.

Grid'5000 is a research effort developing a large-scale, nation-wide infrastructure for Grid research. Ten laboratories are involved across the country, with the objective of providing the Grid research community with a testbed allowing experiments in all the software layers between the network protocols and the applications.

This highly collaborative research effort is funded by the French Ministry of Education and Research, INRIA, CNRS, the universities of all the sites, and several regional councils.

Web Site


VTHD++

VTHD++: an IP/WDM experimentation platform ("Vraiment Très Haut Débit", truly very high speed) for developing the techniques, applications, and services of the next-generation Internet.

The VTHD++ project aims at enriching the high-speed IP experimentation platform deployed in the VTHD project (1999-2001), in order to develop the technological building blocks needed to deploy second-generation Internet and intranet networks.

The chosen solution closely combines quality-of-service objectives with bandwidth capacity by adopting a disruptive IP/WDM architecture that exploits optical wavelength-division multiplexing while integrating the quality-of-service models being developed for the Internet. The viability of this solution is evaluated in the context of interactive real-time services and advanced data applications. The VTHD++ project aims at contributing significantly to the federation of efforts toward a next-generation Internet.

Web Site


VTHD (1999-2001)

VTHD: an IP/WDM experimentation platform ("Vraiment Très Haut Débit", truly very high speed) for next-generation Internet applications.

The VTHD project aims at deploying a high-speed IP experimentation platform in order to develop the technological building blocks needed to deploy second-generation Internet and intranet networks. The chosen solution closely combines quality-of-service objectives with bandwidth capacity by adopting a disruptive IP/WDM architecture that exploits optical wavelength-division multiplexing while integrating the quality-of-service models being developed for the Internet. The viability of this solution is evaluated in the context of interactive real-time services and advanced data applications. The VTHD project aims at contributing significantly to the federation of efforts toward a next-generation Internet.

Web Site