
Publications

2017

  • Nonnegative Matrix Factorisation for multimodal data analysis
    • Essid Slim
    , 2017.
  • Challenges, Solutions and Implications for Large Scale Enterprise Systems: OR in the Age of Big Data
    • Hudry Olivier
    , 2017.
  • Low profile superstrate using Transformation Optics for semicircular radiation pattern of antenna
    • Joshi Chetan
    • Lepage A. C.
    • Begaud Xavier
    Applied physics. A, Materials science & processing, Springer Verlag, 2017, 123 (2). In this article, a dielectric superstrate inspired by transformation optics is presented. When placed over a patch antenna, this superstrate increases the half-power beam width (HPBW) of a classical patch antenna. An appropriate spatial transformation relation with spatial compression and refractive index shift factors has been used to derive an expression for a dielectric material profile. The wave front exiting from the transformed space is optimized for a semicylindrical shape. Then, a discretized version of this profile has been used to design a cuboidal superstrate. Full-wave simulations are presented that show a superstrate device capable of producing an HPBW of 297° in the H-plane with a peak directivity of 3.2 dBi at the design frequency. The derived solution can be realized using standard dielectric materials for real-world applications. (10.1007/s00339-017-0787-7)
    DOI : 10.1007/s00339-017-0787-7
  • Privacy Preserving Biometric Identity Verification
    • Chollet Gérard
    • Jimenez Abelino
    • Petrovska-Delacrétaz Dijana
    • Raj Bhiksha
    , 2017.
  • Beam steering in quantum cascade lasers with optical feedback
    • Jumpertz Louise
    • Ferré Simon
    • Carras Mathieu
    • Grillot Frédéric
    , 2017.
  • Practical metrics for evaluation of fault-tolerant logic design
    • Stempkovskiy Alexandre
    • Telpukhov Dmitry
    • Solovyev Roman
    • Balaka Ekaterina
    • Naviner Lirida
    , 2017, pp.569-573.
  • Set of tuples expansion by example with reliability
    • Er Ngurah Agus Sanjaya
    • Ba Mouhamadou Lamine
    • Abdessalem Talel
    • Bressan Stéphane
    International Journal of Web Information Systems (IJWIS), 2017, 13 (4), pp.425-444. This paper focuses on the design of algorithms and techniques for effective set expansion. A tool that finds and extracts candidate sets of tuples from the World Wide Web was designed and implemented. For instance, when a user provides a few seed tuples, our system returns tuples composed of countries with their corresponding capital cities and currency names, constructed from content extracted from the retrieved Web pages. The seeds are used to query a search engine and to retrieve relevant Web pages. The seeds are also used to infer wrappers from the retrieved pages. The wrappers, in turn, are used to extract candidates. The Web pages, wrappers, seeds and candidates, as well as their relationships, are the vertices and edges of a heterogeneous graph. Several options for ranking candidates, from PageRank to truth-finding algorithms, were evaluated and compared. Remarkably, all vertices are ranked, thus providing an integrated approach that not only answers direct set expansion questions but also finds the most relevant pages for expanding a given set of seeds. The experimental results show that leveraging the truth-finding algorithm can indeed improve the level of confidence in the extracted candidates and the sources. Current approaches to set expansion mostly support the expansion of sets of atomic data. This idea can be extended to sets of tuples, extracting relation instances from the Web given a handful of seed tuples. A truth-finding algorithm is also incorporated into the approach, and it is shown to improve the confidence level in the ranking of both candidates and sources in set-of-tuples expansion. (10.1108/IJWIS-04-2017-0037)
    DOI : 10.1108/IJWIS-04-2017-0037
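    The ranking step described above can be illustrated with a plain power-iteration PageRank over the heterogeneous graph of pages, wrappers, seeds and candidates. This is a generic sketch, not the authors' implementation: the node names, damping factor, and dangling-node handling below are illustrative choices.

    ```python
    def pagerank(edges, d=0.85, n_iter=100):
        """Power-iteration PageRank over a directed graph given as (src, dst)
        edges; nodes can play any role (page, wrapper, seed, candidate)."""
        nodes = sorted({u for e in edges for u in e})
        out = {u: [] for u in nodes}
        for u, v in edges:
            out[u].append(v)
        n = len(nodes)
        rank = {u: 1.0 / n for u in nodes}
        for _ in range(n_iter):
            new = {u: (1 - d) / n for u in nodes}   # teleport mass
            for u in nodes:
                if out[u]:
                    share = d * rank[u] / len(out[u])
                    for v in out[u]:
                        new[v] += share
                else:                               # dangling node: spread uniformly
                    for v in nodes:
                        new[v] += d * rank[u] / n
            rank = new
        return rank
    ```

    On a toy extraction graph, the page pointed to by the seed and a candidate outranks a leaf candidate, which matches the intuition that all vertex types are ranked jointly.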
  • Optimal Distributed Channel Assignment in D2D Networks Using Learning in Noisy Potential Games
    • Coupechoux Marceau
    , 2017.
  • Security-Aware Modeling and Analysis for HW/SW Partitioning
    • Li Letitia W.
    • Lugou Florian
    • Apvrille Ludovic
    , 2017. The rising wave of attacks on communicating embedded systems has exposed their users to risks of information theft, monetary damage, and personal injury. Through improved modeling and analysis of security, we propose that these flaws could be mitigated. Since HW/SW partitioning, one of the first phases, impacts future integration of security into the system, this phase would benefit from supporting modeling security abstractions and security properties, providing designers with useful partitioning feedback obtained from a security formal analyzer. In this paper, we present how our toolkit supports security modeling, automated security integration, and formal analysis during the HW/SW partitioning phase for secure communications in embedded systems. We introduce "Cryptographic Configurations", an abstract representation of security that allows us to verify security formally. Our toolkit further assists designers by automatically adding these security representations based on a mapping and security requirements.
  • Les radiosciences au service de l’humanité
    • Tanzi Tullio
    • Isnard Jean
    La Revue de l'électricité et de l'électronique, Société de l'Électricité, de l'Électronique et des Technologies de l'Information et de la Communication, 2017, 2017 (5), pp.67-68.
  • Deploy-As-You-Go Wireless Relay Placement: An Optimal Sequential Decision Approach using the Multi-Relay Channel Model
    • Chattopadhyay Arpan
    • Sinha Abhishek
    • Coupechoux Marceau
    • Kumar Anurag
    IEEE Transactions on Mobile Computing, Institute of Electrical and Electronics Engineers, 2017, 16 (2), pp.341 - 354. We use information theoretic achievable rate formulas for the multi-relay channel to study the problem of as-you-go deployment of relay nodes. The achievable rate formulas are for full-duplex radios at the relays and for decode-and-forward relaying. Deployment is done along the straight line joining a source node and a sink node at an unknown distance from the source. The problem is for a deployment agent to walk from the source to the sink, deploying relays as he walks, given the knowledge of the wireless path-loss model, and given that the distance to the sink node is exponentially distributed with known mean. As a precursor to the formulation of the deploy-as-you-go problem, we apply the multi-relay channel achievable rate formula to obtain the optimal power allocation to relays placed along a line, at fixed locations. This permits us to obtain the optimal placement of a given number of nodes when the distance between the source and sink is given. Numerical work for the fixed source-sink distance case suggests that, at low attenuation, the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that the effect of path-loss can be entirely mitigated if a large enough number of relays are placed uniformly between the source and the sink. The structure of the optimal power allocation for a given placement of the nodes, then motivates us to formulate the problem of as-you-go placement of relays along a line of exponentially distributed length, and with the exponential path-loss model, so as to minimize a cost function that is additive over hops. The hop cost trades off a capacity limiting term, motivated from the optimal power allocation solution, against the cost of adding a relay node. 
We formulate the problem as a total cost Markov decision process, establish results for the value function, and provide insights into the placement policy and the performance of the deployed network via numerical exploration.
  • A State of the Art of Drone (In)Security
    • Roudier Yves
    • Tanzi Tullio Joseph
    , 2017.
  • Simulation and design of a multistage 10W Thulium-doped double clad silica fiber amplifier at 2050nm
    • Romano Clément
    • Tench Robert E
    • Jaouën Yves
    • Williams Glen M
    , 2017. A careful comparison of experiment and theory is important both for basic research and for the systematic engineering design of Thulium fiber amplifiers operating in the 2 µm region for applications such as LIDAR or spectroscopy (e.g. CO2 atmospheric absorption at 2051.4 nm). In this paper we report the design and performance of a multistage high-power PM Tm-doped fiber amplifier, cladding-pumped at 793 nm. The design is the result of a careful comparison of numerical simulation, based on a three-level model including ion-ion interactions, with experiment. Our simulation model is based on precise measurements of the cross sections and other parameters for both 6 and 10 µm core diameter fibers. Good agreement for several single- and multistage amplifier topologies and operating conditions is presented. Origins of the differences between theory and experiment are discussed, with emphasis on the accuracy of the cross sections and the cross-relaxation parameters. Finally, based on our simulation tool, we demonstrate a design with an output power greater than 10 W for a multistage amplifier with a single-frequency signal at 2050 nm. The power stage was constructed with a 6 µm active fiber showing a 64 % optical slope efficiency. The output power is found to be within 5 % of the simulated results and is limited only by the available launched pump power of ~24 W. No stimulated Brillouin scattering is observed at the highest output power level for a well-thermalized active fiber.
  • Un Modèle de Factorisation de Poisson pour la Recommandation de Points d'Intérêt
    • Griesner Jean-Benoît
    • Abdessalem Talel
    • Naacke Hubert
    , 2017, pp.411-416. The explosion of data volumes circulating on location-based social networks (LBSNs) makes it possible to extract users' preferences. In particular, these preferences can be used to recommend points of interest that match a user's profile. Today, point-of-interest recommendation has become an essential component of LBSNs. Unfortunately, traditional recommendation methods fail to adapt to the constraints specific to LBSNs, such as very high data sparsity, or to take geographical influence into account. In this paper we present a recommendation model based on Poisson factorisation that offers an effective answer to these constraints. We evaluated our model through experiments on a realistic dataset from the LBSN Foursquare, which showed better recommendation quality than three state-of-the-art models.
  • Formal Specification and Verification of Security Guidelines
    • Zhioua Zeineb
    • Roudier Yves
    • Ameur-Boulifa R.
    , 2017, pp.267--273. (10.1109/PRDC.2017.51)
    DOI : 10.1109/PRDC.2017.51
  • Self-Healing Umbrella Sampling: Convergence and efficiency
    • Fort Gersende
    • Jourdain Benjamin
    • Lelièvre Tony
    • Stoltz Gabriel
    Statistics and Computing, Springer Verlag (Germany), 2017, 27 (1), pp.147–168. The Self-Healing Umbrella Sampling (SHUS) algorithm is an adaptive biasing algorithm which has been proposed to efficiently sample a multimodal probability measure. We show that this method can be seen as a variant of the well-known Wang-Landau algorithm. Adapting results on the convergence of the Wang-Landau algorithm, we prove the convergence of the SHUS algorithm. We also compare the two methods in terms of efficiency. We finally propose a modification of the SHUS algorithm in order to increase its efficiency, and exhibit some similarities of SHUS with the well-tempered metadynamics method. (10.1007/s11222-015-9613-2)
    DOI : 10.1007/s11222-015-9613-2
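    The adaptive-biasing idea shared by SHUS and Wang-Landau can be illustrated on a toy discrete state space: every visit raises a penalty on the current state, so a strongly multimodal target ends up being visited almost uniformly. This is only a schematic sketch, not the algorithm analysed in the paper; the energy function, uniform proposal, and step-size schedule below are illustrative choices.

    ```python
    import math
    import random

    def wang_landau(energy, states, n_steps=200000, seed=0):
        """Toy Wang-Landau-style adaptive biasing on a discrete state space:
        each visit raises the log-bias theta of the current state, so the
        biased chain eventually visits rare states as often as likely ones."""
        rng = random.Random(seed)
        theta = {s: 0.0 for s in states}        # adaptive log-bias
        counts = {s: 0 for s in states}
        s = states[0]
        for t in range(1, n_steps + 1):
            s_new = rng.choice(states)          # uniform independence proposal
            # Metropolis ratio of the biased density exp(-E(s) - theta(s))
            log_a = (-energy(s_new) - theta[s_new]) - (-energy(s) - theta[s])
            if log_a >= 0 or math.log(rng.random() + 1e-300) < log_a:
                s = s_new
            gamma = 1.0 / (1.0 + t / 100.0)     # decreasing adaptation step
            theta[s] += gamma                   # penalise the visited state
            counts[s] += 1
        return theta, counts
    ```

    With a bimodal energy (two low-energy states among ten), the visit histogram flattens: the learned bias roughly cancels the energy differences.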
  • Versatile and efficient mixed–criticality scheduling for multi-core processors
    • Gratia Romain
    , 2017. This thesis focuses on the scheduling of mixed-criticality systems on multi-processors. The correctness of the execution of real-time applications is ensured by a scheduler and is checked during the design phase. The sizing of the execution platform aims at minimising the number of processors required to ensure this correct scheduling. This sizing is impacted by the safety requirements: these requirements tend to overestimate the execution times of the applications in order to guarantee their correct execution. Consequently, the resulting sizing is costly. Mixed-criticality scheduling theory aims at proposing compromises on the guarantees given to applications in order to reduce this over-sizing. Several models of mixed-criticality systems offering different compromises have been proposed, but all are based on the use of execution modes. Modes are ordered, and tasks have non-decreasing execution times in each mode. Yet, to reduce the sizing of the execution platform, only the execution of the most critical tasks is ensured; this model is called the discarding model. For simplicity, most mixed-criticality scheduling algorithms are limited to this model. Besides, the most efficient scheduling policies for multi-processors entail too many preemptions and migrations to be used in practice, and they are rarely generalised to handle different models of mixed-criticality systems. However, the handling of more than two execution modes, or of tasks with elastic periods, would make such solutions more attractive to industry. The approach proposed in this thesis is based on a separation of concerns between handling the execution modes and scheduling the tasks on the multi-processors. With this approach, we are able to design an efficient scheduling policy that handles different models of mixed-criticality systems.
It consists in transforming a mixed-criticality task set into a non-mixed-criticality one, which we then schedule using an optimal hard real-time scheduling algorithm that entails few preemptions and migrations: RUN. We first apply our approach to the discarding model with two execution modes; the results show the efficiency of our approach for this model. Then, we demonstrate its versatility by scheduling systems of the discarding model with more than two execution modes. Finally, by using a method based on the decomposition of task execution, our approach can also schedule systems based on elastic tasks.
  • Best of both worlds
    • Diamanti Eleni
    • Kashefi Elham
    Nature Physics, Nature Publishing Group, 2017, 13 (1), pp.3-4. Secure communication is emerging as a significant challenge for our hyper-connected data-dependent society. The answer may lie in a clever combination of quantum and classical cryptographic techniques. (10.1038/nphys3972)
    DOI : 10.1038/nphys3972
  • CLEAR: Covariant LEAst-Square Refitting with Applications to Image Restoration
    • Deledalle Charles-Alban
    • Papadakis Nicolas
    • Salmon Joseph
    • Vaiter Samuel
    SIAM Journal on Imaging Sciences, Society for Industrial and Applied Mathematics, 2017, 10 (1), pp.243-284. In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization, we develop an approach re-fitting the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. Then, we provide an approach that has a "twicing" flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. We illustrate the benefits of our method on numerical simulations for image restoration tasks. (10.1137/16M1080318)
    DOI : 10.1137/16M1080318
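    The "twicing" flavor mentioned in the abstract goes back to Tukey: smooth the data, then smooth the residual and add it back. A minimal sketch with a boxcar smoother (an illustrative stand-in, not the paper's covariant refitting operator):

    ```python
    def moving_average(y, k=2):
        """Simple boxcar smoother with half-width k (windows truncated at edges)."""
        n = len(y)
        out = []
        for i in range(n):
            lo, hi = max(0, i - k), min(n, i + k + 1)
            out.append(sum(y[lo:hi]) / (hi - lo))
        return out

    def twicing(y, smoother):
        """Tukey's twicing: add the smoothed residual back to the estimate,
        the simplest instance of re-fitting a restored signal towards the data."""
        f = smoother(y)
        r = [yi - fi for yi, fi in zip(y, f)]       # residual y - f(y)
        return [fi + ri for fi, ri in zip(f, smoother(r))]
    ```

    On a smooth signal the boxcar is biased (it flattens curvature); adding back the smoothed residual removes most of that bias in the interior, which is the effect the refitting framework generalizes.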
  • Les facettes de l'Open Data : émergence, fondements et travail en coulisses
    • Denis Jérôme
    • Goëta Samuel
    , 2017, pp.121-138. In this chapter, we revisit the emergence of open data policies, highlighting the main principles (from transparency to the modernisation of government, via the free circulation of information) that various initiatives have progressively stabilised to make the opening of public data an international issue. Drawing on an ethnographic study conducted in several French institutions, we then show what this opening concretely involves: delicate work that remains largely invisible and represents the hidden cost of the founding principles of open data.
  • Closed-form expressions of the eigen decomposition of 2 x 2 and 3 x 3 Hermitian matrices
    • Deledalle Charles-Alban
    • Denis Loic
    • Tabti Sonia
    • Tupin Florence
    , 2017. The eigen decomposition of covariance matrices is at the core of many data analysis techniques. The study of 2-component or 3-component vector fields typically requires computing numerous eigen decompositions of 2 x 2 or 3 x 3 matrices. This is, for example, the case in the analysis of interferometric or polarimetric SAR images, see the MuLoG algorithm (https://hal.archives-ouvertes.fr/hal-01388858). The closed-form expression of eigenvalues and eigenvectors then provides a way to derive faster data processing algorithms. This note gives these expressions in the general case (special cases where some coefficients are zero, or where the eigenvalues are not separated, may not be covered and then require either introducing a small perturbation of the initial matrix or deriving other expressions).
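    As a quick illustration of why such closed forms are useful, the 2 x 2 Hermitian case can be coded directly. This is a sketch for the well-separated, non-diagonal case; the note itself should be consulted for the full 3 x 3 expressions and the degenerate cases.

    ```python
    import math

    def eig2x2_hermitian(a, b, d):
        """Closed-form eigen decomposition of [[a, b], [conj(b), d]],
        with a, d real and b possibly complex (Hermitian matrix)."""
        m = (a + d) / 2.0                       # mean of the diagonal
        r = math.hypot((a - d) / 2.0, abs(b))   # sqrt(((a-d)/2)^2 + |b|^2)
        lam1, lam2 = m + r, m - r               # eigenvalues, lam1 >= lam2
        if abs(b) == 0.0:                       # already diagonal
            v1, v2 = ((1, 0), (0, 1)) if a >= d else ((0, 1), (1, 0))
            return (lam1, lam2), (v1, v2)
        # Unit eigenvector for lam1 solves (a - lam1) x + b y = 0
        v1 = (b, lam1 - a)
        n1 = math.sqrt(abs(v1[0]) ** 2 + abs(v1[1]) ** 2)
        v1 = (v1[0] / n1, v1[1] / n1)
        # v2, the eigenvector for lam2, is orthogonal to v1
        v2 = (-complex(v1[1]).conjugate(), complex(v1[0]).conjugate())
        return (lam1, lam2), (v1, v2)
    ```

    No iterative solver is involved, which is exactly what makes batched per-pixel eigen decompositions fast.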
  • Rate Allocation in Predictive Video Coding Using a Convex Optimization Framework
    • Fiengo Aniello
    • Chierchia Giovanni
    • Cagnazzo Marco
    • Pesquet-Popescu Béatrice
    IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers, 2017, 26 (1), pp.479-489. Optimal rate allocation is among the most challenging tasks in predictive video coding, because of the dependencies between frames induced by motion compensation. In this paper, using a recursive rate-distortion model that explicitly takes these dependencies into account, we approach frame-level rate allocation as a convex optimization problem. This technique is integrated into the recent HEVC encoder and tested on several standard sequences. Experiments indicate that the proposed rate allocation ensures better performance (in the rate-distortion sense) than the standard HEVC rate control, with a small loss w.r.t. an optimal exhaustive search that is largely compensated by a much shorter execution time. (10.1109/TIP.2016.2621666)
    DOI : 10.1109/TIP.2016.2621666
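    For intuition about convex rate allocation, the classical dependency-free version of the problem, independent Gaussian sources with distortion var_i * 2^(-2*R_i) under a total-rate budget, has the textbook reverse water-filling solution, which a short bisection on the water level finds. This is a standard illustration, not the HEVC-integrated method of the paper.

    ```python
    import math

    def allocate_rates(variances, R_total):
        """Reverse water-filling: minimise sum_i var_i * 2^(-2 R_i)
        subject to sum_i R_i = R_total and R_i >= 0."""
        def rates(theta):
            # Sources with variance below the water level theta get zero rate.
            return [max(0.0, 0.5 * math.log2(v / theta)) for v in variances]
        lo, hi = 1e-12, max(variances)
        for _ in range(200):              # bisection on theta
            mid = math.sqrt(lo * hi)      # geometric mean: theta spans decades
            if sum(rates(mid)) > R_total:
                lo = mid                  # too much rate: raise the water level
            else:
                hi = mid
        return rates(math.sqrt(lo * hi))
    ```

    At the optimum all active sources sit at the same distortion theta, which is the equal-marginal-cost condition a Lagrangian (convex-duality) argument yields.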
  • Top-k Querying of Unknown Values under Order Constraints (Extended Version)
    • Amarilli Antoine
    • Amsterdamer Yael
    • Milo Tova
    • Senellart Pierre
    , 2017. Many practical scenarios make it necessary to evaluate top-k queries over data items with partially unknown values. This paper considers a setting where the values are taken from a numerical domain, and where some partial order constraints are given over known and unknown values: under these constraints, we assume that all possible worlds are equally likely. Our work is the first to propose a principled scheme to derive the value distributions and expected values of unknown items in this setting, with the goal of computing estimated top-k results by interpolating the unknown values from the known ones. We study the complexity of this general task, and show tight complexity bounds, proving that the problem is intractable, but can be tractably approximated. We then consider the case of tree-shaped partial orders, where we show a constructive PTIME solution. We also compare our problem setting to other top-k definitions on uncertain data.
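    In the special case of a chain (the simplest tree-shaped partial order), interpolating unknown values under the equally-likely-worlds assumption reduces to evenly spaced expectations: k unknowns between known bounds a < b behave like uniform order statistics, with expectations a + i*(b-a)/(k+1). A toy sketch, assuming the chain's endpoints are known (the paper's general PTIME construction handles arbitrary trees):

    ```python
    def interpolate_chain(values):
        """Expected values of unknown items (None) on a totally ordered chain,
        assuming values are increasing and the unknowns between two known
        values a < b are distributed as uniform order statistics on [a, b]."""
        out = list(values)
        n = len(out)
        i = 0
        while i < n:
            if out[i] is None:
                j = i
                while j < n and out[j] is None:
                    j += 1                       # find the run of unknowns
                a, b = out[i - 1], out[j]        # known bounds (endpoints assumed known)
                k = j - i
                for t in range(k):
                    out[i + t] = a + (t + 1) * (b - a) / (k + 1)
                i = j
            else:
                i += 1
        return out
    ```

    Ranking items by these expected values then gives an estimated top-k directly.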
  • Règles d'Associations Temporelles de signaux sociaux pour la synthèse d'Agents Conversationnels Animés : Application aux attitudes sociales
    • Janssoone Thomas
    • Clavel Chloé
    • Bailly Kevin
    • Richard Gael
    Revue des Sciences et Technologies de l'Information - Série RIA : Revue d'Intelligence Artificielle, Lavoisier, 2017. To improve interaction between humans and embodied conversational agents (ECAs), one of the field's major challenges is to generate socially believable agents. In this article we present a method, called SMART for Social Multimodal Association Rules with Timing, capable of automatically finding temporal associations between the use of social signals (head movements, facial expressions, prosody, etc.) in videos of interactions between humans expressing different affective states (behaviour, attitude, emotions, etc.). Our system is based on a sequence mining algorithm that finds temporal association rules between social signals automatically extracted from audio-video streams. SMART also analyses the link between these rules and each affective state, so as to keep only the relevant ones. Finally, SMART enriches the rules so that an ECA can easily be animated to express the desired state. In this paper we formalise the implementation of SMART and justify its interest through several studies. First, we show that the computed rules agree with the psychology and sociology literature. We then present the results of perceptive evaluations conducted on corpora featuring the expression of marked social attitudes. ABSTRACT. In the field of Embodied Conversational Agent (ECA) one of the main challenges is to generate socially believable agents. The long run objective of the present study is to infer rules for the multimodal generation of agents' socio-emotional behaviour. In this paper, we introduce the Social Multimodal Association Rules with Timing (SMART) algorithm.
It proposes to learn the rules from the analysis of a multimodal corpus composed of audio-video recordings of human-human interactions. The proposed methodology consists in applying a sequence mining algorithm using automatically extracted social signals such as prosody, head movements and facial muscle activation as an input. This allows us to infer temporal association rules for behaviour generation. We show that this method can automatically compute temporal association rules coherent with prior results found in the literature, especially in the psychology and sociology fields. The results of a perceptive evaluation confirm the ability of an agent based on temporal association rules to express a specific stance. (10.3166/RIA.31.511-537)
    DOI : 10.3166/RIA.31.511-537
  • Trends in Social Network Analysis - Information Propagation, User Behavior Modeling, Forecasting, and Vulnerability Assessment
    • Missaoui Rokia
    • Abdessalem Talel
    • Latapy Mathieu
    , 2017, pp.255. The book collects contributions from experts worldwide addressing recent scholarship in social network analysis, such as influence spread, link prediction, dynamic network biclustering, and delurking. It covers both new topics and new solutions to known problems. The contributions rely on established methods and techniques in graph theory, machine learning, stochastic modelling, user behavior analysis and natural language processing, to name a few. This text provides an understanding of how to use such methods and techniques to manage practical problems and situations. Trends in Social Network Analysis: Information Propagation, User Behavior Modelling, Forecasting, and Vulnerability Assessment appeals to students, researchers, and professionals working in the field. (10.1007/978-3-319-53420-6)
    DOI : 10.1007/978-3-319-53420-6