
Publications

 

The publications of our faculty researchers are on the HAL platform:

 

The doctoral thesis publications of LTCI PhD graduates are on the HAL platform:

 

Find the publications in the HAL open archive by year:

2019

  • A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks
    • Şimşekli Umut
    • Sagun Levent
    • Gurbuzbalaban Mert
    , 2019. The gradient noise (GN) in the stochastic gradient descent (SGD) algorithm is often considered to be Gaussian in the large-data regime by assuming that the classical central limit theorem (CLT) kicks in. This assumption is often made for mathematical convenience, since it enables SGD to be analyzed as a stochastic differential equation (SDE) driven by a Brownian motion. We argue that the Gaussianity assumption might fail to hold in deep learning settings and hence render the Brownian motion-based analyses inappropriate. Inspired by non-Gaussian natural phenomena, we consider the GN in a more general context and invoke the generalized CLT (GCLT), which suggests that the GN converges to a heavy-tailed α-stable random variable. Accordingly, we propose to analyze SGD as an SDE driven by a Lévy motion. Such SDEs can incur 'jumps', which force the SDE to transition from narrow minima to wider minima, as proven by existing metastability theory. To validate the α-stable assumption, we conduct extensive experiments on common deep learning architectures and show that in all settings, the GN is highly non-Gaussian and exhibits heavy tails. We further investigate the tail behavior in varying network architectures and sizes, loss functions, and datasets. Our results open up a different perspective and shed more light on the belief that SGD prefers wide minima.
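The heavy-tail behavior this abstract invokes can be illustrated numerically: samples from an α-stable law with α < 2 produce far more extreme deviations than Gaussian samples of comparable scale. A minimal sketch using SciPy (the value α = 1.5 and the 5-unit threshold are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)

# Draw samples from a symmetric alpha-stable law (alpha < 2 => heavy tails)
# and from a standard Gaussian for comparison.
n = 100_000
stable_samples = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=rng)
gauss_samples = norm.rvs(size=n, random_state=rng)

# Fraction of samples beyond 5 standard units: essentially zero for the
# Gaussian, but substantial for the stable law (power-law tail ~ x^-alpha).
stable_tail = np.mean(np.abs(stable_samples) > 5)
gauss_tail = np.mean(np.abs(gauss_samples) > 5)
```

The contrast in tail mass is the qualitative signature the paper tests for in real gradient noise.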
  • Modeling the trade-off between security and performance to support the product life cycle
    • Fujdiak Radek
    • Blažek Petr
    • Apvrille Ludovic
    • Martinasek Zdenek
    • Mlynek Petr
    • Pacalet Renaud
    • Smekal David
    • Mrnustik Pavel
    • Barabas Maros
    • Zoor Maysam
    , 2019. Nowadays, the development of products for modern cyber-physical systems consists of many stages defined by the product life cycle (PLC). However, many manufacturers pay little attention, if any at all, to each PLC stage. This, among other factors, drives up development costs, making the first stage of the PLC crucial. Moreover, a significant part of the development costs can be saved by testing the required parameters in this early stage, e.g., via modeling tools, simulation tools or emulators. Considering, among other things, current cyber-warfare and the ever-growing number of threats, security is becoming one of the most critical topics in the PLC. However, security aspects come with significant performance trade-offs. This paper focuses on a methodology for dealing with these trade-offs via simulation in the early stage of the PLC, where basic requirements are settled. To establish security requirements, an extensive Secure Software Development Life Cycle catalog is used together with TTool, an advanced UML/SysML-Sec-based modeling framework, for performance trade-off analysis. This combination creates a powerful approach for establishing the balance between security and performance requirements. As an example, a particular security requirement is selected: confidentiality, fulfilled by the AES encryption algorithm. This introduces the methodology and approach to the co-engineering issue in the PLC stages, where two development teams with different goals (security and performance) jointly address a single combined issue. Our results should help to convey the importance of the early PLC stage and show one possible approach to dealing with these issues.
  • Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization
    • Nguyen Thanh Huy
    • Şimşekli Umut
    • Richard Gael
    , 2019. Recent studies on diffusion-based sampling methods have shown that Langevin Monte Carlo (LMC) algorithms can be beneficial for non-convex optimization, and rigorous theoretical guarantees have been proven for both asymptotic and finite-time regimes. Algorithmically, LMC-based algorithms resemble the well-known gradient descent (GD) algorithm, where the GD recursion is perturbed by an additive Gaussian noise whose variance has a particular form. Fractional Langevin Monte Carlo (FLMC) is a recently proposed extension of LMC, where the Gaussian noise is replaced by a heavy-tailed α-stable noise. As opposed to its Gaussian counterpart, these heavy-tailed perturbations can incur large jumps, and it has been empirically demonstrated that the choice of α-stable noise can provide several advantages in modern machine learning problems, in both optimization and sampling contexts. However, as opposed to LMC, only asymptotic convergence properties of FLMC have been established so far. In this study, we analyze the non-asymptotic behavior of FLMC for non-convex optimization and prove finite-time bounds for its expected suboptimality. Our results show that the weak error of FLMC increases faster than that of LMC, which suggests using smaller step sizes in FLMC. We finally extend our results to the case where the exact gradients are replaced by stochastic gradients and show that similar results hold in this setting as well.
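The FLMC recursion described above replaces the Gaussian perturbation of LMC with α-stable noise. A hedged sketch of one such iteration on a toy quadratic objective (the step size, α, and objective are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)

def flmc_step(x, grad, eta, alpha):
    # One Euler step of an SDE driven by alpha-stable Levy motion:
    # x_{k+1} = x_k - eta * grad(x_k) + eta^(1/alpha) * S,  S ~ alpha-stable.
    # Note the eta^(1/alpha) scaling, versus sqrt(eta) in Gaussian LMC.
    noise = levy_stable.rvs(alpha=alpha, beta=0.0, random_state=rng)
    return x - eta * grad(x) + eta ** (1.0 / alpha) * noise

grad = lambda x: 2.0 * x   # gradient of f(x) = x^2
x = 5.0
for _ in range(500):
    x = flmc_step(x, grad, eta=0.01, alpha=1.7)
```

The occasional large jumps produced by the heavy-tailed noise are what allow the iterate to escape narrow basins, while the contracting gradient term keeps it stable.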
  • Robust Segmentation of Corpus Callosum in Multi-Scanner Pediatric T1-w MRI Using Transfer Learning
    • La Barbera Giammarco
    • Bloch Isabelle
    • Barraza Gonzalo
    • Adamsbaum Catherine
    • Gori Pietro
    , 2019.
  • SGD: General Analysis and Improved Rates
    • Gower Robert M
    • Loizou Nicolas
    • Qian Xun
    • Sailanbayev Alibek
    • Shulgin Egor
    • Richtárik Peter
    , 2019. We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form minibatches. This is the first time such an analysis is performed, and most of our variants of SGD were never explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not rely on a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime.
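The "arbitrary sampling" idea can be sketched concretely: SGD on a least-squares objective where data point i is drawn with its own probability p_i, and the gradient estimate is reweighted by 1/(n p_i) to stay unbiased. The problem instance and the importance-sampling law below are illustrative, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares problem: minimize (1/n) * sum_i (a_i . x - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true   # interpolation regime: zero gradient variance at the optimum

# Arbitrary sampling: pick row i with probability p_i (here ~ ||a_i||^2),
# then reweight by 1/(n * p_i) to keep the gradient estimate unbiased.
p = np.sum(A ** 2, axis=1)
p /= p.sum()

def loss(x):
    return np.mean((A @ x - b) ** 2)

x = np.zeros(d)
step = 0.01
for _ in range(5000):
    i = rng.choice(n, p=p)
    grad_i = 2.0 * (A[i] @ x - b[i]) * A[i]   # gradient of the i-th term
    x -= step * grad_i / (n * p[i])           # unbiased reweighting
```

With zero variance at the minimum, the iterate converges to the solution, consistent with the "optimal mini-batch size is one" regime described in the abstract.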
  • Representation of Surfaces with Normal Cycles. Application to Surface Registration
    • Roussillon Pierre
    • Glaunès Joan
    Journal of Mathematical Imaging and Vision, Springer Verlag, 2019, 61, pp.1069–1095. In this paper, we present a framework for computing dissimilarities between surfaces based on the mathematical model of normal cycles from geometric measure theory. This model takes into account all the curvature information of the surface without explicitly computing it. By defining kernel metrics on normal cycles, we define explicit distances between surfaces that are sensitive to curvature. This mathematical framework also has the advantage of encompassing both continuous and discrete (triangulated) surfaces. We then use this distance as a data attachment term for shape matching, using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework for modeling deformations. We also present an efficient numerical implementation of this problem in PyTorch, using the KeOps library, which allows both the use of automatic differentiation tools and parallelized GPU computation without memory overflow. We show that this method scales to data with up to a million points, and we present several examples on surfaces, comparing the results with those obtained with the similar varifold framework.
  • Video inpainting and semi-supervised object removal
    • Le Thuc Trinh
    , 2019. Nowadays, the rapid growth of video content creates a massive demand for video-based editing applications. In this dissertation, we solve several problems related to video post-processing, with a focus on object removal in video. To complete this task, we divide it into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. In addition, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application. For the video inpainting problem, we divide our work into two categories based on the type of background. In particular, we present a simple motion-guided pixel propagation method to deal with static background cases. We show that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method that optimizes a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm. To improve accuracy, we systematically incorporate the optical flow in the overall process. We end up with a video inpainting method which is able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects in videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited. This annotation process is facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method. One or several objects can then be removed automatically using our video inpainting methods. This results in a flexible computational video editing tool with numerous potential applications, ranging from crowd suppression to the correction of unphysical scenes.
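For the static-background case the thesis abstract mentions, removal essentially reduces to borrowing each occluded pixel from frames where it is visible. A toy sketch of this idea (the median-over-visible-frames rule is a drastic simplification of the motion-guided propagation in the thesis):

```python
import numpy as np

def fill_static_background(frames, masks):
    """Fill masked (object) pixels using the pixels visible at the same
    location in other frames; valid only for a static background."""
    frames = np.asarray(frames, dtype=float)
    masks = np.asarray(masks, dtype=bool)
    filled = frames.copy()
    T = len(frames)
    for t in range(T):
        for y, x in zip(*np.nonzero(masks[t])):
            visible = [frames[s, y, x] for s in range(T) if not masks[s, y, x]]
            if visible:  # leave the pixel untouched if it is never visible
                filled[t, y, x] = np.median(visible)
    return filled

# Tiny demo: a 2-frame, 2x2 "video" whose background is 1.0 everywhere,
# with an object pixel (value 9.0) masked out in frame 0.
demo = np.ones((2, 2, 2))
demo[0, 0, 0] = 9.0
obj = np.zeros((2, 2, 2), dtype=bool)
obj[0, 0, 0] = True
restored = fill_static_background(demo, obj)
```

Dynamic backgrounds break this assumption, which is why the thesis switches to patch-based energy optimization in that case.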
  • Analysing the Impact of Cross-Content Pairs on Pairwise Comparison Scaling
    • Zerman Emin
    • Valenzise Giuseppe
    • Smolic Aljosa
    , 2019, pp.1-6. The pairwise comparison (PWC) methodology is one of the most commonly used methods for subjective quality assessment, especially for computer graphics and multimedia applications. Unlike rating methods, a psychometric scaling operation is required to convert PWC results into numerical subjective quality values. Due to the nature of this scaling operation, the obtained quality scores are relative to the set they are computed in. While it is customary to compare different versions of the same content, in this work we study how cross-content comparisons may benefit psychometric scaling. For this purpose, we use two different video quality databases which have both rating and PWC experiment results. The results show that although same-content comparisons play a major role in the accuracy of psychometric scaling, using a small portion of cross-content comparison pairs is indeed beneficial to obtain more accurate quality estimates. (10.1109/QoMEX.2019.8743295)
    DOI : 10.1109/QoMEX.2019.8743295
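The psychometric scaling step mentioned in this abstract can be sketched with Thurstone Case V scaling: win proportions from the pairwise comparison matrix are passed through the inverse normal CDF and averaged. The count matrix below is made up for illustration, and the paper's scaling procedure may differ in detail:

```python
import numpy as np
from scipy.stats import norm

# counts[i, j] = number of times condition i was preferred over condition j
# (an invented 3-condition experiment).
counts = np.array([[0.0, 8.0, 9.0],
                   [2.0, 0.0, 7.0],
                   [1.0, 3.0, 0.0]])

def thurstone_case_v(counts, eps=0.5):
    """Thurstone Case V scaling: average inverse-normal-CDF win rates."""
    total = counts + counts.T
    # win proportions, with a small prior to keep probabilities off 0 and 1
    p = (counts + eps) / (total + 2.0 * eps)
    np.fill_diagonal(p, 0.5)
    z = norm.ppf(p)
    return z.mean(axis=1)   # relative quality scores (arbitrary origin)

scores = thurstone_case_v(counts)
```

The scores are only defined relative to the compared set, which is exactly why the paper studies whether cross-content pairs can anchor the scale better.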
  • Apprendre à programmer via des interfaces en ligne, quel sens pour la présence ?
    • Hamonic Ella
    • Sharrock Rémi
    , 2019. This experience report presents the lessons learned from integrating two MOOCs dedicated to learning computer programming into the in-person teaching of an undergraduate degree program.
  • Problem Statement for Secure End to End Privacy in IdLoc Systems
    • von Hugo Dirk
    • Sarikaya Behcet
    • Iannone Luigi
    • Petrescu Alexander
    • Sun Kyoungjae
    • Fattore Umberto
    Internet Engineering Task Force, IETF, 2019. Efficient and service-aware flexible end-to-end routing in future communication networks is achieved by routing protocol approaches making use of Identifier-Locator separation systems. Since these systems require a correlation between identifiers and locations, which might allow tracking and misuse of individuals' identities and locations, such operation demands strong measures to preserve the privacy of users and devices. This document tries to identify and describe typical use cases and to derive from them a problem statement describing issues and challenges for the application of privacy-preserving Identifier-Locator split (PidLoc) approaches.
  • Chainage de vos Virtual Private Clouds avec Segment Routing
    • Spinelli Francesco
    • Iannone Luigi
    • Tollet Jerome
    , 2019. In recent years, alongside Cloud Computing, Network Function Virtualization (NFV) has become one of the most interesting paradigms in the ICT world, giving rise to unexplored scenarios and new features such as Service Chaining, i.e., the capability of steering packets through a sequence of services on their way to the destination. These services can also be located in different public cloud regions, allowing a multi-cloud approach to be exploited for the first time. The new Segment Routing protocol for IPv6 (SRv6) is an effective way to chain services. In this context, we have started to study how to create a multi-cloud setup in Amazon Web Services in order to provide service chaining. As a first step, we deployed, inside an Amazon Virtual Private Cloud, a Segment Routing-compatible software router called Vector Packet Processing (VPP). We then automated its deployment with Terraform, an orchestration tool. Next, taking advantage of our scripts, we repeated the same configuration in different Amazon regions, connecting them together via SRv6. Finally, we launched a first extensive measurement campaign, examining in particular the impact of VPP and SRv6 on the overall performance of Amazon Web Services.
  • Characterization at Logical Level of Magnetic Injection Probes
    • Trabelsi Oualid
    • Sauvage Laurent
    • Danger Jean-Luc
    , 2019. Intentional electromagnetic interference is an effective means to jeopardize the security of integrated circuits. In this paper, we propose a new approach to evaluate the efficiency of magnetic probes used to radiate a disturbance: measuring their impact within the target of the attack, more precisely on the propagation delay of a combinational path. The characterization of five probes, carried out using three different integrated circuits, is reported. In all cases, bespoke handmade probes outperform commercial ones. Experimental results also show that the electromagnetic coupling between the probes and the integrated circuits is mainly due to global bonding wires.
  • Chaining your Virtual Private Clouds with Segment Routing
    • Spinelli Francesco
    • Iannone Luigi
    • Tollet Jerome
    , 2019. In recent years, next to Cloud Computing, Network Function Virtualization (NFV) has emerged as one of the most interesting paradigms in the ICT world, leading to unexplored scenarios and new features such as Service Chaining. The latter is the capability of steering packets through a sequence of services on their way to the destination. These services could also be located inside different public cloud regions, hence giving us the possibility of exploiting a multi-cloud approach for the first time. One way to actually perform Service Chaining is through the new Segment Routing protocol for IPv6. Within this context we have started to investigate how to create a multi-cloud configuration inside Amazon Web Services to provide Service Chaining. As an initial step, we have deployed, inside an Amazon Virtual Private Cloud, a software router compatible with Segment Routing called Vector Packet Processing (VPP). Then, we have automated its deployment with Terraform, a cloud orchestration tool. Afterwards, taking advantage of our scripts, we have repeated the same configuration in different Amazon Regions, connecting them together through the Segment Routing protocol. Finally, we started a first extensive measurement campaign, looking in particular at how the presence of VPP and Segment Routing affects the overall performance inside Amazon Web Services.
  • Querying the Edit History of Wikidata
    • Pellissier Tanon Thomas
    • Suchanek Fabian M.
    , 2019, 11762, pp.161-166. In its 7 years of existence, Wikidata has accumulated an edit history of millions of contributions. In this paper, we propose a system that makes this data accessible through a SPARQL endpoint. We index not just the diffs made by a revision, but also the global state of the Wikidata graph after any given revision. This allows users to answer complex SPARQL 1.1 queries on the Wikidata history, tracing the contributions of human vs. automated contributors, the areas of vandalism, the big schema changes, or the adoption of different values for the "gender" property across time. (10.1007/978-3-030-32327-1_32)
    DOI : 10.1007/978-3-030-32327-1_32
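The core idea of indexing the graph state after any revision can be illustrated with a toy replay of an edit history (the entity/property identifiers and the in-memory replay are illustrative; the paper's system indexes this behind a SPARQL endpoint):

```python
# Toy edit history of a knowledge graph, sorted by revision id:
# (revision, "add" | "del", (subject, predicate, object)).
history = [
    (1, "add", ("Q42", "P31", "Q5")),
    (2, "add", ("Q42", "P106", "Q36180")),
    (3, "del", ("Q42", "P106", "Q36180")),
    (4, "add", ("Q42", "P106", "Q6625963")),
]

def state_at(history, revision):
    """Replay the (sorted) edit history to get the graph after a revision."""
    triples = set()
    for rev, op, triple in history:
        if rev > revision:
            break
        if op == "add":
            triples.add(triple)
        else:
            triples.discard(triple)
    return triples

state_2 = state_at(history, 2)   # state right after revision 2
state_3 = state_at(history, 3)   # state after the deletion in revision 3
```

Materializing these per-revision states is what lets historical queries ("what did the graph say at revision r?") run without replaying the log at query time.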
  • Entity Embedding Analogy for Implicit Link Discovery
    • Mimouni Nada
    • Moissinac Jean-Claude Jc
    • Vu Anh Tuan
    , 2019. In this work we are interested in the problem of knowledge graph (KG) incompleteness, which we propose to solve by discovering implicit triples from the observed ones in the incomplete graph, leveraging analogy structures deduced from a KG embedding model. We use a language modelling approach that we adapt to entities and relations. First results show that analogical inference in the projected vector space is relevant for the link prediction task.
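Analogy-based link discovery in an embedding space can be sketched minimally as vector arithmetic followed by a nearest-neighbor lookup. The toy vectors below stand in for embeddings learned by a KG embedding model; the entities and coordinates are invented for illustration:

```python
import numpy as np

# Toy entity embeddings (illustrative stand-ins for learned KG embeddings).
emb = {
    "Paris":  np.array([1.0, 0.0, 1.0]),
    "France": np.array([1.0, 1.0, 1.0]),
    "Rome":   np.array([0.0, 0.0, 1.0]),
    "Italy":  np.array([0.0, 1.0, 1.0]),
    "Berlin": np.array([0.5, 0.1, 0.9]),
}

def analogy(a, b, c, emb):
    """Solve a : b :: c : ? as the nearest neighbor of emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [e for e in emb if e not in (a, b, c)]
    return min(candidates, key=lambda e: np.linalg.norm(emb[e] - target))

result = analogy("Paris", "France", "Rome", emb)
```

The retrieved entity is the candidate object of an implicit (missing) triple, which is the link-prediction use the abstract describes.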
  • On-Wafer Coplanar Waveguide Standards for S-Parameter Measurements of Balanced Circuits Up to 40 GHz
    • Pham Thi Dao
    • Allal Djamel
    • Ziade Francois
    • Bergeault Eric
    IEEE Transactions on Instrumentation and Measurement, Institute of Electrical and Electronics Engineers, 2019, 68 (6), pp.2160-2167. The multimode thru-reflect-line (TRL) calibration technique is applied to perform on-wafer mixed-mode scattering-parameter (S-parameter) measurements for differential circuits. This paper describes the first design and realization of a coupled coplanar waveguide (CCPW) calibration kit on a quartz substrate in the ground-signal-ground-signal-ground configuration. CCPW integrated verification elements such as attenuators, and matched and mismatched transmission lines are simulated and measured to validate the design of the multimode TRL calibration kit as well as the calibration algorithm itself. With the advent of four-port vector network analyzers supporting true mode stimulus, at least two approaches are available for mixed-mode S-parameter measurements. The first one is the one-tier measurement using the single-ended stimulus with the multimode TRL calibration technique. The second one relies on the two-tier measurement with a short-open-load-reciprocal thru calibration at the first tier and the multimode TRL calibration at the second tier. From the viewpoint of metrology, the one-tier approach is preferable as it requires fewer standards and connections. The good agreement between simulation and measurement results performed on different verification elements demonstrates the validity of both methods up to 40 GHz. (10.1109/TIM.2018.2884061)
    DOI : 10.1109/TIM.2018.2884061
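The single-ended to mixed-mode S-parameter conversion underlying such measurements is a fixed linear transform. A sketch for a 4-port network, assuming ports (1, 3) form balanced pair A and ports (2, 4) form pair B (the port assignment is an assumption; conventions vary between instruments and calibration kits):

```python
import numpy as np

# Modal decomposition matrix: rows/cols ordered dA, dB, cA, cB.
M = (1 / np.sqrt(2)) * np.array([
    [1, 0, -1, 0],   # differential mode, pair A (ports 1, 3)
    [0, 1, 0, -1],   # differential mode, pair B (ports 2, 4)
    [1, 0, 1, 0],    # common mode, pair A
    [0, 1, 0, 1],    # common mode, pair B
], dtype=complex)

def mixed_mode(S_se):
    """Convert single-ended S-parameters to mixed-mode: S_mm = M S_se M^-1."""
    return M @ S_se @ np.linalg.inv(M)

# Sanity check on two ideal thrus (port 1 -> 2 and port 3 -> 4): pure
# differential and common-mode transmission, no mode conversion.
S_thru = np.zeros((4, 4), dtype=complex)
S_thru[0, 1] = S_thru[1, 0] = 1.0
S_thru[2, 3] = S_thru[3, 2] = 1.0
S_mm = mixed_mode(S_thru)
```

Because M is unitary here, the conversion preserves power; off-diagonal mixed-mode blocks would reveal mode conversion in a real, imperfect device.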
  • New Analytical Boundary Condition for Optimized Outphasing PA Design
    • Bachi Joe
    • Serhan Ayssar
    • Pham Dang-Kièn Germain
    • Desgreys Patricia
    • Giry Alexandre
    , 2019, pp.1-4. (10.1109/NEWCAS44328.2019.8961239)
    DOI : 10.1109/NEWCAS44328.2019.8961239
  • Digitally Enhanced Mixed Signal Systems
    • Jabbour Chadi
    • Desgreys Patricia
    • Dallet Dominique
    , 2019.
  • A contrario comparison of local descriptors for change detection in high resolution satellite images of urban areas
    • Liu Gang
    • Gousseau Yann
    • Tupin Florence
    IEEE Transactions on Geoscience and Remote Sensing, Institute of Electrical and Electronics Engineers, 2019, 57 (6), pp.3904-3918.
  • DOSED: A deep learning approach to detect multiple sleep micro-events in EEG signal
    • Chambon Stanislas
    • Thorey Valentin
    • Arnal Pierrick J.
    • Mignot Emmanuel Jean Marie
    • Gramfort Alexandre
    Journal of Neuroscience Methods, Elsevier, 2019, 321, pp.64-78. Background: Electroencephalography (EEG) monitors brain activity during sleep and is used to identify sleep disorders. In sleep medicine, clinicians interpret raw EEG signals in terms of so-called sleep stages, which are assigned by experts to every 30 s window of signal. For diagnosis, they also rely on shorter prototypical micro-architecture events which exhibit variable durations and shapes, such as spindles, K-complexes or arousals. Annotating such events is traditionally performed by a trained sleep expert, making the process time-consuming, tedious and subject to inter-scorer variability. To automate this procedure, various methods have been developed, yet these are event-specific and rely on the extraction of hand-crafted features. New method: We propose a novel deep learning architecture called Dreem One Shot Event Detector (DOSED). DOSED jointly predicts locations, durations and types of events in EEG time series. The proposed approach, applied here to sleep-related micro-architecture events, is inspired by object detectors developed for computer vision such as YOLO and SSD. It relies on a convolutional neural network that builds a feature representation from raw EEG signals, as well as two modules performing localization and classification respectively. Results and comparison with other methods: The proposed approach is tested on 4 datasets and 3 types of events (spindles, K-complexes, arousals) and compared to the current state-of-the-art detection algorithms. Conclusions: Results demonstrate the versatility of this new approach and improved performance compared to the current state-of-the-art detection methods. (10.1016/j.jneumeth.2019.03.017)
    DOI : 10.1016/j.jneumeth.2019.03.017
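Detection-style evaluation of predicted events against annotations relies on the 1-D (time-interval) form of intersection over union, the same criterion used by image object detectors. A generic sketch, not the authors' code:

```python
def interval_iou(a, b):
    """IoU of two 1-D events given as (start, end) times in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as correct when its IoU with an
# annotated event exceeds a threshold (e.g. 0.2 or 0.5).
iou = interval_iou((2.0, 4.0), (3.0, 6.0))
```

The same function works for spindles, K-complexes or arousals, which is what makes the detection formulation event-agnostic.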
  • High-level modeling of communication-centric applications: Extensions to a system-level design and virtual prototyping tool
    • Genius Daniela
    • Apvrille Ludovic
    • Li Letitia
    Microprocessors and Microsystems: Embedded Hardware Design, Elsevier, 2019. High-performance streaming applications require hardware platforms featuring complex, multi-level interconnects. These applications often resemble a task farm, where many identical tasks listen to the same input channel. Usual embedded system design tools are not well adapted to capture these applications. In particular, the non-uniform memory access (NUMA) nature of the platforms induces latencies that must be carefully examined. The paper proposes a multi-level modeling methodology and tools (TTool, SoCLib) that have been extended to model the characteristics of streaming applications (multiple tasks, non-deterministic behavior, I/O devices) in UML/SysML, and to automatically generate a virtual prototype that can be simulated with high precision. The paper uses a typical streaming application to show how latencies can be estimated and fed back to diagrams. (10.1016/j.micpro.2019.03.006)
    DOI : 10.1016/j.micpro.2019.03.006
  • M-NL: Robust NL-Means Approach for PolSAR Images Denoising
    • Draskovic Gordana
    • Pascal Frédéric
    • Tupin Florence
    IEEE Geoscience and Remote Sensing Letters, IEEE - Institute of Electrical and Electronics Engineers, 2019, 16 (6), pp.997-1001. This paper proposes a new method for polarimetric synthetic aperture radar (PolSAR) denoising. More precisely, it addresses a new statistical approach for weight computation in non-local (NL) methods. The aim is to present a simple criterion, using M-estimators, to detect similar pixels in an image. A binary hypothesis test is used to select the similar pixels which will be used for covariance matrix estimation, together with associated weights. The method is then compared to an advanced state-of-the-art PolSAR denoising method, the NL-SAR method [1]. The filter performances are measured by a set of different indicators, including relative errors on incoherent target decomposition parameters, coherences, polarimetric signatures, and edge preservation on a set of simulated PolSAR images, as in [2]. Finally, results for RADARSAT-2 PolSAR data are presented. (10.1109/LGRS.2018.2889275)
    DOI : 10.1109/LGRS.2018.2889275
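The NL-means weighting scheme this letter builds on can be sketched in one dimension: each sample is replaced by a similarity-weighted average, where similarity compares small surrounding patches. This scalar sketch omits the covariance statistics and the M-estimator hypothesis test that are the paper's actual contribution:

```python
import numpy as np

def nl_means_1d(signal, patch=1, h=0.5):
    """Denoise each sample as a similarity-weighted average of all samples,
    where similarity compares small neighborhoods (patches)."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    padded = np.pad(signal, patch, mode="edge")
    patches = np.array([padded[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                          # similarity weights
        out[i] = (w * signal).sum() / w.sum()
    return out

smooth = nl_means_1d(np.ones(8))   # a constant signal is left unchanged
```

Replacing the Euclidean patch distance by a statistical test on covariance matrices is precisely the step that adapts this scheme to PolSAR data.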
  • Conception d’un convertisseur analogique-paramètres pour l’acquisition intelligente de signaux biologiques
    • Back Antoine
    • Chollet Paul
    • Fercoq Olivier
    • Desgreys Patricia
    , 2019.
  • Requirements to Secure End to End Privacy in IdLoc Systems
    • Iannone Luigi
    • von Hugo Dirk
    • Sarikaya Behcet
    Internet Engineering Task Force, IETF, 2019.
  • Digitally Enhanced Mixed Signal Systems
    • Jabbour Chadi
    • Pham Dang-Kièn Germain
    • Ferré Guillaume
    , 2019. With CMOS technology reaching its limits, the design of cyber-physical systems is becoming more and more complex. Securing the performance in terms of resolution, speed or flexibility requires a more intensive use of digital assistance to compensate for the increased challenges on the RF and analog parts. This tutorial will provide an overview of how to design, size and implement digital assistance to compensate for a given non-ideality. All the main steps will be covered, from the modeling approach to the fixed-point implementation. The major caveats for each topic will be discussed in order to illustrate the limits of the presented methods and models. Concrete examples as well as live experimental demos will be presented for several applications, such as Digital Pre-Distortion for Power Amplifiers, Digital Correction of Stacked and Time-Interleaved ADCs, and Dynamic Element Matching Techniques for Digital-to-Analog Converters.
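As an illustration of the digital pre-distortion topic listed above, a toy memoryless polynomial predistorter can be fitted by least squares on the amplifier's inverse characteristic. The PA model and polynomial orders below are invented for the example and are far simpler than practical DPD with memory effects:

```python
import numpy as np

# Toy memoryless power-amplifier model with third-order compression.
def pa(x):
    return x - 0.1 * x ** 3

# Fit a polynomial predistorter f with pa(f(s)) ~= s by regressing the
# inverse characteristic y -> x over a sweep of the input range.
x = np.linspace(-1.0, 1.0, 201)
y = pa(x)
inv_coeffs = np.polyfit(y, x, deg=7)

def predistort(s):
    return np.polyval(inv_coeffs, s)

# Evaluate inside the fitted range to avoid polynomial extrapolation.
s = np.linspace(-0.85, 0.85, 101)
raw_error = np.max(np.abs(pa(s) - s))              # distortion without DPD
dpd_error = np.max(np.abs(pa(predistort(s)) - s))  # residual with DPD
```

Cascading the fitted inverse in front of the nonlinearity linearizes the overall chain, which is the basic mechanism behind digital assistance of analog front-ends.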