
Publications

 

The publications of our faculty researchers are on the HAL platform:

 

The thesis publications of LTCI doctoral graduates are on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2021

  • Distributed DNN based Processing for Uplink Cloud-RAN
    • Zhang Chao
    • Askri Aymen
    • Rekaya-Ben Othman Ghaya
    , 2021.
  • A taxonomy of PUF Schemes with a novel Arbiter-based PUF resisting machine learning attacks
    • El-Hajj Mohammed
    • Fadlallah Ahmad
    • Chamoun Maroun
    • Serhrouchni Ahmed
    Computer Networks, Elsevier, 2021, 194, pp.108133. As the Internet of Things (IoT) continues to evolve in our daily personal lives and in future industrial systems (Industry 4.0), one of the most significant issues is security. IoT systems must overcome a number of challenges, including deficiency of resources, low power consumption, and the need to protect devices against cyber-attacks. Regrettably, issues about energy use and the lack of computing resources limit the cryptographic methods that can be implemented on these devices. Moreover, the conventional use of non-volatile memory for storing secret keys is vulnerable to a number of attacks like reverse-engineering, cold-boot, side-channel, device tampering, etc. Physical Unclonable Functions (PUFs) are one of the categories for enhancing physical device security and solving issues involved with the use of traditional cryptographic algorithms. PUFs are lightweight one-way functions used to extract a unique identity for each end-device, based on physical factors introduced during manufacturing which are unforeseeable and unclonable. PUFs are a promising hardware security primitive and have received a lot of attention in the past few years. In this paper, we provide a survey of a large range of PUF schemes proposed in the literature. Our comparative analysis of the surveyed schemes ends with a number of observations. Then we propose a new Arbiter PUF scheme to create an identity for each IoT device. The proposed scheme is resistant to Machine Learning (ML) attacks. It is proven to avoid the shortcomings of several previously proposed PUF-based authentication protocols. (10.1016/j.comnet.2021.108133)
    DOI : 10.1016/j.comnet.2021.108133
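The arbiter PUF discussed above is commonly modeled as a linear additive delay chain, which is also why the basic construction is vulnerable to machine-learning attacks. A minimal sketch of that textbook model (not the paper's hardened ML-resistant scheme; all sizes and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_stages = 64

# Linear additive delay model of a classic arbiter PUF: each stage
# contributes a manufacturing-dependent random delay difference.
weights = rng.standard_normal(n_stages + 1)

def response(challenge):
    """challenge: array of 64 bits in {0, 1}; returns the 1-bit response."""
    # Parity features: phi[i] = product of (1 - 2*c[j]) for j >= i.
    phi = np.cumprod((1 - 2 * challenge)[::-1])[::-1].astype(float)
    phi = np.append(phi, 1.0)
    # The sign of the accumulated delay difference decides the arbiter.
    return int(weights @ phi > 0)

challenges = rng.integers(0, 2, size=(5, n_stages))
print([response(c) for c in challenges])  # a device-specific response vector
```

Because the response is a signed linear function of the parity features, a linear classifier can learn `weights` from challenge-response pairs; that is precisely the ML vulnerability a hardened arbiter design must address.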
  • MatMorpher: A Morphing Operator for SVBRDFs
    • Gauthier Alban
    • Thiery Jean-Marc
    • Boubekeur Tamy
    , 2021.
  • Adaptive Batching for Fast Packet Processing in Software Routers using Machine Learning
    • Okelmann Peter
    • Linguaglossa Leonardo
    • Geyer Fabien
    • Emmerich Paul
    • Carle Georg
    , 2021, pp.206-210. Processing packets in batches is a common technique in high-speed software routers to improve routing efficiency and increase throughput. With the growing popularity of novel paradigms such as Network Function Virtualization, advocating for the replacement of hardware-based networking modules with software-based network functions deployed on commodity servers, we observe that batching techniques have been successfully implemented to reduce the HW/SW performance gap. As batch creation and management are at the very core of high-speed packet processors, they have a significant impact on the overall packet processing capabilities of the system, affecting latency, throughput, CPU utilization and power consumption. It is commonly accepted to adopt a fixed maximum batch size (usually in the range between 32 and 512) to optimize for the worst-case scenario (i.e. minimum-size packets at full bandwidth capacity). Such an approach may result in a loss of efficiency despite a 100% utilization of the CPU. In this work we explore the possibilities of enhancing runtime batch creation in VPP, a popular software router based on the Intel DPDK framework. Instead of relying on automatic batch creation, we apply machine learning techniques to optimize the batch size for lower CPU time and higher power efficiency in average scenarios, while maintaining high performance in the worst case. (10.1109/NetSoft51509.2021.9492668)
    DOI : 10.1109/NetSoft51509.2021.9492668
  • S2CE: a hybrid cloud and edge orchestrator for mining exascale distributed streams
    • Kourtellis Nicolas
    • Herodotou Herodotos
    • Grzenda Maciej
    • Wawrzyniak Piotr
    • Bifet Albert
    , 2021, pp.103--113. The explosive increase in the volume, velocity, variety, and veracity of data generated by distributed and heterogeneous nodes such as IoT and other devices continuously challenges the state of the art in big data processing platforms and mining techniques. Consequently, there is an urgent need to address the ever-growing gap between this expected exascale data generation and the extraction of insights from these data. To address this need, this position paper proposes Stream to Cloud & Edge (S2CE), a first-of-its-kind, optimized, multi-cloud and edge orchestrator, easily configurable, scalable, and extensible. S2CE will enable machine and deep learning over voluminous and heterogeneous data streams running on hybrid cloud and edge settings, while offering the necessary functionalities for practical and scalable processing: data fusion and preprocessing, sampling and synthetic stream generation, cloud and edge smart resource management, and distributed processing. (10.1145/3465480.3466926)
    DOI : 10.1145/3465480.3466926
  • Control variate selection for Monte Carlo integration
    • Leluc Rémi
    • Portier François
    • Segers Johan
    Statistics and Computing, Springer Verlag (Germany), 2021, 31 (4), pp.50. Monte Carlo integration with variance reduction by means of control variates can be implemented by the ordinary least squares estimator for the intercept in a multiple linear regression model with the integrand as response and the control variates as covariates. Even without special knowledge on the integrand, significant efficiency gains can be obtained if the control variate space is sufficiently large. Incorporating a large number of control variates in the ordinary least squares procedure may however result in (i) a certain instability of the ordinary least squares estimator and (ii) a possibly prohibitive computation time. Regularizing the ordinary least squares estimator by preselecting appropriate control variates via the Lasso turns out to increase the accuracy without additional computational cost. The findings in the numerical experiment are confirmed by concentration inequalities for the integration error. (10.1007/s11222-021-10011-z)
    DOI : 10.1007/s11222-021-10011-z
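The OLS-intercept construction described in the abstract, with the integrand as response and the control variates as covariates, is easy to reproduce on a toy integral. A sketch under our own toy setup (integrand exp on [0, 1], centred monomials as control variates; none of these choices come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy integral: estimate E[f(U)] = e - 1 for U ~ Uniform(0, 1), f(u) = exp(u).
u = rng.uniform(size=n)
f = np.exp(u)

# Control variates with known mean zero under Uniform(0, 1):
# centred monomials u^k - 1/(k+1).
H = np.column_stack([u**k - 1.0 / (k + 1) for k in range(1, 6)])

mc = f.mean()                                  # plain Monte Carlo estimate

# OLS with the integrand as response and the control variates as
# covariates: the fitted intercept is the control-variate estimate.
X = np.column_stack([np.ones(n), H])
beta, *_ = np.linalg.lstsq(X, f, rcond=None)
cv = beta[0]

print(mc, cv, np.e - 1.0)   # both estimates versus the exact value
```

With a control variate space this well matched to the integrand, the intercept estimate is several orders of magnitude more accurate than plain Monte Carlo at the same sample size.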
  • Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example
    • Kips Robin
    • Jiang Ruowei
    • Ba Sileye
    • Phung Edmund
    • Aarabi Parham
    • Gori Pietro
    • Perrot Matthieu
    • Bloch Isabelle
    , 2021. While makeup virtual try-on is now widespread, parametrizing a computer graphics rendering engine for synthesizing images of a given cosmetics product remains a challenging task. In this paper, we introduce an inverse computer graphics method for automatic makeup synthesis from a reference image, by learning a model that maps an example portrait image with makeup to the space of rendering parameters. This method can be used by artists to automatically create realistic virtual cosmetics image samples, or by consumers to virtually try on a makeup extracted from their favorite reference image.
  • Precise, efficient, and context-sensitive cache analysis
    • Brandner Florian
    • Noûs Camille
    Real-Time Systems, Springer Verlag, 2021. (10.1007/s11241-021-09372-5)
    DOI : 10.1007/s11241-021-09372-5
  • Experimental Approach to Demonstrating Contextuality for Qudits
    • Sohbi Adel
    • Ohana Ruben
    • Zaquine Isabelle
    • Diamanti Eleni
    • Markham Damian
    Physical Review A, American Physical Society, 2021, 103 (6), pp.062220. We propose a method to experimentally demonstrate contextuality with a family of tests for qudits. The experiment we propose uses a qudit encoded in the path of a single photon and its temporal degrees of freedom. We consider the impact of noise on the effectiveness of these tests, taking the approach of ontologically faithful non-contextuality. In this approach, imperfections in the experimental set up must be taken into account in any faithful ontological (classical) model, which limits how much the statistics can deviate within different contexts. In this way we bound the precision of the experimental setup under which ontologically faithful non-contextual models can be refuted. We further consider the noise tolerance through different types of decoherence models on different types of encodings of qudits. We quantify the effect of the decoherence on the required precision for the experimental setup in order to demonstrate contextuality in this broader sense. (10.1103/PhysRevA.103.062220)
    DOI : 10.1103/PhysRevA.103.062220
  • Qu'est-ce que la théorie de l'information ?
    • Rioul Olivier
    , 2021. The digital revolution we know today owes a great deal to Shannon's information theory. The question at the basis of the theory is entirely natural: can we measure the information contained in a message or transmitted over a communication channel?
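Shannon's answer to the question posed above, whether information can be measured, is entropy in bits. A minimal illustration:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return sum(pi * math.log2(1.0 / pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))    # 1.0: a fair coin flip carries one bit
print(entropy([1.0]))         # 0.0: a certain message carries no information
print(entropy([0.25] * 4))    # 2.0: four equally likely messages, two bits
```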
  • Banners: Binarized Neural Networks with Replicated Secret Sharing
    • Ibarrondo Alberto
    • Chabanne Hervé
    • Önen Melek
    , 2021, pp.63-74. Binarized Neural Networks (BNN) provide efficient implementations of Convolutional Neural Networks (CNN). This makes them particularly suitable for performing fast and memory-light inference of neural networks running on resource-constrained devices. Motivated by the growing interest in CNN-based biometric recognition on potentially insecure devices, or as part of strong multi-factor authentication for sensitive applications, the protection of BNN inference on edge devices is rendered imperative. We propose a new method to perform secure inference of BNN relying on secure multiparty computation. While preceding papers offered security in a semi-honest setting for BNN or malicious security for standard CNN, our work yields security with abort against one malicious adversary for BNN by leveraging Replicated Secret Sharing (RSS) for an honest majority with three computing parties. Experimentally, we implement Banners on top of MP-SPDZ and compare it with prior work over binarized models trained for the MNIST and CIFAR10 image classification datasets. Our results attest to the efficiency of Banners as a privacy-preserving inference technique. (10.1145/3437880.3460394)
    DOI : 10.1145/3437880.3460394
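The three-party replicated secret sharing that Banners builds on can be illustrated for single bits: the secret is split into three XOR shares, each party holds two of them, so any one party sees only uniformly random bits, while XOR gates are evaluated locally with no communication. A toy sketch (not the MP-SPDZ implementation):

```python
import secrets

def rss_share(bit):
    """Replicated secret sharing of one bit among three parties:
    bit = s1 ^ s2 ^ s3, and party i holds the pair (s_i, s_{i+1}),
    so any single party sees two uniformly random bits."""
    s1, s2 = secrets.randbelow(2), secrets.randbelow(2)
    s3 = bit ^ s1 ^ s2
    return [(s1, s2), (s2, s3), (s3, s1)]

def rss_xor(a, b):
    # XOR gates are "free": each party XORs its own share pairs locally.
    return [(x0 ^ y0, x1 ^ y1) for (x0, x1), (y0, y1) in zip(a, b)]

def rss_open(shares):
    # Reconstruction XORs one component from each party.
    return shares[0][0] ^ shares[1][0] ^ shares[2][0]

for a in (0, 1):
    for b in (0, 1):
        assert rss_open(rss_xor(rss_share(a), rss_share(b))) == a ^ b
print("XOR evaluated on shares matches XOR in the clear")
```

The replication (each share held by two parties) is what lets an honest majority detect a single malicious party's tampering, the setting the paper targets.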
  • General-order observation-driven models: ergodicity and consistency of the maximum likelihood estimator
    • Sim Tepmony
    • Douc Randal
    • Roueff François
    Electronic Journal of Statistics, Shaker Heights, OH : Institute of Mathematical Statistics, 2021. The class of observation-driven models (ODMs) includes many models of non-linear time series which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy likelihoods that can be computed exactly with computational complexity of the same order as the number of observations, making maximum likelihood estimation the privileged approach for statistical inference for these models. A celebrated example of general-order ODMs is the GARCH$(p,q)$ model, for which ergodicity and inference have been studied extensively. However, little is known about more general models, in particular integer-valued ones, such as the log-linear Poisson GARCH or the NBIN-GARCH of order $(p,q)$, about which most of the existing results seem restricted to the case $p=q=1$. Here we fill this gap and derive ergodicity conditions for general ODMs. The consistency and the asymptotic normality of the maximum likelihood estimator (MLE) can then be derived using the method already developed for first-order ODMs. (10.1214/21-EJS1858)
    DOI : 10.1214/21-EJS1858
  • Natural Strategic Abilities in Voting Protocols
    • Jamroga Wojciech
    • Kurpiewski Damian
    • Malvone Vadim
    , 2021, 12812, pp.45-62. (10.1007/978-3-030-79318-0_3)
    DOI : 10.1007/978-3-030-79318-0_3
  • Telepathic Headache: Mitigating Cache Side-Channel Attacks on Convolutional Neural Networks
    • Chabanne Hervé
    • Danger Jean-Luc
    • Guiga Linda
    • Kühne Ulrich
    , 2021, pp.363-392. (10.1007/978-3-030-78372-3_14)
    DOI : 10.1007/978-3-030-78372-3_14
  • Dynamics of epitaxial quantum dot laser on silicon subject to chip-scale back-reflection for isolator-free photonics integrated circuits
    • Dong Bozhang
    • Chen Jun-Da
    • Norman Justin
    • Bowers John
    • Lin Fan-Yi
    • Grillot Frederic
    , 2021, pp.1-1. (10.1109/CLEO/Europe-EQEC52157.2021.9542300)
    DOI : 10.1109/CLEO/Europe-EQEC52157.2021.9542300
  • An analysis of linear digital equalization in 50Gbit/s HS-PONs to compensate the combined effect of chirp and chromatic dispersion
    • Nogueira Sampaio Flávio Andre
    • Pincemin Erwan
    • Genay Naveena
    • Anet Neto Luiz
    • Le Bidan Raphaël
    • Jaouën Yves
    , 2021. We study the impacts of frequency chirp and chromatic dispersion (CD) in 50 Gbit/s Non-Return-to-Zero (NRZ) transmissions in an Intensity Modulation and Direct Detection (IMDD) channel with a Minimum Mean Square Error Linear Equalizer (MMSE-LE) at reception.
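An MMSE linear equalizer of the kind evaluated in this work can be sketched on a toy intersymbol-interference channel (the channel taps, noise level, and equalizer length below are arbitrary illustrative choices, not the paper's 50 Gbit/s setup):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 3-tap ISI channel standing in for the combined effect of chirp
# and chromatic dispersion; taps and noise level are arbitrary.
h = np.array([0.8, 0.45, 0.2])
n_taps, delay = 11, 6

bits = rng.integers(0, 2, 4000)
x = 2.0 * bits - 1.0                                  # NRZ symbols in {-1, +1}
y = np.convolve(x, h)[: len(x)] + 0.05 * rng.standard_normal(len(x))

# MMSE-LE from training data: stack sliding windows of the received
# signal and solve the normal equations for the equalizer taps.
Y = np.array([y[i : i + n_taps] for i in range(len(y) - n_taps)])
d = x[np.arange(len(Y)) + n_taps - 1 - delay]         # desired (delayed) symbol
w = np.linalg.solve(Y.T @ Y, Y.T @ d)

x_hat = Y @ w
ber = np.mean((x_hat > 0) != (d > 0))
print("bit error rate after equalization:", ber)
```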
  • Exposure assessment in near-field: methodology and application in FM frequencies for occupational exposure
    • Fetouri Bader Mustafa
    , 2021. FM radio is still popular among all segments of the general population. FM antenna arrays are usually placed on metallic structures known as pylons that workers have to climb in order to do maintenance and repair work. Exposure monitoring is required by regulation when workers are exposed to high-power emitters. The purpose of this research is to characterize electromagnetic fields (EMF) in pylon environments and to assess EMF exposure in such cases. EMF in pylon environments tend to be in the near-field region of the antenna arrays, but the characterization and understanding of such environments in the literature is limited to specific exposure cases. This research has therefore focused on defining a new methodology by generalizing exposure assessment in the near-field. Using field metrics analysis in human-sized volumes, this study analyzed the near-field environments found in the transmission pylons and generated random incident fields that have the same characteristics. The random incident fields were subjected to a validation and selection process in order to be used in FDTD simulations for specific absorption rate (SAR) assessment. Five hundred FDTD simulations for SAR assessments were performed. The results showed a high correlation between local & whole-body SAR and averaged electric field strength. Surrogate models linking SAR to electric field strength were built using machine learning techniques. The uncertainty of the SAR results and the surrogate models was quantified.
  • Implantable medical system for measuring a physiological parameter
    • Maldari Mirko
    , 2021.
  • AI Transformation in the Public Sector: Ongoing Research
    • Peretz-Andersson Einav
    • Lavesson Niklas
    • Bifet Albert
    • Mikalef Patrick
    , 2021, pp.1--4. Real-world application of data-driven and intelligent systems (AI) is increasing in the private and public sector as well as in society at large. Many organizations transform as a consequence of increased AI implementation. The consequences of such transformations may include new recruitment plans, procurement of additional IT, changes in existing positions and roles, new business models, as well as new policies and regulations. However, it is unclear how this transformation varies across different types of organizations. We study the effects of bottom-up approaches, such as pilot projects and mentoring of specific groups within organizations, and aim to explore how such approaches can complement the top-down approach of strategic AI implementation. Our context is the public sector. Our goal is to acquire an improved understanding of how and when AI transformation occurs in the public sector, what its consequences are, and which strategies are fruitful or detrimental to the organization. We aim to study public sector organizations in Sweden, Norway, New Zealand, Germany, and The Netherlands to learn about potential similarities and differences with regard to AI transformation. (10.1109/SAIS53221.2021.9483960)
    DOI : 10.1109/SAIS53221.2021.9483960
  • Spatio-Temporal Wireless D2D Network With Beamforming
    • Quan Yibo
    • Kélif Jean-Marc
    • Coupechoux Marceau
    , 2021, pp.1-6. In this paper, we consider a dynamic device-to-device (D2D) communication model where transmitters and receivers have multiple antennas and adopt beamforming (BF). A continuous spatio-temporal model for the wireless network is analyzed, which combines a spatial stochastic point process and a dynamic birth-death process. We model BF by using a uniform linear array (ULA) and extend the result of Sankararaman and Baccelli on the stability condition of such a network. We show that the critical arrival rate increases with the number of antennas at the transmitter and the receiver. (10.1109/ICC42927.2021.9500356)
    DOI : 10.1109/ICC42927.2021.9500356
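The abstract's observation that the critical arrival rate grows with the number of antennas rests on the beamforming gain of a uniform linear array: on boresight, the power gain equals the antenna count. A generic sketch (half-wavelength spacing assumed; not the paper's exact model):

```python
import numpy as np

def ula_gain(n_antennas, theta, theta0, spacing=0.5):
    """Beamforming power gain of a uniform linear array (element spacing
    in wavelengths), steered towards theta0 and observed at angle theta."""
    k = np.arange(n_antennas)
    a = np.exp(2j * np.pi * spacing * k * np.cos(theta))    # array response
    w = np.exp(2j * np.pi * spacing * k * np.cos(theta0))   # steering weights
    w /= np.sqrt(n_antennas)                                # unit-norm weights
    return np.abs(w.conj() @ a) ** 2

# On boresight the power gain equals the number of antennas, so doubling
# the array doubles the received power in the intended direction.
for n in (2, 4, 8):
    print(n, ula_gain(n, np.pi / 3, np.pi / 3))
```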
  • Aging Effects on Template Attacks Launched on Dual-Rail Protected Chips
    • Danger Jean-Luc
    • Niknia Farzad
    • Guilley Sylvain
    • Karimi Naghmeh
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, IEEE, 2021, 41 (5), pp.1276 - 1289. Profiling side-channel attacks, in which an adversary creates a “profile” of a sensitive device and uses such a profile to model a target device with a similar implementation, have received the lion’s share of attention in recent years. In particular, template attacks are known to be the most powerful profiling side-channel attacks from an information-theoretic point of view. When launching such an attack, the adversary first builds a model based on the leakage of the profiling (training) device at his disposal, which is then exploited in the second phase of the attack (i.e., matching) to extract the key from the target device. Discrepancies between the device used for modeling and the target device affect the attack success. The effect of process variation and temperature misalignment between the profiling and target devices on the template attack’s success has been studied extensively in the literature, while the impact of device aging on the template attack’s success is yet to be investigated thoroughly. This article moves one step forward and studies the impact of device aging, mainly bias temperature instability (BTI) and hot carrier injection (HCI), in devices that have been protected against power analysis attacks via dual-rail logic. In particular, we focus on wave dynamic differential logic (WDDL) circuits, and via extensive transistor-level simulations, we show how device aging misalignments between the profiling and target devices can hinder template attacks for both unprotected and WDDL-protected counterparts. We mounted several attacks on the PRESENT cipher, with and without WDDL protection, at different temperatures and aging times. Our results show that the attack is more difficult if there is an aging-duration mismatch between the training and target devices, and the attack-efficiency decrease is especially significant for mismatches of a few weeks. (10.1109/TCAD.2021.3088803)
    DOI : 10.1109/TCAD.2021.3088803
  • A Multi-View Stereoscopic Video Database With Green Screen (MTF) For Video Transition Quality-of-Experience Assessment
    • Hobloss Nour
    • Zhang Lu
    • Cagnazzo Marco
    , 2021. We introduce a multi-view stereoscopic video database with a green screen, called MTF, for use in computer vision applications, in particular for free navigation, free-viewpoint television, and video transition quality-of-experience (QoE) assessment. The MTF contains full-HD videos of real storytelling made up of 3 scenes. One particularity of this dataset is that to understand its storytelling, users must change their point of view in the scene at a given time. To this end, we usually need to generate a transition to link two points of view in the same scene. Computer vision techniques that enable such transitions, like view synthesis methods, rely on a set of images of the scene to render new views from different viewpoints of this scene. However, these methods may have many failure cases that lead to artifacts in the final rendered video transition. In most view synthesis QoE tests, the contents are not designed to make the transition between two points of view useful or interesting for the viewers, e.g. they don't need to make a transition to capture more information to better understand the content. We thus assume that participants will harshly judge artifacts and imperfections in the rendered transition, so the MTF is expected to enable a better analysis of the visual impact of persistent artifacts in the final rendered transition. In our dataset, all the scenes are recorded in a green screen studio, which is often used to superimpose special effects and scenery during editing according to specific needs. Our dataset also presents a wide-baseline camera setup, a challenging constraint for view synthesis techniques. Finally, the MTF can also be used as a complementary dataset in various computer vision applications, such as video compression, 3D video content, immersive virtual reality environments, and optical flow estimation. (10.1109/QoMEX51781.2021.9465458)
    DOI : 10.1109/QoMEX51781.2021.9465458
  • Propagating Information Using SSA
    • Brandner Florian
    • Novillo Diego
    , 2021, pp.95-106. This chapter provides a gentle introduction to classical data-flow analysis and explains how SSA form improves both performance and expressiveness. In the first part, classical data-flow analysis is discussed by introducing the fundamentals of monotone frameworks on control-flow graphs. In the following part, the chapter introduces a simple and elegant data-flow engine that exploits SSA form. The engine relies on SSA form in order to efficiently propagate information on the values of variables on a so-called SSA graph, while at the same time taking into account control dependencies that emerge from the classical control-flow graph. The engine’s operation is explained through several examples based on constant and copy propagation, which illustrate the advantages of jointly processing the program’s data and control flow. (10.1007/978-3-030-80515-9_8)
    DOI : 10.1007/978-3-030-80515-9_8
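The sparse propagation the chapter describes can be caricatured in a few lines: each SSA variable has a single definition, and a worklist pushes newly discovered constants along the use edges of the SSA graph. A toy constant-propagation sketch (instruction set and names invented for illustration):

```python
# Each SSA variable has one definition: a constant, a copy, or an add
# of two variables (a deliberately tiny instruction set).
defs = {
    "x1": ("const", 2),
    "y1": ("const", 3),
    "z1": ("add", "x1", "y1"),
    "w1": ("copy", "z1"),
}

# Reverse edges of the SSA graph: which definitions use a variable.
uses = {}
for var, d in defs.items():
    for op in d[1:]:
        if isinstance(op, str):
            uses.setdefault(op, []).append(var)

value = {}                      # var -> known constant value
worklist = list(defs)           # seed the worklist with every definition
while worklist:
    var = worklist.pop()
    kind, *ops = defs[var]
    if kind == "const":
        new = ops[0]
    elif kind == "copy":
        new = value.get(ops[0])
    else:  # add
        a, b = (value.get(o) for o in ops)
        new = a + b if a is not None and b is not None else None
    if new is not None and value.get(var) != new:
        value[var] = new
        worklist.extend(uses.get(var, []))   # propagate along SSA use edges

print(value)   # every variable folds to a constant
```

Because each variable has exactly one definition, information flows directly from definitions to uses instead of being iterated over every program point, which is the efficiency argument the chapter makes for SSA-based engines.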
  • Super-resolution in brain PET Using a Real Time Motion Capture System
    • Chemli Y.
    • Tétrault M.-A.
    • Normandin M.
    • Marin Thibault
    • Bloch Isabelle
    • El Fakhri Georges
    • Ouyang J.
    • Petibon Y.
    , 2021.
  • Weakly supervised change detection using guided anisotropic diffusion
    • Daudt Rodrigo Caye
    • Le Saux Bertrand
    • Boulch Alexandre
    • Gousseau Yann
    Machine Learning, Springer Verlag, 2021. Large scale datasets created from crowdsourced labels or openly available data have become crucial to provide training data for large scale learning algorithms. While these datasets are easier to acquire, the data are frequently noisy and unreliable, which is motivating research on weakly supervised learning techniques. In this paper we propose original ideas that help us to leverage such datasets in the context of change detection. First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results using the input images as guides to perform edge preserving filtering. We then show its potential in two weakly-supervised learning strategies tailored for change detection. The first strategy is an iterative learning method that combines model optimisation and data cleansing using GAD to extract the useful information from a large scale change detection dataset generated from open vector data. The second one incorporates GAD within a novel spatial attention layer that increases the accuracy of weakly supervised networks trained to perform pixel-level predictions from image-level labels. Improvements with respect to state-of-the-art are demonstrated on 4 different public datasets. (10.1007/s10994-021-06008-4)
    DOI : 10.1007/s10994-021-06008-4
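The core mechanism of guided anisotropic diffusion, diffusing one image while computing the edge-stopping conductance from the gradients of a guide image, can be sketched with a toy Perona-Malik-style step (not the authors' implementation; `kappa`, `dt`, and the test image are illustrative):

```python
import numpy as np

def guided_diffusion_step(img, guide, kappa=0.1, dt=0.2):
    """One explicit anisotropic diffusion step on `img`, with the
    edge-stopping conductance computed from the gradients of `guide`,
    so edges of the guide are preserved while `img` is smoothed."""
    out = img.copy()
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        d_img = np.roll(img, shift, axis=axis) - img
        d_gui = np.roll(guide, shift, axis=axis) - guide
        # Conductance: ~1 in flat areas of the guide, ~0 across its edges.
        c = np.exp(-((d_gui / kappa) ** 2))
        out += dt * c * d_img
    return out

rng = np.random.default_rng(1)
guide = np.zeros((32, 32))
guide[:, 16:] = 1.0                     # a guide with a sharp vertical edge
noisy = guide + 0.1 * rng.standard_normal(guide.shape)

smoothed = noisy
for _ in range(20):
    smoothed = guided_diffusion_step(smoothed, guide)
# Noise in the flat regions is averaged away while the guide's edge survives.
```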