
Publications

2020

  • Speed of convergence of diffusion approximations
    • Besançon Eustache
    , 2020. In many fields of interest, Markov processes are a primary modeling tool for random phenomena. Unfortunately, it is often necessary to use very large or even infinite-dimensional state spaces, making the exact analysis of the various characteristics of interest (stability, stationary law, hitting times of certain domains, etc.) difficult or even impossible. For quite some time, thanks in particular to martingale theory, it has been possible to use approximations by Brownian diffusions, which enable an approximate analysis of the initial problem. The main drawback of this approach is that it does not measure the error made in the approximation. The purpose of this thesis is to develop a theory of error calculation for diffusion approximations. The development of the Stein-Malliavin method has recently made it possible to quantify the speed of convergence in classical theorems such as the Donsker theorem (functional convergence of a random walk towards Brownian motion) or in the path-by-path generalisation of the binomial-Poisson approximation. In this work we extend this theory to Markov processes such as those found in queueing theory, in epidemiology, or in other fields of application. Starting from the representation of Markov processes as Poisson measures, we extend the method developed by Laurent Decreusefond and Laure Coutin to assess the speed of convergence in diffusion approximations. To do so, we extend the Stein-Malliavin method to vectors of processes rather than a single process. The limit is a time-changed Gaussian process. Since the Stein-Malliavin method was mainly developed for convergence towards the standard Brownian motion, it is adapted to the problem of convergence towards a time-changed process using linear approximation methods. We therefore make use of Gaussian analysis to assess the dependency between the various time periods, and of functional analysis to select the right probabilistic spaces.
  • Formal methods for the analysis of cache-timing leaks and key generation in cryptographic implementations
    • Schaub Alexander
    , 2020. Cryptography is ubiquitous in today's interconnected world, protecting our communications and securing our payment systems. While cryptographic algorithms are generally well understood, their implementations have been less subject to formal verification. This has led to successful breakages of implementations of most modern primitives: AES, RSA, ECDSA... In general, cryptographic implementations would benefit from stronger theoretical guarantees. In this thesis, we apply this line of reasoning to two different topics, one in software security and the other in hardware security. The first half of this thesis explores cache-timing side-channel vulnerabilities that arise when the time taken by a cryptographic operation, or the cache state after this operation, depends on sensitive information. This occurs if any branching operation depends on secret information such as a private key, or if memory is accessed at an address that depends on that secret. We developed a tool to detect and prevent such leaks in programs written in the C programming language. This tool is applied to most candidates of NIST's post-quantum standardization process in order to find cache-timing leakages. This process aims at replacing traditional cryptographic primitives such as RSA or ECDSA, broken by quantum computers, with safer alternatives. The development of such primitives is under way, but the security of their implementations has received less scrutiny. We show how our tool is able to detect potential cache-timing leaks in a majority of the implementations and what mitigations are possible. The subject of the second half of this thesis is the so-called physically unclonable functions, or PUFs: elementary circuits from which stable but unpredictable identifiers can be extracted. They rely on small, uncontrollable variations in semiconductor properties to exhibit unpredictable behavior.
Theoretical guarantees concerning two fundamental characteristics of PUFs are derived in this thesis, for a large family of PUFs: the stability of the identifier, related to circuit noise, and the exploitable entropy, derived from the mathematical PUF model.
  • Contributions to representation learning of multivariate time series and graphs
    • Pineau Edouard
    , 2020. Machine learning (ML) algorithms are designed to learn models that can take decisions or make predictions from data, in a large panel of tasks. In general, the learned models are statistical approximations of the true/optimal unknown decision models. The efficiency of a learning algorithm depends on an equilibrium between model richness, complexity of the data distribution, and complexity of the task to solve from data. Nevertheless, for computational convenience, statistical decision models often adopt simplifying assumptions about the data (e.g., linear separability, independence of the observed variables, etc.). However, when the data distribution is complex (e.g., high-dimensional with nonlinear interactions between observed variables), the simplifying assumptions can be counterproductive. In this situation, a solution is to feed the model with an alternative representation of the data. The objective of data representation is to separate the information relevant to the task to solve from the noise, in particular when the relevant information is hidden (latent), in order to help the statistical model. Until recently and the rise of modern ML, many standard representations consisted of expert-based, handcrafted preprocessing of data. Recently, a branch of ML called deep learning (DL) completely shifted the paradigm. DL uses neural networks (NNs), a family of powerful parametric functions, as learnable data-representation pipelines. These recent advances have outperformed most handcrafted representations in many domains. In this thesis, we are interested in learning representations of multivariate time series (MTS) and graphs. MTS and graphs are particular objects that do not directly match the standard requirements of ML algorithms. They can have variable size and non-trivial alignment, such that comparing two MTS or two graphs with standard metrics is generally not relevant.
Hence, particular representations are required for their analysis using ML approaches. The contributions of this thesis consist of practical and theoretical results presenting new MTS and graph representation learning frameworks. Two MTS representation learning frameworks are dedicated to the ageing detection of mechanical systems. First, we propose a model-based MTS representation learning framework called Sequence-to-graph (Seq2Graph). Seq2Graph assumes that the observed data have been generated by a model whose graphical representation is a causality graph. It then represents, using an appropriate neural network, the sample on this graph. From this representation, when appropriate, we can find interesting information about the state of the studied mechanical system. Second, we propose a generic trend detection method called Contrastive Trend Estimation (CTE). CTE learns to classify pairs of samples with respect to the monotony of the trend between them. We show that using this method, under a few assumptions, we identify the true state underlying the studied mechanical system, up to a monotone scalar transform. Two graph representation learning frameworks are dedicated to the classification of graphs. First, we propose to see graphs as sequences of nodes and create a framework based on recurrent neural networks to represent and classify them. Second, we analyze a simple baseline feature for graph classification: the Laplacian spectrum. We show that this feature meets the minimal requirements to classify graphs when all the meaningful information is contained in the structure of the graphs.
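The Laplacian-spectrum baseline from the last contribution is easy to make concrete. Below is a minimal sketch; the function name and the fixed-length truncation/padding used to compare graphs of different sizes are our illustrative choices, not taken from the thesis:

```python
import numpy as np

def laplacian_spectrum(adj, k):
    """Return the k smallest eigenvalues of the graph Laplacian L = D - A.

    Graphs of different sizes are compared by truncating (or zero-padding)
    the sorted spectrum to a fixed length k; the result can be fed to any
    standard vector classifier.
    """
    adj = np.asarray(adj, dtype=float)
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))
    if len(eigvals) >= k:
        return eigvals[:k]
    return np.pad(eigvals, (0, k - len(eigvals)))

# Two structurally different 3-node graphs get different feature vectors:
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # spectrum 0, 3, 3
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]       # spectrum 0, 1, 3
print(laplacian_spectrum(triangle, 3))
print(laplacian_spectrum(path, 3))
```

The smallest eigenvalue is always 0 (the Laplacian of any graph is singular), so the discriminative information lies in the remaining entries.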
  • Low-Complexity Neural Networks for Baseband Signal Processing
    • Larue Guillaume
    • Dhiflaoui Mona
    • Dufrene Louis-Adrien
    • Lampin Quentin
    • Chollet Paul
    • Ghauch Hadi
    • Rekaya Ghaya
    , 2020, pp.1-6. (10.1109/GCWkshps50303.2020.9367521)
    DOI : 10.1109/GCWkshps50303.2020.9367521
  • Enhanced Beam Alignment for Millimeter Wave MIMO Systems: A Kolmogorov Model
    • Duan Qiyou
    • Kim Taejoon
    • Ghauch Hadi
    • Wong Eric W.M.
    , 2020, pp.1-6. (10.1109/GLOBECOM42002.2020.9322149)
    DOI : 10.1109/GLOBECOM42002.2020.9322149
  • Auralization of a Hybrid Sound Field using a Wave-Stress Tensor Based Model
    • Meacham Aidan
    • Badeau Roland
    • Polack Jean-Dominique
    , 2020, pp.523-529. A hybrid approach to room impulse response synthesis and auralization is developed in the context of a wave-stress tensor based model of late reverberation. This method for efficiently computing spatially varying energy envelopes has been demonstrated to represent the sound field in a sufficiently-diffusing 1-dimensional hallway above 250 Hz. To synthesize a realistic impulse response from the computed decay curves, the direct path, early reflections, and low frequency portion of the sound field must be calculated separately and then combined with the late field to form a hybrid scheme. In this work, we propose one strategy for generating the late field from the aforementioned energy envelopes and suggest the use of a typical pressure-velocity wave-based scheme to generate the other necessary sound field components. Because of the efficiency of the wave-stress tensor based method and the reduced demands on the secondary simulation technique, such a hybridization presents a promising architecture for future real-time auralization in large spaces that may be difficult to model using only a single method. (10.48465/fa.2020.0833)
    DOI : 10.48465/fa.2020.0833
  • Influence of bone conduction transducers position and constraint on propagation to the ear
    • Joubaud Thomas
    • Rosier Julie
    • Zimpfer Véronique
    • Lacroix Arthur
    • Dury Jérémy
    • Hamery Pascal
    , 2020, pp.1565-1571. Solid-state transducers are nowadays integrated in communication headsets, opening the way to a new category of devices of interest in both military and civil applications. Sounds are transmitted to the inner ear directly through the bones and cartilage of the skull. The main advantage of this technology is to offer users the possibility of keeping the ear clear, to remain alert to the environment, or of using earplugs with a high level of protection while continuing to communicate via a radio system. Different types of transducers are used in a measurement protocol to determine the influence of the bearing force and position of the transducer on the propagation from the skin to reception by a listener. The measurement setup includes laser vibrometry measurements on the skin and solid-state hearing threshold measurements. (10.48465/fa.2020.0432)
    DOI : 10.48465/fa.2020.0432
  • Catalic: Delegated PSI Cardinality with Applications to Contact Tracing
    • Duong Thai
    • Phan Duong Hieu
    • Trieu Ni
    , 2020, pp.870-899. (10.1007/978-3-030-64840-4_29)
    DOI : 10.1007/978-3-030-64840-4_29
  • Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers
    • Um Kiwon
    • Brand Robert
    • Yun Fei
    • Holl Philipp
    • Thuerey Nils
    , 2020, 33, pp.6111-6122. Finding accurate solutions to partial differential equations (PDEs) is a crucial task in all scientific and engineering disciplines. It has recently been shown that machine learning methods can improve the solution accuracy by correcting for effects not captured by the discretized PDE. We target the problem of reducing numerical errors of iterative PDE solvers and compare different learning approaches for finding complex correction functions. We find that previously used learning approaches are significantly outperformed by methods that integrate the solver into the training loop and thereby allow the model to interact with the PDE during training. This provides the model with realistic input distributions that take previous corrections into account, yielding improvements in accuracy with stable rollouts of several hundred recurrent evaluation steps and surpassing even tailored supervised variants. We highlight the performance of the differentiable physics networks for a wide variety of PDEs, from non-linear advection-diffusion systems to three-dimensional Navier-Stokes flows.
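The key structural point of the abstract above, correcting inside the rollout rather than on isolated solver states, can be sketched with a toy example. The 1-D diffusion stencil, the function names, and the fixed correction callable are illustrative assumptions; the actual method trains a neural correction through a differentiable solver, which this sketch does not implement:

```python
import numpy as np

def coarse_step(u, nu=0.1):
    """One explicit step of a coarse (under-resolved) 1-D periodic diffusion solver."""
    return u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def rollout(u0, correction, steps):
    """Interleave the numerical solver with a learned correction.

    Because the correction is applied *inside* the loop, its input at step t
    reflects its own past corrections -- the key difference from training on
    isolated, pre-computed solver states.
    """
    u = u0
    trajectory = [u]
    for _ in range(steps):
        u = coarse_step(u)        # imperfect discretized PDE step
        u = u + correction(u)     # learned correction interacts with the solver
        trajectory.append(u)
    return trajectory

# With a zero correction the rollout reduces to the plain coarse solver.
u0 = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
traj = rollout(u0, lambda u: np.zeros_like(u), steps=100)
```

In the paper's setting, `correction` would be a network whose parameters receive gradients backpropagated through `coarse_step` across many such recurrent steps.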
  • Capturing Acoustic Speech Signals with Coherent MIMO Phase-OTDR
    • Dorize Christian
    • Guerrier Sterenn
    • Awwad Elie
    • Renaudier Jeremie
    , 2020, pp.1-4. (10.1109/ECOC48923.2020.9333283)
    DOI : 10.1109/ECOC48923.2020.9333283
  • A survey on time-sensitive resource allocation in the cloud continuum
    • Ramanathan Saravanan
    • Shivaraman Nitin
    • Suryasekaran Seima
    • Easwaran Arvind
    • Borde Etienne
    • Steinhorst Sebastian
    Information Technology, Oldenbourg Verlag, 2020, 62 (5-6), pp.241-255. Artificial Intelligence (AI) and Internet of Things (IoT) applications are growing rapidly in today's world, continuously connected to the internet to process, store, and exchange information among devices and the environment. Cloud and edge platforms are crucial to these applications due to their inherent compute-intensive and resource-constrained nature. One of the foremost challenges in cloud and edge resource allocation is the efficient management of computation and communication resources to meet the performance and latency guarantees of the applications. Numerous research studies have been carried out to address this intricate problem. In this paper, the current state-of-the-art resource allocation techniques for the cloud continuum, in particular those that consider time-sensitive applications, are reviewed. Furthermore, we present the key challenges in the resource allocation problem for the cloud continuum, a taxonomy to classify the existing literature, and the potential research gaps. (10.1515/itit-2020-0013)
    DOI : 10.1515/itit-2020-0013
  • General Knowledge Representation and Sharing for Disaster Management
    • Martin Philippe
    • Tanzi Tullio
    , 2021, AICT-622, pp.116-131. The first part of this article distinguishes "restricted knowledge representation (KR) and sharing (KS)" from the still seldom researched task of "general KR and KS". This part then highlights the usefulness of the latter for disaster management and provides a panorama of complementary techniques supporting it. The research question that these techniques collectively answer is: how to let Web users collaboratively build KBs (KR bases) i) that are not implicitly "partially redundant or inconsistent" internally or with each other, ii) that are complete with respect to certain criteria or subjects, iii) without restricting what the users can enter or forcing them to agree on terminology or beliefs, and iv) without requiring people to duplicate knowledge in various KBs, or to manually search and aggregate knowledge from various KBs? In the second part, this article shows how various kinds of disaster-management-related information can be categorized or represented for general KS purposes, e.g. terminologies and information objects (these objects are rarely represented via KRs; examples about Search & Rescue procedures are given). (10.1007/978-3-030-81469-4_10)
    DOI : 10.1007/978-3-030-81469-4_10
  • SCALAR - A Platform for Real-time Machine Learning Competitions on Data Streams
    • Radulovic Nedeljko
    • Boulegane Dihia
    • Bifet Albert
    Journal of Open Source Software, Open Journals, 2020, 5 (56), pp.2676. (10.21105/joss.02676)
    DOI : 10.21105/joss.02676
  • A feedback information-theoretic transmission scheme (FITTS) for modeling trajectory variability in aimed movements
    • Gori Julien
    • Rioul Olivier
    Biological Cybernetics (Modeling), Springer Verlag, 2020, 114 (6), pp.621-641. Trajectories in human aimed movements are inherently variable. Using the concept of positional variance profiles, such trajectories are shown to be decomposable into two phases: In a first phase, the variance of the limb position over many trajectories increases rapidly; in a second phase, it then decreases steadily. A new theoretical model, where the aiming task is seen as a Shannon-like communication problem, is developed to describe the second phase: Information is transmitted from a "source" (determined by the position at the end of the first phase), to a "destination" (the movement's end-point) over a "channel" perturbed by Gaussian noise, with the presence of a noiseless feedback link. Information-theoretic considerations show that the positional variance decreases exponentially with a rate equal to the channel capacity C. Two existing datasets for simple pointing tasks are re-analyzed and observations on real data confirm our model. The first phase has constant duration and C is found constant across instructions and task parameters, which thus characterizes the participant's performance. Our model provides a clear understanding of the speed-accuracy tradeoff in aimed movements: Since the participant's capacity is fixed, a higher prescribed accuracy necessarily requires a longer second phase resulting in an increased overall movement time. The well-known Fitts' law is also recovered using this approach. (10.1007/s00422-020-00853-7)
    DOI : 10.1007/s00422-020-00853-7
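The exponential second-phase decay described in the abstract above admits a compact statement. The following is our paraphrase, with the base-2 form corresponding to a capacity C measured in bits per second; the exact normalization is an assumption on our part and should be checked against the paper:

```latex
% Positional variance during the second phase, starting at the end t_1 of
% the first phase, with C the capacity of the feedback channel in bit/s:
\sigma^2(t) = \sigma^2(t_1)\, 2^{-2C\,(t - t_1)}, \qquad t \ge t_1 .
```

Since C is fixed for a given participant, reaching a smaller target variance (higher prescribed accuracy) requires a proportionally longer second phase, which is the speed-accuracy tradeoff the abstract describes.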
  • A Statistical Estimation of 5G Massive MIMO Networks’ Exposure Using Stochastic Geometry in mmWave Bands
    • Al Hajj Maarouf
    • Wang Shanshan
    • Thanh Tu Lam
    • Azzi Soumaya
    • Wiart Joe
    Applied Sciences, Multidisciplinary Digital Publishing Institute (MDPI), 2020, 10 (23), pp.8753. This paper aims to derive an analytical model of the downlink exposure in 5G massive Multiple-Input Multiple-Output (MIMO) antenna networks using stochastic geometry. A Poisson point process (PPP) is assumed for the base station (BS) distribution. The power received at the receiver is modeled as a shot-noise process with a modified power law. The distributions of the 5G massive MIMO antenna gain and channel gain were obtained by fitting simulation results from the NYUSIM channel simulator. The fitted distributions, i.e., exponential and gamma distributions for the antenna and channel gain respectively, were then implemented in an analytical framework. We obtain the closed-form expression of the moment-generating function (MGF) of the total exposure in the network. The framework is then validated by numerical simulations. A sensitivity analysis is carried out to investigate the impact of key parameters, e.g., BS density, path-loss exponent, and transmission probability. We then prove and quantify the significant impact of the transmission probability on global exposure, which indicates the importance of considering network usage in 5G exposure estimations. (10.3390/app10238753)
    DOI : 10.3390/app10238753
  • Distributed Learning in Noisy-Potential Games for Resource Allocation in D2D Networks
    • Ali Mohammed Shabbir
    • Coucheney Pierre
    • Coupechoux Marceau
    IEEE Transactions on Mobile Computing, Institute of Electrical and Electronics Engineers, 2020, 19 (12), pp.2761-2773. We propose a distributed learning algorithm for the resource allocation problem in Device-to-Device (D2D) wireless networks that takes into account the throughput estimation noise. We first formulate a stochastic optimization problem with the objective of maximizing the generalized alpha fair function of the network. In order to solve it distributively, we then define and use the framework of noisy-potential games. In this context, we propose a distributed Binary Log-linear Learning Algorithm (BLLA) that converges to a Nash equilibrium of the resource allocation game, which is also an optimal resource allocation for the optimization problem. A key enabler for the convergence analysis is the proposed set of rules for computing the resistance of trees of perturbed Markov chains. The convergence of BLLA is proved for bounded and unbounded noise, with fixed and decreasing temperature parameter. A sufficient number of estimation samples that guarantees convergence to an optimal state is also provided. Finally, we assess the performance of BLLA through extensive simulations considering both bounded and unbounded noise, and we show that BLLA achieves a higher sum data rate compared to the state-of-the-art. (10.1109/TMC.2019.2936345)
    DOI : 10.1109/TMC.2019.2936345
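The logit switching rule at the heart of binary log-linear learning can be sketched on a toy potential game. The function names, the two-player coordination utilities, the Gaussian noise model, and the fixed temperature below are illustrative assumptions; the paper's algorithm additionally handles temperature schedules and estimation sample sizes:

```python
import math
import random

def blla_step(actions, utilities, action_sets, tau, noise=0.0):
    """One iteration of binary log-linear learning (sketch).

    A single player is picked at random, compares its current action with one
    random alternative, and switches with a logit probability driven by
    (possibly noisy) utility estimates and temperature tau.
    """
    i = random.randrange(len(actions))
    trial = random.choice(action_sets[i])
    u_cur = utilities(i, actions) + random.gauss(0, noise)   # noisy estimate
    alt = list(actions)
    alt[i] = trial
    u_alt = utilities(i, alt) + random.gauss(0, noise)
    p_switch = 1.0 / (1.0 + math.exp((u_cur - u_alt) / tau))
    if random.random() < p_switch:
        actions[i] = trial
    return actions

# Toy potential game: both players prefer to coordinate on action 1.
def utilities(i, actions):
    return 1.0 if actions[0] == actions[1] == 1 else 0.0

random.seed(0)
actions = [0, 0]
for _ in range(2000):
    blla_step(actions, utilities, [[0, 1], [0, 1]], tau=0.05, noise=0.01)
print(actions)  # with low temperature, settles on the potential maximizer [1, 1]
```

At low temperature the chain concentrates on maximizers of the potential, which is what connects the Nash equilibrium of the game to the optimum of the allocation problem.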
  • Data Transmission Based on Exact Inverse Periodic Nonlinear Fourier Transform, Part II: Waveform Design and Experiment
    • Goossens Jan-Willem
    • Hafermann Hartmut
    • Jaouën Yves
    Journal of Lightwave Technology, Institute of Electrical and Electronics Engineers (IEEE)/Optical Society of America(OSA), 2020, 38 (23), pp.6520-6528. (10.1109/JLT.2020.3013163)
    DOI : 10.1109/JLT.2020.3013163
  • Energy trading marketplace using Ethereum private network
    • Son Dongmin
    • Al Zahr Sawsan
    • Memmi Gerard
    , 2020. In this paper, we evaluate and analyze the performance of a local electricity market for energy trading that we implemented on the Ethereum platform. The energy trading is based on a double auction with multiple sellers and multiple buyers, and the matched price and volume are determined by a trade reduction mechanism. We benchmark the performance of Ethereum using a systematic blockchain performance evaluation method, and based on this, we propose and analyze an efficient market operation method. In particular, we relate the limits on the scalability and real-time performance of the market to the throughput and latency of the Ethereum platform. We also identify the minimum resources necessary to operate an Ethereum client.
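The matching step of the double auction above can be illustrated with the standard trade reduction rule. This is a generic sketch under our own naming and pricing conventions; the paper's exact mechanism may differ in details:

```python
def trade_reduction_match(bids, asks):
    """Match buyers and sellers with the trade reduction mechanism (sketch).

    Buyers are sorted by decreasing bid, sellers by increasing ask. Let k be
    the number of efficient trades (pairs with bid >= ask); the first k-1
    pairs trade, buyers paying the k-th bid and sellers receiving the k-th
    ask. Dropping the breakeven trade is what makes truthful bidding safe.
    """
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k < 2:
        return []  # fewer than two efficient trades: nothing executes
    buyer_price, seller_price = bids[k - 1], asks[k - 1]
    return [(buyer_price, seller_price) for _ in range(k - 1)]

trades = trade_reduction_match(bids=[10, 9, 7, 3], asks=[2, 4, 6, 8])
print(trades)  # two trades execute: buyers pay 7, sellers receive 6
```

Note the mechanism is not budget-balanced in general: the gap between buyer and seller prices is surplus retained by the market, which is one of the design trade-offs a smart-contract implementation has to account for.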
  • Parametric versus nonparametric: The fitness coefficient
    • Portier François
    • Mazo Gildas
    Scandinavian Journal of Statistics, Wiley, 2020, 10 (1). Olkin and Spiegelman introduced a semiparametric estimator of the density defined as a mixture between the maximum likelihood estimator and the kernel density estimator. Due to the absence of any leave-one-out strategy and the hardness of estimating the Kullback–Leibler loss of kernel density estimate, their approach produces unsatisfactory results. This article investigates an alternative approach in which only the kernel density estimate is modified. From a theoretical perspective, the estimated mixture parameter is shown to converge in probability to one if the parametric model is true and to zero otherwise. From a practical perspective, the utility of the approach is illustrated on real and simulated data sets. (10.1111/sjos.12495)
    DOI : 10.1111/sjos.12495
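The idea of weighting a parametric fit against a kernel density estimate can be sketched as follows. The leave-one-out construction and the grid search over the mixture weight are our illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def mixture_weight(x, parametric_pdf, bandwidth, grid=np.linspace(0, 1, 101)):
    """Estimate lambda in  f = lambda * parametric + (1 - lambda) * kernel.

    A leave-one-out Gaussian kernel density estimate is used so that the
    likelihood criterion is not trivially maximized at lambda = 0.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = (x[:, None] - x[None, :]) / bandwidth
    kern = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(kern, 0.0)                      # leave-one-out
    kde = kern.sum(axis=1) / ((n - 1) * bandwidth)
    par = parametric_pdf(x)
    scores = [np.sum(np.log(lam * par + (1 - lam) * kde + 1e-300))
              for lam in grid]
    return grid[int(np.argmax(scores))]

std_normal = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)
rng = np.random.default_rng(0)
# Data from the parametric model vs. clearly bimodal data:
lam_true = mixture_weight(rng.standard_normal(400), std_normal, 0.4)
lam_false = mixture_weight(
    np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)]),
    std_normal, 0.4)
print(lam_true, lam_false)  # weight collapses toward 0 when the model is wrong
```

This mirrors the theoretical result quoted in the abstract: the mixture parameter tends to one when the parametric model is true and to zero otherwise.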
  • Subjective and objective quality assessment of the softcast video transmission scheme
    • Trioux Anthony
    • Valenzise Giuseppe
    • Cagnazzo Marco
    • Kieffer Michel
    • Coudoux François-Xavier
    • Corlay Patrick
    • Gharbi M
    , 2020. SoftCast-based linear video coding and transmission (LVCT) schemes have been proposed as a promising alternative to traditional video coding and transmission schemes in wireless environments. Currently, the performance of LVCT schemes is evaluated by means of traditional objective scores such as PSNR or SSIM. Nevertheless, since the compression is performed in a very different way from traditional coding schemes such as HEVC, visual artifacts are also quite different and deserve to be subjectively assessed. In this paper, we propose a subjective quality assessment of SoftCast, the pioneering and reference LVCT scheme. This study aims to better understand the trade-offs between the LVCT parameters that can be tuned to improve quality. These parameters, including different GoP-sizes, Compression Ratios (CR), and Channel Signal-to-Noise Ratios (CSNR), are used to generate a dataset of 85 videos. A Double Stimulus Impairment Scale (DSIS) test is performed on the received videos to assess the perceived quality. Results show that the key characteristic of SoftCast, the linear relation between CSNR and PSNR, is also observed with the Mean Opinion Scores (MOS), except at high CSNR where the quality saturates. In addition, the Bjøntegaard model is used to quantify the trade-offs between CR, GoP-size, and CSNR, depending on the intended application. Finally, the performance of objective metrics compared to the obtained MOS is evaluated. Results show that the Multi-Scale SSIM (MS-SSIM), SSIM, and Video Multimethod Assessment Fusion (VMAF) metrics offer the best correlation with the MOS values. (10.1109/vcip49819.2020.9301778)
    DOI : 10.1109/vcip49819.2020.9301778
  • Fifth special issue on knowledge discovery and business intelligence
    • Cortez Paulo
    • Bifet Albert
    Expert Systems, Wiley, 2020, 37 (6), pp.e12628/1-3. Artificial Intelligence (AI) is impacting our world. In the 1970s and 1980s, Expert Systems (ES) consisted of AI systems that included explicit knowledge, often represented in a symbolic form (e.g., by using the Prolog language), that was extracted from human experts. Since then, there has been an AI shift, due to three main phenomena (Darwiche, 2018): data explosion, with availability of several big data sources (e.g., social media, sensor data); computational power growth, following the famous Moore's law which states that computer processing capacity doubles every 2 years; and the rise of sophisticated statistical and optimization techniques, including deep learning. Thus, rather than being expert-driven, ES have become more data-driven, with the focus on developing "computerized systems that use AI techniques to solve a specific real-world domain application task" (Cortez, Moro, Rita, King, & Hall, 2018). (10.1111/EXSY.12628)
    DOI : 10.1111/EXSY.12628
  • Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/pkZ
    • Abspoel Mark
    • Cramer Ronald
    • Damgård Ivan
    • Escudero Daniel
    • Rambaud Matthieu
    • Xing Chaoping
    • Yuan Chen
    , 2020, 12493, pp.151-180. (10.1007/978-3-030-64840-4_6)
    DOI : 10.1007/978-3-030-64840-4_6
  • Data Transmission Based on Exact Inverse Periodic Nonlinear Fourier Transform, Part I: Theory
    • Goossens Jan-Willem
    • Hafermann Hartmut
    • Jaouën Yves
    Journal of Lightwave Technology, Institute of Electrical and Electronics Engineers (IEEE)/Optical Society of America(OSA), 2020, 38 (23), pp.6499-6519. (10.1109/JLT.2020.3013148)
    DOI : 10.1109/JLT.2020.3013148
  • A Hybrid Model-Based and Data-Driven Approach to Spectrum Sharing in mmWave Cellular Networks
    • Ghadikolaei Hossein
    • Ghauch Hadi
    • Fodor Gabor
    • Skoglund Mikael
    • Fischione Carlo
    IEEE Transactions on Cognitive Communications and Networking, IEEE, 2020, 6 (4), pp.1269-1282. (10.1109/TCCN.2020.2981031)
    DOI : 10.1109/TCCN.2020.2981031
  • Perfect failure detection with very few bits.
    • Fraigniaud Pierre
    • Rajsbaum Sergio
    • Travers Corentin
    • Kuznetsov Petr
    • Rieutord Thibault
    Information and Computation, Elsevier, 2020, 275, pp.104604. A failure detector is a distributed oracle that provides each process with a module that continuously outputs an estimate of which processes in the system have failed. The perfect failure detector provides accurate and eventually complete information about process failures. We show that, in asynchronous failure-prone message-passing systems, perfect failure detection can be achieved using an oracle that outputs at most ⌈log α(n)⌉ + 1 bits per process in n-process systems, where α denotes the inverse-Ackermann function. This result is essentially optimal, as we also show that, in the same environment, no failure detector outputting a constant number of bits per process can achieve perfect failure detection. (10.1016/j.ic.2020.104604)
    DOI : 10.1016/j.ic.2020.104604