
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The PhD theses of LTCI doctoral graduates are available on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2019

  • SysML Model Transformation for Safety and Security Analysis
    • Ameur-Boulifa Rabéa
    • Lugou Florian
    • Apvrille Ludovic
    , 2019, pp.35-49.
  • Metropolis-Hastings Algorithms for Estimating Betweenness Centrality
    • Chehreghani Mostafa Haghir
    • Abdessalem Talel
    • Bifet Albert
    , 2019. Recently, an optimal probability distribution was proposed to sample vertices for estimating betweenness centrality, that yields the minimum approximation error. However, it is computationally expensive to directly use it. In this paper, we investigate exploiting the Metropolis-Hastings technique to sample based on this distribution. As a result, first, given a network G and a vertex r ∈ V(G), we propose a Metropolis-Hastings MCMC algorithm that samples from the space V(G) and estimates the betweenness score of r. The stationary distribution of our MCMC sampler is the optimal distribution. We also show that our MCMC sampler provides an (ϵ, δ)-approximation. Then, given a network G and a set R ⊂ V(G), we present a Metropolis-Hastings MCMC sampler that samples from the joint space R and V(G) and estimates the relative betweenness scores of the vertices in R. We show that for any pair r_i, r_j ∈ R, the ratio of the expected values of the estimated relative betweenness scores of r_i and r_j with respect to each other is equal to the ratio of their betweenness scores. We also show that our joint-space MCMC sampler provides an (ϵ, δ)-approximation of the relative betweenness score of r_i with respect to r_j. (10.5441/002/edbt.2019.87)
    DOI : 10.5441/002/edbt.2019.87
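The Metropolis-Hastings construction in the abstract above can be sketched in a few lines: a random walk over the vertex set with a uniform-neighbor proposal and a Hastings acceptance correction. In this sketch the graph and the `score` table are hypothetical stand-ins; the paper's actual target is the optimal betweenness-sampling distribution, whose exact form is not reproduced here.

```python
import random

random.seed(0)

# Toy undirected graph as an adjacency list (hypothetical example, not from the paper).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3],
}

# Stand-in for the (unnormalized) target distribution over vertices;
# the paper uses an optimal betweenness-related distribution instead.
score = {0: 1.0, 1: 4.0, 2: 4.0, 3: 3.0, 4: 1.0}

def mh_vertex_sampler(graph, score, steps, start=0):
    """Metropolis-Hastings walk whose stationary law is score (normalized)."""
    counts = {v: 0 for v in graph}
    u = start
    for _ in range(steps):
        v = random.choice(graph[u])  # propose a uniformly random neighbor
        # Hastings correction for the asymmetric uniform-neighbor proposal.
        accept = (score[v] / score[u]) * (len(graph[u]) / len(graph[v]))
        if random.random() < min(1.0, accept):
            u = v
        counts[u] += 1
    return {v: c / steps for v, c in counts.items()}

freq = mh_vertex_sampler(graph, score, steps=200_000)
```

With enough steps, the empirical visit frequencies approach `score` normalized by its total mass, which is exactly the property the paper exploits to sample from the optimal distribution without computing its normalizing constant.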
  • Constant-Delay Enumeration for Nondeterministic Document Spanners
    • Amarilli Antoine
    • Bourhis Pierre
    • Mengel Stefan
    • Niewerth Matthias
    , 2019. We consider the information extraction framework known as document spanners, and study the problem of efficiently computing the results of the extraction from an input document, where the extraction task is described as a sequential variable-set automaton (VA). We pose this problem in the setting of enumeration algorithms, where we can first run a preprocessing phase and must then produce the results with a small delay between any two consecutive results. Our goal is to have an algorithm which is tractable in combined complexity, i.e., in the sizes of the input document and the VA, while ensuring the best possible data complexity bounds in the input document size, i.e., constant delay in the document size. Several recent works at PODS'18 proposed such algorithms but with linear delay in the document size or with an exponential dependency on the size of the (generally nondeterministic) input VA. In particular, Florenzano et al. suggest that our desired runtime guarantees cannot be met for general sequential VAs. We refute this and show that, given a nondeterministic sequential VA and an input document, we can enumerate the mappings of the VA on the document with the following bounds: the preprocessing is linear in the document size and polynomial in the size of the VA, and the delay is independent of the document and polynomial in the size of the VA. The resulting algorithm thus achieves tractability in combined complexity and the best possible data complexity bounds. Moreover, it is rather easy to describe, in particular for the restricted case of so-called extended VAs. (10.4230/LIPIcs.ICDT.2019.22)
    DOI : 10.4230/LIPIcs.ICDT.2019.22
  • An Experimental Study of the Treewidth of Real-World Graph Data
    • Maniu Silviu
    • Senellart Pierre
    • Jog Suraj
    , 2019, pp.18. Treewidth is a parameter that measures how tree-like a relational instance is, and whether it can reasonably be decomposed into a tree. Many computation tasks are known to be tractable on databases of small treewidth, but computing the treewidth of a given instance is intractable. This article is the first large-scale experimental study of treewidth and tree decompositions of real-world database instances (25 datasets from 8 different domains, with sizes ranging from a few thousand to a few million vertices). The goal is to determine which data, if any, can benefit from the wealth of algorithms for databases of small treewidth. For each dataset, we obtain upper and lower bound estimates of their treewidth, and study the properties of their tree decompositions. We show in particular that, even when treewidth is high, using partial tree decompositions can result in data structures that can assist algorithms. (10.4230/LIPIcs.ICDT.2019.12)
    DOI : 10.4230/LIPIcs.ICDT.2019.12
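The kind of treewidth upper bound studied above can be computed with classic elimination heuristics. The sketch below is the standard greedy min-degree heuristic (a generic textbook method, not necessarily the tooling used in the study), run on a toy 5-cycle rather than a real database instance.

```python
# Greedy min-degree elimination: a classic heuristic yielding an UPPER bound
# on the treewidth of a graph (the true treewidth may be smaller).
def treewidth_upper_bound(adj):
    g = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    width = 0
    while g:
        v = min(g, key=lambda u: len(g[u]))  # eliminate a minimum-degree vertex
        nbrs = g[v]
        width = max(width, len(nbrs))        # this elimination's bag has |nbrs|+1 vertices
        for a in nbrs:                       # turn the neighborhood into a clique
            g[a].update(nbrs - {a})
            g[a].discard(v)
        del g[v]
    return width

# Toy input: a 5-cycle, whose treewidth is exactly 2; the study's datasets
# range from thousands to millions of vertices.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
bound = treewidth_upper_bound(cycle5)  # → 2
```

The elimination order also induces a tree decomposition (one bag per eliminated vertex plus its remaining neighbors), which is how upper-bound heuristics double as decomposition builders.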
  • A New Entropy for Hypergraphs
    • Bloch Isabelle
    • Bretto Alain
    , 2019, pp.143-154.
  • Impairment-aware design and performance evaluation of all-optical wavelength convertible networks
    • Chouman Hussein
    , 2019. The continuous growth of Internet traffic implies increased power consumption due to the many optical-to-electronic (OEO) conversions required by routers and switches. Transparent networks could curb this uncontrolled growth, but keeping the data in the optical domain has two adverse consequences: the accumulation of physical-layer impairments, which strongly degrades performance due to amplification noise and non-linearities; and the wavelength continuity constraint (WCC), which keeps the optical signal's wavelength unchanged in wavelength-division-multiplexed (WDM) optical networks and degrades network blocking performance. Wavelength converters (WCs) can alleviate the WCC, but the only commercially available devices are OEO-based WCs (OEO-WCs), whose cost increases with bit rate. All-optical wavelength converters (AO-WCs), on the other hand, have been demonstrated in research laboratories, albeit with a limited conversion range and a performance penalty that degrades the converted signal's quality. In this thesis, we design the transmission layer using two sets of modulation formats with different bit-rate ranges, and consequently different performance estimation models. At the network level, our analyses show that the WCs' contribution depends on the order in which traffic demands are served under the online traffic assumption, and that with fixed-alternate routing (FAR) or least-loaded routing (LLR) algorithms and the first-fit (FF) wavelength assignment algorithm, AO-WCs yield the same performance enhancement as OEO-WCs. Moreover, we identify an optimal AO-WC conversion range and cascadability, and show that LLR requires fewer conversions per channel than FAR.
  • Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach
    • Clémençon Stéphan
    • Achab Mastane
    • Korba Anna
    , 2019, 98, pp.1-30. Whereas most dimensionality reduction techniques (e.g. PCA, ICA, NMF) for multivariate data essentially rely on linear algebra to a certain extent, summarizing ranking data, viewed as realizations of a random permutation $\Sigma$ on a set of items indexed by $i\in \{1,\ldots,\; n\}$, is a great statistical challenge, due to the absence of vector space structure for the set of permutations $\mathfrak{S}_n$. It is the goal of this article to develop an original framework for possibly reducing the number of parameters required to describe the distribution of a statistical population composed of rankings/permutations, on the premise that the collection of items under study can be partitioned into subsets/buckets, such that, with high probability, items in a certain bucket are either all ranked higher or else all ranked lower than items in another bucket. In this context, $\Sigma$'s distribution can hopefully be represented in a sparse manner by a bucket distribution, i.e. a bucket ordering plus the ranking distributions within each bucket. More precisely, we introduce a dedicated distortion measure, based on a mass transportation metric, in order to quantify the accuracy of such representations. The performance of buckets minimizing an empirical version of the distortion is investigated through a rate bound analysis. Complexity penalization techniques are also considered to select the shape of a bucket order with minimum expected distortion. Beyond theoretical concepts and results, numerical experiments on real ranking data are displayed in order to provide empirical evidence of the relevance of the promoted approach.
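The bucket-representation idea above can be illustrated with a toy score: rate a candidate bucket order by the average number of item pairs whose observed ranking contradicts it. This pairwise-disagreement count is a deliberately simplified surrogate, not the mass-transportation distortion defined in the paper, and the rankings below are made up.

```python
from itertools import combinations

# Observed rankings (most-preferred item first) over four hypothetical items.
rankings = [
    ("a", "b", "c", "d"),
    ("b", "a", "c", "d"),
    ("a", "c", "b", "d"),
]

# Candidate bucket order: items in the first bucket should precede the second.
buckets = [{"a", "b"}, {"c", "d"}]

def cross_bucket_disagreements(ranking, buckets):
    """Count pairs of items in different buckets ranked against the bucket order."""
    level = {item: i for i, b in enumerate(buckets) for item in b}
    pos = {item: r for r, item in enumerate(ranking)}
    return sum(
        1
        for x, y in combinations(level, 2)
        if level[x] != level[y] and (level[x] < level[y]) != (pos[x] < pos[y])
    )

# Average disagreement over the sample: an empirical distortion surrogate.
distortion = sum(cross_bucket_disagreements(r, buckets) for r in rankings) / len(rankings)
```

Within-bucket swaps (like the first two rankings) cost nothing, which mirrors the paper's premise that only the ordering *between* buckets needs to be faithful.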
  • Digital Predistortion for Wideband 5G Transmitters
    • Pham Dang-Kièn Germain
    , 2019.
  • Improving quality of experience in multimedia streaming by leveraging Information-Centric Networking
    • Samain Jacques
    , 2019. Information-Centric Networking (ICN) is a promising architecture to address today's Internet multimedia traffic explosion and increasing user mobility: not only to enhance the user's quality of experience, but also to naturally and seamlessly extend video support deeper into the network functions. However, to the best of our knowledge, a thorough assessment of the benefits brought by ICN to multimedia delivery has not yet been done. In this thesis, we aim at closing that gap by considering ICN in various multimedia delivery scenarios. First, we assess the benefits brought by ICN-based Dynamic Adaptive Streaming (DAS) compared to TCP/IP-based streaming, by means of an experimental campaign that includes multiple channels (e.g., emulated Wi-Fi and LTE, real 3G/4G traces), multiple clients (homogeneous vs heterogeneous mixtures, synchronous vs asynchronous arrivals) and carefully selected DAS adaptation logics covering the broad families of available adaptation algorithms. We also warn about potential pitfalls that are nonetheless easily avoidable. Second, we show how network assistance helps improve the users' quality of experience. To do so, we leverage the in-network caching feature of ICN and propose a simple periodic network signal from the cache (i.e., the per-quality hit ratio) to be exploited by the DAS adaptation logic to further enhance the user's quality of experience by avoiding the known cache-induced quality oscillations. We confirm the soundness of our approach through experiments. Finally, as live multimedia delivery is gaining momentum, we propose hICN-RTC, which integrates hICN (hybrid ICN), an ICN-in-IP solution, into WebRTC, and we design RICTP (Realtime Information Centric Transport Protocol), a content-aware transport that minimizes communication latency. Although still in development, the results we gathered from early experiments are promising: they show that hICN-RTC scales with the number of active speakers rather than the total number of participants.
  • Novel memory and I/O virtualization techniques for next generation data-centers based on disaggregated hardware
    • Bielski Maciej
    , 2019. This dissertation is positioned in the context of system disaggregation, a novel approach expected to gain popularity in the data center sector. In traditional clustered systems, resources are provided by one or multiple machines. By contrast, in disaggregated systems resources are provided by discrete nodes, each node providing only one type of resource (CPUs, memory or peripherals). Instead of a machine, the term slot is used to describe a workload deployment unit; the slot is dynamically assembled before a workload deployment by a unit called the system orchestrator. In the introduction of this work, we discuss the subject of disaggregation and present its benefits compared to clustered architectures. We also add a virtualization layer to the picture, as it is a crucial part of data center systems: it provides isolation between deployed workloads and flexible resource partitioning. However, the virtualization layer needs to be adapted in order to take full advantage of disaggregation. Thus, the main contributions of this work focus on virtualization-layer support for disaggregated memory and device provisioning. The first main contribution presents the software stack modifications related to flexible resizing of virtual machine (VM) memory. They make it possible to adjust the amount of guest (running in a VM) RAM at runtime at memory-section granularity; from the software perspective it is transparent whether the sections come from local or remote memory banks. As a second main contribution, we discuss the notions of inter-VM memory sharing and VM migration in the disaggregation context. We first present how regions of disaggregated memory can be shared between VMs running on different nodes; this sharing is performed in such a way that the guests involved need not be aware of whether or not they are co-located on the same computing node. Additionally, we discuss different flavors of serialization methods for concurrent accesses. We then explain how the term VM migration gains a twofold meaning: because of resource disaggregation, a workload is associated with at least one computing node and one memory node, so it can be migrated to a different computing node while keeping the same memory, or the opposite. We discuss both cases and describe how this can open new opportunities for server consolidation. The last main contribution of this dissertation concerns the virtualization of disaggregated peripherals. Starting from the assumption that architecture disaggregation brings many positive effects in general, we explain why it breaks the passthrough peripheral attachment technique (also known as direct attachment), which is very popular for its near-native performance. To address this limitation, we present a design that adapts the passthrough attachment concept to architecture disaggregation. With this novel design, disaggregated devices can be directly attached to VMs as if they were plugged in locally. Moreover, none of the modifications involve the guest OS itself, to which the setup of the underlying infrastructure remains invisible.
  • Learning representations for robust audio-visual scene analysis
    • Parekh Sanjeel
    , 2019. The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint analysis of audio and visual data. This is inspired by the remarkable ability of humans to integrate auditory and visual cues to improve their understanding of noisy scenarios. To this end, we rely on two types of natural associations between the modalities of audiovisual recordings (made with a single microphone and a single camera), namely motion/audio correlation and appearance/audio co-occurrence. In the first case, we use audio source separation as the main application and propose two new methods within the classical non-negative matrix factorization (NMF) framework. The central idea is to exploit the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on flexible coupling between audio and motion representations capturing temporal variations, while the second relies on cross-modal regression. Using these approaches, we separated several challenging mixtures of string instruments into their constituent sources. To identify and extract many commonly encountered objects, we exploit appearance/audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are neither visible nor available. The problem is addressed in a weakly supervised setting in which we propose a representation learning framework for robust audiovisual event classification, visual object localization, audio event detection and source separation. We extensively tested the proposed ideas on public datasets. These experiments make it possible to draw connections with intuitive, multimodal phenomena that humans use in their process of understanding audiovisual scenes.
  • Radio Frequency Electromagnetic Fields Exposure Assessment in Indoor Environments: A Review
    • Chiaramello Emma
    • Bonato Marta
    • Fiocchi Serena
    • Tognola Gabriella
    • Parazzini Marta
    • Ravazzani Paolo
    • Wiart Joe
    International Journal of Environmental Research and Public Health, MDPI, 2019, 16 (6), pp.955. Exposure to radiofrequency (RF) electromagnetic fields (EMFs) in indoor environments depends on both outdoor sources such as radio, television and mobile phone antennas and indoor sources, such as mobile phones and wireless communications applications. Establishing the levels of exposure could be challenging due to differences in the approaches used in different studies. The goal of this study is to present an overview of the last ten years' research efforts about RF EMF exposure in indoor environments, considering different RF-EMF sources found to cause exposure in indoor environments, different indoor environments and different approaches used to assess the exposure. The highest maximum mean levels of the exposure considering the whole RF-EMF frequency band were found in offices (1.14 V/m) and in public transports (0.97 V/m), while the lowest levels of exposure were observed in homes and apartments, with mean values in the range 0.13–0.43 V/m. The contribution of different RF-EMF sources to the total level of exposure was found to show slightly different patterns among the indoor environments, but this finding has to be considered as a time-dependent picture of the continuously evolving exposure to RF-EMF. (10.3390/ijerph16060955)
    DOI : 10.3390/ijerph16060955
  • Virtualization YANG Service Model (VYSM)
    • Shytyi Dmytro
    • Iannone Luigi
    • Beylier Laurent
    Internet Engineering Task Force, IETF, 2019. This document provides a specification of the Virtual Network Functions YANG Service Model (VYSM). The VNF YANG Service Model serves as a base framework for managing a universal Customer-Premises Equipment (uCPE) NFV subsystem from the Orchestrator.
  • High availability of VNF Orchestrator
    • Shytyi Dmytro
    • Iannone Luigi
    , 2019. Network Function Virtualization allows the transformation from proprietary physical hardware to general-purpose servers running virtual services. Depending on the available resources in the network, virtual services (functions) can be placed accordingly. A special entity is required to place functions and manage resources in the network: the Orchestrator. Using an Orchestrator raises the question of which action to undertake in case of orchestrator failure, which translates into questions such as: where should orchestrator instances be placed? How should failovers be managed? How many instances should be deployed? Since the Orchestrator is itself at risk of failure, this paper considers the failover and placement problems in order to provide high availability.
  • Computing and Explaining Query Answers over Inconsistent DL-Lite Knowledge Bases
    • Bienvenu Meghyn
    • Bourgaux Camille
    • Goasdoué François
    Journal of Artificial Intelligence Research, Association for the Advancement of Artificial Intelligence, 2019, 64, pp.563-644. Several inconsistency-tolerant semantics have been introduced for querying inconsistent description logic knowledge bases. The first contribution of this paper is a practical approach for computing the query answers under three well-known such semantics, namely the AR, IAR and brave semantics, in the lightweight description logic DL-Lite R. We show that query answering under the intractable AR semantics can be performed efficiently by using IAR and brave semantics as tractable approximations and encoding the AR entailment problem as a propositional satisfiability (SAT) problem. The second issue tackled in this work is explaining why a tuple is a (non-)answer to a query under these semantics. We define explanations for positive and negative answers under the brave, AR and IAR semantics. We then study the computational properties of explanations in DL-Lite R. For each type of explanation, we analyze the data complexity of recognizing (preferred) explanations and deciding if a given assertion is relevant or necessary. We establish tight connections between intractable explanation problems and variants of SAT, enabling us to generate explanations by exploiting solvers for Boolean satisfaction and optimization problems. Finally, we empirically study the efficiency of our query answering and explanation framework using a benchmark we built upon the well-established LUBM benchmark. (10.1613/jair.1.11395)
    DOI : 10.1613/jair.1.11395
  • Time-frequency analysis of locally stationary Hawkes processes
    • Roueff François
    • von Sachs Rainer
    Bernoulli, Bernoulli Society for Mathematical Statistics and Probability, 2019, 25 (2), pp.1355-1385. Locally stationary Hawkes processes have been introduced in order to generalise classical Hawkes processes away from stationarity by allowing for a time-varying second-order structure. This class of self-exciting point processes has recently attracted a lot of interest in applications in the life sciences (seismology, genomics, neuroscience, ...), but also in the modelling of high-frequency financial data. In this contribution we provide a fully developed nonparametric estimation theory of both local mean density and local Bartlett spectra of a locally stationary Hawkes process. In particular we apply our kernel estimation of the spectrum localised both in time and frequency to two data sets of transaction times revealing pertinent features in the data that had not been made visible by classical non-localised approaches based on models with constant fertility functions over time. (10.3150/18-BEJ1023)
    DOI : 10.3150/18-BEJ1023
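For readers unfamiliar with Hawkes processes, the classical (stationary) self-exciting model underlying the paper above can be simulated with Ogata's thinning algorithm. The sketch below uses an exponential fertility kernel with made-up parameters; the locally stationary setting studied in the paper additionally lets the baseline and kernel vary with time.

```python
import math
import random

random.seed(1)

def simulate_hawkes(mu, alpha, beta, horizon):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i))."""
    def intensity(t, events):
        return mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)

    t, events = 0.0, []
    while True:
        lam_bar = intensity(t, events)  # valid bound: intensity decays until the next jump
        t += random.expovariate(lam_bar)
        if t > horizon:
            return events
        if random.random() * lam_bar <= intensity(t, events):
            events.append(t)            # accepted point (thinning step)

# Branching ratio alpha/beta = 1/3 < 1 keeps the process stable; the expected
# count over [0, T] is roughly mu * T / (1 - alpha/beta), i.e. about 750 here.
events = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.5, horizon=500.0)
```

The self-excitation is visible in the clustering of accepted event times; a constant fertility function over time is exactly the assumption that the paper's localised spectral analysis relaxes.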
  • Efficiency Versus Creativity as Organizing Principles of Socio-Technical Systems: Why Do We Build (Intelligent) Systems? [Commentary]
    • Diaconescu Ada
    IEEE Technology and Society Magazine, Institute of Electrical and Electronics Engineers, 2019, 38 (1), pp.13-22. Considering the vigorous drive to insert ever more artificial intelligence (AI) enabled technology into modern societies - from household appliances and personal assistants to business planning and guidance systems - it seems urgent to ensure that the kinds of AI we develop and deploy provide useful tools in the service of humanity, rather than constraining frameworks to limit humanity. In other words, we need "AI for humanity", rather than humanity for AI; or, "human centered AI" rather than "function oriented AI". This challenge is hardly confined to the AI domain. It extends to most computer-based systems, and, indeed, to all technology. This article aims to bring to the fore: the implicit values behind current technological developments - mostly efficiency-driven; the potential negative impacts of unquestioned technological developments and usage - e.g., the totalizing supremacy of quantity over quality; alternative ways of developing and adopting technology - e.g., as tools rather than controllers; and the necessity to permanently analyze, evaluate and alter technical systems during development, before adoption, and as their side effects become obvious. It also aims to emphasise that criticizing certain kinds of technologies is not at all equivalent to being a technophobe, or against progress. That would be like equating a critique of fast food to an unnatural stance against eating. Finally, technical developments cannot be considered in isolation. They are a key part of a self-promoting system of market-driven production and sociopolitical transformation. (10.1109/MTS.2019.2894455)
    DOI : 10.1109/MTS.2019.2894455
  • Autonomous data collection for Disaster management: location aspects
    • Tanzi Tullio Joseph
    • Chandra Madhu
    , 2019.
  • GPR Autonomous system for Disaster management: location aspects for beam forming
    • Chandra Madhu
    • Tanzi Tullio Joseph
    , 2019.
  • Fast and Slow Machine Learning
    • Montiel López Jacob
    , 2019. The Big Data era has revolutionized the way data is created and processed. In this context, many challenges arise, given the enormous amount of available data that must be efficiently managed and processed in order to extract knowledge. This thesis explores the symbiosis of batch and stream learning, traditionally considered antagonistic in the literature, on the problem of classification from evolving data streams. Batch learning is a well-established approach based on a finite sequence: first the data is collected, then predictive models are created, and finally the model is applied. By contrast, stream learning regards data as unbounded, turning the learning problem into a continuous (never-ending) task. Moreover, data streams can evolve over time, meaning that the relationship between the features and the corresponding response may change. We propose a systematic framework for predicting over-indebtedness, a real-world problem with important implications in modern society. Both versions of the early-warning mechanism (batch and stream) outperform the baseline performance of the solution deployed by Groupe BPCE, the second-largest banking institution in France. In addition, we introduce a scalable model-based imputation method for missing data in classification, which casts the imputation problem as a set of classification/regression tasks solved incrementally. We present a unified framework that serves as a common learning platform where batch and stream methods can interact positively, and we show that batch methods can be effectively trained in the stream setting under specific conditions. We also propose an adaptation of the Extreme Gradient Boosting algorithm to evolving data streams; the proposed adaptive method generates and updates the ensemble incrementally using mini-batches of data. Finally, we present scikit-multiflow, an open-source framework in Python that fills the gap in Python for a development/research platform for learning from evolving data streams.
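The prequential ("test-then-train") protocol that underlies evaluation on evolving data streams can be sketched with any incremental learner. Below, scikit-learn's `SGDClassifier.partial_fit` stands in for the stream methods of the thesis (scikit-multiflow itself is not used here), and the drifting stream is a synthetic linear concept that shifts abruptly mid-stream.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)  # any learner exposing partial_fit would do
classes = np.array([0, 1])

correct = total = 0
threshold = 0.0                      # the current "concept"
for t in range(2000):
    if t == 1000:
        threshold = 1.0              # abrupt concept drift mid-stream
    x = rng.normal(size=(1, 2))
    y = np.array([int(x[0, 0] + x[0, 1] > threshold)])
    if t > 0:                        # prequential: test on the sample first...
        correct += int(clf.predict(x)[0] == y[0])
        total += 1
    clf.partial_fit(x, y, classes=classes)  # ...then train on it

accuracy = correct / total
```

Because every sample is scored before it is learned, prequential accuracy reflects how quickly the model recovers after the drift point, which is the behavior the thesis's batch/stream comparisons measure.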
  • Advanced Optical Communications and Networking
    • Gallion Philippe
    , 2019.
  • AT2: Asynchronous Trustworthy Transfers
    • Guerraoui Rachid
    • Kuznetsov Petr
    • Monti Matteo
    • Pavlovic Matej
    • Seredinschi Dragos-Adrian
    Computing Research Repository, ACM / ArXiv, 2019. Many blockchain-based protocols, such as Bitcoin, implement a decentralized asset transfer (or exchange) system. As clearly stated in the original paper by Nakamoto, the crux of this problem lies in prohibiting any participant from engaging in double-spending. There seems to be a common belief that consensus is necessary for solving the double-spending problem. Indeed, whether it is for a permissionless or a permissioned environment, the typical solution uses consensus to build a totally ordered ledger of submitted transfers. In this paper we show that this common belief is false: consensus is not needed to implement a decentralized asset transfer system. We do so by introducing AT2 (Asynchronous Trustworthy Transfers), a class of consensusless algorithms. To show formally that consensus is unnecessary for asset transfers, we consider this problem first in the shared-memory context. We introduce AT2SM, a wait-free algorithm that asynchronously implements asset transfer in the read-write shared-memory model. In other words, we show that the consensus number of an asset-transfer object is one. In the message passing model with Byzantine faults, we introduce a generic asynchronous algorithm called AT2MP and discuss two instantiations of this solution. First, AT2D ensures deterministic guarantees and consequently targets a small scale deployment (tens to hundreds of nodes), typically for a permissioned environment. Second, AT2P provides probabilistic guarantees and scales well to a very large system size (tens of thousands of nodes), ensuring logarithmic latency and communication complexity. Instead of consensus, we construct AT2D and AT2P on top of a broadcast primitive with causal ordering guarantees offering deterministic and probabilistic properties, respectively.
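The double-spending invariant at the heart of the abstract above can be illustrated with a toy check: each account's outgoing transfers are validated only against that account's own history, which is why transfers by *different* senders need no total order (and hence no consensus). This sketch is purely illustrative and is not the AT2 protocol; the account names are made up.

```python
from collections import defaultdict

class ToyAssetLedger:
    """Per-sender validation: a transfer is checked only against the sender's
    causally known balance, so no global ordering across senders is required."""

    def __init__(self, initial_balances):
        self.balance = defaultdict(int, initial_balances)

    def apply(self, sender, receiver, amount):
        # Reject (rather than reorder) a would-be double spend.
        if amount <= 0 or self.balance[sender] < amount:
            return False
        self.balance[sender] -= amount
        self.balance[receiver] += amount
        return True

ledger = ToyAssetLedger({"alice": 10})
ok1 = ledger.apply("alice", "bob", 7)    # succeeds
ok2 = ledger.apply("alice", "carol", 7)  # double spend: rejected
```

In AT2 the analogous check is enforced over a causally ordered broadcast of each account's transfers, which suffices because only the sender's own operation order matters for its balance.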
  • Adaptive restart of accelerated gradient methods under local quadratic growth condition
    • Fercoq Olivier
    • Qu Zheng
    IMA Journal of Numerical Analysis, Oxford University Press (OUP), 2019. By analyzing accelerated proximal gradient methods under a local quadratic growth condition, we show that restarting these algorithms at any frequency gives a globally linearly convergent algorithm. This result was previously known only for long enough frequencies. Then, as the rate of convergence depends on the match between the frequency and the quadratic error bound, we design a scheme to automatically adapt the frequency of restart from the observed decrease of the norm of the gradient mapping. Our algorithm has a better theoretical bound than previously proposed methods for the adaptation to the quadratic error bound of the objective. We illustrate the efficiency of the algorithm on a Lasso problem and on a regularized logistic regression problem. (10.1093/imanum/drz007)
    DOI : 10.1093/imanum/drz007
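The restart idea described above can be sketched for the Lasso with a standard FISTA loop that resets its momentum whenever the norm of the gradient mapping stops decreasing. This is a simplified sketch of the general principle, not the authors' exact adaptive scheme, and the problem data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 100
A = rng.normal(size=(n, d))
x_true = np.zeros(d)
x_true[:5] = rng.normal(size=5)               # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=n)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part

def prox_l1(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_with_restart(iters=500):
    x = np.zeros(d)
    y = x.copy()
    theta = 1.0
    prev_gm = np.inf
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_next = prox_l1(y - grad / L, lam / L)
        gm = np.linalg.norm(y - x_next)       # gradient-mapping norm (up to a factor L)
        if gm > prev_gm:                      # restart heuristic: reset the momentum
            y, theta, prev_gm = x.copy(), 1.0, np.inf
            continue
        theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
        y = x_next + ((theta - 1) / theta_next) * (x_next - x)
        x, theta, prev_gm = x_next, theta_next, gm
    return x

x_hat = fista_with_restart()
objective = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + lam * np.linalg.norm(x_hat, 1)
```

Monitoring the gradient mapping rather than the objective is what lets the restart frequency adapt to the (unknown) quadratic error bound, which is the observation the paper develops into a method with a global linear convergence guarantee.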
  • Experimental demonstration of data transmission based on the exact inverse periodic nonlinear Fourier transform
    • Goossens Jan-Willem
    • Jaouën Yves
    • Hafermann Hartmut
    , 2019.
  • Integrity Probe: Using Programmer as Root of Trust for Bare Metal Blockchain Crypto Terminal. Invited Paper
    • Urien Pascal
    , 2019, pp.1-5. (10.1109/MOBISECSERV.2019.8686637)
    DOI : 10.1109/MOBISECSERV.2019.8686637