
Publications

 

Publications by our faculty researchers are available on the HAL platform:

 

Publications from the PhD theses of LTCI doctoral graduates are available on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2017

  • Formalism to assess and enhance the entropy and reliability of a Loop-PUF
    • Danger Jean-Luc
    , 2017.
  • Méthode de Monte Carlo à dynamique hamiltonienne pour estimation d'un modèle thermique de bâtiment
    • Nabil Tahar
    • Moulines Eric
    • Jicquel Jean-Marc
    • Girard Alexandre
    • Lajaunie Christian
    , 2017.
  • One-to-One Matching of RTT and Path Changes
    • Wenqin Shao
    • Rougier Jean-Louis
    • Paris Antoine
    • Devienne Francois
    • Viste Mateusz
    , 2017. Route selection based on performance measurements is an essential task in inter-domain Traffic Engineering. It can benefit from the detection of significant changes in RTT measurements and an understanding of the potential causes of change. Among the extensive work on change detection methods and their applications in various domains, few studies focus on RTT measurements. It is thus unclear which approach works best on such data. In this paper, we present an evaluation framework for change detection on RTT time series, consisting of: 1) a carefully labelled 34,008-hour RTT dataset as ground truth; 2) a scoring method specifically tailored for RTT measurements. Furthermore, we propose a data transformation that improves the detection performance of existing methods. Path changes are addressed as well. We fix shortcomings of previous works by distinguishing path changes due to routing protocols (IGP and BGP) from those caused by load balancing. Finally, we apply our change detection methods to a large set of measurements from RIPE Atlas. The characteristics of both RTT and path changes are analyzed; the correlation between the two is also illustrated. We identify extremely frequent AS path changes with few consequences on RTT, which has not been reported before. (10.23919/ITC.2017.8064356)
    DOI : 10.23919/ITC.2017.8064356
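The entry above evaluates change-detection methods on RTT time series. As a generic illustration of change detection on such a series (a minimal sketch of a classic one-sided CUSUM detector, not the method proposed or evaluated in the paper), one might write:

```python
def cusum_detect(series, mean, threshold, drift=0.0):
    """Flag indices where a two-sided CUSUM statistic exceeds `threshold`.

    `mean` is the expected baseline RTT; `drift` is allowed slack per sample.
    Illustrative only -- not the detector proposed in the paper.
    """
    s_pos, s_neg = 0.0, 0.0
    alarms = []
    for i, x in enumerate(series):
        # Accumulate deviations above / below the baseline separately
        s_pos = max(0.0, s_pos + (x - mean) - drift)
        s_neg = max(0.0, s_neg - (x - mean) - drift)
        if s_pos > threshold or s_neg > threshold:
            alarms.append(i)
            s_pos, s_neg = 0.0, 0.0  # restart after an alarm
    return alarms

# RTTs stable around 20 ms, then a sustained upward level shift
rtts = [20.1, 19.8, 20.3, 20.0, 30.2, 30.5, 29.9, 30.1]
print(cusum_detect(rtts, mean=20.0, threshold=15.0))  # alarms at indices 5 and 7
```

The threshold trades detection delay against false alarms; the paper's labelled dataset is precisely what such a parameter would be tuned and scored against.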
  • Bayesian Collaborative Denoising for Monte Carlo Rendering
    • Boughida Malik
    • Boubekeur Tamy
    Computer Graphics Forum (Proc. EGSR 2017), 2017, 36 (4), pp.137-153. The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to solve this issue: improving the ray-tracing strategies to reduce pixel variance, providing adaptive sampling by increasing the number of rays in regions that need it, and filtering the noisy image as a post-process. Although the algorithms from the latter category introduce bias, they remain highly attractive as they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost and, for some of them, avoid deep modifications of the rendering engine. In this paper, we build upon recent advances in both non-local and collaborative filtering methods to propose a new efficient denoising operator for Monte Carlo rendering. Starting from the local statistics which emanate from the pixels' sample distribution, we enrich the image with local covariance measures and introduce a non-local Bayesian filter which is specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide for each pixel a histogram and a covariance matrix of its color samples. Compared to state-of-the-art sample-based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm. We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multiscale execution, adaptive sampling and animated scenes, and experimentally validate it on a collection of scenes.
  • Distributed Simplicial Homology Based Load Balancing Algorithm for Cellular Networks
    • Le Ngoc-Khuyen
    • Vergne Anais
    • Martins Philippe
    • Decreusefond Laurent
    , 2017. In this paper, we introduce a distributed load balancing algorithm for cellular networks. Traffic load in cellular networks is sometimes unbalanced: some cells are overloaded, while others remain free. Simplicial homology is a tool from algebraic topology that makes it possible to compute the coverage of a network using only simple matrix computations. Our algorithm, which is based on simplicial homology, controls the transmission power of each cell in the network, not only to satisfy the coverage constraint, but also to redirect users from the overloaded cells to the underloaded ones. As a result, the traffic load of the cellular network is more balanced. The simulation results show that this algorithm improves the capacity of the whole network by 2.3% when the user demand is fast-varying.
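The entry above balances load by adjusting each cell's transmission power. As a toy sketch of that power-control idea only (the paper's actual algorithm additionally verifies the coverage constraint via simplicial homology, which is omitted here; all names and parameters are illustrative):

```python
def rebalance_power(loads, powers, target, step=0.1, p_min=0.5, p_max=2.0):
    """Nudge each cell's transmission power toward a balanced load:
    overloaded cells shrink their coverage area, underloaded ones grow it.
    A hypothetical sketch, not the paper's algorithm (no coverage check).
    """
    new_powers = []
    for load, p in zip(loads, powers):
        if load > target:
            p = max(p_min, p - step)   # shed users to neighbouring cells
        elif load < target:
            p = min(p_max, p + step)   # absorb users from neighbouring cells
        new_powers.append(p)
    return new_powers

# Cell 0 overloaded (90%), cell 1 underloaded (30%): powers drift apart
print(rebalance_power([0.9, 0.3], [1.0, 1.0], target=0.6))
```

Iterating such a rule redistributes users, but without a coverage check it can open holes; that is exactly the gap the paper's homology-based constraint fills.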
  • A Scalable and Systolic Architectures of Montgomery Modular Multiplication for Public Key Cryptosystems Based on DSPs
    • El Mrabet Nadia
    • Mrabet Amine
    • Lashermes Ronan
    • Rigaud Jean-Baptiste
    • Bouallegue Belgacem
    • Mesnager Sihem
    • Machhout Mohsen
    Journal Hardware and Systems Security, Springer, 2017, 1 (3), pp.219-236. (10.1007/s41635-017-0018-x)
    DOI : 10.1007/s41635-017-0018-x
  • Max K-armed Bandits: On ExtremeHunter and an Alternative Approach
    • Achab Mastane
    • Clémençon Stéphan
    • Garivier Aurélien
    • Sabourin Anne
    • Vernade Claire
    , 2017.
  • Chaotic Dynamic of Quantum Cascade Lasers
    • Grillot Frédéric
    , 2017.
  • A Curriculum for Developing Serious Games for Children with Autism: A Success Story
    • Hulusic Vedad
    • Pistoljevic Nirvana
    , 2017. Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder, detectable early in development and characterized by a lack of socialization, delayed development of language, and patterns of rigid, repetitive, auto-stimulating behaviors that interfere with the overall functioning of a person. Due to a reduced level of attention and a different style of learning, teaching children with ASD requires a particular set of tools and methods. Studies have shown that computer-based intervention, typically in the form of serious games, can be effectively utilized for developing various skills, allowing children with disabilities both to learn with teachers and to practice on their own time, when the taught concepts are presented in a fun, informal, and engaging way. Nonetheless, there is a limited number of appropriately designed serious games for children with ASD, especially in less widespread languages native to the children. In this paper we present a complete curriculum for final-year Computer Science (CS) undergraduate students, aimed at developing web-based serious games for teaching basic concepts to children with and without autism. In addition, we present multiple outcomes of such a course taught by the authors, a computer scientist and a psychologist and special educator. We believe that the inclusion of such a curriculum in CS undergraduate programs could benefit the students, children with ASD, teachers of both groups, and the community in general.
  • International Broadcasting Convention 2017
    • van Deventer Oskar
    • Dufourd Jean-Claude
    • Oh Sejin
    • Lim Seong Yong
    • Lim Youngkwon
    • Chandramouli Krishna
    • Koenen Rob
    , 2017. The proliferation of new capabilities in affordable smart devices capable of capturing, processing and rendering audio-visual media content triggers a need for coordination and orchestration between these devices and their capabilities, and of the content flowing from and to such devices. The upcoming Moving Picture Experts Group (MPEG) Media Orchestration (‘MORE’, ISO/IEC 23001-13) standard enables the temporal and spatial orchestration of multiple media and metadata streams. Temporal orchestration is about time synchronisation of media and sensor captures, processing and renderings, for which the MORE standard uses and extends a DVB standard. Spatial orchestration is about the alignment of (global) position, altitude and orientation, for which the MORE standard provides dedicated timed metadata. Other types of orchestration involve timed metadata for region of interest, perceptual quality of media, audio-feature extraction and media timeline correlation. This study presents the status of the MORE standard, as well as associated technical and experimental support materials. The authors also link MORE to the recently initiated MPEG-I (MPEG Immersive) project.
  • Distances entre lois de probabilités définies sur R+
    • Nicolas Jean-Marie
    • Tupin Florence
    • Sportouche H.
    , 2017.
  • Transductive Attributes For Ship Category Recognition
    • Oliveau Quentin
    • Sahbi Hichem
    , 2017.
  • Spatio-temporal image analysis and regularization
    • Tupin Florence
    , 2017.
  • A Web-Based Platform for Annotating Sentiment-Related Phenomena in Human-Agent Conversations.
    • Langlet Caroline
    • Clavel Chloé
    , 2017.
  • Optical feedback dynamics of a mid-infrared semiconductor quantum cascade laser
    • Jumpertz Louise
    • Schires Kevin
    • Spitz Olivier
    • Sciamanna Marc
    • Grillot Frédéric
    , 2017.
  • Design of a Thin Ultra Wideband Metamaterial Absorber
    • Begaud Xavier
    • Varault Stefan
    • Lepage A. C.
    • Soiron M.
    • Barka André
    , 2017.
  • Electromagnetic Fault Injection: from Attack to Countermeasure Design
    • Sauvage Laurent
    , 2017.
  • RF EMF Risk Perception Revisited: Is the Focus on Concern Sufficient for Risk Perception Studies?
    • Wiedemann Peter M
    • Freudenstein Frederik
    • Böhmert Christoph
    • Wiart Joe
    • Croft Rodney J
    International Journal of Environmental Research and Public Health, MDPI, 2017, 14 (6), pp.620. An implicit assumption of risk perception studies is that concerns expressed in questionnaires reflect concerns in everyday life. The aim of the present study is to check this assumption, i.e., the extrapolability of risk perceptions expressed in a survey to risk perceptions in everyday life. To that end, risk perceptions were measured by a multidimensional approach. In addition to the traditional focus on measuring the magnitude of risk perceptions, the thematic relevance (how often people think about a risk issue) and the discursive relevance (how often people talk about or discuss a risk issue) of risk perceptions were also collected. Taking into account this extended view of risk perception, an online survey was conducted in six European countries with 2454 respondents, referring to radio frequency electromagnetic field (RF EMF) risk potentials from base stations, access points such as WiFi routers, and cell phones. The findings reveal that the present study's multidimensional approach to measuring risk perception provides a more differentiated understanding of RF EMF risk perception. High levels of concern expressed in questionnaires do not automatically imply that these concerns are thematically relevant in everyday life. We use thematic relevance to distinguish between enduringly concerned participants (high concern according to both questionnaire and thematic relevance) and not enduringly concerned participants (high concern according to the questionnaire but no thematic relevance). Furthermore, we provide data for the empirical value of this distinction: compared to other participants, enduringly concerned subjects consider radio frequency electromagnetic field exposure to a greater extent as a moral and affective issue. They also see themselves as highly exposed to radio frequency electromagnetic fields. However, despite these differences, subjects with high levels of thematic relevance are nevertheless sensitive to exposure reduction as a means of improving the acceptance of base stations in their neighborhood. This underlines the value of exposure reduction for the acceptance of radio frequency electromagnetic field communication technologies. (10.3390/ijerph14060620)
    DOI : 10.3390/ijerph14060620
  • LISP EID Block
    • Iannone Luigi
    • Lewis Darrel
    • Meyer Dave
    • Fuller Vince
    , 2017.
  • Sparsity Analysis using a Mixed Approach with Greedy and LS Algorithms on Channel Estimation
    • Maciel Nilson
    • Crespo Marques Elaine
    • Naviner Lirida
    , 2017, pp.91-95.
  • Subject-specific time-frequency selection for multi-class motor imagery-based BCIs using few Laplacian EEG channels
    • Yang Yuan
    • Chevallier Sylvain
    • Wiart Joe
    • Bloch Isabelle
    Biomedical Signal Processing and Control, Elsevier, 2017, 38, pp.302-311. The essential task of a motor imagery brain–computer interface (BCI) is to extract the motor imagery-related features from electroencephalogram (EEG) signals for classifying motor intentions. However, the optimal frequency band and time segment for extracting such features differ from subject to subject. In this work, we aim to improve multi-class classification and to reduce the number of required EEG channels in motor imagery-based BCI by subject-specific time-frequency selection. Our method is based on a criterion, namely a Fisher discriminant analysis-type F-score, to simultaneously select the optimal frequency band and time segment for multi-class classification. The proposed method uses only a few Laplacian EEG channels (C3, Cz and C4) located around the sensorimotor area for classification. Applied to a standard multi-class BCI dataset (BCI competition III dataset IIIa), our method leads to better classification performance and a smaller standard deviation across subjects compared to state-of-the-art methods. Moreover, adding artifact-contaminated trials to the training dataset does not necessarily deteriorate our classification results, indicating that our method is tolerant to artifacts. (10.1016/j.bspc.2017.06.016)
    DOI : 10.1016/j.bspc.2017.06.016
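The entry above ranks time-frequency candidates with a Fisher discriminant analysis-type F-score. As a generic illustration of Fisher-score feature ranking (between-class over within-class variance; a sketch of the general criterion family, not the paper's exact formula or pipeline):

```python
def fisher_score(feature_values, labels):
    """Fisher discriminant ratio of one feature across classes:
    between-class variance divided by within-class variance.
    Higher means the feature separates the classes better.
    Illustrative only -- not the exact criterion used in the paper.
    """
    classes = sorted(set(labels))
    overall = sum(feature_values) / len(feature_values)
    between, within = 0.0, 0.0
    for c in classes:
        vals = [v for v, l in zip(feature_values, labels) if l == c]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        between += len(vals) * (mu - overall) ** 2
        within += len(vals) * var
    return between / within if within > 0 else float("inf")

# A well-separated feature scores far higher than an uninformative one
labels = [0, 0, 0, 0, 1, 1, 1, 1]
print(fisher_score([0, 0, 1, 1, 10, 10, 11, 11], labels))  # separable
print(fisher_score([0, 1, 0, 1, 0, 1, 0, 1], labels))      # uninformative
```

In the paper's setting, each candidate (frequency band, time segment) pair would receive such a score per subject, and the best-scoring pair would be retained.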
  • A Hybrid Methodology for the Performance Evaluation of Internet-scale Cache Networks
    • Leonardi E.
    • Rossi Dario
    • Tortelli Michele
    Elsevier Computer Networks, 2017, 125, pp.146-159. Two concurrent factors challenge the evaluation of large-scale cache networks: complex algorithmic interactions, which are hardly represented by analytical models, and catalog/network size, which limits the scalability of event-driven simulations. To overcome these limitations, we propose a new hybrid technique, which we colloquially refer to as ModelGraft, that combines elements of stochastic analysis with a simulative Monte Carlo approach. In ModelGraft, large scenarios are mapped to a downscaled counterpart built upon Time-To-Live (TTL) caches, to achieve CPU and memory scalability. Additionally, a feedback loop ensures convergence to a consistent state, whose performance accurately represents that of the original system. Finally, the technique also retains the simplicity and flexibility of simulation, as it can be seamlessly applied to numerous forwarding, meta-caching, and replacement algorithms. We implement and make ModelGraft available as an alternative simulation engine of ccnSim. Performance evaluation shows that, with respect to classic event-driven simulation, ModelGraft gains over two orders of magnitude in both CPU time and memory complexity, while limiting accuracy loss below 2%. Ultimately, ModelGraft pushes the boundaries of performance evaluation well beyond the limits of the current state of the art, enabling the study of Internet-scale scenarios with content catalogs comprising hundreds of billions of objects.
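The entry above downscales cache networks onto Time-To-Live (TTL) caches. As a minimal sketch of the TTL-cache abstraction itself (illustrative names and structure; this is not ccnSim's or ModelGraft's API):

```python
class TTLCache:
    """Time-To-Live cache: once requested, an object is considered cached
    for `ttl` time units, independently of other objects' requests.
    This decoupling is what makes TTL caches analytically tractable.
    A hypothetical sketch, not the implementation used in the paper.
    """

    def __init__(self, ttl):
        self.ttl = ttl
        self.expiry = {}  # object id -> time at which it leaves the cache

    def request(self, obj, now):
        """Return True on a hit; on a miss, (re)insert the object."""
        if self.expiry.get(obj, float("-inf")) > now:
            return True
        self.expiry[obj] = now + self.ttl  # miss: fetch and cache it
        return False

cache = TTLCache(ttl=5)
print(cache.request("a", now=0))   # first request: miss -> False
print(cache.request("a", now=3))   # within TTL: hit -> True
print(cache.request("a", now=10))  # expired: miss -> False
```

Because each object's occupancy depends only on its own request process, hit rates of a TTL cache can be computed per object, which is the property the paper's downscaling exploits.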
  • Characterization of the 3F4-3H6 transition in thulium-doped silica fibres and simulation of a 2 µm single-clad amplifier
    • Romano Clément
    • Tench Robert E.
    • Delavaux Jean-Marc
    • Jaouën Yves
    , 2017, paper P1.SC1.1.
  • 2017
    • Eagan James R
    , 2017.
  • Experimental demonstration of space-time coding for MDL mitigation in few-mode fiber transmission system
    • Amhoud El Mehdi
    • Rekaya-Ben Othman Ghaya
    • Bigot Laurent
    • Song Mengdi
    • Andresen Esben-Ravn
    • Labroille Guillaume
    • Bigot-Astruc Marianne
    • Jaouën Yves
    , 2017, paper M.1.D.2.