
Publications

The publications of our faculty researchers are available on the HAL platform:

The PhD theses of LTCI doctoral graduates are available on the HAL platform:

Browse the publications in the HAL open archive by year:

2018

  • High-speed per-flow software monitoring with limited resources
    • Zhang Tianzhu
    • Linguaglossa Leonardo
    • Gallo Massimo
    • Giaccone Paolo
    • Rossi Dario
    , 2018.
  • Practical Random Linear Coding for MultiPath TCP: MPC-TCP
    • Paul-Louis Ageneau
    • Boukhatem Nadia
    • Gerla Mario
    , 2018. MPTCP is a TCP extension that enables transparent multipath for multihomed hosts. However, MPTCP is subject to head-of-line blocking, a problem that degrades delay and throughput. This problem is especially critical in wireless environments: on unreliable wireless links, traffic can get stalled on one path, slowing down the entire flow. A related problem is rescheduling packets on other subflows too early, which can increase overhead. Random linear network coding is a potential approach to solving these problems, and we focus on its practical capability to attenuate performance drops caused by blocking while guaranteeing full network compatibility. We have developed a version of MPTCP with network coding, MPC-TCP (MultiPath Coded TCP), and implemented it in the Linux kernel. This scheme offers a simple, practical implementation of network coding across subflows, requires minimal changes to MPTCP, and preserves the compatibility of TCP subflows with middleboxes. We then use our implementation to investigate the network scenarios where efficiency gains are highest compared to vanilla MPTCP.
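As a self-contained sketch of the random linear coding idea above (an illustration of the general technique over GF(2), not the MPC-TCP kernel code; all function names are hypothetical): a sender emits random XOR combinations of the original packets, and the receiver recovers them by Gaussian elimination once enough independent combinations arrive.

```python
import random

def rlnc_encode(packets, n_coded, rng=random.Random(0)):
    """Generate n_coded random GF(2) linear combinations of the packets.

    Each packet is a list of bits; returns (coefficient_vector, payload) pairs.
    """
    k = len(packets)
    coded = []
    while len(coded) < n_coded:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        payload = [0] * len(packets[0])
        for c, p in zip(coeffs, packets):
            if c:
                payload = [a ^ b for a, b in zip(payload, p)]
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Recover the k original packets by Gauss-Jordan elimination over GF(2).

    Returns None if the received combinations do not have full rank.
    """
    rows = [list(c) + list(p) for c, p in coded]
    m = len(rows)
    for col in range(k):
        pivot = next((r for r in range(col, m) if rows[r][col]), None)
        if pivot is None:
            return None  # not enough independent combinations yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(m):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]
```

Any k linearly independent combinations suffice, regardless of which subflow delivered them, which is what lets coding mask a stalled path.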
  • Integral estimation based on Markovian design
    • Azaïs Romain
    • Delyon Bernard
    • Portier François
    Advances in Applied Probability, Applied Probability Trust, 2018, 50 (3), pp.833-857. Suppose that a mobile sensor describes a Markovian trajectory in the ambient space. At each time the sensor measures an attribute of interest, e.g., the temperature. Using only the location history of the sensor and the associated measurements, the aim is to estimate the average value of the attribute over the space. In contrast to classical probabilistic integration methods, e.g., Monte Carlo, the proposed approach does not require any knowledge of the distribution of the sensor trajectory. Probabilistic bounds on the convergence rates of the estimator are established. These rates are better than the traditional √n rate, where n is the sample size, attached to other probabilistic integration methods. For finite sample sizes, the good behaviour of the procedure is demonstrated through simulations, and an application to the evaluation of the average temperature of the oceans is considered. (10.1017/apr.2018.38)
    DOI : 10.1017/apr.2018.38
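The trajectory-based integration idea can be illustrated in one dimension with a spacing-weighted (Riemann-type) sum. This is a generic sketch under simplifying assumptions, not the authors' estimator; `riemann_estimate` and `markov_trajectory` are made-up names.

```python
import random

def riemann_estimate(xs, ys):
    """Spacing-weighted estimate of the average of f over [0, 1], from
    unordered sample locations xs and measurements ys = f(xs).

    Weighting each measurement by the gap to the neighbouring sample
    locations removes the bias caused by the sampler visiting some
    regions more often than others.
    """
    pairs = sorted(zip(xs, ys))
    total, prev = 0.0, 0.0
    for x, y in pairs:
        total += y * (x - prev)
        prev = x
    total += pairs[-1][1] * (1.0 - prev)  # last gap up to the right endpoint
    return total

def markov_trajectory(n, step=0.05, rng=random.Random(1)):
    """A Markov 'sensor trajectory' on [0, 1]: a clamped random walk whose
    stationary distribution need not be known to the estimator."""
    x, out = 0.2, []
    for _ in range(n):
        x = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        out.append(x)
    return out

xs = markov_trajectory(5000)
ys = [x * x for x in xs]        # measured attribute f(x) = x^2
est = riemann_estimate(xs, ys)  # true spatial average is 1/3
```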
  • IoT technologies for smart cities
    • Hammi Badis
    • Khatoun Rida
    • Zeadally Sherali
    • Fayad Achraf
    • Khoukhi Lyes
    IET Networks, John Wiley & Sons Inc., 2018, 7 (1), pp.1-13. The large deployment of the Internet of Things (IoT) is currently enabling smart city projects and initiatives all over the world. Objects used in daily life are being equipped with electronic devices and protocol suites in order to make them interconnected and connected to the Internet. According to a recent Gartner study, 50 billion connected objects will be deployed in smart cities by 2020. These connected objects will make the authors' cities smart. However, they will also open up risks and privacy issues. As various smart city initiatives and projects have been launched in recent years, they have witnessed not only the expected benefits but also the risks introduced. The authors describe the current and future trends of smart cities and the IoT, discuss the interaction between smart cities and the IoT, and explain some of the drivers behind the evolution and development of the IoT and smart cities. Finally, they discuss some of the IoT weaknesses and how they can be addressed when used for smart cities. (10.1049/iet-net.2017.0163)
    DOI : 10.1049/iet-net.2017.0163
  • Belief revision, minimal change and relaxation: A general framework based on satisfaction systems, and applications to description logics
    • Aiguier Marc
    • Atif Jamal
    • Bloch Isabelle
    • Hudelot Céline
    Artificial Intelligence (AIJ), Elsevier, 2018, 256, pp.160 - 180. Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied but for specific logics, mainly propositional, and also recently Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in the abstract model theory of satisfaction systems. In this framework, we generalize to any satisfaction system the characterization of the AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. In this generalization, the constraint on syntax independence is partially relaxed. Moreover, we study how to define revision, satisfying these weakened AGM postulates, from relaxation notions that have been first introduced in description logics to define dissimilarity measures between concepts, and the consequence of which is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. In particular for description logics, we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELU, discuss their properties and provide some illustrative examples. (10.1016/j.artint.2017.12.002)
    DOI : 10.1016/j.artint.2017.12.002
  • Technologies d’antennes – De l’antenne élémentaire aux grandes antennes
    • Begaud Xavier
    , 2018.
  • Operations research and voting theory
    • Hudry Olivier
    , 2018. One main concern of voting theory is to determine a procedure for choosing a winner from among a set of candidates, based on the preferences of the voters or, more ambitiously, for ranking all the candidates or a part of them. In this presentation, we pay attention to some contributions of operations research to the design and study of voting procedures. First, we show through an easy example that the voting procedure plays an important role in the determination of the winner: for an election with four candidates, the choice of the voting procedure makes it possible to elect any one of the four candidates with the same individual preferences of the voters. This also provides the opportunity to recall some main procedures, including Condorcet’s procedure, and leads to the statement of Arrow’s theorem. In a second step, more devoted to a mathematical approach, we detail a voting procedure based on the concept of Condorcet winner, namely the so-called median procedure. In this procedure, the aim is to rank the candidates in order to minimize the number of disagreements with respect to the voters’ preferences. We thus obtain a combinatorial optimization problem and show how to state it as a linear programming problem with binary variables. We specify the complexity of this median procedure. Last, we show, once again through easy examples, that the lack of some desirable properties for the considered voting procedure can give rise to “paradoxes”.
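The median procedure described above can be sketched by exhaustive search over all rankings. This is fine for a handful of candidates, though the problem is hard in general (hence the linear-programming formulation mentioned in the text); the code is an illustrative sketch, not material from the presentation.

```python
from itertools import permutations

def disagreements(ranking, profile):
    """Count voter-pairwise disagreements: for every voter and every pair of
    candidates ordered differently by the voter and by `ranking`, add one."""
    pos = {c: i for i, c in enumerate(ranking)}
    total = 0
    for voter in profile:
        vpos = {c: i for i, c in enumerate(voter)}
        cands = list(ranking)
        for i in range(len(cands)):
            for j in range(i + 1, len(cands)):
                a, b = cands[i], cands[j]
                if (pos[a] < pos[b]) != (vpos[a] < vpos[b]):
                    total += 1
    return total

def median_ranking(profile):
    """Exhaustively search all rankings for the one minimizing disagreements
    (exact but factorial-time, fine for a handful of candidates)."""
    candidates = profile[0]
    return min(permutations(candidates),
               key=lambda r: disagreements(r, profile))
```

For example, with voters `[("A","B","C"), ("A","B","C"), ("B","A","C")]` the median ranking is `("A","B","C")`, disagreeing with the electorate on only one voter-pair.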
  • Scikit-Multiflow: A Multi-output Streaming Framework
    • Montiel Jacob
    • Read Jesse
    • Bifet Albert
    • Abdessalem Talel
    Journal of Machine Learning Research, Microtome Publishing, 2018, 19.
  • Perceived dynamic range of HDR images with no semantic information
    • Hulusic Vedad
    • Valenzise Giuseppe
    • Dufaux Frédéric
    , 2018, 30 (14), pp.1-6. Computing the dynamic range of high dynamic range (HDR) content is an important procedure when selecting test material, designing and validating algorithms, or analyzing aesthetic attributes of HDR content. It can be computed on a pixel-based level, measured through subjective tests, or predicted using a mathematical model. However, all these methods have certain limitations. This paper investigates whether the dynamic range of modeled images with no semantic information, but with the same first-order statistics as the original, natural content, is perceived the same as for the corresponding natural images. If so, it would be possible to improve the perceived dynamic range (PDR) predictor model by using additional objective metrics more suitable for such synthetic content. Within the subjective study, three experiments were conducted with 43 participants. The results show significant correlation between the mean opinion scores for the two image groups. Nevertheless, natural images still seem to provide better cues for the evaluation of PDR. (10.2352/ISSN.2470-1173.2018.14.HVEI-507)
    DOI : 10.2352/ISSN.2470-1173.2018.14.HVEI-507
  • Task Computability in Unreliable Anonymous Networks
    • Kuznetsov Petr
    • Yanagisawa Nayuta
    , 2018, pp.23:1-23:13. (10.4230/LIPIcs.OPODIS.2018.23)
    DOI : 10.4230/LIPIcs.OPODIS.2018.23
  • Parallel Combining: Benefits of Explicit Synchronization
    • Aksenov Vitaly
    • Kuznetsov Petr
    • Shalyto Anatoly
    , 2018, pp.11:1-11:16. (10.4230/LIPIcs.OPODIS.2018.11)
    DOI : 10.4230/LIPIcs.OPODIS.2018.11
  • Finding events in temporal networks: Segmentation meets densest-subgraph discovery
    • Rozenshtein Polina
    • Bonchi Francesco
    • Gionis Aristides
    • Sozio Mauro
    • Tatti Nikolaj
    , 2018. In this paper we study the problem of discovering a timeline of events in a temporal network. We model events as dense subgraphs that occur within intervals of network activity. We formulate the event-discovery task as an optimization problem, where we search for a partition of the network timeline into k non-overlapping intervals, such that the intervals span subgraphs with maximum total density. The output is a sequence of dense subgraphs along with corresponding time intervals, capturing the most interesting events during the network lifetime. A naïve solution to our optimization problem has polynomial but prohibitively high running-time complexity. We adapt recent work on dynamic densest-subgraph discovery and approximate dynamic programming to design a fast approximation algorithm. Next, to ensure richer structure, we adjust the problem formulation to encourage coverage of a larger set of nodes. This problem is NP-hard even for static graphs; however, on static graphs a simple greedy algorithm yields an approximate solution thanks to submodularity. We extend this greedy approach to the case of temporal networks, where the approximation guarantee no longer holds; nevertheless, our experiments show that the algorithm finds good-quality solutions. (10.1109/ICDM.2018.00055)
    DOI : 10.1109/ICDM.2018.00055
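For background, the classical static building block behind such methods, Charikar's greedy peeling (a 1/2-approximation for densest subgraph), can be sketched as follows. This illustrates the static algorithm only, not the paper's dynamic or temporal variants.

```python
def densest_subgraph(edges):
    """Charikar's peeling heuristic: repeatedly remove the minimum-degree
    vertex and return the intermediate vertex set with the highest density
    |E|/|V|.  Gives a 1/2-approximation of the densest subgraph.

    `edges` is a list of undirected edges on a simple graph.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(nodes)
    while nodes:
        density = m / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
        u = min(nodes, key=lambda x: len(adj[x]))  # minimum-degree vertex
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        nodes.discard(u)
        del adj[u]
    return best_set, best_density
```

On a 4-clique with a pendant vertex attached, peeling correctly identifies the clique (density 6/4 = 1.5) rather than the whole graph (density 7/5 = 1.4).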
  • Modèles symboliques pour la reconnaissance de structures dans les images médicales
    • Bloch Isabelle
    , 2018, pp.39-48. In medical imaging, it is difficult to provide a relevant analysis and interpretation based on the data alone. Combining symbolic and structural methods on the one hand with numerical methods on the other is therefore essential. This article summarizes some of our work at the crossroads of artificial intelligence and image interpretation, with applications in medical imaging. We present the value of knowledge modeling for guiding the interpretation of medical images, with an emphasis on structural knowledge such as spatial relations. Such knowledge can be modeled as ontologies, graphs, or constraint networks, combined with fuzzy representations of spatial relations. We illustrate several model-guided object and scene recognition methods, in particular in brain imaging, for the segmentation and recognition of internal brain structures, including in the presence of tumors.
  • EviDense: a Graph-based Method for Finding Unique High-impact Events with Succinct Keyword-based Descriptions
    • Balalau Oana
    • Castillo Carlos
    • Sozio Mauro
    , 2018. Despite the significant efforts made by the research community in recent years, automatically acquiring valuable information about high-impact events from social media remains challenging. We present EviDense, a graph-based approach for finding high-impact events (such as disaster events) in social media. Our evaluation shows that our method outperforms state-of-the-art approaches for the same problem, achieving higher precision and fewer duplicates, while providing a keyword-based description that is succinct and informative.
  • Spoofing Attack and Surveillance Game in Geo-location Database Driven Spectrum Sharing
    • Nguyen-Thanh Nhan
    • Ta Duc-Tuyen
    • Nguyen van Tam
    IET Communications, Institution of Engineering and Technology, 2018. The geo-location database (GDB) driven approach is the enforcement method for dynamic spectrum sharing in the TV White Space and 3.5 GHz spectrum bands, as well as a preferred option for other spectrum sharing applications. Although it provides accurate and reliable spectrum information services, GDB-driven spectrum sharing suffers from a critical security threat: spoofing attacks. Under a spoofing attack, an adversary can spoof either the identification (ID) or the location information in its request messages. This breaks the fairness and reduces the efficiency of the GDB-driven spectrum sharing system. To counteract location and ID spoofing attacks, we consider the location verification of request messages and the ID verification of communicated data. Because a resource manager and an adversary are independent and self-interested, we formulate two corresponding surveillance games to analyze the conflicting interaction between spoofing attacks and the countermeasures. By expressing the surveillance game on request locations in strategic form and representing the surveillance game on data ID in sequence form, we derive the Nash equilibria. The analytical and numerical results show that a resource manager can mitigate spoofing attacks by adequately adapting its penalty policy and surveillance strategy.
  • Method, device, and computer program for improving transmission of encoded media data
    • Denoual Franck
    • Mazé Frédéric
    • Le Feuvre J.
    • Ouedraogo Nael
    , 2018.
  • Training and Compensation of Class-conditioned NMF Bases for Speech Enhancement
    • Chung Hanwook
    • Badeau Roland
    • Plourde Eric
    • Champagne Benoît
    Neurocomputing, Elsevier, 2018. In this paper, we introduce a training and compensation algorithm for the class-conditioned basis vectors in the non-negative matrix factorization (NMF) model for single-channel speech enhancement. The main goal is to estimate the basis vectors of different signal sources in a way that prevents them from representing other sources, in order to reduce the residual noise components that have features similar to the speech signal. During the proposed training stage, the basis matrices for the clean speech and noises are estimated jointly by constraining them to belong to different classes. To this end, we employ the probabilistic generative model (PGM) of classification, specified by class-conditional densities, as an a priori distribution for the basis vectors. The update rules for the NMF and the PGM classification parameters are jointly obtained using the variational Bayesian expectation-maximization (VBEM) algorithm, which guarantees convergence to a stationary point. Another goal of the proposed algorithm is to handle a mismatch between the characteristics of the training and test data. This is accomplished during the proposed enhancement stage, where we implement a basis compensation scheme. Specifically, we use extra free basis vectors to capture the features which are not included in the training data. Objective experimental results for different combinations of speaker and noise types show that the proposed algorithm provides better speech enhancement performance than the benchmark algorithms under various conditions.
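For background, plain NMF with Lee-Seung multiplicative updates (the unconstrained baseline that class-conditioned NMF builds on, not the paper's VBEM algorithm) can be sketched in a few lines; all helper names here are made up.

```python
import random

def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, n_iter=300, rng=random.Random(0)):
    """Plain NMF with Lee-Seung multiplicative updates minimizing the
    Frobenius error ||V - WH||.  Factors stay non-negative because each
    update multiplies an entry by a non-negative ratio."""
    n, m = len(V), len(V[0])
    W = [[rng.random() + 1e-3 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 1e-3 for _ in range(m)] for _ in range(rank)]
    for _ in range(n_iter):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[h * a / (b + 1e-9) for h, a, b in zip(hr, ar, br)]
             for hr, ar, br in zip(H, num, den)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[w * a / (b + 1e-9) for w, a, b in zip(wr, ar, br)]
             for wr, ar, br in zip(W, num, den)]
    return W, H
```

In speech enhancement, columns of W play the role of spectral basis vectors for speech or noise; the paper's contribution is precisely to keep those two sets of bases from representing each other's source.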
  • Biometric Systems Private by Design: Reasoning about privacy properties of biometric system architectures
    • Bringer Julien
    • Chabanne Hervé
    • Le Métayer Daniel
    • Lescuyer Roch
    Transactions on Data Privacy, IIIA-CSIC, 2018, 11 (2), pp.111-137. The goal of the work presented in this paper is to show the applicability of the privacy by design approach to biometric systems and the benefit of using formal methods to this end. We build on a general framework for the definition and verification of privacy architectures introduced at STM 2014 and show how it can be adapted to biometrics. The choice of particular techniques and the role of the components (central server, secure module, biometric terminal, smart card, etc.) in the architecture have a strong impact on the privacy guarantees provided by a biometric system. Some architectures have already been analysed but on a case by case basis, which makes it difficult to draw comparisons and to provide a rationale for the choice of specific options. In this paper, we describe the application of a general privacy architecture framework to specify different design options for biometric systems and to reason about them in a formal way.
  • Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods
    • Gower Robert M.
    • Le Roux Nicolas
    • Bach Francis
    , 2018. Our goal is to improve variance reducing stochastic methods through better control variates. We first propose a modification of SVRG which uses the Hessian to track gradients over time, rather than to recondition, increasing the correlation of the control variates and leading to faster theoretical convergence close to the optimum. We then propose accurate and computationally efficient approximations to the Hessian, both using a diagonal and a low-rank matrix. Finally, we demonstrate the effectiveness of our method on a wide range of problems.
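The standard SVRG control-variate construction that this work modifies can be sketched on a toy least-squares problem. This illustrates plain SVRG, not the proposed Hessian-tracking variant; the function names and toy data are made up.

```python
import random

def svrg(grad_i, n, x0, step, n_epochs, m, rng=random.Random(0)):
    """Basic SVRG: at each epoch take a snapshot, compute its full gradient,
    then do m stochastic steps using  g_i(x) - g_i(snapshot) + full_grad
    as a variance-reduced gradient estimate."""
    x = x0
    for _ in range(n_epochs):
        snap = x
        full = sum(grad_i(i, snap) for i in range(n)) / n
        for _ in range(m):
            i = rng.randrange(n)
            x -= step * (grad_i(i, x) - grad_i(i, snap) + full)
    return x

# Toy problem: minimize (1/2n) sum_i (a_i x - b_i)^2, whose minimizer
# is sum(a_i b_i) / sum(a_i^2).
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 4.1, 5.9, 8.2]
grad = lambda i, x: a[i] * (a[i] * x - b[i])
x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
x_hat = svrg(grad, len(a), 0.0, step=0.02, n_epochs=30, m=20)
```

The control variate `g_i(snapshot) - full_grad` has zero mean, so the update is unbiased; its variance vanishes as the iterate approaches the snapshot, which is what the Hessian-tracking modification in this paper further improves.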
  • Platypus – A Multilingual Question Answering Platform for Wikidata
    • Pellissier Tanon Thomas
    • Dias de Assuncao Marcos
    • Caron Eddy
    • Suchanek Fabian
    , 2018. In this paper we present Platypus, a natural language question answering system. Our objective is to provide the research community with a production-ready multilingual question answering platform that targets Wikidata, the largest general-purpose knowledge base on the Semantic Web. Our platform can answer complex queries in several languages, using hybrid grammatical and template-based techniques.
  • Physical Security Versus Masking Schemes
    • Danger Jean-Luc
    • Guilley Sylvain
    • Heuser Annelie
    • Legay Axel
    • Tang Ming
    , 2018, pp.269-284. (10.1007/978-3-319-98935-8_13)
    DOI : 10.1007/978-3-319-98935-8_13
  • Optimization of classification and regression analysis of four monoclonal antibodies from Raman spectra using collaborative machine learning approach
    • Le Laetitia Minh Maï
    • Kégl Balázs
    • Gramfort Alexandre
    • Marini Camille
    • Nguyen David
    • Cherti Mehdi
    • Tfaili Sana
    • Tfayli Ali
    • Baillet-Guffroy Arlette
    • Prognon Patrice
    • Chaminade Pierre
    • Caudron Eric
    Talanta, Elsevier, 2018, 184, pp.260-265. The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods, in which preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP), which made it possible to draw on solutions from about 300 data scientists in collaborative work. Using machine learning, the prediction of the four mAb samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% for the linear approach. The concentration and classification errors were 5.8% and 0.7%; only three spectra were misclassified out of the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined errors (2.1% versus 17.9%). (10.1016/j.talanta.2018.02.109)
    DOI : 10.1016/j.talanta.2018.02.109
  • An Improved Path Optimum Algorithm for Container Relocation Problems in Port Terminals Worldwide
    • Wang A.
    • Mehmood Fahad
    • Mohmand Y.T.
    • Zheng S.
    Journal of Coastal Research, Coastal Education and Research Foundation, 2018, 34 (3), pp.752-765. The container relocation problem (CRP) for ports worldwide has been a highly significant research topic because of its contribution to the improvement of yard-running efficiency. It can be defined as a sequence that allows each container to be extracted with the least number of relocations when identical containers in a cluster have been stacked in a block. This study differs from previous research mainly in five aspects: (1) a two-level goal programming model for the CRP is presented that can help in understanding this problem and provides a solid theoretical ground for this research; (2) because of the NP time hardness of the CRP, heuristic rules are proposed to reduce the hunting space by dividing the solution space; (3) an improved path optimum algorithm (I-POA) is proposed to reduce unfeasible solutions and find a high-quality solution for any three-dimensional case in a shorter running time; (4) numerical experiments show that the proposed algorithm achieves better performance than similar algorithms because of its higher efficiency and greater robustness; and (5) a general expression for the utilization level of the storage area and the number of relocations is proposed to check the reliability of the result through a sensitivity analysis. Based on this research, the following conclusion can be drawn: I-POA possesses significant practical value in the improvement of intelligent resource scheduling standards of coastal container ports worldwide. © 2018 Coastal Education and Research Foundation, Inc. (10.2112/JCOASTRES-D-17-00056.1)
    DOI : 10.2112/JCOASTRES-D-17-00056.1
  • Depiction of the perfusion components’ volume fraction distribution in generalized intravoxel incoherent motion by using Gaussian mixture model
    • Wang Shunli
    • Liu Wanyu
    • Kuai Zixiang
    • Zhu Yuemin
    Concepts in Magnetic Resonance Part B: Magnetic Resonance Engineering, Wiley, 2018, 48 (3), pp.e21399. A Gaussian mixture model (GMM) is proposed to depict the perfusion volume fraction distribution in the generalized intravoxel incoherent motion model (GIVIM), improving GIVIM's ability to describe complex perfusion conditions and their changes. Different hepatic perfusion conditions were accounted for by performing different combinations of imaging sequence and diffusion time on six normal livers. To evaluate GIVIM-GMM's reliability in perfusion condition analysis, the fit to diffusion-weighted (DW) data and the consistency between changes in diffusion-related parameters and changes in the data were tested, with the recent GIVIM and triexponential models chosen for comparison. The difference in fitting results was evaluated by performing the extra-sum-of-squares F test and information criteria on normal human DW data. The difference in consistency was assessed using a two-tailed paired Student's t test. In the extra-sum-of-squares F test, the relative difference ratio F values derived from the GIVIM and GIVIM-GMM and from the triexponential model and the GIVIM-GMM are 25.334 and 27.976, respectively, which indicates that a significant difference exists and that the GIVIM-GMM provides a better fit to the normal human liver DW data. In the information criteria test, the evidence ratio values were determined by dividing the GIVIM's or triexponential model's correct probability by the GIVIM-GMM's. Both evidence ratio values (2.3942×10⁻¹⁰ and 8.6167×10⁻⁹, respectively) are much smaller than 1, which also indicates that the best model for fitting the normal human liver DW data is the GIVIM-GMM. In the two-tailed paired Student's t test, the GIVIM-GMM provides more parameters, giving a finer description of perfusion than the triexponential model or GIVIM. In short, all the results demonstrate that the GIVIM-GMM outperforms the existing IVIM models for depicting the signal attenuation in DW imaging. (10.1002/cmr.b.21399)
    DOI : 10.1002/cmr.b.21399
  • Efficient Bayesian Computation by Proximal Markov Chain Monte Carlo: When Langevin Meets Moreau.
    • Durmus Alain
    • Moulines Éric
    • Pereyra Marcelo
    SIAM Journal on Imaging Sciences, Society for Industrial and Applied Mathematics, 2018, 11 (1). In this paper, two new algorithms to sample from possibly non-smooth log-concave probability measures are introduced. These algorithms use the Moreau-Yosida envelope combined with the Euler-Maruyama discretization of Langevin diffusions. They are applied to a deconvolution problem in image processing, which shows that they can be used in practice in a high-dimensional setting. Finally, non-asymptotic bounds for one of the proposed methods are derived. These bounds follow from non-asymptotic results for the unadjusted Langevin algorithm (ULA) applied to probability measures with a convex, continuously differentiable log-density with respect to the Lebesgue measure. (10.1137/16M110834)
    DOI : 10.1137/16M110834
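The key ingredient, the Moreau-Yosida envelope, can be illustrated on the non-smooth function |x|: its proximal operator is soft-thresholding, and the envelope's gradient (x - prox(x))/λ is a smooth surrogate for the subgradient that can be plugged into an Euler-Maruyama Langevin step. The sketch below is a simplified one-dimensional illustration, not the paper's algorithms as stated; `myula_step` is a hypothetical name.

```python
import math, random

def prox_abs(x, lam):
    """Proximal operator of lam*|.|: the soft-thresholding map, the unique
    minimizer of  lam*|u| + (u - x)^2 / 2."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def moreau_grad(x, lam):
    """Gradient of the Moreau-Yosida envelope of |.| with parameter lam:
    (x - prox(x)) / lam, a (1/lam)-Lipschitz surrogate for the subgradient."""
    return (x - prox_abs(x, lam)) / lam

def myula_step(x, grad_smooth, lam, step, rng):
    """One MYULA-style update: an Euler-Maruyama step of the Langevin
    diffusion targeting a log-concave density whose non-smooth part
    (here |.|) is replaced by its Moreau-Yosida envelope."""
    drift = grad_smooth(x) + moreau_grad(x, lam)
    return x - step * drift + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
```

Iterating `myula_step` with `grad_smooth = lambda t: 0.0` produces (approximate) samples from a smoothed Laplace density proportional to exp(-|x|), the prototypical non-smooth log-concave target.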