
Publications

2021

  • A Stochastic Geometry Approach to EMF Exposure Modeling
    • Gontier Quentin
    • Petrillo Lucas
    • Rottenberg Francois
    • Horlin Francois
    • Wiart Joe
    • Oestges Claude
    • de Doncker Philippe
    IEEE Access, IEEE, 2021, 9, pp.91777-91787. (10.1109/ACCESS.2021.3091804)
    DOI : 10.1109/ACCESS.2021.3091804
  • Optimization of wireless sensor networks deployment with coverage and connectivity constraints
    • Elloumi Sourour
    • Hudry Olivier
    • Marie Estel
    • Martin Agathe
    • Plateau Agnès
    • Rovedakis Stephane
    Annals of Operations Research, Springer Verlag, 2021, 298 (1-2), pp.183-206. Wireless sensor networks have been widely deployed in the last decades to provide various services, such as environmental monitoring or object tracking. Such a network is composed of a set of sensor nodes which are used to sense and transmit collected information to a base station. To achieve this goal, two properties have to be guaranteed: (i) the sensor nodes must be placed such that the whole environment of interest (represented by a set of targets) is covered, and (ii) every sensor node can transmit its data to the base station (through other sensor nodes). In this paper, we consider the Minimum Connected k-Coverage (MCkC) problem, where a positive integer k ≥ 1 defines the coverage multiplicity of the targets. We propose two mathematical programming formulations for the MCkC problem on square grid graphs and random graphs, and compare them to a recent model proposed by Rebai et al. (2015). We use a standard mixed integer linear programming solver to solve several instances with the different formulations. In our results, we point out the quality of the LP bound of each formulation as well as the total CPU time or the proportion of instances solved to optimality within a given CPU time. (10.1007/s10479-018-2943-7)
    DOI : 10.1007/s10479-018-2943-7
  • Self-improving system integration: Mastering continuous change
    • Bellman Kirstie
    • Botev Jean F
    • Diaconescu Ada
    • Esterle Lukas
    • Gruhl Christian
    • Landauer Christopher
    • Lewis Peter R.
    • Nelson Phyllis
    • Pournaras Evangelos
    • Stein Anthony
    • Tomforde Sven
    Future Generation Computer Systems, Elsevier, 2021.
  • Association between estimated whole-brain radiofrequency electromagnetic fields dose and cognitive function in preadolescents and adolescents
    • Cabré-Riera Alba
    • van Wel Luuk
    • Liorni Ilaria
    • Thielens Arno
    • Birks Laura Ellen
    • Pierotti Livia
    • Joseph Wout
    • González-Safont Llúcia
    • Ibarluzea Jesús
    • Ferrero Amparo
    • Huss Anke
    • Wiart Joe
    • Santa-Marina Loreto
    • Torrent Maties
    • Vrijkotte Tanja
    • Capstick Myles
    • Vermeulen Roel
    • Vrijheid Martine
    • Cardis Elisabeth
    • Röösli Martin
    • Guxens Mònica
    International Journal of Hygiene and Environmental Health, Elsevier, 2021, 231, pp.113659. (10.1016/j.ijheh.2020.113659)
    DOI : 10.1016/j.ijheh.2020.113659
  • Maximizing the Number of Scheduled Lightpath Demands in Optical Networks by Conflict Graphs
    • Hudry Olivier
    International Journal of Mathematics, Statistics and Operations Research, Academic Research Foundations, 2021.
  • Optical injection of mid-infrared extreme events in unilaterally coupled quantum cascade lasers
    • Spitz Olivier
    • Herdt Andreas
    • Elsässer Wolfgang
    • Grillot Frédéric
    , 2021.
  • Depth for Curve Data and Applications
    • de Micheaux Pierre Lafaye
    • Mozharovskyi Pavlo
    • Vimond Myriam
    Journal of the American Statistical Association, Taylor & Francis, 2021, 116 (536), pp.1881-1897. In 1975, John W. Tukey defined statistical data depth as a function that determines the centrality of an arbitrary point with respect to a data cloud or to a probability measure. During the last decades, this seminal idea of data depth evolved into a powerful tool proving to be useful in various fields of science. Recently, extending the notion of data depth to the functional setting attracted a lot of attention among theoretical and applied statisticians. We go further and suggest a notion of data depth suitable for data represented as curves, or trajectories, which is independent of the parameterization. We show that our curve depth satisfies theoretical requirements of general depth functions that are meaningful for trajectories. We apply our methodology to diffusion tensor brain images and also to pattern recognition of handwritten digits and letters. Supplementary materials for this article are available online. (10.1080/01621459.2020.1745815)
    DOI : 10.1080/01621459.2020.1745815
  • Feature Clustering for Support Identification in Extreme Regions
    • Jalalzai Hamid
    • Leluc Rémi
    Proceedings of Machine Learning Research, PMLR, 2021, 139, pp.4733-4743. Understanding the complex structure of multivariate extremes is a major challenge in various fields from portfolio monitoring and environmental risk management to insurance. In the framework of multivariate Extreme Value Theory, a common characterization of extremes' dependence structure is the angular measure. It is a suitable measure to work in extreme regions as it provides meaningful insights concerning the subregions where extremes tend to concentrate their mass. The present paper develops a novel optimization-based approach to assess the dependence structure of extremes. This support identification scheme rewrites as estimating clusters of features which best capture the support of extremes. The dimension reduction technique we provide is applied to statistical learning tasks such as feature clustering and anomaly detection. Numerical experiments provide strong empirical evidence of the relevance of our approach.
  • Risks and security of internet and systems
    • Garcia‐alfaro Joaquin
    • Leneutre Jean
    • Cuppens Nora
    • Yaich Reda
    , 2021, 12528, pp.xi-378. This book constitutes the proceedings of the 15th International Conference on Risks and Security of Internet and Systems, CRiSIS 2020, which took place during November 4-6, 2020. The conference was originally planned to take place in Paris, France, but had to change to an online format due to the COVID-19 pandemic. The 16 full and 7 short papers included in this volume were carefully reviewed and selected from 44 submissions. In addition, the book contains one invited talk in full paper length. The papers were organized in topical sections named: vulnerabilities, attacks and intrusion detection; TLS, openness and security control; access control, risk assessment and security knowledge; risk analysis, neural networks and Web protection; infrastructure security and malware detection. (10.1007/978-3-030-68887-5)
    DOI : 10.1007/978-3-030-68887-5
  • Infinite-dimensional gradient-based descent for alpha-divergence minimisation
    • Daudel Kamélia
    • Douc Randal
    • Portier François
    Annals of Statistics, Institute of Mathematical Statistics, 2021, 49 (4), pp.2250 - 2270. This paper introduces the $(\alpha, \Gamma)$-descent, an iterative algorithm which operates on measures and performs $\alpha$-divergence minimisation in a Bayesian framework. This gradient-based procedure extends the commonly-used variational approximation by adding a prior on the variational parameters in the form of a measure. We prove that for a rich family of functions $\Gamma$, this algorithm leads at each step to a systematic decrease in the $\alpha$-divergence and derive convergence results. Our framework recovers the Entropic Mirror Descent algorithm and provides an alternative algorithm that we call the Power Descent. Moreover, in its stochastic formulation, the $(\alpha, \Gamma)$-descent makes it possible to optimise the mixture weights of any given mixture model without any information on the underlying distribution of the variational parameters. This renders our method compatible with many choices of parameter updates and applicable to a wide range of Machine Learning tasks. We demonstrate empirically, on both toy and real-world examples, the benefit of using the Power Descent and going beyond the Entropic Mirror Descent framework, which fails as the dimension grows.
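    The Entropic Mirror Descent algorithm mentioned in this abstract admits a very compact multiplicative update on the probability simplex. As a generic illustration only (this is not the paper's $(\alpha, \Gamma)$-descent or Power Descent, and the linear toy loss is chosen purely for clarity), a minimal sketch:

    ```python
    import math

    def entropic_mirror_descent(grad, w, steps, eta):
        """Entropic mirror descent on the probability simplex:
        w_i <- w_i * exp(-eta * g_i), then renormalise so sum(w) == 1."""
        for _ in range(steps):
            g = grad(w)
            w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
            z = sum(w)
            w = [wi / z for wi in w]
        return w

    # Toy problem: minimise the linear loss <c, w> over the simplex.
    # The iterates should concentrate all mass on the cheapest coordinate.
    c = [0.3, 0.1, 0.7]
    w = entropic_mirror_descent(lambda w: c, [1/3, 1/3, 1/3], steps=200, eta=0.5)
    # w is now concentrated on index 1 (the smallest entry of c)
    ```

    The exponential-then-renormalise step is what keeps the iterates on the simplex without any projection, which is the practical appeal of the entropic (mirror) geometry for mixture weights.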
  • A Latent Transformer for Disentangled Face Editing in Images and Videos
    • Yao Xu
    • Newson Alasdair
    • Gousseau Yann
    • Hellier Pierre
    , 2021, pp.13789-13798.
  • Screening Rules and its Complexity for Active Set Identification
    • Ndiaye Eugene
    • Fercoq Olivier
    • Salmon Joseph
    Journal of Convex Analysis, Heldermann, 2021, 28 (4), pp.1053--1072. Screening rules were recently introduced as a technique for explicitly identifying active structures such as sparsity, in optimization problem arising in machine learning. This has led to new methods of acceleration based on a substantial dimension reduction. We show that screening rules stem from a combination of natural properties of subdifferential sets and optimality conditions, and can hence be understood in a unified way. Under mild assumptions, we analyze the number of iterations needed to identify the optimal active set for any converging algorithm. We show that it only depends on its convergence rate. (10.48550/arXiv.2009.02709)
    DOI : 10.48550/arXiv.2009.02709
  • Approximate Inference and Learning of State Space Models with Laplace Noise
    • Neri Julian
    • Depalle Philippe
    • Badeau Roland
    IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, 2021, 69, pp.3176 - 3189. State space models have been extensively applied to model and control dynamical systems in disciplines including neuroscience, target tracking, and audio processing. A common modeling assumption is that both the state and data noise are Gaussian because it simplifies the estimation of the system's state and model parameters. However, in many real-world scenarios where the noise is heavy-tailed or includes outliers, this assumption does not hold, and the performance of the model degrades. In this paper, we present a new approximate inference algorithm for state space models with Laplace-distributed multivariate data that is robust to a wide range of non-Gaussian noise. Exact inference is combined with an expectation propagation algorithm, leading to filtering and smoothing that outperforms existing approximate inference methods for Laplace-distributed data, while retaining a fast speed similar to the Kalman filter. Further, we present a maximum posterior expectation-maximization (EM) algorithm that learns the parameters of the model in an unsupervised way, automatically avoids over-fitting the data, and provides better model estimation than existing methods for the Gaussian model. The quality of the inference and learning algorithms is exemplified through a diverse set of experiments and an application to non-linear tracking of audio frequency. (10.1109/tsp.2021.3075146)
    DOI : 10.1109/tsp.2021.3075146
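    The Gaussian baseline this abstract refers to is the classical Kalman filter. As context only (this is not the paper's Laplace-noise algorithm), a minimal scalar sketch of the predict/update recursion, with a toy random-walk state model chosen for illustration:

    ```python
    def kalman_filter_1d(ys, q, r, m0=0.0, p0=1.0):
        """Scalar Kalman filter for a random-walk state x_t = x_{t-1} + w_t
        observed as y_t = x_t + v_t, with w ~ N(0, q) and v ~ N(0, r)."""
        m, p = m0, p0
        means = []
        for y in ys:
            p = p + q            # predict: variance grows by the process noise
            k = p / (p + r)      # Kalman gain balances prior vs. observation
            m = m + k * (y - m)  # update: move the mean toward the observation
            p = (1 - k) * p      # posterior variance shrinks after the update
            means.append(m)
        return means

    # Feeding a constant signal: the filtered mean converges toward 5.0.
    est = kalman_filter_1d([5.0] * 50, q=0.01, r=1.0)
    ```

    The quadratic (Gaussian) update above weights every residual `y - m` linearly, which is exactly why a single outlier can drag the estimate; heavy-tailed noise models such as the Laplace one studied here are designed to temper that sensitivity.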
  • Resolution of a Routing and Wavelength Assignment Problem by Independent Sets in Conflict Graphs
    • Hudry Olivier
    , 2021.
  • Meta-learning: classifying messages into emotion categories unseen at training time
    • Guibon Gaël
    • Labeau Matthieu
    • Flamein Hélène
    • Lefeuvre Luce
    • Clavel Chloé
    , 2021, pp.199-208. In this article we reproduce a learning scenario in which the target data are not accessible and only related data are. We use a meta-learning approach to determine whether meta-information learned from social-media messages, finely annotated with emotions, can yield good performance when applied to conversation messages labeled with emotions at a different granularity. We leverage few-shot learning to set up this scenario. The approach proves effective at capturing the meta-information of one set of emotion labels in order to predict labels previously unseen by the model. Although varying the type of data causes a drop in performance, our meta-learning approach achieves decent results compared with a supervised-learning baseline.
  • Sum-capacity of Uplink Multiband Satellite Communications with Nonlinear Impairments
    • Louchart Arthur
    • Ciblat Philippe
    • Poulliat Charly
    , 2021. A compact and closed-form expression of capacity is derived for an uplink multiband satellite system in the presence of nonlinear interference. The nonlinear effect comes from the satellite high-power amplifier modeled by a Volterra series expansion. The derivations reveal that the nonlinear interference can provide a constructive power contribution that could be used to increase the transmission rate. Consequently, decoders designed by viewing this interference as only an additional noise are suboptimal. Numerical results confirm this claim and also show that an appropriate power allocation amongst the subbands may be of interest.
  • Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent
    • Duan Qiyou
    • Ghauch Hadi
    • Kim Taejoon
    IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, 2021. Data representation techniques have made a substantial contribution to advancing data processing and machine learning (ML). Previous representation techniques focused on improving predictive power, but unfortunately perform rather poorly on interpretability, i.e., extracting the underlying insights of the data. Recently, the Kolmogorov model (KM) was studied, which is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables. The existing KM learning algorithms, using semi-definite relaxation with randomization (SDRwR) or discrete monotonic optimization (DMO), have however limited utility for big data applications because they do not scale well computationally. In this paper, we propose a computationally scalable KM learning algorithm, based on regularized dual optimization combined with an enhanced gradient descent (GD) method. To make our method more scalable to large-dimensional problems, we propose two acceleration schemes, namely, an eigenvalue decomposition (EVD) elimination strategy and a proximal EVD algorithm. When applied to big data applications, it is demonstrated that the proposed method can achieve comparable training/prediction performance with significantly reduced computational complexity; roughly two orders of magnitude improvement in terms of time overhead, compared to the existing KM learning algorithms. Furthermore, it is shown that the accuracy of logical relation mining for interpretability by using the proposed KM learning algorithm exceeds 80%.
  • The Role of Digital Technologies in Responding to the Grand Challenges of the Natural Environment: The Windermere Accord
    • Blair Gordon
    • Bassett Richard
    • Bastin Louis
    • Beevers L.
    • Borrajo Garcia Maribel
    • Brown Mike
    • Dance Sarah L
    • Diaconescu Ada
    • Edwards Elizabeth
    • Ferrario Maria Angela
    • Fraser Robert
    • Fraser Harriet
    Patterns, Cell Press Elsevier, 2021.
  • Distributed Learning assisted Fronthaul Compression for Multi-Antenna Uplink C-RAN
    • Askri Aymen
    • Zhang Chao
    • Rekaya-Ben Othman Ghaya
    IEEE Access, IEEE, 2021.
  • Bayesian Node Classification for Noisy Graphs
    • Hafidi Hakim
    • Ghogho Mounir
    • Ciblat Philippe
    • Swami Ananthram
    , 2021. Graph neural networks (GNNs) have been recognized as powerful tools for learning representations in graph-structured data. The key idea is to propagate and aggregate information along the edges of the given graph. However, little work has been done to analyze the effect of noise on their performance. By conducting a number of simulations, we show that GNNs are very sensitive to graph noise. We propose a graph-assisted Bayesian node classifier which takes into account the degree of impurity of the graph, and show that it consistently outperforms GNN-based classifiers on benchmark datasets, particularly when the degree of impurity is moderate to high.
  • River: machine learning for streaming data in Python
    • Montiel Jacob
    • Halford Max
    • Mastelini Saulo Martiello
    • Bolmier Geoffrey
    • Sourty Raphaël
    • Vaysse Robin
    • Zouitine Adil
    • Gomes Heitor Murilo
    • Read Jesse
    • Abdessalem Talel
    • Bifet Albert
    Journal of Machine Learning Research, Microtome Publishing, 2021, 22, pp.1-8. River is a machine learning library for dynamic data streams and continual learning. It provides multiple state-of-the-art learning methods, data generators/transformers, performance metrics and evaluators for different stream learning problems. It is the result of the merger of two popular packages for stream learning in Python: Creme and scikit-multiflow. River introduces a revamped architecture based on the lessons learnt from the seminal packages. River's ambition is to be the go-to library for doing machine learning on streaming data. Additionally, this open source package brings under the same umbrella a large community of practitioners and researchers. The source code is available at https://github.com/online-ml/river. (10.48550/arXiv.2012.04740)
    DOI : 10.48550/arXiv.2012.04740
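    The one-sample-at-a-time regime that stream-learning libraries such as River target can be illustrated with a stdlib-only sketch. This is not River's API, just the core streaming constraint the library is built around: see each sample exactly once, update the model, and discard the sample.

    ```python
    class RunningStats:
        """Streaming mean/variance via Welford's online algorithm: state is
        updated incrementally, so no sample ever needs to be stored."""

        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self._m2 = 0.0   # running sum of squared deviations

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self._m2 += delta * (x - self.mean)

        @property
        def variance(self):
            # population variance of all samples seen so far
            return self._m2 / self.n if self.n else 0.0

    stats = RunningStats()
    for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        stats.update(x)
    # stats.mean == 5.0, stats.variance == 4.0
    ```

    River generalizes this pattern from summary statistics to full estimators (classifiers, regressors, drift detectors), each exposing an analogous incremental update.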
  • Will a robot able to compute its responsibility be responsible for its actions?
    • Dessalles Jean-Louis
    , 2021. It may seem shocking to imagine that notions such as responsibility, intention, judgment or negligence could be the object of computation. Yet legal decision-making is not ineffable, since decisions are supposed to be justified after the fact by reference to principles. Can these principles be translated into a form usable by machines?
  • Probabilistic semi-nonnegative matrix factorization: a Skellam-based framework
    • Fuentes Benoît
    • Richard Gael
    Computing Research Repository, ACM / ArXiv, 2021. We present a new probabilistic model to address semi-nonnegative matrix factorization (SNMF), called Skellam-SNMF. It is a hierarchical generative model consisting of prior components, Skellam-distributed hidden variables and observed data. Two inference algorithms are derived: an Expectation-Maximization (EM) algorithm for maximum \emph{a posteriori} estimation and Variational Bayes EM (VBEM) for full Bayesian inference, including the estimation of the parameters' prior distribution. From this Skellam-based model, we also introduce a new divergence D between real-valued target data x and two nonnegative parameters λ0 and λ1 such that D(x∣λ0,λ1)=0⇔x=λ0−λ1, which is a generalization of the Kullback-Leibler (KL) divergence. Finally, we conduct experimental studies on these new algorithms in order to understand their behavior, and show that they can outperform the classic SNMF approach on real data in a task of automatic clustering.
  • Power Allocation for Uplink Multiband Satellite Communications with Nonlinear Impairments
    • Louchart Arthur
    • Ciblat Philippe
    • Poulliat Charly
    IEEE Communications Letters, Institute of Electrical and Electronics Engineers, 2021, 25 (8), pp.2713-2717. In this letter, we develop some generic power allocation strategies in an uplink multiband satellite communications system when nonlinear impairments on the High-Power Amplifier onboard satellite occur. Based on the capacity closed-form expression related to receivers seeing nonlinear interference as a noise, we propose practical and scalable algorithms for three power allocation problems: i) sum-power minimization, ii) maximization of minimum per-user data rate, iii) sum-rate maximization. We show that the solutions mainly rely on Geometric Programming and/or Successive Convex Approximation approaches. The proposed solutions outperform naive approaches while enabling user scalability contrary to optimal brute-force grid search algorithms. (10.1109/LCOMM.2021.3087408)
    DOI : 10.1109/LCOMM.2021.3087408
  • Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging
    • Pirovano A.
    • Heuberger H.
    • Berlemont S.
    • Ladjal Saïd
    • Bloch Isabelle
    Machine Learning and Knowledge Extraction, MDPI, 2021, 3 (1), pp.243-262. Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned and why it makes a specific decision) is the next important challenge that deep learning methods need to answer to be fully integrated in the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification with a formalization of the design of WSI classification architectures, and propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel manner of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call the Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method. (10.3390/make3010012)
    DOI : 10.3390/make3010012