
Publications

 

The publications of our faculty members are available on the HAL platform:

 

The thesis publications of LTCI PhD graduates are available on the HAL platform:

 

Browse the publications deposited in the HAL open archive by year:

2019

  • Smoothing Structured Decomposable Circuits
    • Shih Andy
    • Broeck Guy van Den
    • Beame Paul
    • Amarilli Antoine
    , 2019. We study the task of smoothing a circuit, i.e., ensuring that all children of a plus-gate mention the same variables. Circuits serve as the building blocks of state-of-the-art inference algorithms on discrete probabilistic graphical models and probabilistic programs. They are also important for discrete density estimation algorithms. Many of these tasks require the input circuit to be smooth. However, smoothing has not been studied in its own right yet, and only a trivial quadratic algorithm is known. This paper studies efficient smoothing for structured decomposable circuits. We propose a near-linear time algorithm for this task and explore lower bounds for smoothing decomposable circuits, using existing results on range-sum queries. Further, for the important case of All-Marginals, we show a more efficient linear-time algorithm. We validate experimentally the performance of our methods.
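As background for the abstract above, the trivial quadratic smoothing baseline it mentions can be sketched in a few lines (a toy illustration of my own, not the paper's near-linear algorithm; the tuple encoding of circuits and the neutral `one` padding gates are assumptions made for this sketch):

```python
# Toy smoothing baseline: a circuit is a nested structure of
# ('var', name), ('one', name), ('mul', children), ('add', children).
# Smoothing an add-gate multiplies each child by neutral ('one', v)
# gates for every variable v it is missing, so all children of the
# add-gate end up mentioning the same variables.

def variables(node):
    kind = node[0]
    if kind in ('var', 'one'):
        return {node[1]}
    return set().union(*(variables(c) for c in node[1]))

def smooth(node):
    kind = node[0]
    if kind in ('var', 'one'):
        return node
    children = [smooth(c) for c in node[1]]
    if kind == 'mul':
        return ('mul', children)
    # add-gate: pad every child up to the union of all children's scopes
    scope = set().union(*(variables(c) for c in children))
    padded = []
    for c in children:
        missing = scope - variables(c)
        if missing:
            c = ('mul', [c] + [('one', v) for v in sorted(missing)])
        padded.append(c)
    return ('add', padded)

circuit = ('add', [('var', 'x'), ('mul', [('var', 'y'), ('var', 'z')])])
smoothed = smooth(circuit)
# every child of the add-gate now mentions {x, y, z}
assert all(variables(c) == {'x', 'y', 'z'} for c in smoothed[1])
```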
  • RSN: Randomized Subspace Newton
    • Gower Robert M
    • Kovalev Dmitry
    • Lieder Felix
    • Richtárik Peter
    , 2019. We develop a randomized Newton method capable of solving learning problems with huge dimensional feature spaces, which is a common setting in applications such as medical imaging, genomics and seismology. Our method leverages randomized sketching in a new way, by finding the Newton direction constrained to the space spanned by a random sketch. We develop a simple global linear convergence theory that holds for practically all sketching techniques, which gives practitioners the freedom to design custom sketching approaches suitable for particular applications. We perform numerical experiments which demonstrate the efficiency of our method as compared to accelerated gradient descent and the full Newton method. Our method can be seen as a refinement and randomized extension of the results of Karimireddy, Stich, and Jaggi [18].
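A minimal sketch of the idea under strong assumptions (a quadratic objective f(x) = 0.5 xᵀAx − bᵀx and sketch size s = 1), not the authors' implementation: the Newton direction is restricted to the span of a random one-column sketch u, so only the scalar uᵀAu is ever inverted instead of the full d×d Hessian.

```python
import random

def rsn_step_rank1(x, A, b):
    """One subspace Newton step along a random Gaussian direction u."""
    d = len(x)
    u = [random.gauss(0, 1) for _ in range(d)]    # random sketch, s = 1
    g = [sum(A[i][j] * x[j] for j in range(d)) - b[i] for i in range(d)]
    uAu = sum(u[i] * A[i][j] * u[j]
              for i in range(d) for j in range(d))
    t = sum(u[i] * g[i] for i in range(d)) / uAu  # (S^T A S)^{-1} S^T g
    return [x[i] - t * u[i] for i in range(d)]

random.seed(0)
A = [[2.0, 0.0], [0.0, 8.0]]                      # SPD Hessian
b = [2.0, 8.0]                                    # minimizer at x* = (1, 1)
x = [0.0, 0.0]
for _ in range(300):
    x = rsn_step_rank1(x, A, b)
assert max(abs(x[0] - 1.0), abs(x[1] - 1.0)) < 1e-3
```

Each step exactly minimizes the quadratic within the random subspace, which is what yields the global linear convergence the abstract describes.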
  • Towards closing the gap between the theory and practice of SVRG
    • Sebbouh Othmane
    • Gazagnadou Nidham
    • Jelassi Samy
    • Bach Francis
    • Gower Robert M
    , 2019. Amongst the very first variance reduced stochastic methods for solving the empirical risk minimization problem was the SVRG method [11]. SVRG is an inner-outer loop based method, where in the outer loop a reference full gradient is evaluated, after which m ∈ N steps of an inner loop are executed where the reference gradient is used to build a variance reduced estimate of the current gradient. The simplicity of the SVRG method and its analysis has led to multiple extensions and variants for even non-convex optimization. Yet there is a significant gap between the parameter settings that the analysis suggests and what is known to work well in practice. Our first contribution is that we take several steps here towards closing this gap. In particular, the current analysis shows that m should be of the order of the condition number so that the resulting method has a favourable complexity. Yet in practice m = n works well regardless of the condition number, where n is the number of data points. Furthermore, the current analysis shows that the inner iterates have to be reset using averaging after every outer loop. Yet in practice SVRG works best when the inner iterates are updated continuously and not reset. We provide an analysis of these aforementioned practical settings and show that they achieve the same favourable complexity as the original analysis (with slightly better constants). Our second contribution is to provide a more general analysis than had been previously done by using arbitrary sampling, which allows us to analyse virtually all forms of mini-batching through a single theorem. Since our setup and analysis reflects what is done in practice, we are able to set the parameters such as the mini-batch size and step size using our theory in such a way that produces a more efficient algorithm in practice, as we show in extensive numerical experiments.
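The inner-outer loop structure and the practical settings discussed in this abstract (m = n, inner iterates carried over continuously with no reset) can be sketched on a toy 1-D least-squares problem; this is my own illustration, not the paper's experiments:

```python
import random

def grad_i(w, a, y):                     # gradient of f_i(w) = 0.5*(w*a - y)^2
    return (w * a - y) * a

def svrg(w, A, Y, epochs=30, lr=0.05):
    n = len(A)
    for _ in range(epochs):
        w_ref = w                        # outer loop: snapshot + full gradient
        full = sum(grad_i(w_ref, A[i], Y[i]) for i in range(n)) / n
        for _ in range(n):               # inner loop with m = n steps
            i = random.randrange(n)
            # variance-reduced estimate of the current gradient
            g = grad_i(w, A[i], Y[i]) - grad_i(w_ref, A[i], Y[i]) + full
            w -= lr * g
        # no averaging/reset: w flows straight into the next outer loop
    return w

random.seed(1)
A, Y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # exact solution w* = 2
w = svrg(0.0, A, Y)
assert abs(w - 2.0) < 1e-3
```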
  • Stochastic Conditional Gradient Method for Composite Convex Minimization
    • Locatello Francesco
    • Yurtsever Alp
    • Fercoq Olivier
    • Cevher Volkan
    , 2019. In this paper, we propose the first practical algorithm to minimize stochastic composite optimization problems over compact convex sets. This template allows for affine constraints and therefore covers stochastic semidefinite programs (SDPs), which are vastly applicable in both machine learning and statistics. In this setup, stochastic algorithms with convergence guarantees are either not known or not tractable. We tackle this general problem and propose a convergent, easy to implement and tractable algorithm. We prove $\mathcal{O}(k^{-1/3})$ convergence rate in expectation on the objective residual and $\mathcal{O}(k^{-5/12})$ in expectation on the feasibility gap. These rates are achieved without increasing the batchsize, which can contain a single sample. We present extensive empirical evidence demonstrating the superiority of our algorithm on a broad range of applications including optimization of stochastic SDPs.
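As background for the template above, here is a plain stochastic Frank-Wolfe (conditional gradient) sketch over the probability simplex; this is not the paper's algorithm (which additionally handles affine constraints), and the averaging weight rho and step gamma are standard textbook choices assumed for illustration:

```python
import random

def sfw_simplex(dim, stoch_grad, iters=3000):
    x = [1.0 / dim] * dim
    d = [0.0] * dim                                  # running gradient average
    for k in range(1, iters + 1):
        rho = 2.0 / (k + 1) ** 0.5
        g = stoch_grad(x)
        d = [(1 - rho) * di + rho * gi for di, gi in zip(d, g)]
        j = min(range(dim), key=lambda i: d[i])      # linear minimization oracle
        gamma = 2.0 / (k + 2)
        x = [(1 - gamma) * xi for xi in x]           # convex combination step:
        x[j] += gamma                                # iterate stays feasible
    return x

# minimize E||x - t||^2 over the simplex, with noisy gradients
random.seed(0)
target = [0.2, 0.3, 0.5]
noisy_grad = lambda x: [2 * (xi - ti) + random.gauss(0, 0.05)
                        for xi, ti in zip(x, target)]
x = sfw_simplex(3, noisy_grad)
err = sum((xi - ti) ** 2 for xi, ti in zip(x, target))
assert abs(sum(x) - 1.0) < 1e-9                      # projection-free feasibility
assert err < 0.05
```

The point of the template is visible in the update: feasibility comes for free from the convex combination with a vertex, so no projection onto the constraint set is ever computed.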
  • First Order Motion Model for Image Animation
    • Siarohin Aliaksandr
    • Lathuilière Stéphane
    • Tulyakov Sergey
    • Ricci Elisa
    • Sebe Nicu
    , 2019. Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories. Our source code is publicly available.
  • First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise
    • Nguyen Thanh Huy
    • Şimşekli Umut
    • Gürbüzbalaban Mert
    • Richard Gael
    , 2019. Stochastic gradient descent (SGD) has been widely used in machine learning due to its computational efficiency and favorable generalization properties. Recently, it has been empirically demonstrated that the gradient noise in several deep learning settings admits a non-Gaussian, heavy-tailed behavior. This suggests that the gradient noise can be modeled by using α-stable distributions, a family of heavytailed distributions that appear in the generalized central limit theorem. In this context, SGD can be viewed as a discretization of a stochastic differential equation (SDE) driven by a Lévy motion, and the metastability results for this SDE can then be used for illuminating the behavior of SGD, especially in terms of ‘preferring wide minima’. While this approach brings a new perspective for analyzing SGD, it is limited in the sense that, due to the time discretization, SGD might admit a significantly different behavior than its continuous-time limit. Intuitively, the behaviors of these two systems are expected to be similar to each other only when the discretization step is sufficiently small; however, to the best of our knowledge, there is no theoretical understanding on how small the step-size should be chosen in order to guarantee that the discretized system inherits the properties of the continuous-time system. In this study, we provide formal theoretical analysis where we derive explicit conditions for the step-size such that the metastability behavior of the discrete-time system is similar to its continuous-time limit. We show that the behaviors of the two systems are indeed similar for small step-sizes and we identify how the error depends on the algorithm and problem parameters. We illustrate our results with simulations on a synthetic model and neural networks.
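The noise model discussed above can be made concrete with a small sketch (my own toy, not the paper's analysis): the Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables, plugged into an Euler step of the Lévy-driven SDE view of SGD. The eta**(1/alpha) scaling of the Lévy increment is precisely what the discretization analysis hinges on.

```python
import math, random

def sym_alpha_stable(alpha):
    """Chambers-Mallows-Stuck sampler, symmetric case (beta = 0, scale 1)."""
    V = random.uniform(-math.pi / 2, math.pi / 2)
    W = random.expovariate(1.0)
    return (math.sin(alpha * V) / math.cos(V) ** (1 / alpha)
            * (math.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

def sgd_levy_step(x, grad, eta=0.01, alpha=1.7):
    # Euler discretization of dX_t = -grad f(X_t) dt + dL_t^alpha
    return x - eta * grad(x) + eta ** (1 / alpha) * sym_alpha_stable(alpha)

random.seed(0)
x = 0.0
for _ in range(1000):
    x = sgd_levy_step(x, lambda z: z)    # f(z) = z^2 / 2
# heavy tails produce occasional large jumps, but the drift keeps x bounded
assert math.isfinite(x)
```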
  • Joint distance and azimuth angle estimation using a UWB-based indoor localization system
    • Awarkeh Nour
    • Cousin Jean-Christophe
    • Muller Muriel
    • Samama Nel
    , 2019, 8 (1), pp.88-92. This paper presents a ranging test conducted in a standard indoor environment using an ultra-wideband (UWB) radar system. The position is determined by jointly estimating the distance using the Energy Detection (ED) method, and the azimuth angle using the Phase Correlation (PC) method. The obtained results show that this UWB ranging system can improve the angular resolution compared to traditional Indoor Localization Systems (ILSs).
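The two estimates combined above can be caricatured with made-up numbers (a toy of my own, not the paper's hardware): energy detection picks the first energy bin crossing a threshold to get the round-trip delay, hence d = c·τ/2; phase correlation between two receive antennas gives the azimuth from their phase difference.

```python
import math

C = 299_792_458.0                    # speed of light, m/s

def ed_distance(energy_bins, bin_width_s, threshold):
    """Energy detection: first threshold crossing gives the round-trip delay."""
    for k, e in enumerate(energy_bins):
        if e >= threshold:
            return C * (k * bin_width_s) / 2.0   # round trip, so halve
    return None

def pc_azimuth(delta_phi, wavelength, spacing):
    """Azimuth from the phase difference between two antennas."""
    return math.asin(wavelength * delta_phi / (2 * math.pi * spacing))

bins = [0.01] * 40 + [0.9] + [0.2] * 10          # echo lands in 1 ns bin #40
d = ed_distance(bins, 1e-9, 0.5)                 # -> roughly 6 m
assert abs(d - C * 40e-9 / 2) < 1e-9
# assumed geometry: 6 cm wavelength, 3 cm antenna spacing
assert abs(pc_azimuth(math.pi / 4, 0.06, 0.03) - math.asin(0.25)) < 1e-9
```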
  • Coordinate descent methods for non-differentiable convex optimisation problems
    • Fercoq Olivier
    , 2019.
  • Cross-layer hybrid and optical packet switching
    • Minakhmetov Artur
    , 2019. Transparent optical telecommunication networks constitute a development step beyond all-electronic networks. Current data network technologies already actively employ optical fibers and transparent networks in core, metro, and residential area networks. However, these networks still rely on Electronic Packet Switching (EPS) for packet routing, which requires an optical-to-electronic-to-optical (OEO) signal conversion for every packet. On the other hand, Optical Packet Switching (OPS), seen as a replacement for EPS, has long promised performance and energy consumption improvements by eliminating OEO conversions; however, the absence of practical optical buffers made OPS highly vulnerable to contention, reducing performance and preventing OPS from delivering its gains. The subject of this research is the investigation of the performance of OPS networks with all-optical and hybrid switches, while server-side transmission activities are regulated by Transport Control Protocols based on Congestion Control Algorithms (TCP CCAs). We consider that OPS could be enabled by the use of hybrid switches, i.e., a device-level solution, as well as by specially designed TCP CCAs, i.e., a network-level solution, giving birth to Hybrid Optical Packet Switching (HOPS) networks. We extensively study OPS, HOPS and EPS types of Data Center Networks (DCN) coupled with different TCP CCAs along three axes of DCN performance: throughput, energy consumption, and latency. As for TCP CCAs, we consider not only existing but also newly developed solutions. While Stop-And-Wait (SAW), Selective Acknowledgment (SACK), modified SACK (mSACK) and Data Center TCP (DCTCP) are already known, Stop-And-Wait-Longer (SAWL) is newly presented and is designed to bring the best out of the HOPS DCN.
As a result, it is shown that hybrid switch solutions significantly outperform bufferless all-optical switches and reach the level of all-electronic switches in DCNs in terms of throughput. In terms of energy consumption, hybrid solutions can save up to 4 times on switching energy compared to all-electronic solutions. HOPS DCNs can also exhibit microsecond-scale average latencies, surpassing EPS and performing on a par with OPS. The introduction of Classes of Service into HOPS DCNs is also investigated: class-specific switching rules in the hybrid switch can improve the performance of certain classes with almost no performance loss in the others.
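The contention story above can be caricatured in a few lines (a slotted-time toy model of my own, far cruder than the thesis' simulations): two inputs contend for one output port; a bufferless all-optical switch drops the losing packet, while a hybrid switch diverts it to a small electronic buffer and replays it in an idle slot, paying OEO conversion only on contention.

```python
import random

def run(switch, n_slots=10_000, p=0.6, buf_size=4):
    random.seed(42)                      # same arrival pattern for both runs
    buffer = delivered = dropped = 0
    for _ in range(n_slots):
        arrivals = sum(random.random() < p for _ in range(2))
        if arrivals == 0:
            if buffer:                   # idle slot: replay a buffered packet
                buffer -= 1
                delivered += 1
            continue
        delivered += 1                   # one packet wins the output port
        for _ in range(arrivals - 1):    # the contending loser
            if switch == 'hybrid' and buffer < buf_size:
                buffer += 1              # absorbed electronically
            else:
                dropped += 1             # bufferless: contention loss
    return delivered, dropped

d_opt, lost_opt = run('optical')
d_hyb, lost_hyb = run('hybrid')
assert lost_hyb < lost_opt               # buffering cuts contention losses
assert d_hyb >= d_opt
```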
  • Sparse high dimensional regression in the presence of colored heteroscedastic noise : application to M/EEG source imaging
    • Massias Mathurin
    , 2019. Understanding the functioning of the brain under normal and pathological conditions is one of the challenges of the 21st century. In the last decades, neuroimaging has radically affected clinical and cognitive neurosciences. Amongst neuroimaging techniques, magneto- and electroencephalography (M/EEG) stand out for two reasons: their non-invasiveness, and their excellent time resolution. Reconstructing the neural activity from the recordings of magnetic field and electric potentials is the so-called bio-magnetic inverse problem. Because of the limited number of sensors, this inverse problem is severely ill-posed, and additional constraints must be imposed in order to solve it. A popular approach, considered in this manuscript, is to assume spatial sparsity of the solution: only a few brain regions are involved in a short and specific cognitive task. Solutions exhibiting such a neurophysiologically plausible sparsity pattern can be obtained through L21-penalized regression approaches. However, this regularization requires solving time-consuming high-dimensional and non-smooth optimization problems, with iterative (block) proximal gradient solvers. Additionally, M/EEG recordings are usually corrupted by strong non-white noise, which breaks the classical statistical assumptions of inverse problems.
To circumvent this, it is customary to whiten the data as a preprocessing step, and to average multiple repetitions of the same experiment to increase the signal-to-noise ratio. Averaging measurements has the drawback of removing brain responses which are not phase-locked, i.e., which do not happen at a fixed latency after the stimulus presentation onset. In this work, we first propose speed improvements of iterative solvers used for the L21-regularized bio-magnetic inverse problem. Typical improvements, screening and working sets, exploit the sparsity of the solution: by identifying inactive brain sources, they reduce the dimensionality of the optimization problem. We introduce a new working set policy, derived from the state-of-the-art Gap Safe screening rules. In this framework, we also propose duality improvements, yielding a tighter control of optimality and improving feature identification techniques. This dual construction extrapolates on an asymptotic Vector AutoRegressive regularity of the dual iterates, which we connect to manifold identification of proximal algorithms. Beyond the L21-regularized bio-magnetic inverse problem, the proposed methods apply to the whole class of sparse Generalized Linear Models. Second, we introduce new concomitant estimators for multitask regression. Along with the neural source estimation, concomitant estimators jointly estimate the noise covariance matrix. We design them to handle non-white Gaussian noise, and to exploit the multiple-repetition nature of M/EEG experiments. Instead of averaging the observations, our proposed method, CLaR, uses them all for a better estimation of the noise. The underlying optimization problem is jointly convex in the regression coefficients and the noise variable, with a "smooth + proximable" composite structure. It is therefore solvable via standard alternate minimization, for which we apply the improvements detailed in the first part. We provide a theoretical
analysis of our objective function, linking it to the smoothing of Schatten norms. We demonstrate the benefits of the proposed approach for source localization on real M/EEG datasets. Our improved solvers and refined modeling of the noise pave the way for a faster and more statistically efficient processing of M/EEG recordings, allowing for interactive data analysis and scaling approaches to larger and larger M/EEG datasets.
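The L21 penalty at the heart of the thesis above has a closed-form proximal operator (block soft-thresholding), sketched here as a self-contained toy: each row (one brain source across time) is shrunk as a block, and rows with small norm vanish entirely, which is exactly the structure that screening rules and working sets exploit.

```python
import math

def prox_l21(B, reg):
    """Row-wise block soft-thresholding: prox of reg * sum_j ||B_j||_2."""
    out = []
    for row in B:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(1.0 - reg / norm, 0.0) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

B = [[3.0, 4.0],      # norm 5   -> kept, shrunk by factor (1 - 1/5)
     [0.3, 0.4]]      # norm 0.5 -> below reg, zeroed: inactive source
P = prox_l21(B, 1.0)
assert abs(P[0][0] - 2.4) < 1e-9 and abs(P[0][1] - 3.2) < 1e-9
assert P[1] == [0.0, 0.0]
```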
  • Misbehavior Detection in C-ITS: A comparative approach of local detection mechanisms
    • Kamel Joseph
    • Jemaa Ines Ben
    • Kaiser Arnaud
    • Cantat Loic
    • Urien Pascal
    , 2019. MisBehavior Detection (MBD) is an important security mechanism in Cooperative Intelligent Transport Systems (C-ITS). It involves monitoring C-ITS communications to detect potentially misbehaving entities. This monitoring is based on local plausibility and consistency checks done by the Intelligent Transport Systems (ITS) Station (ITS-S) on every received Vehicle-to-Everything (V2X) message. These checks are then analyzed by local detection mechanisms to estimate the overall plausibility of a message. In this paper, we focus on the logic behind different local detection mechanisms. First, we propose different local detection solutions based on logics extracted from the state of the art. Then we present a comparative review of the detection quality and the computation latency of each proposed mechanism.
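The plausibility and consistency checks described above can be sketched as follows; the thresholds and the min-fusion rule are my own toy choices, not the C-ITS standard checks or the mechanisms compared in the paper:

```python
MAX_SPEED = 70.0          # m/s; assumed plausible road-vehicle maximum

def speed_plausibility(msg):
    """Plausibility check: is the claimed speed physically reasonable?"""
    return 1.0 if msg['speed'] <= MAX_SPEED else 0.0

def consistency(msg, prev, dt):
    """Consistency check: distance covered vs. distance reachable."""
    dist = abs(msg['pos'] - prev['pos'])
    reachable = max(msg['speed'], prev['speed']) * dt
    return 1.0 if dist <= reachable * 1.1 else 0.0   # 10% slack

def local_score(msg, prev, dt=1.0):
    # toy fusion of per-check results into one message plausibility
    return min(speed_plausibility(msg), consistency(msg, prev, dt))

prev = {'pos': 0.0, 'speed': 20.0}
honest = {'pos': 20.5, 'speed': 21.0}
ghost  = {'pos': 500.0, 'speed': 25.0}   # "teleporting" attacker beacon
assert local_score(honest, prev) == 1.0
assert local_score(ghost, prev) == 0.0
```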
  • On cooperative and concurrent detection in distributed hypothesis testing
    • Escamilla Pierre
    , 2019. Statistical inference plays a major role in the development of new technologies and inspires a large number of algorithms dedicated to detection, identification and estimation tasks. However, there is no theoretical guarantee for the performance of these algorithms. In this thesis, we try to understand how sensors can best share their information in a network with communication constraints to detect the same or distinct events. We investigate different aspects of detector cooperation and how conflicting needs can best be met in the case of detection tasks. More specifically, we study a hypothesis testing problem where each detector must maximize the decay exponent of the Type II error under a given Type I error constraint. As the detectors are interested in different information, a trade-off between the achievable decay exponents of the Type II error appears. Our goal is to characterize the region of possible trade-offs between Type II error decay exponents. In massive sensor networks, the amount of information is often limited due to energy consumption and network saturation risks. We therefore study the zero-rate compression communication regime (i.e., the message size increases sub-linearly with the number of observations). In this case, we fully characterize the region of Type II error decay exponents, in configurations where the detectors may or may not share the same objectives. We also study the case of a network with positive compression rates (i.e., the message size increases linearly with the number of observations). In this case, we present subsets of the region of Type II error decay exponents. Finally, in the case of a single-sensor single-detector scenario with a positive compression rate, we propose a complete characterization of the optimal Type II error decay exponent for a family of Gaussian hypothesis testing problems.
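A hedged illustration of the central quantity above (not the thesis' multi-detector regions): by Stein's lemma, under a fixed Type I error constraint the best Type II error over n i.i.d. samples decays as exp(−n·D(P0‖P1)); the trade-off regions studied in the thesis generalize this single-exponent picture.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p0 = [0.5, 0.5]          # null hypothesis
p1 = [0.9, 0.1]          # alternative hypothesis
exponent = kl(p0, p1)    # Type II error ~ exp(-n * exponent)
assert exponent > 0
assert abs(exponent - 0.5108) < 1e-3
```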
  • Adaptive optics assisted space-ground coherent optical links: impact of turbulence on carrier recovery
    • Paillier Laurie
    • Le Bidan Raphaël
    • Conan Jean-Marc
    • Artaud Géraldine
    • Vedrenne Nicolas
    • Jaouën Yves
    , 2019.
  • Sources of light for quantum communications
    • Zaquine Isabelle
    , 2019.
  • Dynamic and environmental study of a mid-infrared wavelength channel for a horizontal telecom link
    • Sauvage Chloé
    • Robert Clélia
    • Sorrente Béatrice
    • Grillot Frédéric
    • Erasme Didier
    , 2019. This study characterizes a horizontal atmospheric telecom channel in terms of the evolution time of the intensity and phase of the received electromagnetic field. The temporal characterization of the channel comes from a turbulence database recorded during a Cn² profiler experiment (SCINDAR). The coherence times of the intensity and phase are computed for overcast and clear skies; these values provide information on the achievable bit rate and on the speed required of an adaptive optics system. (10.34693/COAT2019-S6-001)
    DOI : 10.34693/COAT2019-S6-001
  • A Language-based Multi-view Approach for Combining Functional and Security Models
    • Zhao Hui
    • Mallet Frédéric
    • Apvrille Ludovic
    , 2019. The design flaws and attacks on Cyber-Physical Systems (CPSs) can lead to severe consequences. Thus, security and safety (S&S) issues should be taken into account alongside functional design as early as possible in the development process. However, it is rare to find a "one-size-fits-all" modeling language and/or design tool. One way to solve this issue is to integrate models of different natures into one model system, but this requires a unified semantics among modeling languages. We explore a model-based approach for systems engineering that facilitates the composition of several heterogeneous artifacts (called views) into a sound and consistent system model. Rather than trying to extend either SysML or SysML-Sec into more expressive languages to add the missing features, we extract proper subsets of both languages to build a view adequate for conducting a security and safety analysis of Capella (SysML-based) functional models. Our language is generic enough to extract proper subsets of languages and combine them to build views for different experts. Moreover, it maintains a global consistency between the different views.
  • Dissociating task acquisition from expression during learning reveals latent knowledge
    • Kuchibhotla Kishore
    • Sten Tom Hindmarsh
    • Papadoyannis Eleni S
    • Elnozahy Sarah
    • Fogelson Kelly A
    • Kumar Rupesh
    • Boubenec Yves
    • Holland Peter C
    • Ostojic Srdjan
    • Froemke Robert C
    Nature Communications, Nature Publishing Group, 2019, 10 (1). Performance on cognitive tasks during learning is used to measure knowledge, yet it remains controversial since such testing is susceptible to contextual factors. To what extent does performance during learning depend on the testing context, rather than underlying knowledge? We trained mice, rats and ferrets on a range of tasks to examine how testing context impacts the acquisition of knowledge versus its expression. We interleaved reinforced trials with probe trials in which we omitted reinforcement. Across tasks, each animal species performed remarkably better in probe trials during learning, and inter-animal variability was strikingly reduced. Reinforcement feedback is thus critical for learning-related behavioral improvements but, paradoxically, masks the expression of underlying knowledge. We capture these results with a network model in which learning occurs during reinforced trials while context modulates only the read-out parameters. Probing learning by omitting reinforcement thus uncovers latent knowledge and identifies context, not "smartness", as the major source of individual variability. (10.1038/s41467-019-10089-0)
    DOI : 10.1038/s41467-019-10089-0
  • The weakest failure detector for eventual consistency
    • Dubois Swan
    • Guerraoui Rachid
    • Kuznetsov Petr
    • Petit Franck
    • Sens Pierre
    Distributed Computing, Springer Verlag, 2019, 32 (6), pp.479-492. In its classical form, a consistent replicated service requires all replicas to witness the same evolution of the service state. If we consider an asynchronous message-passing environment in which processes might fail by crashing, and assume that a majority of processes are correct, then the necessary and sufficient information about failures for implementing a general state machine replication scheme ensuring consistency is captured by the Ω failure detector. This paper shows that in such a message-passing environment, Ω is also the weakest failure detector to implement an eventually consistent replicated service, where replicas are expected to agree on the evolution of the service state only after some (a priori unknown) time. In fact, we show that Ω is the weakest to implement eventual consistency in any message-passing environment, i.e., under any assumption on when and where failures might occur. Ensuring (strong) consistency in any environment requires, in addition to Ω, the quorum failure detector Σ. Our paper thus captures, for the first time, an exact computational difference between building a replicated state machine that ensures consistency and one that only ensures eventual consistency. (10.1007/s00446-016-0292-9)
    DOI : 10.1007/s00446-016-0292-9
  • Phenotypic similarity for rare disease: ciliopathy diagnoses and subtyping
    • Chen Xiaoyi
    • Garcelon Nicolas
    • Neuraz Antoine
    • Billot Katy
    • Lelarge Marc
    • Bonald Thomas
    • Garcia Hugo
    • Martin Yoann
    • Benoit Vincent
    • Vincent Marc
    • Faour Hassan
    • Douillet Maxime
    • Lyonnet Stanislas
    • Saunier Sophie
    • Burgun Anita
    Journal of Biomedical Informatics, Elsevier, 2019, 100, pp.103308. Rare diseases are often difficult and slow to diagnose precisely, and most of them lack an approved treatment. For some complex rare diseases, a precision-medicine approach is further required to stratify patients into homogeneous subgroups based on clinical, biological or molecular features. In such situations, deep phenotyping of these patients and comparing their profiles based on underlying similarities are essential to enable fast and precise diagnoses and a better understanding of pathophysiological processes, in order to develop therapeutic solutions. In this article, we developed a new pipeline using deep phenotyping to define patient similarity and applied it to ciliopathies, a group of rare and severe diseases caused by ciliary dysfunction. As a French national reference center for rare and undiagnosed diseases, the Necker-Enfants Malades Hospital (Necker Children's Hospital) hosts the Imagine Institute, a research institute focusing on genetic diseases. The clinical data warehouse contains on one hand EHR data, and on the other hand, clinical research data. The similarity metrics were computed on both data sources, and were evaluated with two tasks: diagnosis with EHRs and subtyping with ciliopathy-specific research data. We obtained a precision of 0.767 in the top 30 most similar patients with diagnosed ciliopathies. Subtyping ciliopathy patients with phenotypic similarity showed concordance with expert knowledge. Similarity metrics applied to rare diseases offer new perspectives in a translational context that may help to recruit patients for research, reduce the length of the diagnostic journey, and better understand the mechanisms of the disease. (10.1016/j.jbi.2019.103308)
    DOI : 10.1016/j.jbi.2019.103308
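The headline figure in the abstract above (precision 0.767 in the top 30 most similar patients) is a standard precision-at-k computation; here is a minimal sketch whose scores and labels are made up so that the toy cohort happens to reproduce 23/30 ≈ 0.767:

```python
def precision_at_k(scores, labels, k=30):
    """Fraction of true diagnoses among the k most similar patients."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])[:k]
    return sum(lab for _, lab in ranked) / k

scores = [1.0 - i / 100 for i in range(40)]      # similarity, decreasing
labels = [1] * 23 + [0] * 17                     # 23 true cases in the top 30
assert abs(precision_at_k(scores, labels) - 23 / 30) < 1e-12
```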
  • Deep Tone Mapping Operator for High Dynamic Range Images
    • Rana Aakanksha A
    • Singh Praveer
    • Valenzise Giuseppe
    • Dufaux Frédéric
    • Komodakis Nikos
    • Smolic Aljosa
    IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers, 2019, 29 (1), pp.1285-1298. A computationally fast tone mapping operator (TMO) that can quickly adapt to a wide spectrum of high dynamic range (HDR) content is quintessential for visualization on varied low dynamic range (LDR) output devices such as movie screens or standard displays. Existing TMOs can successfully tone-map only a limited range of HDR content and require extensive parameter tuning to yield the best subjective-quality tone-mapped output. In this paper, we address this problem by proposing a fast, parameter-free and scene-adaptable deep tone mapping operator (DeepTMO) that yields a high-resolution and high-subjective-quality tone mapped output. Based on conditional generative adversarial networks (cGAN), DeepTMO not only learns to adapt to vast scenic content (e.g., outdoor, indoor, human, structures, etc.) but also tackles HDR-related scene-specific challenges such as contrast and brightness, while preserving the fine-grained details. We explore 4 possible combinations of Generator-Discriminator architectural designs to specifically address some prominent issues in HDR-related deep-learning frameworks like blurring, tiling patterns and saturation artifacts. By exploring different influences of scales, loss functions and normalization layers under a cGAN setting, we conclude by adopting a multi-scale model for our task. To further leverage the large-scale availability of unlabeled HDR data, we train our network by generating targets using an objective HDR quality metric, namely the Tone Mapping Image Quality Index (TMQI). We demonstrate results both quantitatively and qualitatively, and showcase that our DeepTMO generates high-resolution, high-quality output images over a large spectrum of real-world scenes.
Finally, we evaluate the perceived quality of our results by conducting a pair-wise subjective study which confirms the versatility of our method. (10.1109/TIP.2019.2936649)
    DOI : 10.1109/TIP.2019.2936649
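For context on the task above, here is a classical global tone-mapping baseline (a Reinhard-style log-average operator); this is emphatically not DeepTMO, just the kind of parameter-dependent operator the learned approach aims to replace, with the key value 0.18 being the usual textbook default:

```python
import math

def reinhard_tmo(hdr, key=0.18, eps=1e-6):
    """Map HDR luminances to [0, 1) via log-average scaling + compression."""
    n = len(hdr)
    log_avg = math.exp(sum(math.log(eps + l) for l in hdr) / n)
    scaled = [key * l / log_avg for l in hdr]
    return [l / (1.0 + l) for l in scaled]       # compressive curve

hdr = [0.01, 0.5, 10.0, 5000.0]                  # wide-dynamic-range luminances
ldr = reinhard_tmo(hdr)
assert all(0.0 <= v < 1.0 for v in ldr)          # fits an LDR display
assert ldr == sorted(ldr)                        # brightness order preserved
```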
  • An initial evaluation of 6Stor, a dynamically scalable IPv6-centric distributed object storage system
    • Ruty Guillaume
    • Rougier Jean-Louis
    • Surcouf André
    • Augustin Aloys
    Cluster Computing, Springer Verlag, 2019, 22 (4), pp.1123-1142. The exponentially growing demand for storage puts a huge stress on traditional distributed storage systems. Historically, I/Ops (Inputs/Outputs per second) of hard drives have been the main limitation of storage systems. With the rapid deployment of solid state drives (SSDs) and the expected evolution of their capacities, price and performance, we claim that CPU and network capacities will become bottlenecks in the future. In this context, we introduce 6Stor, an innovative, software-defined distributed storage system fully integrated with the networking layer. This storage system departs from traditional approaches in two ways: it leverages new IPv6 capabilities to increase the efficiency of its data plane (notably by using UDP and TCP directly rather than HTTP) and thus its performance; and it circumvents the scalability limitations of other distributed systems by using a fully distributed metadata layer of indirection to offer flexibility. In this paper, we introduce and describe in detail the architecture of 6Stor, with an emphasis on dynamic scalability and robustness to failure. We also present a testbed that we use to evaluate our novel approach, using Ceph (another well-known distributed storage system) as a baseline. Results obtained on an extensive testbed are presented and some initial conclusions are drawn. (10.1007/s10586-018-02897-8)
    DOI : 10.1007/s10586-018-02897-8
  • Fast stimulated Raman and second harmonic generation imaging for intraoperative gastro-intestinal cancer detection
    • Sarri Barbara
    • Canonge Rafaël
    • Audier Xavier
    • Simon Emma
    • Wojak Julien
    • Caillol Fabrice
    • Cador Cécile
    • Marguet Didier
    • Poizat Flora
    • Giovannini Marc
    • Rigneault Herve
    Scientific Reports, Nature Publishing Group, 2019, 9 (1), pp.10052. Conventional haematoxylin, eosin and saffron (HES) histopathology, currently the 'gold standard' for pathological diagnosis of cancer, requires extensive sample preparations that are achieved within time scales that are not compatible with intra-operative situations where quick decisions must be taken. Providing pathologists with a close-to-real-time technology revealing tissue structures at the cellular level with HES histologic quality would provide an invaluable tool for surgery guidance with evident clinical benefit. Here, we specifically develop a stimulated Raman imaging based framework that demonstrates gastro-intestinal (GI) cancer detection on unprocessed human surgical specimens. The generated stimulated Raman histology (SRH) images combine chemical and collagen information to mimic conventional HES histopathology staining. We report excellent agreement between SRH and HES images acquired on the same patients for healthy, pre-cancerous and cancerous colon and pancreas tissue sections. We also develop a novel fast SRH imaging modality that captures at the pixel level all the information necessary to provide instantaneous SRH images. These developments pave the way for instantaneous label-free GI histology in an intra-operative context. (10.1038/s41598-019-46489-x)
    DOI : 10.1038/s41598-019-46489-x
  • Multi-cell MIMO Transceiver Design for Mission-Critical Communication
    • Jagyasi Deepa
    • Daher Alaa
    • Coupechoux Marceau
    , 2019. Business- and mission-critical communication (MCC) is a major communication paradigm used by public agencies, e.g., during emergency situations, or by critical infrastructure companies, e.g., airports, transportation, etc. MCC has very stringent requirements in terms of reliability and coverage and should offer group communications. Coordinated Multimedia Multicast/Broadcast single frequency network (MBSFN) is considered a potential technology for MCC as it benefits from increased coverage and inter-cell interference mitigation. In this paper, we propose a multi-input-multi-output (MIMO) multimedia MBSFN system design wherein each base station (BS) of a coordinated cluster multicasts a common message to all the users in a group. We use a greedy algorithm to dynamically form the cluster of synchronized BSs for optimal utilization of resources within an MBSFN. We assume the availability of perfect channel state information (CSI) and jointly obtain the optimal precoder and receive filters by minimizing the overall sum mean square error (sum-MSE) under a total transmit power constraint. We further extend the proposed design to a robust case by considering imperfections in the available channel knowledge and obtain transceiver matrices that are resilient to channel errors. We also present both the joint and robust system designs for Single-Cell point-to-multipoint (SC-PTM), which is an alternative solution to MBSFN in MCC. Numerical results show the effectiveness of the proposed network architecture for future mission-critical communication. Furthermore, the comparison results show that the proposed robust design demonstrates better performance and is resilient to the presence of CSI errors.
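To make the sum-MSE criterion above concrete, here is a minimal NumPy sketch of one half of the joint design: for a fixed precoder P and channel H, the MMSE receive filter has the classical closed form W = (HP)^H (HP(HP)^H + sigma^2 I)^{-1}, and the trace of the resulting error matrix is the sum-MSE. This is a textbook building block under assumed toy dimensions, not the paper's full joint/robust multi-cell optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Ns = 4, 4, 2   # tx antennas, rx antennas, data streams (toy sizes)
sigma2 = 0.1           # receiver noise variance

# Random Rayleigh channel and a fixed, power-normalized precoder
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
P = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
P /= np.linalg.norm(P)          # total transmit power constraint ||P||_F = 1

# MMSE receive filter for the effective channel G = H P:
#   W = G^H (G G^H + sigma2 I)^{-1}
G = H @ P
W = G.conj().T @ np.linalg.inv(G @ G.conj().T + sigma2 * np.eye(Nr))

# Error matrix E = (I - WG)(I - WG)^H + sigma2 W W^H; its trace is the sum-MSE
I = np.eye(Ns)
E = (I - W @ G) @ (I - W @ G).conj().T + sigma2 * W @ W.conj().T
sum_mse = np.real(np.trace(E))
```

A joint design along the lines described in the abstract would alternate this receive-filter update with a precoder update under the power constraint until the sum-MSE converges.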
  • Experimental realization of Fermi-Pasta-Ulam-Tsingou recurrence in a long-haul optical fiber transmission system
    • Goossens Jan-Willem
    • Hafermann Hartmut
    • Jaouën Yves
    Scientific Reports, Nature Publishing Group, 2019, 9 (1). The integrable nonlinear Schrödinger equation (NLSE) is a fundamental model of nonlinear science which also has important consequences in engineering. The powerful framework of the periodic inverse scattering transform (IST) provides a description of the nonlinear phenomena modulational instability and Fermi-Pasta-Ulam-Tsingou (FPUT) recurrence in terms of exact solutions. It associates the complex nonlinear dynamics with invariant nonlinear spectral degrees of freedom that may be used to encode information. While optical fiber is an ideal testing ground of its predictions, maintaining integrability over sufficiently long distances to observe recurrence, as well as synthesizing and measuring the field in both amplitude and phase on the picosecond timescales of typical experiments is challenging. Here we report on the experimental realization of FPUT recurrence in terms of an exact space-time-periodic solution of the integrable NLSE in a testbed for optical communication experiments. The complex-valued initial condition is constructed by means of the finite-gap integration method, modulated onto the optical carrier driven by an arbitrary waveform generator and launched into a recirculating fiber loop with periodic amplification. The measurement with an intradyne coherent receiver after a predetermined number of revolutions provides a non-invasive full-field characterization of the space-time dynamics. The recurrent space-time evolution is in close agreement with theoretical predictions over a distance of 9000 km. Nonlinear spectral analysis reveals an invariant nonlinear spectrum. The space-time scale exceeds that of previous experiments on FPUT recurrence in fiber by three orders of magnitude. The NLSE is an important exactly solvable model for the study of nonlinear phenomena. 
An example is modulational instability (MI) [1], an exponential amplification of periodic random fluctuations at the expense of a pump wave, which has been suggested as a possible mechanism for the generation of rogue waves [2]. The reversal of this process can give rise to repeated cycles of growth and decay, which constitute a realization of FPUT recurrence [3,4]. In the framework of the IST, these phenomena find a description in terms of exact solutions [5,6] associated with conserved nonlinear spectral degrees of freedom. From an engineering perspective, the prospect of encoding information in the invariant nonlinear spectrum is of high interest for optical communication systems, which today are limited by nonlinear interference [7]. Various predictions of the underlying analytical NLSE theory have been observed in optical fiber experiments, including solitons [8], Akhmediev breathers [9] and their collisions [10], the Peregrine soliton [11], and the Kuznetsov-Ma soliton [12]. Such experiments are not without challenges. They are typically conducted at average signal powers up to a few watts [10,12-17]. The dynamics take place over distances of several hundred meters up to a few kilometers and on the picosecond scale. At such timescales, the generation of arbitrary initial conditions is difficult and has been approximated by amplitude modulation based on dual-frequency excitation [9,18], by beating of two narrow-linewidth lasers to create a low-frequency modulation [11], or by excitation of superpositions of complex exponentials with tuned relative phases and amplitudes. The latter can be obtained from an optical frequency comb shaped with a programmable optical filter [10]. The observation of the spatial dynamics has been achieved with fiber cutback experiments [12]. Simultaneous observation of amplitude and phase information can be realized by frequency-resolved optical gating (FROG) [11] or nonlinear digital holography [14]. (10.1038/s41598-019-54825-4)
    DOI : 10.1038/s41598-019-54825-4
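The NLSE dynamics discussed above (modulational instability growing out of a weakly perturbed pump) are easy to reproduce numerically with the standard split-step Fourier method. The sketch below uses the normalized focusing NLSE i u_z + (1/2) u_tt + |u|^2 u = 0 with illustrative grid and step parameters chosen here for the example; both sub-steps preserve the total power, which serves as a sanity check on the integration.

```python
import numpy as np

# Normalized focusing NLSE  i u_z + (1/2) u_tt + |u|^2 u = 0,
# integrated with the split-step Fourier method.
N, T = 256, 20.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)   # angular frequencies
dz, steps = 1e-3, 5000                        # propagate to z = 5

# Plane-wave pump with a weak periodic perturbation: the seed of
# modulational instability and, over longer spans, FPUT recurrence.
u = 1.0 + 0.01 * np.cos(2 * np.pi * t / T)
p0 = np.sum(np.abs(u) ** 2)                   # initial total power

for _ in range(steps):
    # linear (dispersion) half of the step, applied in Fourier space
    u = np.fft.ifft(np.exp(-0.5j * w ** 2 * dz) * np.fft.fft(u))
    # nonlinear (Kerr) half of the step, applied pointwise in time
    u = u * np.exp(1j * np.abs(u) ** 2 * dz)

p_final = np.sum(np.abs(u) ** 2)
```

Both exponential multipliers have unit modulus, so the discrete power is conserved up to floating-point roundoff; in the focusing regime the weak modulation grows, which is the MI phase of the growth-and-decay cycles described in the abstract.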
  • SILP: A Stochastic Imitative Learning Protocol for Multi-Carrier Spectrum Access
    • Iellamo Stefano
    • Coupechoux Marceau
    • Khan Zaheer
    IEEE Transactions on Cognitive Communications and Networking, IEEE, 2019, 5 (4), pp.990-1003. Decentralized wireless networks require efficient channel access protocols to enable wireless nodes (WNs) to access dedicated frequency channels without any coordination. In this paper, we develop a distributed spectrum access protocol for the case where the WNs are equipped with multiple radio transceivers. We consider the case where the channels are identical and duly separated so that each of a user's antennas can access only one of the available channels. To model the competition amongst WNs, we formulate a particular multi-agent multi-carrier spectrum access game, where each WN has to decide at each iteration how many antennas and which frequency channels it has to access. To study the resulting equilibrium, we solve a multi-objective optimization problem and design a bi-level learning algorithm which is proven to converge towards a socially efficient and max-min fair equilibrium state. (10.1109/TCCN.2019.2924925)
    DOI : 10.1109/TCCN.2019.2924925
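The imitative flavor of the protocol above can be illustrated with a deliberately simplified single-antenna toy model (this is not the paper's SILP algorithm, whose bi-level structure and multi-antenna game are richer): agents repeatedly copy the channel choice of a better-off peer, and the population load balances across channels, which is the max-min fair outcome in this symmetric toy setting.

```python
import random

random.seed(1)
N_AGENTS, N_CHANNELS, ITERS = 30, 3, 5000

# each agent starts on a random channel
choice = [random.randrange(N_CHANNELS) for _ in range(N_AGENTS)]

def payoff(agent):
    """Toy payoff: a channel's unit throughput is shared equally
    among the agents currently using it."""
    return 1.0 / choice.count(choice[agent])

for _ in range(ITERS):
    a, b = random.sample(range(N_AGENTS), 2)
    # imitation rule: agent a copies b's channel if b currently earns more
    if payoff(b) > payoff(a):
        choice[a] = choice[b]

loads = [choice.count(c) for c in range(N_CHANNELS)]
```

Because a switch only happens toward a strictly less crowded channel, the load gap between channels shrinks over time and the balanced allocation is absorbing, mirroring the convergence-to-fairness property the abstract claims (in a much richer setting) for SILP.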