
Publications

2024

  • Large Satellite Constellations: Challenges and Impact
    • Baccelli François
    • Candel Sébastien
    • Perrin Guy
    • Puget Jean-Loup
, 2024. The New Space Age (NewSpace) marks the advent of a new era in the use of space, characterized by the opening of space to new players, the use of new space technologies, new functionalities for satellites in orbit, and the development of satellite constellations, mainly in the fields of communications and Earth observation. These developments are underpinned by first-rate scientific and technological advances, as well as considerable public and private investment, in particular in the USA, China and, to a lesser extent, Europe. Fleets of small low- and medium-orbit satellites are replacing or complementing the large geostationary satellites that predominated in the previous period. Whereas space used to be reserved for a small number of states and major industrial groups, one is now witnessing the emergence of new space states, new industrial groups such as SpaceX or Amazon, and many start-ups. One also observes the emergence of companies with launch and satellite manufacturing capacities that are taking on the role of telecommunication operators and content producers. The most visible result of the deployment of these new space networks is the ability to provide high-speed, low-latency Internet connections to any point on the globe. Combined with Earth observation capabilities, these new communications resources also enable real-time action to be taken in any region, including those with no equipment other than terminals. In addition, these space networks are remarkably resilient compared with terrestrial networks. Geostrategic and military considerations combine with rapidly evolving business models to explain the massive investments currently being made in this domain. However, the lack of international regulation in the field is leading to a race to occupy orbits and frequencies, which has already had serious consequences for a whole range of scientific activities.
These constellations have a potentially negative impact on astronomy in the visible and infrared optical domains, as well as on radio astronomy. They also raise a major problem in terms of space congestion, with an increase in the amounts of satellite debris resulting from launches or collisions between satellites, and the possibility of reaching a phase of chain reaction collisions. In addition, from an environmental point of view, the consequences of the proliferation of launches and uncontrolled re-entries into the atmosphere are equally worrying. What’s more, the lack of regulation in the field also leads to a loss of sovereignty, since these new satellite communication networks do not comply with any of the rules that states impose on terrestrial communication networks operating on their territories. A sustainable, global solution must be found to these problems, before major and potentially irreversible damage is inflicted on the planet’s environment, geostrategic balances, democracy, and science. While the Académie des sciences considers that France and Europe need to step up their scientific and industrial actions in this field in order to benefit from the remarkable advances of these new networks, and ultimately leverage the benefits of a resilient and secure communications network, the Académie also recommends working in parallel to strengthen regulation of the field with the aim of assuring sustainable access to orbital and frequency resources, as well as protection for negatively impacted fields, foremost among which are astronomy and the environment. (10.62686/3)
    DOI : 10.62686/3
  • Grandes Constellations de Satellites : Enjeux et Impacts
    • Baccelli François
    • Candel Sébastien
    • Perrin Guy
    • Puget Jean-Loup
, 2024. French-language edition of the Académie des sciences report “Large Satellite Constellations: Challenges and Impact” listed above; the abstract is the French original of the English abstract given in that entry. (10.62686/2)
    DOI : 10.62686/2
  • A Highly Flexible Passive/Active Discrete-Time Delta-Sigma Receiver
    • Nguyen Minh Tien
    • Jabbour Chadi
    • Ben Kalaia Karim
    • Le Hanh-Phuc
    • Nguyen Ngoc
    • Nguyen Van-Tam
    Electronics, MDPI, 2024, 13 (7), pp.1295. This paper presents a fourth-order discrete-time direct RF-to-digital Delta-Sigma receiver architecture for flexible receivers with a wide frequency range. The use of a current-driven passive mixer with RF feedback enables high-Q bandpass filtering and relaxes the linearity requirement of the RF amplifier. In addition, the reconfigurable passive/active loop filter offers a good compromise between power consumption, linearity, and dynamic range. The other important feature of the proposed architecture is the use of a sampling frequency that is a divisor of the LO frequency. This solves several problems such as the upmixing of quantization noise, the need to reconfigure the Delta-Sigma loop when changing the LO frequency, and the use of two independent clocks for the LO and the sampling frequency. The circuit was implemented using 65 nm CMOS technology. The I/Q Direct Delta-Sigma receiver has an RF bandwidth of 20 MHz and a sampling frequency of 400 MHz. Measurement results show a very high dynamic range of up to 80 dB with a peak SNDR of 46 dB for a power consumption of 46 mW at 800 MHz. (10.3390/electronics13071295)
    DOI : 10.3390/electronics13071295
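As a behavioral illustration of the Delta-Sigma principle behind the receiver above, the sketch below simulates a plain first-order discrete-time modulator (an integrator driving a 1-bit quantizer, with the quantized output fed back). It is a didactic stand-in only, not the paper's fourth-order passive/active RF architecture:

```python
import statistics

def delta_sigma_1st_order(u, n_samples):
    """Behavioral model of a first-order discrete-time Delta-Sigma
    modulator: integrate the error between the input and the fed-back
    1-bit quantizer output."""
    v, y = 0.0, 0.0        # integrator state, previous quantizer output
    bits = []
    for _ in range(n_samples):
        v += u - y                        # integrate input minus feedback
        y = 1.0 if v >= 0.0 else -1.0     # 1-bit quantizer
        bits.append(y)
    return bits

bits = delta_sigma_1st_order(0.3, 20000)
print(statistics.mean(bits))  # close to the DC input 0.3
```

The mean of the ±1 bitstream tracks the DC input: the quantization error is pushed to high frequencies, which is the noise-shaping property the receiver exploits.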
  • Monitoring the Convergence Speed of PDHG to Find Better Primal and Dual Step Sizes
    • Fercoq Olivier
, 2024. Primal-dual algorithms for the resolution of convex-concave saddle point problems usually come with one or several step size parameters. Within the range where convergence is guaranteed, choosing the step size well can make the difference between a slow and a fast algorithm. A usual way to set step sizes adaptively is to ensure a fair balance between the amounts of change of the primal and dual variables. In this work, we show how to find even better step sizes for the primal-dual hybrid gradient (PDHG). Drawing inspiration from quadratic problems, we base our method on a spectral radius estimation procedure and try to minimize this spectral radius, which is directly related to the rate of convergence. Building on power iterations, we produce spectral radius estimates that are always smaller than 1 and that also work in the case of conjugate principal eigenvalues. For strongly convex quadratics, we show that our step size rule yields an algorithm as fast as inertial gradient descent. Moreover, since our spectral radius estimates rely only on residual norms, our method can be readily adapted to more general convex-concave saddle point problems. In a second part, we extend these results to a randomized version of PDHG called PURE-CD. We design a statistical test to compare observed convergence rates and decide whether one step size is better than another. Numerical experiments on least squares, sparse SVM, TV-L1 denoising and TV-L2 denoising problems support our findings.
  • Skyline Operators for Document Spanners
    • Amarilli Antoine
    • Kimelfeld Benny
    • Labbé Sébastien
    • Mengel Stefan
    , 2024, 290, pp.7:1-7:18. When extracting a relation of spans (intervals) from a text document, a common practice is to filter out tuples of the relation that are deemed dominated by others. The domination rule is defined as a partial order that varies along different systems and tasks. For example, we may state that a tuple is dominated by tuples that extend it by assigning additional attributes, or assigning larger intervals. The result of filtering the relation would then be the skyline according to this partial order. As this filtering may remove most of the extracted tuples, we study whether we can improve the performance of the extraction by compiling the domination rule into the extractor. To this aim, we introduce the skyline operator for declarative information extraction tasks expressed as document spanners. We show that this operator can be expressed via regular operations when the domination partial order can itself be expressed as a regular spanner, which covers several natural domination rules. Yet, we show that the skyline operator incurs a computational cost (under combined complexity). First, there are cases where the operator requires an exponential blowup on the number of states needed to represent the spanner as a sequential variable-set automaton. Second, the evaluation may become computationally hard. Our analysis more precisely identifies classes of domination rules for which the combined complexity is tractable or intractable. (10.4230/LIPICS.ICDT.2024.7)
    DOI : 10.4230/LIPICS.ICDT.2024.7
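One natural domination rule from the abstract, extension by additional attributes or by larger intervals, can be sketched as a naive quadratic post-filter; the paper's point is precisely that compiling such rules into the spanner can beat this filter-after-extraction approach:

```python
def dominates(u, t):
    """u dominates t if u assigns at least the attributes of t, each of
    u's spans contains the corresponding span of t, and u differs from t.
    Tuples are dicts attr -> (start, end) span."""
    if u == t or not set(t) <= set(u):
        return False
    return all(u[a][0] <= t[a][0] and u[a][1] >= t[a][1] for a in t)

def skyline(tuples):
    """Keep only the tuples not dominated by any other extracted tuple."""
    return [t for t in tuples if not any(dominates(u, t) for u in tuples)]

t1 = {"a": (0, 3)}                      # dominated twice below
t2 = {"a": (0, 5)}                      # strictly larger span for "a"
t3 = {"a": (0, 3), "b": (4, 6)}         # extends t1 with attribute "b"
print(skyline([t1, t2, t3]))            # [t2, t3]
```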
  • Conjunctive Queries on Probabilistic Graphs: The Limits of Approximability
    • Amarilli Antoine
    • van Bremen Timothy
    • Meel Kuldeep
    , 2024, 27th International Conference on Database Theory. Query evaluation over probabilistic databases is a notoriously intractable problem - not only in combined complexity, but for many natural queries in data complexity as well [Antoine Amarilli et al., 2017; Nilesh N. Dalvi and Dan Suciu, 2012]. This motivates the study of probabilistic query evaluation through the lens of approximation algorithms, and particularly of combined FPRASes, whose runtime is polynomial in both the query and instance size. In this paper, we focus on tuple-independent probabilistic databases over binary signatures, which can be equivalently viewed as probabilistic graphs. We study in which cases we can devise combined FPRASes for probabilistic query evaluation in this setting. We settle the complexity of this problem for a variety of query and instance classes, by proving both approximability and (conditional) inapproximability results. This allows us to deduce many corollaries of possible independent interest. For example, we show how the results of [Marcelo Arenas et al., 2021] on counting fixed-length strings accepted by an NFA imply the existence of an FPRAS for the two-terminal network reliability problem on directed acyclic graphs: this was an open problem until now [Rico Zenklusen and Marco Laumanns, 2011]. We also show that one cannot extend a recent result [Timothy van Bremen and Kuldeep S. Meel, 2023] that gives a combined FPRAS for self-join-free conjunctive queries of bounded hypertree width on probabilistic databases: neither the bounded-hypertree-width condition nor the self-join-freeness hypothesis can be relaxed. Finally, we complement all our inapproximability results with unconditional lower bounds, showing that DNNF provenance circuits must have at least moderately exponential size in combined complexity. (10.4230/LIPIcs.ICDT.2024.15)
    DOI : 10.4230/LIPIcs.ICDT.2024.15
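The quantity such combined FPRASes approximate can be illustrated with a naive Monte Carlo sampler for two-terminal reliability on a small probabilistic graph. Note that this sampler is not an FPRAS (its relative error degrades for small probabilities); it merely shows the estimation target:

```python
import random

def mc_reachability(edges, s, t, n_samples, seed=0):
    """Naive Monte Carlo estimate of two-terminal reliability: each
    directed edge (u, v, p) is present independently with probability p;
    estimate P(t reachable from s)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        present = {}                       # adjacency of the sampled subgraph
        for u, v, p in edges:
            if rng.random() < p:
                present.setdefault(u, []).append(v)
        seen, frontier = {s}, [s]          # reachability search from s
        while frontier:
            node = frontier.pop()
            for v in present.get(node, []):
                if v not in seen:
                    seen.add(v)
                    frontier.append(v)
        hits += t in seen
    return hits / n_samples

edges = [("s", "t", 0.5), ("s", "m", 0.5), ("m", "t", 0.5)]
est = mc_reachability(edges, "s", "t", 20000)
print(est)  # exact value: 1 - (1 - 0.5) * (1 - 0.25) = 0.625
```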
  • Informed Speech Self-supervised Representation Learning
    • Zaiem Mohamed Salah
, 2024. Feature learning has been driving machine learning advancement, with recently proposed methods progressively getting rid of handcrafted parts within the transformations from inputs to desired labels. Self-supervised learning has emerged within this context, allowing the processing of unlabeled data towards better performance on tasks with few labels. The first part of my doctoral work aims to motivate the choices made in the speech self-supervised pipelines that learn the unsupervised representations. In this thesis, I first show how conditional-independence-based scoring can be used to efficiently and optimally select pretraining tasks tailored for the best performance on a target task. The second part of my doctoral work studies the evaluation and usage of pretrained self-supervised representations. I explore, first, the robustness of current speech self-supervision benchmarks to changes in the downstream modeling choices. I propose, second, fine-tuning approaches for better efficiency and generalization.
  • Ranked Enumeration for MSO on Trees via Knowledge Compilation
    • Amarilli Antoine
    • Bourhis Pierre
    • Capelli Florent
    • Monet Mikaël
    , 2024, 290 (25), pp.5:1–25:18. We study the problem of enumerating the satisfying assignments for circuit classes from knowledge compilation, where assignments are ranked in a specific order. In particular, we show how this problem can be used to efficiently perform ranked enumeration of the answers to MSO queries over trees, with the order being given by a ranking function satisfying a subset-monotonicity property. Assuming that the number of variables is constant, we show that we can enumerate the satisfying assignments in ranked order for so-called multivalued circuits that are smooth, decomposable, and in negation normal form (smooth multivalued DNNF). There is no preprocessing and the enumeration delay is linear in the size of the circuit times the number of values, plus a logarithmic term in the number of assignments produced so far. If we further assume that the circuit is deterministic (smooth multivalued d-DNNF), we can achieve linear-time preprocessing in the circuit, and the delay only features the logarithmic term. (10.4230/LIPIcs.ICDT.2024.25)
    DOI : 10.4230/LIPIcs.ICDT.2024.25
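The priority-queue flavor of ranked enumeration can be demonstrated on the simplest decomposable case: independent variables with a sum-based (hence subset-monotone) ranking function. The heap-based sketch below enumerates assignments in nondecreasing cost with a logarithmic delay overhead; the paper's algorithms handle general smooth multivalued (d-)DNNF circuits with stronger guarantees:

```python
import heapq

def ranked_assignments(domains):
    """Enumerate assignments of independent variables in order of total
    cost, best-first.  domains[i] is a cost-sorted list of (cost, value)
    pairs for variable i.  Classic heap enumeration of a product order."""
    start = (0,) * len(domains)
    heap = [(sum(d[0][0] for d in domains), start)]
    seen = {start}
    while heap:
        cost, idx = heapq.heappop(heap)
        yield cost, tuple(domains[i][j][1] for i, j in enumerate(idx))
        for i in range(len(domains)):          # push the immediate successors
            if idx[i] + 1 < len(domains[i]):
                nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    new_cost = cost - domains[i][idx[i]][0] + domains[i][idx[i] + 1][0]
                    heapq.heappush(heap, (new_cost, nxt))

domains = [[(1, "a"), (3, "b")], [(0, "x"), (2, "y")]]
for cost, assignment in ranked_assignments(domains):
    print(cost, assignment)   # costs come out in nondecreasing order
```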
  • Internal methods for the generation and inpainting of images and videos
    • Cherel Nicolas
, 2024. Image editing and generation are complex problems in image processing. Recently, we have seen a great leap in development by using learning-based approaches that take advantage of large image databases. In this thesis, we study internal methods, i.e. methods based on a single image as a data source. This includes patch-based methods, internal learning approaches that train a neural network on a single image, and attention mechanisms that combine patches and deep networks. First, we present a contribution to single image generation with a patch-based method. This classical approach is competitive with recent network-based approaches but does not require a learning phase. Second, attention mechanisms are important for modeling long-range dependencies and are more flexible than convolutions, but suffer from poor computational complexity. In fact, the complexity grows quadratically with the number of input elements, making such layers unusable for high-resolution images or videos. We propose an efficient approximation based on nearest neighbor search. Finally, we look at recent diffusion models for image and video inpainting. In the single image setting, we show how very lightweight architectures are competitive with the state of the art. Our models run and train at a fraction of the computational cost of popular models. We also propose an application to video inpainting with a specific training strategy that significantly improves the results over the baseline. This strategy is particularly adapted to these one-shot models.
  • Digital Coherent Sensing over Deployed Fibers for Advanced Network Telemetry
    • Guerrier Sterenn
    • Dorize Christian
    • Pavani Henrique
    • Mardoyan Haïk
    • Awwad Élie
    • Renaudier Jérémie
, 2024, pp.Tu2J.1. We discuss the performance of Coherent-MIMO-DFS over deployed optical networks in various configurations and address technological challenges such as adaptation to various fiber types and disturbance identification. (10.1364/OFC.2024.Tu2J.1)
    DOI : 10.1364/OFC.2024.Tu2J.1
  • Experimental Disaggregation of Propagation Effects in Optical Links
    • Girard-Jollet Joana
    • Antona Jean-Christophe
    • Carbo Meseguer Alexis
    • Boitier Fabien
    • Ramantanis Petros
    • Rekaya Ben Othman Ghaya
    , 2024, pp.1-3. We introduce a protocol to evaluate experimentally the fiber nonlinear coefficients for intra- and inter-channel effects. We characterized a three-span transmission using highly dispersed QPSK signals, observing good agreement with the eGN model.
  • A Study on Hierarchical Text Classification as a Seq2seq Task
    • Torba Fatos
    • Gravier Christophe
    • Laclau Charlotte
    • Kammoun Abderrhammen
    • Subercaze Julien
, 2024. With the progress of generative neural models, Hierarchical Text Classification (HTC) can be cast as a generative task. In this case, given an input text, the model generates the sequence of predicted class labels taken from a label tree of arbitrary width and depth. Treating HTC as a generative task introduces multiple modeling choices. These range from choosing the order in which the class tree is visited, and therefore the order in which tokens are generated, to choosing whether to constrain the decoding to labels that respect the predictions of the previous level, up to the choice of the pre-trained Language Model itself. Each HTC model therefore differs from the others not only from an architectural standpoint, but also in the modeling choices that were made. Prior contributions lack transparent modeling choices and open implementations, hindering the assessment of whether model performance stems from architectural or modeling decisions. For these reasons, we propose in this paper an analysis of the impact of different modeling choices, along with common model errors and successes for this task. This analysis is based on an open framework accompanying this paper that can facilitate the development of future contributions in the field by providing datasets, metrics, an error analysis toolkit, and the capability to readily test various modeling choices for one given model. (10.1007/978-3-031-56063-7_20)
    DOI : 10.1007/978-3-031-56063-7_20
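Constrained decoding over a label tree can be sketched in a few lines. In this hypothetical toy, a score table stands in for the language model's logits, and the decoder may only emit children of the previously generated label:

```python
# Toy label tree and a stand-in scorer (hypothetical scores playing the
# role of the language model's logits).
TREE = {
    "ROOT": ["science", "sports"],
    "science": ["physics", "cs"],
    "sports": ["soccer"],
}
SCORES = {"science": 0.9, "sports": 0.4, "physics": 0.2, "cs": 0.7, "soccer": 0.5}

def constrained_decode(tree, score, root="ROOT"):
    """Greedy top-down decoding: at each level only the children of the
    previously generated label are legal, so every generated sequence is
    a valid root-to-node path of the label tree."""
    path, node = [], root
    while tree.get(node):                    # stop at a leaf
        node = max(tree[node], key=score.get)
        path.append(node)
    return path

print(constrained_decode(TREE, SCORES))  # ['science', 'cs']
```

Real generative HTC models apply the same constraint at the level of subword-token logits; this sketch only shows the tree-respecting control flow.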
  • EME-CNTK: Infinite Limits of Convolutional Neural Network for Urban Electromagnetic Field Exposure Reconstruction
    • Mallik Mohammed
    • Allaert Benjamin
    • Egea-Lopez E.
    • Gaillot D.P.
    • Wiart J.
    • Clavier L.
IEEE Access, IEEE, 2024, 12, pp.49476-49488. Electromagnetic field (EMF) exposure has grown to be a critical concern as a consequence of the ongoing installation of fifth-generation cellular networks (5G). The lack of measurements makes it difficult to accurately assess the EMF in a specific urban area, as spectrum cartography (SC) relies on a set of measurements recorded by spatially distributed sensors for the generation of exposure maps. However, when the spatial sampling rate is limited, significant estimation errors occur. To overcome this issue, the exposure map estimation is addressed as a missing data imputation task. We compute a convolutional neural tangent kernel (CNTK) for an infinitely wide convolutional neural network whose training dynamics can be completely described by a closed-form formula. This CNTK is employed to impute the target matrix and estimate EMF exposure from a few sensors sparsely located in an urban environment. Experimental results show that the kernel, even when only sparse sensor data are available, can produce accurate estimates. It is a promising solution for exposure map reconstruction that does not require large training sets. The proposed method is compared with other deep learning approaches and Gaussian Process regression. (10.1109/ACCESS.2024.3380835)
    DOI : 10.1109/ACCESS.2024.3380835
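To illustrate imputing an exposure map from sparse sensors, the sketch below uses plain Nadaraya-Watson kernel smoothing with an RBF kernel. This is closer in spirit to the Gaussian-process baselines the paper compares against than to the closed-form CNTK itself:

```python
import math

def kernel_impute(sensors, query, h=1.0):
    """Nadaraya-Watson kernel smoothing: estimate the field at `query`
    as an RBF-weighted average of sparse sensor readings.
    `sensors` is a list of ((x, y), value) pairs."""
    num = den = 0.0
    for (sx, sy), val in sensors:
        w = math.exp(-((query[0] - sx) ** 2 + (query[1] - sy) ** 2) / (2 * h * h))
        num += w * val
        den += w
    return num / den

sensors = [((0.0, 0.0), 1.0), ((2.0, 0.0), 3.0)]
print(kernel_impute(sensors, (1.0, 0.0)))  # midpoint of two sensors: 2.0
```

Evaluating the estimator on a grid of query points yields a reconstructed exposure map; the CNTK approach plays an analogous role but with a kernel derived from an infinitely wide CNN.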
  • A Reconfigurable Programmable Logic Block for a Multi-Style Asynchronous FPGA resistant to Side-Channel Attacks
    • Hoogvorst Philippe
    • Guilley Sylvain
    • Chaudhuri Sumanta
    • Danger Jean-Luc
    • Beyrouthy Taha
    • Fesquet Laurent
, 2008. Side-channel attacks are efficient attacks against cryptographic devices. They use only quantities observable from outside, such as the duration and the power consumption. Attacks against synchronous devices using electric observations are facilitated by the fact that all transitions occur simultaneously with some global clock signal. Asynchronous control removes this synchronization and therefore makes it more difficult for the attacker to isolate interesting intervals. In addition, the coding of data in an asynchronous circuit is inherently more difficult to attack. This article describes the Programmable Logic Block of an asynchronous FPGA resistant to side-channel attacks. Additionally, it can implement different styles of asynchronous control and of data representation. (10.48550/arXiv.0809.3942)
    DOI : 10.48550/arXiv.0809.3942
  • Domain Adaptation for Learned Image Compression with Supervised Adapters
    • Presta Alberto
    • Spadaro Gabriele
    • Tartaglione Enzo
    • Fiandrotti Attilio
    • Grangetto Marco
, 2024, pp.33-42. In Learned Image Compression (LIC), a model is trained to encode and decode images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains. In this work, we tackle the problem of adapting a pre-trained model to multiple target domains by plugging into the decoder an adapter module for each of them, including the source one. Each adapter improves the decoder performance on a specific domain, without the model forgetting the images seen at training time. A gate network computes the weights to optimally blend the contributions from the adapters when the bitstream is decoded. We experimentally validate our method on two state-of-the-art pre-trained models, observing improved rate-distortion efficiency on the target domains without penalties on the source domain. Furthermore, the gate’s ability to find similarities with the learned target domains enables better encoding efficiency also for images outside them. (10.1109/DCC58796.2024.00011)
    DOI : 10.1109/DCC58796.2024.00011
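The gating idea can be sketched schematically: per-domain adapter outputs are blended with softmax gate weights. Everything here (the toy adapters, the gate logits) is hypothetical; in the paper the adapters are small trained modules plugged into the decoder and the gate is a learned network:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def gated_adapters(h, adapters, gate_logits):
    """Blend per-domain adapter outputs with gate weights.  Each adapter
    here is a plain function on the decoder feature vector `h`; the gate
    logits stand in for the learned gate network's output."""
    w = softmax(gate_logits)
    out = [0.0] * len(h)
    for wi, adapter in zip(w, adapters):
        for j, v in enumerate(adapter(h)):
            out[j] += wi * v
    return out, w

h = [1.0, -0.5]
adapters = [lambda v: v,                      # identity: source domain
            lambda v: [2.0 * x for x in v]]   # a toy "target domain" adapter
out, w = gated_adapters(h, adapters, gate_logits=[0.0, 0.0])
print(out, w)  # equal gate weights -> average of the two adapter outputs
```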
  • On the Resiliency of Protected Masked S-Boxes Against Template Attack in the Presence of Temperature and Aging Misalignments
    • Anik Md Toufiq Hasan
    • Danger Jean-Luc
    • Guilley Sylvain
    • Karimi Naghmeh
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, IEEE, 2024, pp.1-14. Profiling side-channel analysis (SCA) attacks have received a lot of attention in recent years. To perpetrate these attacks, the adversary creates a profile of a sensitive device at her disposal and uses it to model a target device with a similar implementation in order to extract its key. Template attacks are recognized as the most powerful profiling attacks when the measurement noise is Gaussian. To tackle SCA attacks, different countermeasures have been proposed in the literature, among which masking schemes have received the utmost attention. By adding randomness to the circuit, masking schemes prevent the adversary from relating the power consumption to the evaluated data, thus making the attack more difficult. In this article, we study the protection provided by several masking schemes against template attacks. More precisely, we investigate how the success of the template attack changes when there is a misalignment between the target and profiling devices in terms of temperature and process variations. As another innovative analysis angle, we extensively study the impact of device aging on the template attack and demonstrate quantitatively how aging misalignments in side-channel traces between the profiling and the target devices hinder the attack. The main objective of this study is to obtain accurate and numerous results allowing the designer to compare different implementations of masking and accordingly choose the one offering the best compromise among complexity, security, and sensitivity to temperature and aging. We target the S-Box module of the unprotected PRESENT cipher along with its five masking variants: global lookup table (GLUT), rotating S-Box masking (referred to as RSM-LOG hereafter), RSM with read-only memory (RSM-ROM), Ishai–Sahai–Wagner masking (ISW), and threshold implementation (TI). The unprotected circuit is impacted by such aging misalignments with an increase of ≈12.5% in the number of traces needed to reach an 80% success rate (SR) over 20 weeks of aging at 105 °C. The increase is 23.3%, 37.19%, and 38.24% for ISW, GLUT, and RSM-LOG, respectively. For RSM-ROM the increase is 193.37% for ten weeks of aging. Interestingly, TI is not much affected by aging in this regard. (10.1109/TVLSI.2024.3374257)
    DOI : 10.1109/TVLSI.2024.3374257
  • Machine-learning-based technique to establish ASE or Kerr impairment dominance in optical transmission
    • Andrenacci Isaia
    • Lonardi Matteo
    • Ramantanis Petros
    • Awwad Élie
    • Irurozki Ekhiñe
    • Clémençon Stephan
    • Almonacil Sylvain
Journal of Optical Communications and Networking, IEEE/Optica, 2024, 16 (4), pp.481-492. Data extraction from optical networks has increased substantially with the evolution of monitoring and telemetry methods. Using data analysis and machine learning, this paper aims to derive insights from these data, contributing to the development of self-optimized optical networks. In particular, it focuses on predicting Kerr and amplified spontaneous emission dominance by examining the fluctuations in the signal-to-noise ratio due to polarization-dependent loss. Building on previous work, which used the SNR statistic as the input feature of machine learning, our primary goal is to enhance prediction precision while concurrently decreasing the computational model’s complexity. After refining the selection parameters of the input features, we observed a 70% reduction in the input feature length with respect to our previous work. The model reached a 98% accuracy rate, and it was able to successfully classify the regimes in a limited set of unseen experimental instances. (10.1364/JOCN.506931)
    DOI : 10.1364/JOCN.506931
  • Feasibility Study of Joint Modeling of Environmental and Morphological Effects for WBAN
    • Youssef Badre
    • Roblin Christophe
, 2024, pp.1-5. In this article, we assess the feasibility of jointly modeling the effects of the environment and of morphology in the context of Wireless Body Area Networks (WBAN), for the scenario-based approach in the 1st Ultra Wide Band (UWB) sub-band [3.1, 4.8] GHz. Previous works have yielded large, representative statistical samples for these two variables, to our knowledge not reached to date for WBAN communications, using a dedicated methodology combining simulation results and experimental designs supported by measurements. Our approach is to combine these variables in a decoupled manner in order to extract a more complete model and naturally limit the simulation campaigns to a reference subject for all the environments to be considered. Thus, for a given subject, we estimate the propagation channel transfer function from its on-body contribution and from the environment contribution obtained with the reference subject. This feasibility study is a first step towards such models and a substantial saving of time. The comparison between the estimated transfer function and the nominally calculated one shows fairly small differences in the majority of cases, which is encouraging for the use of this simplifying approach. (10.23919/EuCAP60739.2024.10500941)
    DOI : 10.23919/EuCAP60739.2024.10500941
  • Learning High-Quality and General-Purpose Phrase Representations
    • Chen Lihu
    • Varoquaux Gaël
    • Suchanek Fabian M.
, 2024. Phrase representations play an important role in data science and natural language processing, benefiting various tasks like Entity Alignment, Record Linkage, Fuzzy Joins, and Paraphrase Classification. The current state-of-the-art method involves fine-tuning pre-trained language models for phrasal embeddings using contrastive learning. However, we have identified areas for improvement. First, these pre-trained models tend to be unnecessarily complex and need to be pre-trained on a corpus with context sentences. Second, leveraging the phrase type and morphology gives phrase representations that are both more precise and more flexible. We propose an improved framework to learn phrase representations in a context-free fashion. The framework employs phrase type classification as an auxiliary task and incorporates character-level information more effectively into the phrase representation. Furthermore, we design three granularities of data augmentation to increase the diversity of training samples. Our experiments across a wide range of tasks show that our approach generates superior phrase embeddings compared to previous methods while requiring a smaller model size. The code is available at https://github.com/tigerchen52/PEARL
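The abstract mentions three granularities of data augmentation. The sketch below shows what character-, token- and phrase-level augmentations of a phrase could look like; these particular operations are hypothetical illustrations, not the paper's actual augmentations:

```python
import random

def char_augment(phrase, rng):
    """Character granularity: swap two adjacent characters."""
    if len(phrase) < 2:
        return phrase
    i = rng.randrange(len(phrase) - 1)
    return phrase[:i] + phrase[i + 1] + phrase[i] + phrase[i + 2:]

def token_augment(phrase, rng):
    """Token granularity: drop one token."""
    toks = phrase.split()
    if len(toks) < 2:
        return phrase
    toks.pop(rng.randrange(len(toks)))
    return " ".join(toks)

def phrase_augment(phrase, rng):
    """Phrase granularity: shuffle the token order."""
    toks = phrase.split()
    rng.shuffle(toks)
    return " ".join(toks)

rng = random.Random(0)
p = "deep learning model"
print(char_augment(p, rng), "|", token_augment(p, rng), "|", phrase_augment(p, rng))
```

In a contrastive setup, each augmented variant would serve as a positive example paired with the original phrase.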
  • Understanding Interaction and Breakouts of Safety Boundaries in Virtual Reality Through Mixed-Method Studies
    • Tseng Wen-Jie
    • Kontrazis Petros Dimitrios
    • Lecolinet Eric
    • Huron Samuel
    • Gugenheimer Jan
    , 2024, pp.482-492. Virtual Reality (VR) technologies have become ubiquitous, allowing people to enjoy immersive experiences in their homes. Since VR participants are visually disconnected from their real-world environment, commercial products propose safety boundaries to prevent them from colliding with their surroundings. However, there is a lack of empirical knowledge on how people perceive and interact with safety boundaries in everyday VR usage. This paper investigates this research gap with two mixed-method empirical studies. Study 1 reports an online survey (n=48) collecting data about attitudes towards safety boundaries, behavior while interacting with them, and reasons for breaking out. Our analysis with open coding reveals that some VR participants ignored safety boundaries intentionally, even breaking out of them and continuing their actions. Study 2 investigates how and why VR participants intentionally break out when interacting close to safety boundaries and obstacles, by replicating breakouts in a lab study (n=12). Our interview and breakout data reveal three strategies, showing that VR participants sometimes break out of boundaries based on their real-world spatial information. Finally, we discuss improving future VR safety mechanisms by supporting participants' real-world spatial mental models using landmarks. (10.1109/VR58804.2024.00069)
    DOI : 10.1109/VR58804.2024.00069
  • The Social Impact of Extended Reality Spatial Productivity in Constrained, Public and Passenger Spaces
    • Medeiros Daniel
    • Wilson Graham
    • Brewster Stephen
    • Mcgill Mark
    , 2024. In this workshop submission, we reflect on the need to balance a breadth of design considerations when supporting mobile, spatial productivity. Whilst performance, ergonomics and usability remain key, there is an increasing realisation that the social impact of our designs must also be considered - from the social comfort and acceptability of a given workspace or interaction technique, to the social collisions they provoke with other passengers, to the environmental and social awareness the design facilitates in allowing the user to focus on their task whilst maintaining awareness of their environment and those around them.
  • METHOD FOR CHARACTERIZING AN ORGAN OF A PATIENT IN A MEDICAL IMAGE
    • Vétil Rebeca
    • Abi-Nader Clément
    • Bône Alexandre
    • Rohé Marc-Michel
    • Gori Pietro
    • Bloch Isabelle
    , 2024.
  • Reinforcement Learning for Uncoordinated Multiple Access
    • Robaglia Benoît-Marie
    , 2024. Distributed Medium Access Control (MAC) protocols are fundamental in wireless communication, yet traditional random-access-based protocols face significant limitations in Internet-of-Things (IoT) use cases. In particular, they struggle to provide latency guarantees, making them unsuitable for Ultra Reliable Low Latency Communications (URLLC). This thesis addresses these challenges by leveraging Deep Reinforcement Learning (DRL), a paradigm in which decision-makers optimize their actions by interacting with an environment. The thesis tackles key challenges in the Medium Access (MA) problem for URLLC networks, including the latency of centralized protocols, the collision and retransmission issues of Grant-Free (GF) protocols, and the complexity of handling device heterogeneity and dynamic environments. It also explores the integration of new physical-layer techniques such as Non-Orthogonal Multiple Access (NOMA). Our methodology applies DRL, which has already shown effectiveness in IoT applications, to develop intelligent protocols. Initially, we model the URLLC problem within a centralized paradigm, where the Base Station (BS) orchestrates device transmissions. This setup has the benefit of ensuring collision-free communication, but introduces partial observability, as the BS does not have access to the users' buffer and channel state. We tackle this problem with two algorithms, FilteredPPO and NOMA-PPO: the former outperforms the benchmarks in scenarios with periodic traffic patterns, while the latter demonstrates superior performance over state-of-the-art baselines in scenarios with sporadic traffic. The third and fourth contributions, SeqDQN and MCA-PPO, study the application of Multi-Agent Reinforcement Learning (MARL) to URLLC, where each device is equipped with a DRL algorithm. While SeqDQN explores a method to reduce non-stationarity and enhance scalability and training efficiency, MCA-PPO presents a theoretically robust solution to the Dynamic Multi-Channel Access (DMCA) challenge, allowing users to optimize bandwidth utilization and thus enhancing URLLC performance.
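To make the uncoordinated multiple-access setting concrete, here is a minimal toy environment, not any of the thesis's algorithms or benchmarks, in which a slot succeeds only when exactly one backlogged device transmits:

```python
import random

class SlottedMultipleAccess:
    """Toy slotted multiple-access channel: in each slot every device chooses
    transmit (1) or wait (0); a packet gets through only if exactly one
    backlogged device transmits, otherwise the slot is a collision or idle.
    A minimal stand-in for the environments a DRL MAC agent trains in."""

    def __init__(self, n_devices, p_arrival=0.2, seed=0):
        self.n = n_devices
        self.p = p_arrival
        self.rng = random.Random(seed)
        self.buffers = [0] * n_devices   # packets waiting per device

    def step(self, actions):
        transmitters = [i for i, a in enumerate(actions)
                        if a == 1 and self.buffers[i] > 0]
        success = len(transmitters) == 1
        if success:
            self.buffers[transmitters[0]] -= 1
        # sporadic traffic: new packets arrive independently at each device
        for i in range(self.n):
            if self.rng.random() < self.p:
                self.buffers[i] += 1
        return (1.0 if success else 0.0), list(self.buffers)

# ALOHA-style random-access baseline: transmit with probability 1/n
env = SlottedMultipleAccess(n_devices=4)
throughput = 0.0
for _ in range(1000):
    actions = [1 if env.rng.random() < 1 / env.n else 0 for _ in range(env.n)]
    reward, _ = env.step(actions)
    throughput += reward
```

A DRL agent would replace the fixed transmit probability with a learned, state-dependent policy; the point of the sketch is only the collision structure that makes the problem hard.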
  • Pair-Matching: Link Prediction with Adaptive Queries
    • Giraud Christophe
    • Issartel Yann
    • Lehericy Luc
    • Lerasle Matthieu
    Mathematical Statistics and Learning, EMS Publishing House, 2024. The pair-matching problem appears in many applications where one wants to discover good matches between pairs of entities or individuals. Formally, the set of individuals is represented by the nodes of a graph where the edges, unobserved at first, represent the good matches. The algorithm queries pairs of nodes and observes the presence/absence of edges. Its goal is to discover as many edges as possible with a fixed budget of queries. Pair-matching is a particular instance of the multi-armed bandit problem in which the arms are pairs of individuals and the rewards are edges linking these pairs. This bandit problem is non-standard though, as each arm can only be played once. Given this last constraint, sub-linear regret can be expected only if the graph presents some underlying structure. This paper shows that sub-linear regret is achievable in the case where the graph is generated according to a Stochastic Block Model (SBM) with two communities. Optimal regret bounds are computed for this pair-matching problem. They exhibit a phase transition related to the Kesten-Stigum threshold for community detection in SBM. The pair-matching problem is also considered in the case where each node is constrained to be sampled fewer than a given number of times. We show how optimal regret rates depend on this constraint. The paper is concluded by a conjecture regarding the optimal regret when the number of communities is larger than 2. Contrary to the two-communities case, we argue that a statistical-computational gap would appear in this problem. (10.4171/msl/46)
    DOI : 10.4171/msl/46
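A small simulation makes the setting concrete: pairs are queried at most once against an unobserved two-community SBM, and a naive score-greedy heuristic (hypothetical, not the paper's optimal strategy) favours nodes that have already yielded edges, since those are likely same-community matches:

```python
import random

def sbm_edge(i, j, community, p_in, p_out, rng):
    """Unobserved ground truth: an edge is present with probability p_in if
    i and j share a community, p_out otherwise (two-community SBM)."""
    p = p_in if community[i] == community[j] else p_out
    return rng.random() < p

def pair_match(n, budget, p_in=0.8, p_out=0.05, seed=0):
    """Query `budget` distinct pairs, greedily preferring high-score nodes;
    each pair (arm) is played at most once, as in the bandit formulation."""
    rng = random.Random(seed)
    community = [rng.randrange(2) for _ in range(n)]
    queried = set()
    score = [0] * n              # per-node tally of discovered edges
    edges_found = 0
    for _ in range(budget):
        nodes = sorted(range(n), key=lambda v: -score[v])
        pair = next(((i, j) for i in nodes for j in nodes
                     if i < j and (i, j) not in queried), None)
        if pair is None:         # all pairs exhausted
            break
        queried.add(pair)
        if sbm_edge(pair[0], pair[1], community, p_in, p_out, rng):
            edges_found += 1
            score[pair[0]] += 1
            score[pair[1]] += 1
        else:
            score[pair[0]] -= 1
            score[pair[1]] -= 1
    return edges_found
```

The paper's contribution is the regret analysis of such query strategies, not this particular heuristic; the sketch only shows the query-once constraint and why community structure makes sub-linear regret plausible.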
  • Development of a machine-learning model to assist with individualized dose prescribing for patients treated with amoxicillin by continuous infusion
    • Guillot Robin
    • El-Helali N.
    • Mory Celine
    • Gutton Johann
    • Hocquet Grégory
    • Le Monnier Alban
    • Buronfosse Anne
    • Billuart Olivier
    • Le Folgoc Loïc
    • Maynadier Xavier
    Journal of Epidemiology and Population Health, Elsevier Masson SAS, 2024, 72 (Supplément 1), pp.202294. Introduction: In infectious situations requiring hospitalization, achieving an effective blood concentration of the antibiotic is a key challenge that depends on choosing an appropriate dosage. While a pharmacologist can be consulted to take individual characteristics into account, the dose-adjustment process still rests on very general principles. The objective of our study was to build a machine-learning model predicting amoxicillin concentration from the administered dose and the characteristics of the patient. Methods: Retrospective study of data on patients hospitalized at the Groupe hospitalier Paris Saint-Joseph between 2018 and 2023 who were treated with amoxicillin by continuous infusion and for whom a serum concentration assay of this antibiotic was performed. Demographic characteristics (age, sex, BMI) and biological characteristics (renal, hepatic and cardiac function) were extracted from the electronic medical record. Several models predicting the blood concentration were built. The simplest relied on a linear regression using the administered dose alone. Three more advanced models (multivariate linear regression, random forest, XGBoost) relied on a subset of patient characteristics selected with the LASSO variable-selection method. A prediction was considered successful when it matched the observed concentration within a 20% margin of error. Results: The models were trained on 237 serum assays and evaluated on 57. Only 19% of the predictions based on the administered dose alone were adequate. Models that also accounted for the patient's weight and renal function improved performance: multivariate linear regression, random forest and XGBoost achieved 47% [35%-60%] (95% CI), 47% [35%-61%] and 51% [37%-63%] correct predictions, respectively. Conclusion: These results suggest that adjusting dosages to individual characteristics is beneficial. A prospective evaluation of these models is under way to determine their added value over current pharmacological practice. (10.1016/j.jeph.2024.202294)
    DOI : 10.1016/j.jeph.2024.202294
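The evaluation criterion described in this abstract, a prediction counting as correct when it falls within 20% of the observed serum concentration, can be sketched as follows (function name hypothetical):

```python
import numpy as np

def within_margin_accuracy(y_true, y_pred, margin=0.20):
    """Share of predictions falling within +/- `margin` (relative) of the
    observed serum concentration -- the study's performance criterion."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ok = np.abs(y_pred - y_true) <= margin * y_true
    return float(ok.mean())
```

For instance, with observed concentrations of 100 mg/L and predictions of 110, 150, and 85 mg/L, two of the three predictions fall within the 20% margin, giving an accuracy of about 0.67; the study reports this proportion for each model on the 57 held-out assays.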