
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The PhD theses of LTCI doctoral graduates are also available on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2024

  • Superselection Rules and Bosonic Quantum Computational Resources
    • Descamps Eloi
    • Fabre Nicolas
    • Saharyan Astghik
    • Keller Arne
    • Milman Pérola
    Physical Review Letters, American Physical Society, 2024, 133 (26), pp.260605. We present a method to systematically identify and classify quantum optical states as classical or nonclassical based on the resources they create on a bosonic quantum computer. This is achieved by converting arbitrary bosonic states into multiple modes, each occupied by a single photon, thereby defining qubits of a bosonic quantum computer. Starting from a bosonic classical-like state in a representation that explicitly respects particle-number superselection rules, we apply universal gates to create arbitrary superpositions of states with the same total particle number. The nonclassicality of the corresponding states can then be associated with the operations they induce in the quantum computer. We also provide a correspondence between the adopted representation and the more conventional one in quantum optics, where superpositions of Fock states describe quantum optical states, and we identify how multimode states can lead to quantum advantage. Our work contributes to establishing a seamless transition from continuous to discrete properties of quantum optics while laying the groundwork for a description of nonclassicality and quantum computational advantage that is applicable to spin systems as well.
    DOI : 10.1103/PhysRevLett.133.260605
  • Interference Networks with Random User Activity and Heterogeneous Delay Constraints
    • Nikbakht Homa
    • Wigger Michèle
    • Shamai Shlomo
    • Gorce Jean-Marie
    • Poor H. Vincent
    IEEE Transactions on Information Theory, Institute of Electrical and Electronics Engineers, 2024, 71 (2), pp.1043-1076. To answer the call for a new theoretical framework to simultaneously accommodate random user activity and heterogeneous delay traffic in Internet of Things (IoT) systems, in this paper we propose coding schemes and information-theoretic converse results for the transmission of heterogeneous delay traffic over interference networks with random user activity and random data arrivals. The heterogeneous traffic is composed of delay-tolerant traffic and delay-sensitive traffic, where only the former can benefit from transmitter and receiver cooperation since the latter is subject to stringent decoding delays. The total number of cooperation rounds at transmitter and receiver sides is limited to D rounds. Each transmitter is active with probability ρ ∈ [0,1]. We consider two different models for the arrival of the mixed-delay traffic: in Model 1, each active transmitter sends a delay-tolerant message, and with probability ρ_f ∈ [0,1] also transmits an additional delay-sensitive message; in Model 2, each active transmitter sends either a delay-sensitive message with probability ρ_f or a delay-tolerant message with probability 1 − ρ_f. We derive inner and outer bounds on the fundamental per-user multiplexing gain (MG) region of the symmetric Wyner network as well as inner bounds on the fundamental MG region of the hexagonal model. Our inner and outer bounds are generally very close and coincide in special cases. They also show that when both transmitters and receivers can cooperate, then under Model 1, transmitting delay-sensitive messages hardly causes any penalty on the sum per-user MG, and under Model 2, operating at large delay-sensitive per-user MGs incurs no penalty on the delay-tolerant per-user MG and thus increases the sum per-user MG.
    DOI : 10.1109/TIT.2024.3523775
  • Analysis, definition and design of a resilient and secure messaging system
    • Dubos Charles
    , 2024. Asynchronous messaging has become an indispensable means of communication, in both private and organizational settings. Yet it remains a privileged attack vector, exploited through attacks that combine social engineering with protocol structures. This thesis first studies the mechanisms and extensions that contribute to securing asynchronous messaging protocols. Starting from a classification of interpersonal messaging systems, it presents the modern security principles implemented in alternatives to e-mail. In particular, it describes the exchanges required to set up the ephemeral keys specific to synchronous exchanges. A second major contribution is the description of up-to-date use cases for asynchronous messaging, providing a map of security needs across varied contexts. This work establishes the need for adaptable security levels that fit each context, highlighting the sometimes contradictory functions of a single security principle (such as ephemeral versus non-repudiable signatures). A catalogue of existing exploits that rely on ambiguities of interpretation is compiled from the recent literature. It then makes it possible to assess the threats currently facing messaging and to derive the associated risks. Third, modifications to messaging are devised, gradually departing from current conventions in order to accommodate evolving needs and technologies. This work leads us to introduce a mechanism that preserves S/MIME signatures when a message is forwarded by an intermediate user, highlighting the complexity of the security ecosystem. The aim of bringing security decisions down to the user level also leads us to design a mechanism that enforces policies before sending. Next, the PACMAIL system, which offers sender pseudonymization, is considered. It notably introduces the key distinction between identification and authentication into the messaging architecture. Finally, this work converges towards a new asynchronous messaging model that uses SMTP as a control channel and invites a move towards an alternative protocol for content management.
  • Lightweight Secure Mobile e-Services
    • Agbezoutsi Kodjo Edem
    , 2024. In this thesis, we present our contributions to improving the Mobile Money ecosystem by identifying its major challenges and proposing suitable solutions. These solutions aim to strengthen the security and interoperability of Mobile Money services while taking into account the limited capabilities of mobile terminals. To that end, we carried out a survey that highlights key issues, such as the lack of federation, traceability and interoperability between the Mobile Money platforms of mobile network operators, each of which is managed in a separate database. Blockchain is presented as a solution to improve the security, transparency and reliability of transactions. BTOOLS, an open-source, cross-platform tool, was developed to generate secure blockchain transactions using cryptographic services. A new Mobile Money architecture integrating blockchain and USSD was also proposed to guarantee transparent interconnection between the various actors of the ecosystem, including banks, MNOs, regulators and customers. The "Mobile Money Using Blockchain" (2MUB) protocol is a central element of the thesis contributions. It was developed in two versions, the second bringing improvements in interoperability, traceability and federation. This protocol uses a decentralized architecture based on smart contracts to define the clearing rules between Mobile Money actors. Three implementation scenarios were proposed: two via the USSD channel, and one via TCP/IP. Finally, an experimental platform was developed to validate the 2MUB protocol. It uses Node.js, Ganache, Hardhat and Sepolia to implement a two-level blockchain, and its user interface is accessible via USSD through Africa’s Talking. Analyses showed that the proposed solution works as intended.
  • Explainable algorithms for anomaly detection and time series forecasting
    • Kong Lanfang
    , 2024. Artificial intelligence has shown dominant performance in the data mining field, with applications across diverse domains, including critical ones such as medicine, finance and justice. As a result, the explainability of black-box models is in ever greater demand. We focus on two specific applications, anomaly detection and time series forecasting, and present XTREK and ADAPATCH for each task, respectively. XTREK is an unsupervised tree-based approach for explainable anomaly detection, which maximizes Kendall's τ between the anomaly scores of the source anomaly detector and those of XTREK. The tree produced by our algorithm is relatively small in size, thereby boasting the renowned off-the-shelf transparency and explainability of tree-based approaches. Moreover, its explanations are sample-based. In particular, the anomaly scores are computed as the inverse of the size of the corresponding leaf, thereby providing meaningful explanations when comparing examples with different anomaly scores. XTREK can also be used as an in-model approach, capable of providing concise explanations for its own decisions. ADAPATCH is a post-hoc saliency-map-based approach for explainable time series forecasting, which provides local visual explanations with the help of perturbation-based gradient descent. With a differential encoding module in the input mask, a more intuitive and higher-level semantic explanation can be provided. Both methods are model-agnostic, meaning the architecture of the black-box model can be hidden from the users. They provide accurate and simple explanations, and their accuracy is validated by extensive experiments.
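As a rough, hypothetical illustration of the fidelity criterion described in the abstract above (not the actual XTREK code), the following sketch computes Kendall's τ between the scores of a black-box detector and those of a surrogate tree whose anomaly score is the inverse of its leaf size; all names and values are made up for the example:

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau rank correlation between two equal-length score lists."""
    assert len(a) == len(b)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Scores from a hypothetical black-box detector vs. a surrogate tree
# whose score is the inverse of the leaf size, as in the abstract.
source_scores = [0.9, 0.1, 0.4, 0.7]
leaf_sizes = [1, 10, 5, 2]            # smaller leaf => more anomalous
tree_scores = [1 / s for s in leaf_sizes]
print(kendall_tau(source_scores, tree_scores))  # 1.0: rankings fully agree
```

A τ of 1 means the surrogate tree ranks the samples exactly as the source detector does, which is the quantity XTREK maximizes.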
  • Monitoring of the exposure to electromagnetic fields with autonomous probes installed outdoors in France
    • Jawad Ourouk
    • Conil Emmanuelle
    • Agnani Jean-Benoît
    • Wang Shanshan
    • Wiart Joe
    Comptes Rendus. Physique, Académie des sciences (Paris), 2024, 25 (S1), pp.41-61. The study is based on a new temporal analysis of exposure built on the deployment of autonomous broadband E-field monitoring probes in many French cities. Combining the probes’ data with frequency-selective in situ measurements performed by ANFR and with knowledge of the nearby base station antennas makes it possible to draw statistical conclusions on the exposure of the population. Indeed, the data collected by the probes reveal that different periodicities exist (seasonality, day/night). This paper shows that the monitoring probes are able to detect the seasonality of the exposure, and it provides an analysis of the correlation between the monitoring probes and the radio environment.
    DOI : 10.5802/crphys.182
  • Dynamic Decision Trees and Community-based Graph Embeddings: Towards Interpretable Machine Learning
    • Damay Gabriel
    , 2024. Machine Learning is the field of computer science concerned with building models and solutions from data without knowing exactly the set of instructions internal to those models and solutions. This field has achieved great results but is now under scrutiny, among other concerns, for the inability to understand or audit its models. Interpretable Machine Learning addresses these concerns by building models that are inherently interpretable. This thesis contributes to Interpretable Machine Learning in two ways. First, we study Decision Trees, a very popular group of Machine Learning methods for classification problems that is interpretable by design. However, real-world data is often dynamic, and few algorithms can maintain a decision tree when data can be both inserted into and deleted from the training set. We propose a new algorithm called FuDyADT to solve this problem. Second, when data are represented as graphs, a very common machine learning technique called "embedding" consists in projecting them onto a vector space. This kind of method, however, is usually not interpretable. We propose a new embedding algorithm called Parfaite, based on the factorization of the Personalized PageRank matrix and designed to provide interpretable results. We study both algorithms theoretically and experimentally. We show that FuDyADT is at least comparable to state-of-the-art algorithms in the usual setting, while also being able to handle unusual settings such as deletions of data and numerical features. Parfaite, on the other hand, produces embedding dimensions that align with the communities of the graph, making the embedding interpretable.
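As a hedged sketch of the general idea summarized above (factorizing the Personalized PageRank matrix so that embedding dimensions can reflect graph communities), and not the thesis's actual Parfaite algorithm, a toy NumPy version on a hypothetical two-community graph could look like this; the restart probability 0.15 and the rank-2 truncation are illustrative choices:

```python
import numpy as np

# Toy graph: two triangles joined by a single edge (two communities).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
alpha = 0.15                            # restart probability (illustrative)

# Personalized PageRank matrix: row i is the PPR vector restarted at node i,
# the fixed point of  PPR = alpha*I + (1-alpha)*PPR @ P.
n = A.shape[0]
ppr = np.full((n, n), 1.0 / n)
for _ in range(200):
    ppr = alpha * np.eye(n) + (1 - alpha) * ppr @ P

# Rank-2 factorization via SVD: rows of the truncated factor act as
# node embeddings, one coordinate per (hoped-for) community.
U, s, Vt = np.linalg.svd(ppr)
emb = U[:, :2] * s[:2]
print(emb.round(3))
```

On this toy graph the two triangles end up clearly separated in the 2-D embedding, which is the kind of community alignment the abstract describes.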
  • Touching at a distance: the elaboration of communicative functions from the perspective of the interactants
    • Héron Robin
    • Safin Stéphane
    • Baker Michael J
    • Zhang Zhuoming
    • Lecolinet Eric
    • Détienne Françoise
    Frontiers in Psychology, Frontiers Media, 2024, 15, pp.01-20. Touch is an inherent part of human social interactions, and the diversity of its functions has been highlighted in numerous works. Given the varied roles of touch, and with technology-mediated communication being a big part of our everyday lives, research has been interested in enabling and enhancing distant social interactions with mediated touch over networks. Due to the complexity of the sense of touch and to technological limitations, multimodal devices have been developed and investigated. In this article, we explore the use of mediated visual touch in distant social interaction. Adopting an interactionist and collaborative approach to human communication, we focus on the communicative functions of distant touch behaviours, which interactants co-elaborate throughout their mediated interactions. For this purpose, we conducted an exploratory study placing five romantically involved couples in interaction, in which each couple discussed shared biographical events via a video call, using mediated touch devices (producing vibration and coloured lights). Their interactions were recorded, and excerpts were presented to participants in interviews using a collective confrontation technique (participants are confronted with a recording of their activity and encouraged to comment on it). This technique allows a better understanding of the participants’ points of view on their use of the touch devices in context. Through analysis of the interviews, our results highlight: (1) a variety of visual-touch functions, with a redistribution of functions mostly supported by other modalities of communication in face-to-face interactions, such as illustrating aspects of the ongoing conversation; (2) the visual-touch characteristics, as well as the verbal, paraverbal and non-verbal indicators of the interactional context, considered by the participants to make sense of the stimuli; and (3) the multifactorial and dynamic aspects of the co-elaboration process of the visual-touch functions, reaffirming the role of interactional context, combined with cultural and biographical knowledge, in meaning-making.
    DOI : 10.3389/fpsyg.2024.1497289
  • Efficient delegated secure multiparty computation
    • Urban Antoine
    , 2024. With the rise of cloud computing, it has become easier to delegate the management and analysis of data to external infrastructures, enabling the combination of diverse datasets to extract valuable insights. However, ensuring the confidentiality of sensitive data remains a significant challenge. Secure multiparty computation (MPC) addresses this issue by allowing multiple participants to collaborate on computations without revealing their private data. This thesis explores an approach where data owners delegate these computations to untrusted servers while maintaining security and confidentiality. To achieve this, we rely on fully homomorphic encryption (FHE), which allows computations to be performed directly on encrypted data. Our contributions include a robust MPC protocol based on FHE and a generic method that minimizes communication requirements. These advancements make secure computations more efficient and accessible, even for projects involving a large number of participants.
  • Optimizing Hyperparameters for Quantum Data Re-Uploaders in Calorimetric Particle Identification
    • Cassé Léa
    • Pfahringer Bernhard
    • Bifet Albert
    • Magniette Frédéric
    , 2024. We present an application of a single-qubit Data Re-Uploading (QRU) quantum model for particle classification in calorimetric experiments. Optimized for Noisy Intermediate-Scale Quantum (NISQ) devices, this model requires minimal qubits while delivering strong classification performance. Evaluated on a novel simulated dataset specific to particle physics, the QRU model achieves high accuracy in classifying particle types. Through a systematic exploration of model hyperparameters -- such as circuit depth, rotation gates, input normalization and the number of trainable parameters per input -- and training parameters like batch size, optimizer, loss function and learning rate, we assess their individual impacts on model accuracy and efficiency. Additionally, we apply global optimization methods, uncovering hyperparameter correlations that further enhance performance. Our results indicate that the QRU model attains significant accuracy with efficient computational costs, underscoring its potential for practical quantum machine learning applications.
  • Achieving accountability, reconfiguration, randomness, and secret leadership in Byzantine fault-tolerant distributed systems
    • Freitas de Souza Luciano
    , 2024. This thesis explores three fundamental problems in distributed computing. The first contribution focuses on accountable and reconfigurable distributed systems that detect and respond to component failures. A framework for implementing accountable and reconfigurable replicated services, leveraging the lattice agreement abstraction, is presented. The asynchronous implementation ensures that any consistency violation is followed by undeniable evidence of misbehavior, enabling seamless system reconfiguration. The second contribution addresses leader election in partially synchronous environments. Homomorphic Sortition, the first Single Secret Leader Election (SSLE) protocol for partially synchronous blockchains, is introduced. Using Threshold Fully Homomorphic Encryption (ThFHE), this protocol supports diverse stake distributions and efficient off-chain execution, addressing network instability issues. Additionally, a Secret Leader Permutation (SLP) abstraction, which ensures non-repeating leaders in certain blockchains and improves performance and consensus termination, is proposed. Finally, the thesis explores randomness generation in distributed systems, focusing on the common coin primitive. Recognizing its impossibility in asynchronous, fault-prone environments, two relaxed versions are introduced: the approximate common coin and the Monte Carlo common coin. These abstractions provide efficient, scalable solutions tolerating up to one third of the processes being Byzantine, without requiring a trusted setup or public key infrastructure. Applying our Monte Carlo common coin protocol to binary Byzantine agreement achieves improved communication complexity, setting a new standard. All these contributions advance the robustness, efficiency, and reliability of distributed systems, providing new methods to handle accountability, leader election, and randomness generation in the absence of synchrony.
  • Autoregressive GAN for Semantic Unconditional Head Motion Generation
    • Airale Louis
    • Alameda-Pineda Xavier
    • Lathuilière Stéphane
    • Vaufreydaz Dominique
    ACM Transactions on Multimedia Computing, Communications and Applications, Association for Computing Machinery, 2024, 21 (1), pp.14:1-14. In this work, we address the task of unconditional head motion generation to animate still human faces in a low-dimensional semantic space from a single reference pose. Different from traditional audio-conditioned talking head generation, which seldom puts emphasis on realistic head motions, we devise a GAN-based architecture that learns to synthesize rich head motion sequences over long durations while maintaining low error accumulation levels. In particular, the autoregressive generation of incremental outputs ensures smooth trajectories, while a multi-scale discriminator on input pairs drives generation toward better handling of high- and low-frequency signals and less mode collapse. We experimentally demonstrate the relevance of the proposed method and show its superiority compared to models that attained state-of-the-art performance on similar tasks.
    DOI : 10.1145/3635154
  • Non-orthogonal multiple access (NOMA) with energy harvesting from vibrations
    • Boujemaa Hatem
    • Alhussein Musaed
    • Rekaya Ghaya
    Signal, Image and Video Processing, Springer Verlag, 2024, 19 (2), pp.137-1:137-9. In this article, we evaluate the throughput of NOMA with energy harvesting from vibrations. NOMA is the best candidate for 6G communications as it offers larger data rates than 5G. However, NOMA’s performance and throughput had not yet been derived for the case where the source harvests energy from vibrations. Energy harvesting from vibrations is a technology that converts mechanical vibrations into electrical energy. The vibration sources can be industrial machinery, vehicles, or human motion, and the vibrations can be captured using piezoelectric materials. The energy harvested from vibrations is proportional to the product of the squared mechanical deformation d and the frequency of the mechanical vibration f. Both f and d follow a Gaussian distribution. The harvested power also depends on the capacitance of the piezoelectric element C and the force factor g. The harvesting process was also optimized in order to maximize data rates. We have derived the statistics of the harvested energy from vibrations, as well as those of the Signal to Interference plus Noise Ratio (SINR), to compute and maximize the throughput.
    DOI : 10.1007/s11760-024-03597-0
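The dependence of the harvested power on the deformation d, the frequency f, the force factor g and the piezoelectric capacitance C described in the abstract above can be sketched with a small Monte Carlo simulation; the 1/(2C) scaling and all numerical values below are illustrative assumptions, not the paper's exact model:

```python
import random

random.seed(0)

# Illustrative constants: force factor g and piezo capacitance C are fixed;
# deformation d and vibration frequency f are Gaussian, as in the abstract.
g, C = 0.05, 1e-7          # force factor, capacitance (assumed values)
mu_d, sd_d = 1e-4, 2e-5    # deformation [m]
mu_f, sd_f = 50.0, 10.0    # vibration frequency [Hz]

def harvested_power():
    d = random.gauss(mu_d, sd_d)
    f = abs(random.gauss(mu_f, sd_f))
    # Power proportional to d^2 * f, scaled by g^2 / C (assumed form).
    return (g * d) ** 2 * f / (2 * C)

samples = [harvested_power() for _ in range(100_000)]
print(f"mean harvested power ≈ {sum(samples) / len(samples):.3e} W")
```

Sampling d and f as Gaussians and pushing them through the d²·f law is exactly how one would estimate the statistics of the harvested energy that the paper then feeds into the SINR and throughput analysis.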
  • Subsea optical links characterization assisted by machine learning
    • Girard-Jollet Joana
    , 2024. With the growing demand for internet bandwidth, having robust optical communication systems is more crucial than ever. Beyond the advances in individual technologies and components, the performance of the communication system also depends on how intelligently these components are used and combined so that the overall configuration is optimized. Often, network performance is limited by pre-deployment predictions, making accurate estimation essential for optimization. The objective of this thesis is to characterize and monitor key sources of distortion in subsea links, primarily to predict the network’s future performance with minimal margins. We focused on two physical impairments: nonlinear Kerr effects and polarization-dependent loss. The thesis addresses the challenges of performance estimation and network monitoring to introduce novel monitoring tools and provide guidelines for optimizing transmission systems.
  • Distributed computing for blockchains and beyond
    • Tonkikh Andrei
    , 2024. In this dissertation, we address three major challenges in the design of blockchain systems in particular and large-scale fault-tolerant distributed systems in general. This work aims at improving the performance of such systems directly, as well as providing useful tools for future development of distributed algorithms. First, we explore the limits of what can be done with minimal synchronization by designing CryptoConcurrency, an asset transfer system that, instead of totally ordering all users' requests, processes concurrent requests in parallel as much as possible. Unlike other similar systems, in CryptoConcurrency, we allow the users to have shared accounts and do not make the unrealistic assumption that an honest user's account is never accessed from two devices concurrently. CryptoConcurrency explores novel theoretical grounds by addressing transaction conflicts in a dynamic, non-pairwise manner, allowing the owners of each account to independently choose their preferred mechanism for conflict resolution. Then, we improve the performance of consensus, the synchronization problem at the heart of most practical distributed systems. We build the first consensus protocol that manages to combine two desirable properties: extremely fast termination in favorable conditions and graceful recovery when such conditions are not met. The design involves a novel type of cryptographic proof, with an efficient practical implementation. Finally, we set out to tackle the problem of designing efficient distributed protocols with weighted participation. To this end, we define several new optimization problems, related to reducing or, in other words, quantizing the weights of the participants in a way that preserves important structural properties. We show how to apply them to make weighted-model variants of a large class of distributed protocols with very little overhead compared to their counterparts in the simpler non-weighted model. 
For these optimization problems, we prove upper bounds, provide a practical open-source approximate solver that satisfies these upper bounds, and perform an empirical study on the weight distributions from real-world blockchain systems.
  • Uncertainty quantification for fast reconstruction methods using augmented equivariant bootstrap: Application to radio interferometry
    • Cherif Mostafa
    • Liaudat Tobías I.
    • Kern Jonathan
    • Kervazo Christophe
    • Bobin Jérôme
    , 2024. The advent of next-generation radio interferometers like the Square Kilometer Array promises to revolutionise our radio astronomy observational capabilities. The unprecedented volume of data these devices generate requires fast and accurate image reconstruction algorithms to solve the ill-posed radio interferometric imaging problem. Most state-of-the-art reconstruction methods lack trustworthy and scalable uncertainty quantification, which is critical for the rigorous scientific interpretation of radio observations. We propose an unsupervised technique based on a conformalized version of a radio-augmented equivariant bootstrapping method, which allows us to quantify uncertainties for fast reconstruction methods. Noticeably, we rely on reconstructions from ultra-fast unrolled algorithms. The proposed method brings more reliable uncertainty estimations to our problem than existing alternatives.
  • IQ-Code for PDL and Crosstalk Mitigation in Nyquist-WDM Transmission Based on DSC
    • Abouseif Akram
    • Klaimi Rami
    • Othman Ghaya Rekaya-Ben
    • Jaouën Yves
    • Darweesh Jamal
    Journal of Lightwave Technology, Institute of Electrical and Electronics Engineers (IEEE)/Optical Society of America (OSA), 2024, 42 (24), pp.8655-8663. Nyquist wavelength-division-multiplexing transmission based on digital sub-carriers emerges as a promising solution to address the growing network traffic demand. However, this system is impacted by the crosstalk between sub-bands resulting from laser drift. In addition, the transmission capacity is restricted by polarization-dependent loss (PDL), which classical digital signal processing (DSP) techniques at the receiver side are not able to mitigate. In this work, we propose a digital multiple-input-multiple-output coding technique, labelled IQ-code, aimed at enhancing transmission performance in dual-polarized wavelength-division multiplexing (WDM) systems affected by PDL. Furthermore, it demonstrates resilience against crosstalk and non-linearity. A key advantage lies in the use of a single time interval, reducing decoding complexity compared to alternative spatial coding techniques. Our findings demonstrate versatility across various sub-bands, achieving notable improvements of up to 1.5 dB gain in different scenarios over a transmission distance of 1000 km.
    DOI : 10.1109/JLT.2024.3440399
  • Variational Graph Contrastive Learning
    • Xie Shifeng
    • Giraldo Jhony H.
    , 2024. Graph representation learning (GRL) is a fundamental task in machine learning, aiming to encode high-dimensional graph-structured data into low-dimensional vectors. Self-supervised learning (SSL) methods are widely used in GRL because they can avoid expensive human annotation. In this work, we propose a novel Subgraph Gaussian Embedding Contrast (SGEC) method. Our approach introduces a subgraph Gaussian embedding module, which adaptively maps subgraphs to a structured Gaussian space, ensuring the preservation of graph characteristics while controlling the distribution of generated subgraphs. We employ optimal transport distances, including Wasserstein and Gromov-Wasserstein distances, to effectively measure the similarity between subgraphs, enhancing the robustness of the contrastive learning process. Extensive experiments across multiple benchmarks demonstrate that SGEC outperforms or presents competitive performance against state-of-the-art approaches. Our findings provide insights into the design of SSL methods for GRL, emphasizing the importance of the distribution of the generated contrastive pairs.
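As a minimal illustration of the Wasserstein distance used in the abstract above to compare subgraph embeddings, here is the closed form for equal-size 1-D samples (sort both samples and average the absolute differences); the sample values are hypothetical, and the real SGEC method operates on higher-dimensional Gaussian embeddings:

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D samples: the mean
    absolute difference of their sorted values."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two hypothetical 1-D "subgraph embedding" samples.
a = [0.1, 0.4, 0.9, 1.2]
b = [0.2, 0.5, 1.0, 1.3]
print(wasserstein_1d(a, b))  # ≈ 0.1: b is a shifted by +0.1
```

The distance grows smoothly with how far one sample's distribution must be "moved" to match the other, which is why it makes a robust similarity signal for contrastive pairs.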
  • Towards flexible and low-power wireless smart sensors: Reconfigurable analog-to-feature converter for healthcare applications
    • Manokhin Mikhail
    , 2024. Current human population growth and aging inevitably raise the rate of chronic diseases, the leading global cause of death. Wireless Body Area Networks (WBANs) composed of smart wearable or implantable sensors are the primary solution for proactive healthcare systems to reduce the burden of these diseases. However, such networks are severely restricted regarding energy usage and data throughput, especially in the case of biopotential and inertial sensors requiring continuous signal acquisition. In classification applications, it is possible to reduce the amount of collected and transmitted data, thus improving the sensors' autonomy. For this purpose, this thesis aims to design a reconfigurable Analog-to-Feature (A2F) converter that extracts only the features relevant for a given task in the analog domain within the sensor node, with classification performed further at the sensor or aggregator level. Based on Non-Uniform Wavelet Sampling (NUWS), our converter leverages a generic architecture to suit different low-frequency signals and enable WBANs with multimodal sensors. To prove the converter's universality, we address two applications: anomaly detection in electrocardiogram (ECG) signals and human activity recognition (HAR) in inertial signals. After training the neural network classifiers for each application, we defined the relevant features and hardware specifications required for the complete circuit design. Thanks to circuit-level simulation of the converter, we show that the estimated energy consumption is reduced by a factor of 20 for ECG and 5 for HAR compared to the Nyquist approach. This highlights the potential of A2F conversion with NUWS in achieving flexible, reliable, and low-power sensor systems for healthcare and beyond.
  • Stein's method for extreme value distributions
    • Costacèque-Cecchi Bruno
    , 2024. Extreme value theory deals with the probability of occurrence of extreme events, such as floods, droughts or financial crises. An important part of that theory relies on limit theorems, such as the extreme value theorem or the Pickands-Balkema-de Haan theorem. In order to apply these theorems accurately and efficiently approximate the usually unknown distribution of the extreme data by its limit model, one needs to quantify their speed of convergence. One way of doing so is the generator approach of Stein's method. In this thesis we therefore construct a family of Markov semigroups whose invariant measure is an extreme-value distribution. We do so via a Mehler-type formula, which itself relies on the stability property satisfied by max-stable distributions. Thanks to this definition, the semigroups satisfy properties similar to those of the Ornstein-Uhlenbeck semigroup (commutation rule, Poincaré inequality, covariance identities, etc.). We then apply these results within the generator approach of Stein's method to deduce rates of convergence to extreme-value distributions in various settings. The last chapter focuses on Poisson processes whose intensity measure satisfies a homogeneity assumption and on how their standard properties translate into new results for max-stable distributions, thus shedding new light on the contents of the previous chapters.
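To make the construction concrete, here is a sketch of the classical Mehler formula for the Ornstein-Uhlenbeck semigroup, together with a plausible max-stable analogue for the standard unit Fréchet law $\Phi(x)=e^{-1/x}$. The exact normalisation used in the thesis may differ; the point is only that max-stability plays the role that Gaussian stability plays in the classical formula.

```latex
% Ornstein--Uhlenbeck: linear interpolation, standard Gaussian invariant measure
P_t f(x) = \int_{\mathbb{R}} f\!\left(e^{-t} x + \sqrt{1 - e^{-2t}}\, y\right) \mathrm{d}\gamma(y),
  \qquad \gamma = \mathcal{N}(0,1).
% Max-stable analogue (sketch): replace the sum by a maximum; the
% max-stability of the unit Fr\'echet law \Phi(x) = e^{-1/x} gives
% invariance, since e^{-t} + (1 - e^{-t}) = 1:
Q_t f(x) = \mathbb{E}\!\left[ f\!\left( \max\!\left( e^{-t} x,\; (1 - e^{-t})\, Y \right) \right) \right],
  \qquad Y \sim \Phi .
```

The semigroup property $Q_t Q_s = Q_{t+s}$ follows from $e^{-t}e^{-s}=e^{-(t+s)}$, mirroring the Ornstein-Uhlenbeck case.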
  • Nonlocality of the energy density of a spontaneously emitted single-photon from a Hydrogen atom
    • Federico Maxime
    • Jauslin H R
    Journal of Physics A: Mathematical and Theoretical, IOP Publishing, 2024, 58 (1), pp.015304. We analyze, through the expectation value of the energy density, the spatial nonlocality of single photons emitted by the spontaneous decay of a Hydrogen atom. Using a minimal coupling between the quantized electromagnetic field and the atom, we compute the state of the photon under the assumption that only a single photon is produced. The calculations are thus performed in the subspace of single-photon states, which is essentially equivalent to the rotating wave approximation. We obtain a characterization of the spatial decay of the energy density. We compute the asymptotic limit of large distances from the atom at each given time, and find an algebraic behavior of 1/r^6. This result confirms that the energy density of single-photon states is nonlocal, and the algebraic decay is far from the maximal quasi-exponential localization predicted by the theory. (10.1088/1751-8121/ad97fa)
    DOI : 10.1088/1751-8121/ad97fa
  • Improving Decision Tree Learning
    • Yu Peng
    , 2024. Decision tree models are widely recognized for their efficiency and interpretability, particularly when working with structured data. This thesis addresses two main challenges: improving the interpretability of deep tree-based models and handling categorical variables. We introduce the Linear TreeShap algorithm, which illuminates the model’s decision process by assigning importance scores to each node and feature. In parallel, we propose a methodological framework enabling decision trees to split directly on categorical variables, enhancing both accuracy and robustness. Our approach includes the stochastic BSplitZ method, designed to efficiently handle large sets of categories, and provides a thorough investigation of the Mean Absolute Error (MAE) criterion. In particular, we prove that no optimal numerical encoding exists under MAE and solve a related optimization problem (the unimodal cost 2-median) central to tree splitting. Our contributions advance the theoretical foundations and real-world applicability of decision tree models, paving the way for more robust and interpretable solutions in machine learning.
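The MAE splitting criterion studied in this thesis can be illustrated with a minimal sketch (this is not the BSplitZ method itself, and the function name is an illustrative assumption): under absolute error, the optimal constant predictor on each side of a split is the median, so the cost of a candidate split is the sum of absolute deviations from each child's median.

```python
import numpy as np

def mae_split_cost(y_left, y_right):
    """MAE impurity of a candidate split: each child is summarised by its
    median, the constant predictor that minimises absolute error."""
    cost = 0.0
    for side in (np.asarray(y_left, float), np.asarray(y_right, float)):
        if side.size:
            cost += np.abs(side - np.median(side)).sum()
    return cost

# Left child {1, 2, 3} has median 2 (cost 2); right child {10, 10} has cost 0.
cost = mae_split_cost([1.0, 2.0, 3.0], [10.0, 10.0])
```

A splitter would evaluate this cost over candidate partitions of the categories and keep the cheapest one; the thesis shows why no fixed numerical encoding of categories can make this optimal in general.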
  • Unsupervised Learning of Unbiased Visual Representations
    • Barbano Carlo Alberto
    • Tartaglione Enzo
    • Grangetto Marco
    IEEE Transactions on Artificial Intelligence, IEEE, 2024, 6 (5), pp.1171 - 1183. Deep neural networks often struggle to learn robust representations in the presence of dataset biases, leading to suboptimal generalization on unbiased datasets. This limitation arises because the models heavily depend on peripheral and confounding factors, inadvertently acquired during training. Existing approaches to address this problem typically involve explicit supervision of bias attributes or reliance on prior knowledge about the biases. In this study, we address the challenging scenario where no explicit annotations of bias are available and there is no prior knowledge of its nature. We present a fully unsupervised debiasing framework with three key steps: firstly, leveraging the inherent tendency to learn malignant biases to acquire a bias-capturing model; next, employing a pseudo-labeling process to obtain bias labels; and finally, applying cutting-edge supervised debiasing techniques to achieve an unbiased model. Additionally, we introduce a theoretical framework for evaluating model biasedness and conduct a detailed analysis of how biases impact neural network training. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our method, showcasing state-of-the-art performance in various settings, occasionally surpassing fully supervised debiasing approaches. (10.1109/TAI.2024.3514554)
    DOI : 10.1109/TAI.2024.3514554
  • Activation Map Compression through Tensor Decomposition for Deep Learning
    • Nguyen Le-Trung
    • Quélennec Aël
    • Tartaglione Enzo
    • Tardieu Samuel
    • Nguyen van Tam
    , 2024. Internet of Things and Deep Learning are synergistically and exponentially growing industrial fields with a massive call for their unification into a common framework called Edge AI. While on-device inference is a well-explored topic in recent research, backpropagation remains an open challenge due to its prohibitive computational and memory costs compared to the extreme resource constraints of embedded devices. Drawing on tensor decomposition research, we tackle the main bottleneck of backpropagation, namely the memory footprint of activation map storage. We investigate and compare the effects of activation compression using Singular Value Decomposition and its tensor variant, Higher-Order Singular Value Decomposition. The application of low-rank decomposition results in considerable memory savings while preserving the features essential for learning, and also offers theoretical guarantees of convergence. Experimental results obtained on mainstream architectures and tasks demonstrate Pareto-superiority over other state-of-the-art solutions in terms of the trade-off between generalization and memory footprint. © 2024 Neural information processing systems foundation. All rights reserved.
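A minimal numpy sketch of the core idea (the paper applies SVD/HOSVD to activation tensors inside a training loop; the shapes, rank, and function names here are illustrative assumptions): compress a 2-D activation map by keeping only its top-k singular triplets, storing k*(m+n) values instead of m*n.

```python
import numpy as np

def compress_activation(act, rank):
    """Truncated-SVD compression: return rank-k factors whose product
    approximates the activation map."""
    u, s, vt = np.linalg.svd(act, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def memory_ratio(shape, rank):
    """Fraction of the original storage needed by the rank-k factors."""
    m, n = shape
    return rank * (m + n) / (m * n)

rng = np.random.default_rng(0)
act = rng.standard_normal((64, 256))    # stand-in for a layer's activations
a, b = compress_activation(act, 8)      # factors kept for the backward pass
approx = a @ b                          # reconstruction when the gradient is needed
```

At rank 8 this stores about 16% of the original values; whether the approximation preserves enough signal for learning is exactly the trade-off the paper quantifies.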
  • Implicit Bias of Mirror Flow on Separable Data
    • Pesme Scott
    • Dragomir Radu-Alexandru
    • Flammarion Nicolas
    , 2024. We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised 'at infinity' and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponential-tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the horizon function of the mirror potential and characterises its shape 'at infinity'. When the potential is separable, a simple formula allows one to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.
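As a toy illustration of the setting (not the paper's analysis; the data and potential are assumptions chosen for simplicity): discrete-time mirror descent with the Euclidean potential $\phi(w)=\|w\|^2/2$, for which the mirror map is the identity, on the exponential loss over separable data. The iterates' norm grows without bound while their direction stabilises towards a max-margin separator.

```python
import numpy as np

# Toy separable data with labels folded in: rows are z_i = y_i * x_i,
# so w separates the data iff every margin <w, z_i> is positive.
Z = np.array([[2.0, 1.0], [1.5, 0.5], [1.0, 2.0]])

def mirror_descent_exp_loss(Z, steps=5000, lr=0.1):
    """Mirror descent with potential phi(w) = ||w||^2 / 2 (mirror map =
    identity, so the update is plain gradient descent) on the exponential
    loss L(w) = sum_i exp(-<w, z_i>)."""
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        grad = -(np.exp(-Z @ w)[:, None] * Z).sum(axis=0)
        w -= lr * grad
    return w

w = mirror_descent_exp_loss(Z)
direction = w / np.linalg.norm(w)  # the quantity whose limit the paper characterises
```

Swapping in a different potential changes which direction is selected; that dependence, via the horizon function, is the object of the paper.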