
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The PhD theses of LTCI doctoral graduates are available on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2025

  • Analytic Rényi Entropy Bounds for Device-Independent Cryptography
    • Hahn Thomas
    • Philip Aby
    • Tan Ernest
    • Brown Peter
    , 2025. Device-independent (DI) cryptography represents the highest level of security, enabling cryptographic primitives to be executed safely on uncharacterized devices. Moreover, with successful proof-of-concept demonstrations in randomness expansion, randomness amplification, and quantum key distribution, the field is steadily advancing toward commercial viability. Critical to this continued progression is the development of tighter finite-size security proofs. In this work, we provide a simple method to obtain tighter finite-size security proofs for protocols based on the CHSH game, which is the nonlocality test used in all of the proof-of-concept experiments. We achieve this by analytically solving key-rate optimization problems based on Rényi entropies, providing a simple method to obtain tighter finite-size key rates. (10.48550/arXiv.2507.07365)
    DOI : 10.48550/arXiv.2507.07365
  • Exact distinguishability between real-valued and complex-valued Haar random quantum states
    • Nemoz Tristan
    • Alléaume Romain
    • Brown Peter
    , 2025. Haar random states are fundamental objects in quantum information theory and quantum computing. We study the density matrix resulting from sampling $t$ copies of a $d$-dimensional quantum state according to the Haar measure on the orthogonal group. In particular, we analytically compute its spectral decomposition. This allows us to compute exactly the trace distance between $t$-copies of a real Haar random state and $t$-copies of a complex Haar random state. Using this we show a lower-bound on the approximation parameter of real-valued state $t$-designs and improve the lower-bound on the number of copies required for imaginarity testing.
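For context, the comparison in this abstract is made in the standard trace distance; the block below only recalls that textbook definition (the paper's exact closed form for the two t-copy states is not reproduced here).

```latex
% Standard trace distance between two density matrices \rho and \sigma:
\[
  T(\rho,\sigma) \;=\; \tfrac{1}{2}\,\lVert \rho - \sigma \rVert_{1}
  \;=\; \tfrac{1}{2}\,\operatorname{Tr}\sqrt{(\rho-\sigma)^{\dagger}(\rho-\sigma)} ,
\]
% where \rho would be the average state of t copies of a real (orthogonal-group)
% Haar random state and \sigma its complex (unitary-group) counterpart.
```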
  • Execution Platform Contracts
    • Bourgeoisat Dorian
    • Kühne Ulrich
    • Brandner Florian
    , 2025. Confidentiality is a crucial security property for many critical applications. As a response to the discovery of numerous micro-architectural side channel attacks such as Spectre, allowing an attacker to extract secret information in pernicious ways, the notion of hardware/software contracts was proposed to formalise the guarantees provided by the hardware to the software. In this paper, we propose to extend this notion to include the guarantees provided by the operating system (OS), so far unspecified in such contracts. We formalize an attacker model adapted to a typical execution model on a shared platform. More precisely, we formalize common thread and memory management policies provided by the OS on top of a hardware model and explore the consequences of potential leaks emerging on such a platform. Our investigation shows that the OS policies play a crucial role in providing security guarantees to code processing sensitive data and thus have to be taken into consideration when writing such code through platform contracts.
  • Evict+Spec+Time on RISC-V: Gem5-Based Implementation and Microarchitectural Analysis
    • Khan Mahreen
    • Mushtaq Maria
    • Pacalet Renaud
    • Apvrille Ludovic
, 2025. Microarchitectural side-channel attacks are a growing concern and have been widely studied on x86 and ARM architectures, but RISC-V's susceptibility to similar attacks remains understudied. We present the first implementation and evaluation of the Evict+Spec+Time attack on RISC-V, previously demonstrated only on x86 [2]. This advanced variant of Evict+Time integrates three critical phases: eviction, speculation, and timing. First, the attack forcibly evicts target cache lines using RISC-V's cbo.flush instruction via the Zicbom extension [6]. Next, it exploits out-of-order execution to manipulate microarchitectural resources such as the reorder buffer, limiting the processor's ability to mask cache-miss latency. Finally, it infers secret-dependent memory access patterns through precise timing measurements. We validate RISC-V's vulnerability by recovering secret keys from AES T-table implementations. Using the gem5 simulator [4], we provide the first detailed analysis of microarchitectural behavior during the attack, including cache contention, pipeline stalls, and latency variations. These insights establish foundational guidance for developing RISC-V-specific countermeasures against such attacks.
  • Digital Twin and Digital Thread for System Security and Performance applied to an Electrical Vehicle Charging Use Case
    • Heermann Hagen
    • Koch Johannes
    • Grimm Christoph
    • Genius Daniela
    • Apvrille Ludovic
    • Mifdaoui Ahlem
    • Schneider Klaus
    , 2025, pp.1-8. System security requires a solid foundation in both development and operation. During development, performance trade-offs result in security infrastructures that are more or less effective, but usually imperfect. Hence, during operation, runtime monitoring and anomaly detection continuously check for security issues. In this paper, we show how development and operation can be linked. We demonstrate how information and data from development and operation can be aggregated in a digital twin and/or digital thread which is used as the basis for runtime monitoring and anomaly detection. In particular, we address the trade-off between system security and performance in a concrete smart grid system. (10.1109/FDL68117.2025.11165407)
    DOI : 10.1109/FDL68117.2025.11165407
  • Solutions de surveillance avancée des réseaux optiques
    • Tomczyk Louis
, 2025. This thesis explores signal processing techniques applied to optical core telecommunication networks, focusing on digital processing performed after detection. Facing the saturation of global data traffic growth and the increasing requirements for fine-grained monitoring of the physical layer, two complementary research directions are studied. The first concerns the detection and localization of power losses in point-to-point optical fibre links. By exploiting the non-commutativity between chromatic dispersion and the Kerr nonlinearity, a longitudinal power profile estimator (LPPE) is studied. This work includes a theoretical analysis of the interaction between chromatic dispersion (CD) and self-phase modulation (SPM), which lies at the heart of LPPE algorithms. The second direction concerns the equalization of signals modulated with advanced formats, notably probabilistic constellation shaping (PCS). Inspired by generative models, a cost function derived from variational autoencoders (VAE) is integrated into a single adaptive filter architecture. This approach improves polarization tracking in dynamic channels and enables partial correction of phase errors without resorting to pilot symbols.
  • PESTO: Real‑Time Pitch Estimation with Self‑Supervised Transposition‑Equivariant Objective
    • Riou Alain
    • Torres Bernardo
    • Hayes Ben
    • Lattner Stefan
    • Hadjeres Gaëtan
    • Richard Gaël
    • Peeters Geoffroy
    Transactions of the International Society for Music Information Retrieval (TISMIR), Ubiquity Press, 2025, 8 (1), pp.334-352. In this paper, we introduce PESTO, a self-supervised learning approach for single-pitch estimation using a Siamese architecture. Our model processes individual frames of a Variable-Q Transform (VQT) and predicts pitch distributions. The neural network is designed to be equivariant to translations, notably thanks to a Toeplitz fully-connected layer. In addition, we construct pitch-shifted pairs by translating and cropping the VQT frames and train our model with a novel class-based transposition-equivariant objective, eliminating the need for annotated data. Thanks to this architecture and training objective, our model achieves remarkable performance while being very lightweight (130 k parameters). Evaluations on music and speech datasets (MIR-1K, MDB-stem-synth, and PTDB) demonstrate that PESTO not only outperforms self-supervised baselines but also competes with supervised methods, exhibiting superior cross-dataset generalization. Finally, we enhance PESTO's practical utility by developing a streamable VQT implementation using cached convolutions. Combined with our model's low latency (less than 10 ms) and minimal parameter count, this makes PESTO particularly suitable for real-time applications. (10.5334/tismir.251)
    DOI : 10.5334/tismir.251
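As an illustration of the pair construction described in this abstract, the toy sketch below builds a transposition pair from a single VQT frame by cropping it at two offsets; the function name and parameters (`pitch_shifted_pair`, `max_shift`, `crop_size`) are hypothetical and do not come from the PESTO codebase.

```python
import numpy as np

def pitch_shifted_pair(vqt_frame: np.ndarray, max_shift: int, crop_size: int, rng=None):
    """On a log-frequency axis such as the VQT, a pitch shift of k bins is a
    translation by k bins, so two crops of the same frame taken k bins apart
    form a pair whose relative transposition is known by construction."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(-max_shift, max_shift + 1))           # relative shift in bins
    start = int(rng.integers(max_shift, len(vqt_frame) - crop_size - max_shift))
    anchor = vqt_frame[start : start + crop_size]              # reference crop
    shifted = vqt_frame[start + k : start + k + crop_size]     # transposed crop
    return anchor, shifted, k                                   # k supervises equivariance

# e.g. anchor, shifted, k = pitch_shifted_pair(frame, max_shift=12, crop_size=128)
```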
  • A quantitative approach to the GDPR’s anonymisation and “appropriate technical and organisational measures” tests
    • Holzenberger Nils
    • Maxwell Winston
Computer Law & Security Review, Elsevier, 2025, 59, pp.106173-1:106173-13. This article examines two tests from the European General Data Protection Regulation (GDPR): (1) the test for anonymisation (the "anonymisation test"), and (2) the test for applying "appropriate technical and organisational measures" to protect personal data (the "ATOM test"). Both tests depend on vague legal standards and have given rise to legal disputes and differing interpretations among data protection authorities and courts, including in the context of machine learning. Under the anonymisation test, data are sufficiently anonymised when the risk of identification is "insignificant" taking into account "all means reasonably likely to be used" by an attacker. Under the ATOM test, measures to protect personal data must be "appropriate" with regard to the risks of data loss. Here, we use methods from law and economics to transform these two qualitative tests into quantitative approaches that can be visualized on a graph. For the anonymisation test, we chart different attack efforts and identification probabilities, and propose this as a methodology to help stakeholders discuss what attack efforts are "reasonably likely" to be deployed and their likelihood of success. For the ATOM test, we use the Learned Hand formula from law and economics to chart the incremental costs and benefits of privacy protection measures to identify the point where those measures maximize social welfare. The Hand formula permits the negative effects of privacy protection measures, such as the loss of data utility and negative impacts on model fairness, to be taken into account when defining what level of protection is "appropriate". We apply our proposed framework to several scenarios, applying the anonymisation test to a Large Language Model, and the ATOM test to a database protected with differential privacy. (10.1016/j.clsr.2025.106173)
    DOI : 10.1016/j.clsr.2025.106173
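To make the economic reasoning concrete, here is a hedged sketch of how the Learned Hand criterion can be written for incremental protection measures; the notation (B, ΔP, L, C(x)) is illustrative and not the article's exact formulation.

```latex
% Hand-style criterion: an additional protection measure of incremental cost B
% is worth adopting as long as
\[
  B \;<\; \Delta P \cdot L ,
\]
% where \Delta P is the reduction in breach probability it brings and L the loss
% if the breach occurs. Summing over measures, an "appropriate" protection level
% x^{*} is one that maximises net social welfare:
\[
  x^{*} \;=\; \arg\max_{x}\; \bigl[\, -\,P(x)\,L \;-\; C(x) \,\bigr],
\]
% with C(x) covering implementation cost, lost data utility and fairness impacts.
```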
  • Age of Information based cache updating with popularity contents: Whittle's index based approach
    • Ciblat Philippe
    • Caire Giuseppe
    • Yates Roy D
, 2025. We focus on the scheduling algorithm for updating files from a cloud server to a local server with a cache. We consider that only K out of N files can be updated at each timeslot. Each file is time-sensitive and the content relevance is thus measured through the Age of Information. In addition, each file has its own popularity, which is time-varying according to a Markovian model. In this paper, we offer two contributions: first, we exhibit Whittle's index for this scheduling problem when the popularity is known and fixed over time. Second, we propose a heuristic based on the previous Whittle index for the time-varying popularity case, assuming that only the past popularity is available.
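A minimal sketch of the setting follows: N cached files, an Age-of-Information counter per file, and a budget of K refreshes per timeslot. The greedy popularity-weighted-age rule below is only a stand-in used for illustration; it is not the Whittle index derived in the paper.

```python
import numpy as np

def schedule_updates(age, popularity, K):
    """One timeslot of a toy index policy: refresh the K files whose
    popularity-weighted age is largest, then let all other ages grow."""
    score = popularity * age                 # crude priority per file
    chosen = np.argsort(score)[-K:]          # K files with the largest score
    age = age + 1                            # every file ages by one slot...
    age[chosen] = 1                          # ...except the freshly updated ones
    return age, chosen

# e.g., with fixed popularities pop over a horizon T:
# age = np.ones(N)
# for _ in range(T): age, chosen = schedule_updates(age, pop, K)
```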
  • SHAMaNS: Sound Localization with Hybrid Alpha-Stable Spatial Measure and Neural Steerer
    • Di Carlo Diego
    • Fontaine Mathieu
    • Nugraha Aditya Arie
    • Bando Yoshiaki
    • Yoshii Kazuyoshi
    , 2025. This paper describes a sound source localization (SSL) technique that combines an α-stable model for the observed signal with a neural network-based approach for modeling steering vectors. Specifically, a physics-informed neural network, referred to as Neural Steerer, is used to interpolate measured steering vectors (SVs) on a fixed microphone array. This allows for a more robust estimation of the so-called α-stable spatial measure, which represents the most plausible direction of arrival (DOA) of a target signal. As an α-stable model for the non-Gaussian case (α ∈ (0, 2)) theoretically defines a unique spatial measure, we choose to leverage it to account for residual reconstruction error of the Neural Steerer in the downstream tasks. The objective scores indicate that our proposed technique outperforms state-of-the-art methods in the case of multiple sound sources.
  • Soft Disentanglement in Frequency Bands for Neural Audio Codecs
    • Giniès Benoît
    • Bie Xiaoyu
    • Fercoq Olivier
    • Richard Gaël
    , 2025. In neural-based audio feature extraction, ensuring that representations capture disentangled information is crucial for model interpretability. However, existing disentanglement methods often rely on assumptions that are highly dependent on data characteristics or specific tasks. In this work, we introduce a generalizable approach for learning disentangled features within a neural architecture. Our method applies spectral decomposition to time-domain signals, followed by a multibranch audio codec that operates on the decomposed components. Empirical evaluations demonstrate that our approach achieves better reconstruction and perceptual performance compared to a state-of-the-art baseline while also offering potential advantages for inpainting tasks.
  • When Can Sequence Modelling Approaches Recover the Target Policy In Offline Reinforcement Learning? a Statistical Analysis
    • Ghani Abdelghanem
    • Ciblat Philippe
    • Ghogho Mounir
, 2025. We present a theoretical analysis of sample complexity for learning the target policy in offline reinforcement learning (RL) using sequence modeling approaches. Our main theorem establishes bounds on the minimum required number of high-return samples. We identify distinct small-data and large-data regimes, characterized by a critical transition point, and reveal a potential trade-off between context coverage breadth and sampling depth. These findings offer insights into efficient data collection strategies and algorithm design for offline RL.
  • Bayesian Experimental Design with Mutual Information and Learned Errors for Human-Computer Interaction
    • Miquel Hugo
    • Gori Julien
    • Rioul Olivier
, 2025. This work provides a Bayesian framework for handling user errors in interactive systems, with applications in human-computer interaction (HCI) and user modeling. The Bayesian Information Gain (BIG) algorithm [1, 2, 3, 4] is an iterative variant of Bayesian experimental design with mutual information as a cost function, used in HCI. It is a principled approach that maximizes the expected information gained from each interaction. More precisely, let Θ be the potential target in the user's mind with prior distribution p(θ), X be the system feedback, and Y be the corresponding user's input. In each interaction loop, BIG selects the feedback x that maximizes the mutual information I(Θ; Y | X = x), assuming a known user model (likelihood) p(y|x,θ), and then updates the posterior distribution p(θ|x,y). This work extends the BIG algorithm to learn from user errors while preserving its mathematical foundations. We incorporate an error-rate parameter ϵ into the likelihood function p(y|x,θ,ϵ) and develop an adaptive algorithm that jointly infers both θ and ϵ by updating the posterior p(θ,ϵ|x,y) at each interaction step. We also discuss three simplifying hypotheses for the prior expression p(θ,ϵ) and three user models: (i) zero error; (ii) fixed error rate; (iii) arbitrary random error rate. We prove mathematical continuity between these three models, showing that our adaptive approach naturally extends BIG. We also investigate the effect of model mismatch on the overall performance and the degradation properties with respect to the standard BIG algorithm. While standard BIG converges quickly with perfect responses, it degrades with even small error rates. The fixed-error model depends critically on correctly estimating the error parameter, while our adaptive model achieves the highest accuracy under varying error conditions, at the expense of additional interactions.
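The interaction loop summarised above can be sketched for discrete targets, feedbacks, and inputs; the code below is an illustrative re-derivation from the abstract (the error-rate extension with joint posterior p(θ,ϵ|x,y) is not shown), and the function names are hypothetical.

```python
import numpy as np

def big_select(prior, likelihood, candidates):
    """Pick the feedback x maximising I(Theta; Y | X = x).

    prior      : p(theta), shape (T,)
    likelihood : p(y | x, theta), shape (X, T, Y)
    candidates : indices of feedbacks the system may display
    """
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    gains = []
    for x in candidates:
        p_y = likelihood[x].T @ prior                                 # marginal p(y | x)
        h_cond = sum(prior[t] * entropy(likelihood[x, t]) for t in range(len(prior)))
        gains.append(entropy(p_y) - h_cond)                           # mutual information
    return candidates[int(np.argmax(gains))]

def big_update(prior, likelihood, x, y):
    """Bayesian posterior p(theta | x, y) once the user's input y is observed."""
    post = prior * likelihood[x, :, y]
    return post / post.sum()
```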
  • Causal decompositions of one-dimensional quantum cellular automata
    • Vanrietvelde Augustin
    • Mestoudjian Octave
    • Arrighi Pablo
    , 2025. Understanding quantum theory's causal structure stands out as a major matter, since it radically departs from classical notions of causality. We present advances in the research program of causal decompositions, which investigates the existence of an equivalence between the causal and the compositional structures of unitary channels. Our results concern one-dimensional Quantum Cellular Automata (1D QCAs), i.e. unitary channels over a line of N quantum systems (with or without periodic boundary conditions) that feature a causality radius r: a given input cannot causally influence outputs at a distance more than r. We prove that, for N ≥ 4r +1, 1D QCAs all admit causal decompositions: a unitary channel is a 1D QCA if and only if it can be decomposed into a unitary routed circuit of nearest-neighbour interactions, in which its causal structure is compositionally obvious. This provides the first constructive form of 1D QCAs with causality radius one or more, fully elucidating their structure. In addition, we show that this decomposition can be taken to be translation-invariant for the case of translation-invariant QCAs. Our proof of these results makes use of innovative algebraic techniques, leveraging a new framework for capturing partitions into non-factor sub-C* algebras.
  • Audio processor parameters: estimating distributions instead of deterministic values
    • Peladeau Côme
    • Fourer Dominique
    • Peeters Geoffroy
, 2025, pp.275-282. Audio effects and sound synthesizers are widely used processors in popular music. Their parameters control the quality of the output sound. Multiple combinations of parameters can lead to the same sound. While recent approaches have been proposed to estimate these parameters given only the output sound, those are deterministic, i.e. they only estimate a single solution among the many possible parameter configurations. In this work, we propose to model the parameters as probability distributions instead of deterministic values. To learn the distributions, we optimize two objectives: (1) we minimize the reconstruction error between the ground truth output sound and the one generated using the estimated parameters, as is usually done, but also (2) we maximize the parameter diversity, using entropy. We evaluate our approach through two numerical audio experiments to show its effectiveness. These results show how our approach effectively outputs multiple combinations of parameters that match one sound.
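A minimal sketch of the two-term objective, assuming a categorical distribution over discretised parameter values and a differentiable renderer `render` (both assumptions for illustration, not the authors' implementation):

```python
import torch

def parameter_distribution_loss(logits, render, target, beta=0.1):
    """(1) reconstruction error of the sound rendered from the expected parameter,
    minus (2) an entropy bonus that rewards keeping several plausible values."""
    dist = torch.distributions.Categorical(logits=logits)    # p(parameter value)
    values = torch.linspace(0.0, 1.0, logits.shape[-1])      # discretised parameter grid
    soft_param = (dist.probs * values).sum(dim=-1)           # expected parameter value
    reconstruction = torch.nn.functional.mse_loss(render(soft_param), target)
    entropy_bonus = dist.entropy().mean()
    return reconstruction - beta * entropy_bonus
```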
  • Partitions in quantum theory
    • Vanrietvelde Augustin
    • Mestoudjian Octave
    • Arrighi Pablo
, 2025. Decompositional theories describe the ways in which a global physical system can be split into subsystems, facilitating the study of how different possible partitions of the same system interplay, e.g. in terms of inclusions or signalling. In quantum theory, subsystems are usually framed as sub-C* algebras of the algebra of operators on the global system. However, most decompositional approaches have so far restricted their scope to the case of systems corresponding to factor algebras. We argue that this is a mistake: one should cater for the possibility of non-factor subsystems, arising for instance from symmetry considerations. Building on simple examples, we motivate and present a definition of partitions into an arbitrary number of parts, each of which is a possibly non-factor sub-C* algebra. We discuss its physical interpretation and study its properties, in particular with regard to the structure of algebras' centres. We prove that partitions, defined at the C*-algebraic level, can be represented in terms of a splitting of Hilbert spaces, using the framework of routed quantum circuits. For some partitions, however, such a representation necessarily retains a residual pseudo-nonlocality. We provide an example of this behaviour, given by the partition of a fermionic system into local modes.
  • QINCODEC: Neural Audio Compression with Implicit Neural Codebooks
    • Lahrichi Zineb
    • Hadjeres Gaëtan
    • Richard Gael
    • Peeters Geoffroy
, 2026. Neural audio codecs, neural networks which compress a waveform into discrete tokens, play a crucial role in the recent development of audio generative models. State-of-the-art codecs rely on the end-to-end training of an autoencoder and a quantization bottleneck. However, this approach restricts the choice of quantization methods, as it requires defining how gradients propagate through the quantizer and how to update the quantization parameters online. In this work, we revisit the common practice of joint training and propose to quantize the latent representations of a pre-trained autoencoder offline, followed by an optional finetuning of the decoder to mitigate degradation from quantization. This strategy allows us to consider any off-the-shelf quantizer, especially state-of-the-art trainable quantizers with implicit neural codebooks such as QINCO2. We demonstrate that with the latter, our proposed codec, termed QINCODEC, is competitive with baseline codecs while being notably simpler to train. Finally, our approach provides a general framework that amortizes the cost of autoencoder pretraining and enables more flexible codec design.
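The offline recipe described above can be sketched with a frozen encoder and any off-the-shelf quantiser; plain k-means below is only a stand-in for a learned quantiser such as QINCO2, and the helper names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_offline_codebook(latents: np.ndarray, codebook_size: int = 1024) -> KMeans:
    """latents: (n_frames, latent_dim) array produced by the frozen, pre-trained encoder."""
    return KMeans(n_clusters=codebook_size, n_init=10).fit(latents)

def encode(quantizer: KMeans, latents: np.ndarray) -> np.ndarray:
    return quantizer.predict(latents)              # discrete tokens

def decode_tokens(quantizer: KMeans, tokens: np.ndarray) -> np.ndarray:
    return quantizer.cluster_centers_[tokens]      # quantised latents, passed to the
                                                   # (optionally finetuned) decoder
```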
  • Semi-supervised graph learning for underwater source localization using ship-of-opportunity spectrograms
    • Castro-Correa Jhon
    • Badiey Mohsen
    • Giraldo Jhony
    • Malliaros Fragkiskos
Journal of the Acoustical Society of America, Acoustical Society of America, 2025, 158 (3), pp.1836-1848. Conventional techniques for underwater source localization have traditionally relied on optimization methods, matched-field processing, beamforming, and, more recently, deep learning. However, these methods often fall short of fully exploiting the data correlation crucial for accurate source localization. This correlation can be effectively captured using graphs, which consider the spatial relationship among data points through edges. This work introduces a novel graph learning module for source localization using spectrograms from ships-of-opportunity, which represent mid-frequency acoustic broadband signals from ship-radiated noise ranging from 360 to 1100 Hz, collected during the 2017 Seabed Characterization Experiment (SBCEX 2017). The proposed approach follows a two-step process: first, a pre-trained convolutional neural network (CNN) module is used for feature extraction via self-supervised learning, and then a graph neural network model is trained using semi-supervised learning for source localization. The graph is constructed using a k-nearest neighbor algorithm, incorporating features extracted by the CNN from the spectrograms. By employing this two-stage training strategy, our framework addresses the challenge of limited labeled data availability while achieving performance comparable to conventional supervised learning models. The effectiveness of our approach is demonstrated through model evaluation on both synthetic and measured data, showcasing the architecture's ability to generalize well to unseen scenarios. (10.1121/10.0039042)
    DOI : 10.1121/10.0039042
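The graph-construction step of the pipeline can be sketched as follows, assuming the CNN embeddings have already been extracted; `build_knn_graph` and its parameters are illustrative names, and the downstream GNN is not shown.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_knn_graph(features: np.ndarray, k: int = 10):
    """features: (n_spectrograms, feature_dim) embeddings from the pre-trained CNN.
    Returns a symmetric sparse adjacency matrix for a semi-supervised GNN."""
    adjacency = kneighbors_graph(features, n_neighbors=k, mode="distance", include_self=False)
    return 0.5 * (adjacency + adjacency.T)          # symmetrise: undirected graph
```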
  • Bidding efficiently in Simultaneous Ascending Auctions with incomplete information using Monte Carlo Tree Search and determinization
    • Pacaud Alexandre
    • Bechler Aurelien
    • Coupechoux Marceau
IEEE Transactions on Games, Institute of Electrical and Electronics Engineers, 2025, 17 (3), pp.813-826. In this paper, we tackle the problem of designing an efficient bidding strategy for Simultaneous Ascending Auctions (SAA). SAA is a well-known mechanism for allocating spectrum to mobile network operators and has been used, for example, to allocate 5G licenses in many countries. Although the rules are relatively simple, there is no known optimal bidding strategy for SAA. In a previous work, we proposed a Simultaneous Move Monte-Carlo Tree Search (SM-MCTS) based algorithm named SMSα, which we extend here to an incomplete information framework. We consider and compare three determinization approaches of SMSα, and show how they are able to tackle four key strategic issues of SAA, namely the exposure problem, the own-price effect, the budget constraints and the eligibility management. Extensive numerical experiments on instances of realistic size, including an uncertain framework, show that our extensions of SMSα outperform state-of-the-art algorithms by achieving higher expected utility while taking fewer risks. (10.1109/TG.2025.3552025)
    DOI : 10.1109/TG.2025.3552025
  • On the Effect of Feature Reduction on Energy Consumption: An Exploratory Study
    • Tërnava Xhevahire
    • Lefeuvre Romain
    • Perez Quentin
    • Khelladi Djamel Eddine
    • Acher Mathieu
    • Combemale Benoît
, 2025, pp.1-11. Energy consumption is a growing concern for sustainable software. Although increasingly studied, it remains largely unexplored in configurable systems growing in complexity with features. Feature reduction can eliminate software bloat, but to our knowledge, its impact on energy use has not been investigated. To fill this gap, we investigated how both on-demand and built-in feature reduction (defined later) affect the energy consumption of configurable systems. We conducted a first exploratory study using 28 programs from three systems with built-in feature reduction, namely ToyBox, BusyBox, and GNU, as well as 6 GNU programs debloated on-demand using the Chisel, Debop, and Cov tools. In our results, built-in feature reduction led to statistically significant energy decreases in 7% of the cases, while on-demand reduction, despite achieving energy decreases in 67% of cases, showed no statistical significance. However, when energy consumption increased, it was often more substantial than the reductions observed (occurring in 25% of built-in cases and 11% of on-demand cases), showing the complex and sometimes counterintuitive interplay between feature reduction and energy. Additionally, the observed strong correlation between energy consumption and execution time motivates a shift from traditional debloating goals, centered on binary size/attack surface, to energy-aware strategies that prioritize performance concerns. Finally, we provide an in-depth analysis and discuss perspectives. (10.1145/3744915.3748463)
    DOI : 10.1145/3744915.3748463
  • Quantum Reupload Units: A Scalable and Expressive Approach for Time Series Learning
    • Cassé Léa
    • Ponnambalam Sabarikirishwaran
    • Pfahringer Bernhard
    • Bifet Albert
, 2025, pp.1815-1825. We propose a single-qubit Quantum Machine Learning (QML) model for time series forecasting, built around the concept of a Quantum Reupload Unit (QRU), a hardware-efficient quantum circuit architecture with shallow depth. The proposed model demonstrates enhanced predictive power compared to variational methods such as variational quantum circuits (VQC), parameterized quantum circuits (PQC), and quantum residual blocks (QRB). The proposed QRU outperforms classical learning models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks with the same number of parameters. The novelty of this approach is its ability to model temporal patterns without relying on an extensive memory state, which reduces resource demands while preserving forecast accuracy. The expressivity of the model is evaluated through Fourier spectral decomposition. We analyze the trainability of our model using the absorption witness metric. We benchmarked the proposed model on the Mackey-Glass chaotic time series and the real-world river level dataset from TAIAO. The proposed model consistently exhibits enhanced expressivity on both datasets. These results highlight the significance of QRUs as promising candidates for learning models that can be conveniently deployed on noisy intermediate-scale quantum (NISQ) hardware. (10.1109/QCE65121.2025.00199)
    DOI : 10.1109/QCE65121.2025.00199
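A single-qubit data re-uploading circuit of the kind named in this abstract can be simulated in a few lines; the layer layout below (an encoding RY scaled by a trainable weight, followed by trainable RY and RZ rotations) is a generic sketch, not the paper's exact QRU architecture.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def qru_forward(x, params):
    """params has shape (n_layers, 3): per layer, a data-scaling weight w and
    two trainable angles (a, b). The prediction is the final <Z> expectation."""
    state = np.array([1.0, 0.0], dtype=complex)            # start in |0>
    for w, a, b in params:
        state = ry(w * x) @ state                           # re-upload the input x
        state = rz(a) @ (ry(b) @ state)                     # trainable rotation
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# e.g. y_hat = qru_forward(x_t, params=np.random.uniform(-np.pi, np.pi, (4, 3)))
```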
  • Blind Polarisation Demultiplexing and Carrier Recovery Using FIR-based Variational AutoEncoder Equaliser for Probabilistic Constellation Shaping in Optical Fibre Communications
    • Tomczyk Louis
    • Awwad Élie
    • Ware Cédric
Journal of Lightwave Technology, Institute of Electrical and Electronics Engineers (IEEE)/Optical Society of America (OSA), 2025, pp.1-15. We investigate through simulations the potential of a Finite Impulse Response (FIR)-based Variational AutoEncoder-inspired (VAE-FIR) equaliser for polarisation demultiplexing, Carrier Phase Recovery (CPR), and Carrier Frequency Offset (CFO) estimation in the context of Probabilistic Constellation Shaped (PCS) transmissions in coherent optical fibre communication systems. Additionally, we compare the performance of this novel estimator with the conventional Constant Modulus Algorithm (CMA) and Pilot-Aided Carrier Phase Recovery (PA-CPR). Our study shows that the VAE-FIR clearly outperforms the conventional approach in terms of polarisation demultiplexing, even with PCS, where the CMA fails. We also show the ability of the VAE-FIR to track the phase evolution. Its ability to compensate for the carrier's phase effects is, however, limited to linewidths of a few dozen kHz and to about a hundred kHz for the CFO, showing that the VAE-FIR may be used to compensate for small residual phase noise or residual frequency mismatch. (10.1109/JLT.2025.3603685)
    DOI : 10.1109/JLT.2025.3603685
  • Graph Neural Networks for Moving Objects Detection in Videos
    • Prummel Wieke
    • Giraldo Jhony
    • Zakharova Anastasia
    • Bouwmans Thierry
, 2025, 09, pp.121-143. Deep learning has been widely applied for the detection of moving objects from static cameras. Recently, many methods using graph neural networks for background subtraction have been reported with very promising performance. This chapter provides a survey of different graph neural networks for moving object detection. First, a comparison of the transductive and inductive architectures of each method is provided, followed by a discussion of the specific application requirements, such as spatio-temporal and real-time constraints. After analyzing the strategies of each method and showing their limitations, a comparative evaluation on the large-scale CDnet2014 dataset is provided. Finally, we conclude with some potential future research directions. (10.1142/9789819807154_0006)
    DOI : 10.1142/9789819807154_0006
  • Super-résolution non supervisée d'images hyperspectrales de télédétection utilisant un entraînement entièrement synthétique
    • Xu Xinxin
    • Gousseau Yann
    • Kervazo Christophe
    • Ladjal Saïd
    , 2025. Hyperspectral single-image super-resolution (SISR) aims to improve the spatial resolution of images while preserving their spectral richness. Most current methods rely on supervised learning that requires high-resolution reference data, which are often unavailable in practice. To overcome this limitation, we propose an unsupervised learning approach based on the generation of synthetic data. The hyperspectral image is first decomposed into materials and abundances using a hyperspectral unmixing algorithm. A neural network is then trained to super-resolve the abundance maps from synthetic data generated through a dead leaves model, which imitates the statistical properties of real abundances. The high-resolution hyperspectral image is finally reconstructed by recombining the super-resolved abundance maps with the materials. Experimental results validate the effectiveness of this approach and highlight the usefulness of synthetic data for training.
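The synthetic training data mentioned in this abstract rely on a dead leaves model; the toy generator below gives the flavour of such textures (sequentially drawn, mutually occluding disks with heavy-tailed radii) but is not the authors' exact model.

```python
import numpy as np

def dead_leaves(size=128, n_disks=300, rmin=3, rmax=40, rng=None):
    """Grey-level dead-leaves-style texture: each new random disk occludes
    the previous ones, producing abundance-map-like piecewise-constant regions."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_disks):
        r = rmin * (rmax / rmin) ** rng.random()      # heavy-tailed (log-uniform) radius
        cx, cy = rng.uniform(0, size, 2)
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = rng.random()
    return img
```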
  • La recherche en TdSI aux temps du transhumanisme
    • Maitre Henri
, 2025, pp.1-4. Through Tech start-ups and some of the GAFAM, the transhumanist agenda has burst into signal and image processing (TdSI) and AI laboratories, brandishing its favourite themes: the augmented human, brain-machine communication, mental assistance... For some, this bold confidence in the power of Science to save humanity is the best defender of Reason in a world that doubts its scientists. We show here the double face of transhumanist projects: on the one hand, a very noble commitment to solving some of society's great challenges; on the other, a deadly ideology for humanity that would drag it violently into a techno-solutionist quest in the service of a minority.