
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The thesis publications of LTCI doctoral graduates are available on the HAL platform:

 

Browse the publications listed in the HAL open archive by year:

2025

  • PolSAR2PolSAR: A Semi-Supervised Despeckling Algorithm for Polarimetric SAR Images
    • Mendes Cristiano Ulondu
    • Dalsasso Emanuele
    • Zhang Yi
    • Denis Loïc
    • Tupin Florence
    ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier, 2025, 220 (0924-2716), pp.783-798. Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a valuable tool for Earth observation. This imaging technique finds wide application in various fields, including agriculture, forestry, geology, and disaster monitoring. However, due to the inherent presence of speckle noise, filtering is often necessary to improve the interpretability and reliability of PolSAR data. The effectiveness of a speckle filter is measured by its ability to attenuate fluctuations without introducing artifacts or degrading spatial and polarimetric information. Recent advancements in this domain leverage the power of deep learning. These approaches adopt a supervised learning strategy, which requires a large amount of speckle-free images that are costly to produce. In contrast, this paper presents PolSAR2PolSAR, a semi-supervised learning strategy that only requires, from the sensor under consideration, pairs of noisy images of the same location and acquired in the same configuration (same incidence angle and mode as during the revisit of the satellite on its orbit). Our approach applies to a wide range of sensors. Experiments on Radarsat-2 and RCM data demonstrate the capacity of the proposed method to effectively reduce speckle noise and retrieve fine details. The code of the trained models is made freely available at https://gitlab.telecom-paris.fr/ring/polsar2polsar. The repository additionally contains a model fine-tuned on SLC PolSAR images from NASA's UAVSAR sensor.
  • LayerFold: A Python library to reduce the depth of neural networks
    • Pilo Giommaria
    • Hezbri Nour
    • Pereira E Ferreira André
    • Quétu Victor
    • Tartaglione Enzo
    SoftwareX, Elsevier, 2025, 29, pp.102030. Large-scale models are the backbone of Computer Vision and Natural Language Processing, and their generalizability allows for transfer learning and deployment in different scenarios. However, their large size means that reducing their computational and memory demands remains a challenge. Recent research proposes to achieve “layer collapse”, a condition where multiple layers can be combined due to the collapse of non-linearities to linear operators. While this is an important discovery, most studies remain theoretical, often replacing non-linearities with simple identity functions and not providing a real implementation of the more compact architecture. Our contribution is LayerFold, a library that studies and implements the merging of collapsed layers. We address typical cases, from fully connected to convolutional layers, discussing constraints and prospective challenges. Our tests on edge devices reveal that merely reducing network depth does not always result in faster computation, even when GPU-equipped. This work raises important warnings and opens the door to further advances in efficient model deployment. (10.1016/j.softx.2024.102030)
    DOI : 10.1016/j.softx.2024.102030
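    The "layer collapse" idea above can be illustrated with a toy sketch in plain Python (this is not the LayerFold library's API; the function and variable names are hypothetical): when the non-linearity between two fully connected layers collapses to the identity, the pair y = W2·(W1·x + b1) + b2 folds into a single linear layer y = (W2·W1)·x + (W2·b1 + b2).

    ```python
    # Toy illustration of folding two collapsed linear layers into one.
    # All names are hypothetical; this is not the LayerFold API.

    def matmul(A, B):
        """Multiply two matrices given as lists of rows."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def matvec(A, x):
        """Matrix-vector product."""
        return [sum(a * v for a, v in zip(row, x)) for row in A]

    def vecadd(u, v):
        """Elementwise vector addition."""
        return [a + b for a, b in zip(u, v)]

    def fold_linear(W1, b1, W2, b2):
        """Fold two consecutive linear layers into a single (W, b) pair."""
        W = matmul(W2, W1)
        b = vecadd(matvec(W2, b1), b2)
        return W, b

    # The folded layer computes exactly the same function as the deep pair:
    W1, b1 = [[1.0, 2.0], [0.0, 1.0]], [0.5, -0.5]
    W2, b2 = [[2.0, 0.0]], [1.0]
    W, b = fold_linear(W1, b1, W2, b2)
    x = [3.0, 4.0]
    deep = vecadd(matvec(W2, vecadd(matvec(W1, x), b1)), b2)
    shallow = vecadd(matvec(W, x), b)
    assert deep == shallow
    ```

    As the paper notes, shrinking depth this way does not automatically yield faster inference on every device; the fold only removes a layer boundary, it does not change the total multiply-accumulate count of the merged matrix.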
  • Learning on graphs: from algorithms to socio-technical analyses on AI
    • Delarue Simon
    , 2025. This thesis addresses the dual challenge of advancing Artificial Intelligence (AI) methods while critically assessing their societal impact. With AI technologies now embedded in high-stake decision sectors like healthcare and justice, their growing influence demands thorough examination, reflected in emerging international regulations such as the AI Act in Europe. To address these challenges, this work leverages attributed-graph based methods and advocates for a shift from performance-focused AI models to approaches that also prioritise scalability, simplicity, and explainability. The first part of this thesis develops a toolkit of attributed graph-based methods and algorithms aimed at enhancing AI learning techniques. It includes a software contribution that leverages the sparsity of complex networks to reduce computational costs. Additionally, it introduces non-neural graph models for node classification and link prediction tasks, showing how these methods can outperform advanced neural networks while being more computationally efficient. Lastly, it presents a novel pattern mining algorithm that generates concise, human-readable summaries of large networks. Together, these contributions highlight the potential of these approaches to provide efficient and interpretable solutions to AI's technical challenges. The second part adopts an interdisciplinary approach to study AI as a socio-technical system. By framing AI as an ecosystem influenced by various stakeholders and societal concerns, it uses graph-based models to analyse interactions and tensions related to explainability, ethics, and environmental impact. A user study explores the influence of graph-based explanations on user perceptions of AI recommendations, while the building and analysis of a corpus of AI ethics charters and manifestos quantifies the roles of key actors in AI governance.
A final study reveals that environmental concerns in AI are primarily framed technically, highlighting the need for a broader approach to the ecological implications of digitalisation.
  • Quasi-optimal sampling from Gibbs states via non-commutative optimal transport metrics
    • Capel Ángela
    • Gondolf Paul
    • Kochanowski Jan
    • Rouzé Cambyse
    , 2024. We study the problem of sampling from and preparing quantum Gibbs states of local commuting Hamiltonians on hypercubic lattices of arbitrary dimension. We prove that any such Gibbs state which satisfies a clustering condition that we coin decay of matrix-valued quantum conditional mutual information (MCMI) can be quasi-optimally prepared on a quantum computer. We do this by controlling the mixing time of the corresponding Davies evolution in a normalized quantum Wasserstein distance of order one. To the best of our knowledge, this is the first time that such a non-commutative transport metric has been used in the study of quantum dynamics, and the first time quasi-rapid mixing is implied by solely an explicit clustering condition. Our result is based on a weak approximate tensorization and a weak modified logarithmic Sobolev inequality for such systems, as well as a new general weak transport cost inequality. If we furthermore assume a constraint on the local gap of the thermalizing dynamics, we obtain rapid mixing in trace distance for interactions beyond the range of two, thereby extending the state-of-the-art results that only cover the nearest neighbor case. We conclude by showing that systems that admit effective local Hamiltonians, like quantum CSS codes at high temperature, satisfy this MCMI decay and can thus be efficiently prepared and sampled from. (10.48550/arXiv.2412.01732)
    DOI : 10.48550/arXiv.2412.01732
  • Strong Converse for Classical-Quantum Degraded Broadcast Channels
    • Cheng Hao-Chung
    • Datta Nilanjana
    • Rouzé Cambyse
    , 2019. We consider the transmission of classical information through a degraded broadcast channel, whose outputs are two quantum systems, with the state of one being a degraded version of the other. Yard et al. proved that the capacity region of such a channel is contained in a region characterized by certain entropic quantities. We prove that this region satisfies the strong converse property, that is, the maximal probability of error incurred in transmitting information at rates lying outside this region converges to one exponentially in the number of uses of the channel. In establishing this result, we prove a second-order Fano-type inequality, which might be of independent interest. A powerful analytical tool which we employ in our proofs is the tensorization property of the quantum reverse hypercontractivity for the quantum depolarizing semigroup. (10.48550/arXiv.1905.00874)
    DOI : 10.48550/arXiv.1905.00874
  • GraphRAG: Leveraging Graph-Based Efficiency to Minimize Hallucinations in LLM-Driven RAG for Finance Data
    • Barry Mariam
    • Caillaut Gaëtan
    • Halftermeyer Pierre
    • Qader Raheel
    • Mouayad Mehdi
    • Cariolaro Dimitri
    • Deit Fabrice Le
    • Gesnouin Joseph
    , 2025. This study explores the integration of graph-based methods into Retrieval-Augmented Generation (RAG) systems to enhance efficiency, reduce hallucinations, and improve explainability, with a particular focus on financial and regulatory document retrieval. We propose two strategies, FactRAG and HybridRAG, which leverage knowledge graphs to improve RAG performance. Experiments conducted using Finance Bench, a benchmark for AI in finance, demonstrate that these approaches achieve a 6% reduction in hallucinations and an 80% decrease in token usage compared to conventional RAG methods. Furthermore, we evaluate HybridRAG by comparing the Digital Operational Resilience Act (DORA) from the European Union with the Federal Financial Institutions Examination Council (FFIEC) guidelines from the United States. The results reveal a significant improvement in computational efficiency, reducing contradiction detection complexity from O(n²) to O(k·n), where n is the number of chunks, and a remarkable 734-fold decrease in token consumption. Graph-based retrieval methods can improve the efficiency and cost-effectiveness of large language model (LLM) applications, though their performance and token usage depend on the dataset, knowledge graph design, and retrieval task.
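    The complexity reduction claimed above can be illustrated with a hypothetical toy sketch (the chunk data, the entity grouping key, and the function names are invented for illustration, not taken from FactRAG or HybridRAG): a knowledge-graph index lets each chunk be compared only against the few chunks linked to the same entity, instead of against every other chunk.

    ```python
    # Toy sketch: all-pairs contradiction checks, O(n^2), versus checks
    # restricted to chunks sharing a knowledge-graph entity, O(k*n).
    # Data and names are illustrative assumptions.
    from itertools import combinations
    from collections import defaultdict

    chunks = [
        ("c1", "DORA", "incident reports due in 4 hours"),
        ("c2", "DORA", "incident reports due in 24 hours"),
        ("c3", "FFIEC", "annual audit required"),
        ("c4", "FFIEC", "audits performed every year"),
    ]

    def naive_pairs(chunks):
        # Compare every pair of chunks: n*(n-1)/2 checks.
        return list(combinations(chunks, 2))

    def graph_pairs(chunks):
        # Group chunks by their linked entity, compare only within groups.
        by_entity = defaultdict(list)
        for c in chunks:
            by_entity[c[1]].append(c)
        pairs = []
        for group in by_entity.values():
            pairs.extend(combinations(group, 2))
        return pairs

    assert len(naive_pairs(chunks)) == 6   # O(n^2) comparisons
    assert len(graph_pairs(chunks)) == 2   # O(k*n), k = max group size
    ```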
  • Rapid thermalization of dissipative many-body dynamics of commuting Hamiltonians
    • Kochanowski Jan
    • Alhambra Alvaro
    • Capel Angela
    • Rouzé Cambyse
    , 2024. Quantum systems typically reach thermal equilibrium rather quickly when coupled to a thermal environment. The usual way of bounding the speed of this process is by estimating the spectral gap of the dissipative generator. However the gap, by itself, does not always yield a reasonable estimate for the thermalization time in many-body systems: without further structure, a uniform lower bound on it only constrains the thermalization time to grow polynomially with system size. Here, instead, we show that for a large class of geometrically-2-local models of Davies generators with commuting Hamiltonians, the thermalization time is much shorter than one would naïvely estimate from the gap: at most logarithmic in the system size. This yields the so-called rapid mixing of dissipative dynamics. The result is particularly relevant for 1D systems, for which we prove rapid thermalization with a system size independent decay rate only from a positive gap in the generator. We also prove that systems in hypercubic lattices of any dimension, and exponential graphs, such as trees, have rapid mixing at high enough temperatures. We do this by introducing a novel notion of clustering which we call "strong local indistinguishability" based on a max-relative entropy, and then proving that it implies a lower bound on the modified logarithmic Sobolev inequality (MLSI) for nearest neighbour commuting models. This has consequences for the rate of thermalization towards Gibbs states, and also for their relevant Wasserstein distances and transportation cost inequalities. Along the way, we show that several measures of decay of correlations on Gibbs states of commuting Hamiltonians are equivalent, a result of independent interest. At the technical level, we also show a direct relation between properties of Davies and Schmidt dynamics, that allows to transfer results of thermalization between both. (10.48550/arXiv.2404.16780)
    DOI : 10.48550/arXiv.2404.16780
  • STanH : Parametric Quantization for Variable Rate Learned Image Compression
    • Presta Alberto
    • Tartaglione Enzo
    • Fiandrotti Attilio
    • Grangetto Marco
    IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers, 2025, 34, pp.639-651. In end-to-end learned image compression, encoder and decoder are jointly trained to minimize an R+λD cost function, where λ controls the trade-off between rate of the quantized latent representation and image quality. Unfortunately, a distinct encoder-decoder pair with millions of parameters must be trained for each λ, hence the need to switch encoders and to store multiple encoders and decoders on the user device for every target rate. This paper proposes to exploit a differentiable quantizer designed around a parametric sum of hyperbolic tangents, called STanH, that relaxes the step-wise quantization function. STanH is implemented as a differentiable activation layer with learnable quantization parameters that can be plugged into a pre-trained fixed rate model and refined to achieve different target bitrates. Experimental results show that our method enables variable rate coding with comparable efficiency to the state-of-the-art, yet with significant savings in terms of ease of deployment, training time, and storage costs. (10.1109/TIP.2025.3527883)
    DOI : 10.1109/TIP.2025.3527883
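    As a rough illustration of the mechanism (the parameter values below are arbitrary, not trained ones, and this is not the authors' implementation), a sum of scaled hyperbolic tangents smoothly approximates a staircase quantizer, and steeper slopes bring it closer to hard quantization while keeping the function differentiable:

    ```python
    # Toy sketch of a STanH-like relaxation: a staircase quantizer
    # approximated by a differentiable sum of scaled tanh steps.
    # Parameters are illustrative assumptions, not trained values.
    import math

    def stanh(x, amplitudes, slopes, centers):
        """Smooth surrogate of a step-wise quantization function."""
        return sum(a * math.tanh(s * (x - c))
                   for a, s, c in zip(amplitudes, slopes, centers))

    # Three steps centered at -1, 0, 1; large slopes approach hard steps.
    a, s, c = [0.5, 0.5, 0.5], [50.0, 50.0, 50.0], [-1.0, 0.0, 1.0]

    # Away from the step boundaries the output sits near a quantized level:
    assert abs(stanh(-2.0, a, s, c) - (-1.5)) < 1e-3
    assert abs(stanh(0.5, a, s, c) - 0.5) < 1e-3
    assert abs(stanh(2.0, a, s, c) - 1.5) < 1e-3
    ```

    Because the surrogate is differentiable everywhere, gradients flow through it during training, which is what lets the quantization parameters be refined for different target bitrates.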
  • Zero-Knowledge Proofs of Quantumness
    • Phan Duong Hieu
    • Wen Weiqiang
    • Yan Xingyu
    • Zheng Jinwei
    IACR Communications in Cryptology, International Association for Cryptologic Research (IACR), 2025, 1 (4), pp.1-19. With the rapid development of quantum computers, proofs of quantumness have recently become an interesting and intriguing research direction. However, in all current schemes for proofs of quantumness, quantum provers almost invariably face the risk of being maliciously exploited by classical verifiers. In fact, through malicious strategies in interaction with quantum provers, classical verifiers could solve some instances of hard problems that arise from the specific scheme in use. In other words, malicious verifiers can break some schemes (that quantum provers are not aware of) through interaction with quantum provers. All this is due to the lack of formalization that prevents malicious verifiers from extracting useful information in proofs of quantumness. To address this issue, we formalize zero-knowledge proofs of quantumness. Intuitively, the zero-knowledge property necessitates that the information gained by the classical verifier from interactions with the quantum prover should not surpass what can be simulated using a simulated classical prover interacting with the same verifier. As a result, the new zero-knowledge notion can prevent any malicious verifier from exploiting quantum advantage. Interestingly, we find that the classical zero-knowledge proof is sufficient to compile some existing proofs of quantumness schemes into zero-knowledge proofs of quantumness schemes. Due to some technical reason, it appears to be more general to require zero-knowledge proof on the verifier side instead of the prover side. Intuitively, this helps to regulate the verifier's behavior from malicious to be honest-but-curious. As a result, both parties will play not only one role in the proofs of quantumness but also the dual role in the classical zero-knowledge proof. 
Specifically, the two principal proofs of quantumness schemes, Shor's factoring-based scheme and the learning with errors-based scheme in [Brakerski et al., FOCS 2018], can be transformed into zero-knowledge proofs of quantumness by requiring an extractable non-interactive zero-knowledge argument on the verifier side. Notably, the zero-knowledge proofs of quantumness can be viewed as an enhanced security notion for proofs of quantumness. To prevent malicious verifiers from exploiting the quantum device's capabilities or knowledge, it is advisable to transition existing proofs of quantumness schemes to this framework whenever feasible. (10.62056/ayiv4fe-3)
    DOI : 10.62056/ayiv4fe-3
  • Masked Computation of the Floor Function and Its Application to the FALCON Signature
    • Berthet Pierre-Augustin
    • Paillet Justine
    • Tavernier Cédric
    • Colombier Brice
    • Bossuet Lilian
    IACR Communications in Cryptology, International Association for Cryptologic Research (IACR), 2025, 1 (4), pp.1-23. FALCON is a signature selected for standardisation of the new Post-Quantum Cryptography (PQC) primitives by the National Institute of Standards and Technology (NIST). However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature that relies on rational numbers, which is unusual in the cryptography field. Although recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most notably, how to protect the floor function. In this work, we propose to complete the first existing tests of hardening FALCON against SCA. We perform the mathematical proofs of our methods as well as formal security proofs in the probing model by ensuring Multiple Input Multiple Output Strong Non-Interference (MIMO-SNI) security. We report the performance of our gadgets, as well as of a complete masked FALCON, on a laptop computer. We notice significant overhead in doing so and discuss the deployability of our method in a real-world context. (10.62056/ay73zl7s)
    DOI : 10.62056/ay73zl7s
  • A Circus of Circuits: Connections Between Decision Diagrams, Circuits, and Automata
    • Amarilli Antoine
    • Arenas Marcelo
    • Choi Yoojung
    • Monet Mikaël
    • Broeck Guy van Den
    • Wang Benjie
    , 2024. This document is an introduction to two related formalisms to define Boolean functions: binary decision diagrams, and Boolean circuits. It presents these formalisms and several of their variants studied in the setting of knowledge compilation. Last, it explains how these formalisms can be connected to the notions of automata over words and trees.
  • Edge-Minimum Walk of Modular Length in Polynomial Time
    • Amarilli Antoine
    • Groz Benoit
    • Wein Nicole
    , 2024, 325, pp.5:1-5:23. We study the problem of finding, in a directed graph, an st-walk of length r mod q which is edge-minimum, i.e., uses the smallest number of distinct edges. Despite the vast literature on paths and cycles with modularity constraints, to the best of our knowledge we are the first to study this problem. Our main result is a polynomial-time algorithm that solves this task when r and q are constants. We also show how our proof technique gives an algorithm to solve a generalization of the well-known Directed Steiner Network problem, in which connections between endpoint pairs are required to satisfy modularity constraints on their length. Our algorithm is polynomial when the number of endpoint pairs and the modularity constraints on the pairs are constants. (10.4230/LIPIcs.ITCS.2025.5)
    DOI : 10.4230/LIPIcs.ITCS.2025.5
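    For context, the standard product-graph construction that underlies such modularity constraints can be sketched as follows; note that this toy version finds a fewest-steps walk, not the paper's edge-minimum walk (minimizing distinct edges requires the authors' more involved algorithm): run BFS over states (vertex, length mod q) and stop when (t, r) is reached.

    ```python
    # Toy sketch (not the paper's algorithm): BFS on the product of the
    # graph with Z/qZ finds a fewest-steps s-t walk of length r mod q.
    from collections import deque

    def modular_walk_length(adj, s, t, r, q):
        """Fewest steps of any s-t walk whose length is r mod q, or None."""
        start = (s, 0)
        dist = {start: 0}
        queue = deque([start])
        while queue:
            v, m = queue.popleft()
            if (v, m) == (t, r % q):
                return dist[(v, m)]
            for w in adj.get(v, []):
                state = (w, (m + 1) % q)
                if state not in dist:
                    dist[state] = dist[(v, m)] + 1
                    queue.append(state)
        return None

    # Directed triangle a->b->c->a: a-to-a walks have lengths 0, 3, 6, ...
    adj = {"a": ["b"], "b": ["c"], "c": ["a"]}
    assert modular_walk_length(adj, "a", "a", 0, 3) == 0  # empty walk
    assert modular_walk_length(adj, "a", "a", 1, 2) == 3  # shortest odd walk
    ```

    The state space has |V|·q states, so for constant q the search is linear in the graph size; the difficulty the paper addresses is that minimizing the number of *distinct* edges used breaks this simple dynamic-programming structure.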
  • Survey of Results on the ModPath and ModCycle Problems
    • Amarilli Antoine
    , 2024. This note summarizes the state of what is known about the tractability of the problem ModPath, which asks if an input undirected graph contains a simple st-path whose length satisfies modulo constraints. We also consider the problem ModCycle, which asks for the existence of a simple cycle subject to such constraints. We also discuss the status of these problems on directed graphs, and on restricted classes of graphs. We explain connections to the problem variant asking for a constant vertex-disjoint number of such paths or cycles, and discuss links to other related work.
  • Tighter Bounds for Query Answering with Guarded TGDs
    • Amarilli Antoine
    • Benedikt Michael
    , 2025. We consider the complexity of the open-world query answering problem, where we wish to determine certain answers to conjunctive queries over incomplete datasets specified by an initial set of facts and a set of guarded TGDs. This problem has been well-studied in the literature and is decidable but with a high complexity, namely, it is 2EXPTIME-complete. Further, the complexity shrinks by one exponential when the arity is fixed. We show in this paper how we can obtain better complexity bounds when considering separately the arity of the guard atom and that of the additional atoms, called the side signature. Our results make use of the technique of linearizing guarded TGDs, introduced in Gottlob, Manna, and Pieris. Specifically, we present a variant of the linearization process, making use of a restricted version of the chase that we recently introduced. Our results imply that open-world query answering with guarded TGDs can be solved in EXPTIME with arbitrary-arity guard relations if we simply bound the arity of the side signature; and that the complexity drops to NP if we fix the side signature and bound the width of the dependencies.
  • Somewhat homomorphic encryption based on random codes
    • Aguilar-Melchor Carlos
    • Dyseryn Victor
    • Gaborit Philippe
    Designs, Codes and Cryptography, Springer Verlag, 2025, 93 (6), pp.1645-1669. We present a secret-key encryption scheme based on random rank metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext multiplications (i.e. the homomorphic multiplication of a clear plaintext with a ciphertext) as well as a fixed arbitrary number of homomorphic multiplications. We study a candidate bootstrapping algorithm that requires no multiplication but additions and plaintext multiplications only. This latter operation is therefore very efficient in our scheme, whereas bootstrapping is usually the main reason which penalizes the performance of other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents securely evaluating the bootstrapping algorithm, as the number of ciphertexts in the key-switching material is too large. Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees as opposed to existing constructions based on highly structured codes. We give concrete parameters for our scheme, showing that it achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed. (10.1007/s10623-024-01555-y)
    DOI : 10.1007/s10623-024-01555-y
  • Rate Meta-Distribution in Millimeter Wave URLLC Device-to-Device Networks With Beam Misalignment
    • Quan Yibo
    • Coupechoux Marceau
    • Kélif Jean-Marc
    IEEE Transactions on Vehicular Technology, Institute of Electrical and Electronics Engineers, 2025, 74 (1), pp.657-673. Using the stochastic geometry framework, we study a millimeter wave (mmWave) Device-to-Device (D2D) network dedicated to Ultra-Reliable Low Latency Communications (URLLC), where users employ multiple antennas to perform beamforming. We leverage the notion of meta-distribution in order to capture the reliability requirement of URLLC. The packet transmission process is divided into two phases: a beam training phase, during which exhaustive beam sweeping is adopted, and a data transmission phase. The paper investigates the misalignment error distribution resulting from an imperfect training phase, due to the finite codebook resolution and the fast variation of the channel. For the data transmission phase, closed-form expressions for all the moments of the conditional rate coverage probability are derived, and the meta-distribution is approximated using the beta approximation. The study evaluates the overall network performance through the effective rate meta-distribution, which accounts for the training overhead and beam misalignment errors. The results show the detrimental impact of misalignment errors when URLLC requirements are stringent and highlight the trade-off between the training overhead and the gain brought by multiple antennas. Insights are provided for optimally and jointly choosing the codebook size and the number of antennas. (10.1109/TVT.2024.3451487)
    DOI : 10.1109/TVT.2024.3451487
  • ding-01 :ARG0 Un corpus AMR pour le français parlé spontané
    • Kang Jeongwoo
    • Boritchev Maria
    • Coavoux Maximin
    , 2025, pp.791-801. We present our ongoing work on the annotation of a semantic corpus of French. We annotate the DinG corpus, consisting of transcriptions of spontaneous French dialogues recorded during games of the board game Catan, in Abstract Meaning Representation (AMR), a semantic representation formalism. As AMR offers insufficient coverage of the dynamics of spontaneous speech, we extend the formalism to better represent spontaneous speech and sentence structures specific to French. In addition, we release an annotation guide detailing these extensions. Finally, we publish our corpus under a free license (CC-SA-BY). Our work contributes to the development of semantic resources for dialogue in French.
  • On regression in extreme regions
    • Clémençon Stéphan
    • Huet Nathan
    • Sabourin Anne
    Electronic Journal of Statistics, Shaker Heights, OH : Institute of Mathematical Statistics, 2025, 19 (2), pp.4784–4828. We establish a statistical learning theoretical framework aimed at extrapolation, or out-of-domain generalization, on the unobserved tails of covariates in continuous regression problems. Our strategy involves performing statistical regression on a subsample of observations with continuous labels that are the furthest away from the origin, focusing specifically on their angular components. The underlying assumptions of our approach are grounded in the theory of multivariate regular variation, a cornerstone of extreme value theory. We address the stylized problem of nonparametric least squares regression with predictors chosen from a Vapnik-Chervonenkis class. This work contributes to a broader initiative to develop statistical learning theoretical foundations for supervised learning strategies that enhance performance on the supposedly heavy tails of covariates. Previous efforts in this area have focused exclusively on binary classification on extreme covariates. Although the continuous target setting necessitates different techniques and regularity assumptions, our main results echo findings from earlier studies. We quantify the predictive performance on tail regions in terms of excess risk, presenting it as a finite sample risk bound with a clear bias-variance decomposition. Numerical experiments with simulated and real data illustrate our theoretical findings. (10.1214/25-EJS2441)
    DOI : 10.1214/25-EJS2441
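    A hypothetical sketch of the subsampling step described above (the names and details of the selection rule are assumptions, and the downstream nonparametric regression is omitted): keep the k observations whose covariates are furthest from the origin and project those covariates onto the unit sphere, so that the regression operates on angular components.

    ```python
    # Toy sketch (not the paper's estimator): select the k largest-norm
    # covariates and map them to their angular components x / ||x||.
    import math

    def angular_subsample(X, y, k):
        """Keep the k largest-norm covariates, projected onto the unit sphere."""
        norms = [math.sqrt(sum(v * v for v in x)) for x in X]
        order = sorted(range(len(X)), key=lambda i: -norms[i])[:k]
        angles = [[v / norms[i] for v in X[i]] for i in order]
        labels = [y[i] for i in order]
        return angles, labels

    X = [[1.0, 0.0], [0.0, 10.0], [8.0, 6.0], [0.1, 0.1]]
    y = [0.0, 1.0, 0.6, 0.0]
    angles, labels = angular_subsample(X, y, k=2)

    # The two largest-norm points (both of norm 10) are kept, and their
    # angular components lie on the unit sphere:
    assert labels == [1.0, 0.6]
    assert all(abs(sum(v * v for v in a) - 1.0) < 1e-9 for a in angles)
    ```

    A regression model fitted on these angular covariates is then the object whose excess risk on tail regions the paper bounds.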
  • Réélaboration des règles de sécurité en contextes interpersonnels : le cas du toucher en temps de Covid‑19
    • Héron Robin
    • Safin Stéphane
    • Baker Michael J
    • Zhang Zhuoming
    • Alvina Jessalyn
    • Lecolinet Éric
    • Détienne Françoise
    Activités, Association Recherches et Pratiques sur les ACTivités, 2025, 22-1. In this article, we study how the health safety rules relating to social touching, established during the pandemic, have been redefined with regard to interpersonal interactions. According to the adaptative safety framework, rules are theorised as resources, which are adapted in context. To better understand the processes for the re-elaboration of safety rules in interpersonal contexts, we conducted a study consisting of an online questionnaire on social touching habits before and after the lockdown, followed by in-depth interviews with selected participants. Our results highlight (1) reduced touching practices, especially for semi-intimate relationships such as colleagues, casual friends, close friends, and extended family; (2) two processes of re-elaborating pandemic-related safety rules: explicit deliberation and behavioural alignment, as well as reflexive processes; and (3) two dimensions of justifications given for the re-elaboration of those rules, preserving physical health/vulnerability concerns and maintaining social relations, and their relative weight according to the relationships (family versus friends) between the interactants. While recommended health and safety rules were mostly followed, leading to a large decrease in touching behaviours, it appears that people re-elaborate the rules depending on the relationship they share with one another. Through the re-elaboration of the official health and safety rules, people balance safety measures against preserving their wellbeing and/or the relationship. (10.4000/13ra9)
    DOI : 10.4000/13ra9
  • Just Project! Multi-Channel Despeckling, the Easy Way
    • Denis Loïc
    • Dalsasso Emanuele
    • Tupin Florence
    IEEE Transactions on Geoscience and Remote Sensing, Institute of Electrical and Electronics Engineers, 2025, 63, pp.1-11. Reducing speckle fluctuations in multi-channel SAR images is essential in many applications of SAR imaging such as polarimetric classification or interferometric height estimation. While single-channel despeckling has widely benefited from the application of deep learning techniques, extensions to multi-channel SAR images are much more challenging. This paper introduces MuChaPro, a generic framework that exploits existing single-channel despeckling methods. The key idea is to generate numerous single-channel projections, restore these projections, and recombine them into the final multi-channel estimate. This simple approach is shown to be effective in polarimetric and/or interferometric modalities. A special appeal of MuChaPro is the possibility to apply a self-supervised training strategy to learn sensor-specific networks for single-channel despeckling. (10.1109/TGRS.2025.3531957)
    DOI : 10.1109/TGRS.2025.3531957
  • Efficient Hamiltonian, structure and trace distance learning of Gaussian states
    • Fanizza Marco
    • Rouzé Cambyse
    • Stilck Franca Daniel
    , 2024. In this work, we initiate the study of Hamiltonian learning for positive temperature bosonic Gaussian states, the quantum generalization of the widely studied problem of learning Gaussian graphical models. We obtain efficient protocols, both in sample and computational complexity, for the task of inferring the parameters of their underlying quadratic Hamiltonian under the assumption of bounded temperature, squeezing, displacement and maximal degree of the interaction graph. Our protocol only requires heterodyne measurements, which are often experimentally feasible, and has a sample complexity that scales logarithmically with the number of modes. Furthermore, we show that it is possible to learn the underlying interaction graph in a similar setting and sample complexity. Taken together, our results put the status of the quantum Hamiltonian learning problem for continuous variable systems in a much more advanced state when compared to spins, where state-of-the-art results are either unavailable or quantitatively inferior to ours. In addition, we use our techniques to obtain the first results on learning Gaussian states in trace distance with a quadratic scaling in precision and polynomial in the number of modes, albeit imposing certain restrictions on the Gaussian states. Our main technical innovations are several continuity bounds for the covariance and Hamiltonian matrix of a Gaussian state, which are of independent interest, combined with what we call the local inversion technique. In essence, the local inversion technique allows us to reliably infer the Hamiltonian of a Gaussian state by only estimating in parallel submatrices of the covariance matrix whose size scales with the desired precision, but not the number of modes. This way we bypass the need to obtain precise global estimates of the covariance matrix, controlling the sample complexity.
  • A bipartite ranking approach to the two-sample problem
    • Clémençon Stéphan
    • Limnios Myrto
    • Vayatis Nicolas
Electronic Journal of Statistics, Shaker Heights, OH : Institute of Mathematical Statistics, 2025, 19 (1), pp.2733–2779. The two-sample problem consists in testing whether two independent samples are drawn from the same (unknown) distribution. Its study in high dimension is the subject of much attention, especially because the information acquisition processes at work in the Big Data era often involve various poorly controlled sources, leading to datasets possibly exhibiting strong sampling bias. While the efficiency of classic methods, which rely on computing a discrepancy measure between the empirical distributions of each sample, is negatively impacted by increasing dimensionality, we develop a two-step approach based on statistical learning and an extension of rank tests. By dividing the initial samples in two, a bipartite ranking algorithm first learns a real-valued scoring function inducing a preorder on the multivariate space. Then, a rank statistic based on the scores of the remaining observations tests for differences in distribution. Because the ranking algorithm learns how to map the data onto the real line as the likelihood ratio between the original multivariate distributions, the approach is robust to large dimensions (ignoring ranking model bias issues) and preserves the advantages of univariate rank tests. We prove nonasymptotic error bounds based on recent results for two-sample linear rank-processes, and experimentally show how the proposed approach surpasses state-of-the-art methods. (10.1214/25-EJS2392)
    DOI : 10.1214/25-EJS2392
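The two-step procedure described in the abstract can be sketched with a deliberately simple stand-in for the bipartite ranking step (a difference-of-means linear scorer instead of a learned ranking model); sample sizes, the shift and the scorer below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 100
X = rng.normal(0.0, 1.0, size=(n, d))          # sample from P
Y = rng.normal(0.8, 1.0, size=(n, d))          # sample from Q (mean-shifted)

# Step 1: split each sample in two; learn a scoring direction on the first halves.
X1, X2 = X[: n // 2], X[n // 2 :]
Y1, Y2 = Y[: n // 2], Y[n // 2 :]
w = Y1.mean(axis=0) - X1.mean(axis=0)          # crude surrogate for bipartite ranking

# Step 2: score the held-out halves and apply a univariate rank-sum test.
scores = np.concatenate([X2 @ w, Y2 @ w])
ranks = scores.argsort().argsort() + 1         # ranks 1..(m+n), ties a.s. absent
m = len(Y2)
W = ranks[len(X2):].sum()                      # rank sum of the second sample
mu = m * (len(scores) + 1) / 2
sigma = np.sqrt(len(X2) * m * (len(scores) + 1) / 12)
z = (W - mu) / sigma                           # approx. N(0,1) under the null
```

A large positive z rejects equality of distributions; only the final, univariate rank test carries the inference, which is what lets the method sidestep multivariate discrepancy estimation.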
  • Multi-view 3D surface reconstruction from SAR images by inverse rendering
    • Barbier-Renard Emile
    • Tupin Florence
    • Trouvé Nicolas
    • Denis Loïc
IEEE Geoscience and Remote Sensing Letters, IEEE - Institute of Electrical and Electronics Engineers, 2025, 22, pp.4008805. 3D reconstruction of a scene from Synthetic Aperture Radar (SAR) images mainly relies on interferometric measurements, which involve strict constraints on the acquisition process. In recent years, progress in deep learning has significantly advanced 3D reconstruction from multiple views in optical imaging, mainly through reconstruction-by-synthesis approaches popularized by Neural Radiance Fields. In this paper, we propose a new inverse rendering method for 3D reconstruction from a few incoherent SAR views, drawing inspiration from optical approaches. First, we introduce a new simplified differentiable SAR rendering model, able to synthesize images from a Digital Surface Model (DSM) and a map of radar backscattering coefficients. Then, we propose a coarse-to-fine strategy to reconstruct the DSM and the map of backscattering coefficients of a SAR scene starting only from a few SAR views. We use a neural field, i.e. a continuous parametric model based on a Multi-Layer Perceptron, to represent the SAR scene. Finally, we present preliminary results of DSM reconstruction from synthetic SAR images produced by ONERA's physically-based EMPRISE simulator, supporting the potential of applying inverse rendering approaches to SAR data in order to efficiently exploit geometric disparities in future applications such as multi-sensor data fusion. (10.1109/LGRS.2025.3572303)
    DOI : 10.1109/LGRS.2025.3572303
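The reconstruction-by-synthesis loop underlying such methods can be sketched in one dimension: a known differentiable forward model renders an "image" from a height profile, and gradient descent on the data discrepancy recovers the profile. The toy 1D blur below is a stand-in of my own for a differentiable SAR renderer; it is not EMPRISE or actual SAR physics.

```python
import numpy as np

n = 128
t = np.linspace(0.0, 3.0 * np.pi, n)
h_true = np.sin(t) + 0.5 * np.cos(2.0 * t)      # ground-truth "DSM" (1D toy)

# Toy differentiable renderer: local averaging (a banded linear operator A).
A = 0.5 * np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
y = A @ h_true                                   # observed "SAR view"

h = np.zeros(n)                                  # initial surface estimate
step = 1.0                                       # safe: spectral norm of A is <= 1
loss0 = 0.5 * np.sum((A @ h - y) ** 2)
for _ in range(500):
    r = A @ h - y                                # rendered-vs-observed residual
    h -= step * (A.T @ r)                        # gradient of 0.5 * ||A h - y||^2
loss = 0.5 * np.sum((A @ h - y) ** 2)
```

In the paper's setting the renderer is far more involved and the surface is parameterized by a neural field optimized coarse-to-fine, but the driving mechanism is the same: differentiate the rendering discrepancy with respect to the scene parameters.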
  • From the Gradient-Step Denoiser to the Proximal Denoiser and their associated convergent Plug-and-Play algorithms
    • Herfeld Vincent
    • de Senneville Baudouin Denis
    • Leclaire Arthur
    • Papadakis Nicolas
, 2025. In this paper we analyze the Gradient-Step Denoiser and its usage in Plug-and-Play algorithms. The Plug-and-Play paradigm of optimization algorithms uses off-the-shelf denoisers to replace the proximity operator or the gradient descent operator of an image prior. Usually, this image prior is implicit and cannot be expressed in closed form, but the Gradient-Step Denoiser is trained to be exactly the gradient descent operator or the proximity operator of an explicit functional while preserving state-of-the-art denoising capabilities.
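A minimal sketch of a Plug-and-Play proximal-gradient iteration with a Gradient-Step Denoiser, i.e. a denoiser of the form D = Id − ∇g for an explicit prior g. Here g is a simple quadratic smoothness prior (so D is a linear smoothing step), standing in for the trained network the paper studies; the signal, weights and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
y = x_true + 0.3 * rng.standard_normal(n)       # noisy observation

# Discrete Laplacian; with g(x) = (lam/2) x^T L x, the denoiser is D = Id - lam*L.
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, tau = 0.2, 0.9

x = y.copy()
for _ in range(200):
    z = x - tau * (x - y)                       # gradient step on f(x) = 0.5*||x - y||^2
    x = z - lam * (L @ z)                       # Gradient-Step Denoiser: D = Id - grad g

mse_noisy = np.mean((y - x_true) ** 2)
mse_pnp = np.mean((x - x_true) ** 2)
```

Because g is explicit here, the fixed point can be analyzed directly; the paper's point is that the same structure holds when D is a trained network whose g remains an explicit (if learned) functional, which is what yields convergent Plug-and-Play schemes.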
  • Did You Forkget It? Detecting One-Day Vulnerabilities in Open-source Forks With Global History Analysis
    • Lefeuvre Romain
    • Reux Charly
    • Zacchiroli Stefano
    • Barais Olivier
    • Combemale Benoit
    , 2025. Tracking vulnerabilities inherited from third-party open-source software is a well-known challenge, often addressed by tracing the threads of dependency information. However, vulnerabilities can also propagate through forking: a code repository forked after the introduction of a vulnerability, but before it is patched, may remain vulnerable long after the vulnerability has been fixed in the initial repository. History analysis approaches are used to track vulnerable software versions at scale. However, such approaches fail to track vulnerabilities in forks, leaving fork maintainers to identify them manually. This paper presents a global history analysis approach to help software developers identify one-day (known but unpatched) vulnerabilities in forked repositories. Leveraging the global graph of public code, as captured by the Software Heritage archive, our approach propagates vulnerability information at the commit level and performs automated impact analysis. Starting from 7162 repositories with vulnerable commits listed in OSV, we propagate vulnerability information to 2.2 million forks. We evaluate our approach by filtering forks with significant user bases whose latest commit is still potentially vulnerable, manually auditing the code, and contacting maintainers for confirmation and responsible disclosure. This process identified 135 high-severity one-day vulnerabilities, achieving a precision of 0.69, with 9 confirmed by maintainers.
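The commit-level propagation described in the abstract boils down to an ancestry query on the shared commit graph: a branch head is potentially vulnerable when the vulnerability-introducing commit is among its ancestors but the fixing commit is not. The graph, commit ids and traversal below are illustrative, not the paper's pipeline.

```python
def ancestors(parents, head):
    """All commits reachable from `head` through parent links (inclusive)."""
    seen, stack = set(), [head]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, []))
    return seen

def is_vulnerable(parents, head, vuln_commit, fix_commit):
    anc = ancestors(parents, head)
    return vuln_commit in anc and fix_commit not in anc

# Shared history: c0 -> c1 (introduces the flaw) -> c2 -> c3 (fixes it) -> c4.
# A fork branched from c2, i.e. after the flaw but before the fix.
parents = {"c1": ["c0"], "c2": ["c1"], "c3": ["c2"], "c4": ["c3"],
           "fork1": ["c2"]}

upstream_ok = not is_vulnerable(parents, "c4", "c1", "c3")   # fix reached upstream
fork_exposed = is_vulnerable(parents, "fork1", "c1", "c3")   # fork never got the fix
```

At the scale of the Software Heritage graph this check runs over millions of forks at once, which is why the commit level, rather than the repository level, is the right granularity for the analysis.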