
Publications

2025

  • PolSAR2PolSAR: A semi-supervised despeckling algorithm for polarimetric SAR images
    • Mendes Cristiano Ulondu
    • Dalsasso Emanuele
    • Zhang Yi
    • Denis Loïc
    • Tupin Florence
    ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier, 2025, 220 (0924-2716), pp.783-798. Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a valuable tool for Earth observation. This imaging technique finds wide application in various fields, including agriculture, forestry, geology, and disaster monitoring. However, due to the inherent presence of speckle noise, filtering is often necessary to improve the interpretability and reliability of PolSAR data. The effectiveness of a speckle filter is measured by its ability to attenuate fluctuations without introducing artifacts or degrading spatial and polarimetric information. Recent advancements in this domain leverage the power of deep learning. These approaches adopt a supervised learning strategy, which requires a large amount of speckle-free images that are costly to produce. In contrast, this paper presents PolSAR2PolSAR, a semi-supervised learning strategy that only requires, from the sensor under consideration, pairs of noisy images of the same location acquired in the same configuration (same incidence angle and mode, as during the revisit of the satellite on its orbit). Our approach applies to a wide range of sensors. Experiments on Radarsat-2 and RCM data demonstrate the capacity of the proposed method to effectively reduce speckle noise and retrieve fine details. The code of the trained models is made freely available at https://gitlab.telecom-paris.fr/ring/polsar2polsar. The repository additionally contains a model fine-tuned on SLC PolSAR images from NASA's UAVSAR sensor.
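    The pair-based training described above can be pictured with a noise2noise-style sketch: the denoiser takes one speckled acquisition as input and is penalized against a second, independently speckled acquisition of the same scene, so no clean target is ever needed. Everything below (model, MSE loss, tensor shapes) is an illustrative assumption, not the released PolSAR2PolSAR code.

        # Minimal self-supervised step on a co-registered pair of noisy images:
        # the second acquisition serves as the training target for the first.
        import torch
        import torch.nn as nn

        class TinyDenoiser(nn.Module):
            """Stand-in convolutional denoiser (the real model is more elaborate)."""
            def __init__(self, channels=9):  # e.g. stacked PolSAR covariance entries
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, channels, 3, padding=1),
                )
            def forward(self, x):
                return self.net(x)

        def train_step(model, optimizer, noisy_a, noisy_b):
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(noisy_a), noisy_b)
            loss.backward()
            optimizer.step()
            return loss.item()

        model = TinyDenoiser()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        a, b = torch.rand(1, 9, 64, 64), torch.rand(1, 9, 64, 64)  # dummy revisit pair
        print(train_step(model, opt, a, b))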
  • LayerFold: A Python library to reduce the depth of neural networks
    • Pilo Giommaria
    • Hezbri Nour
    • Pereira E Ferreira André
    • Quétu Victor
    • Tartaglione Enzo
    SoftwareX, Elsevier, 2025, 29, pp.102030. Large-scale models are the backbone of Computer Vision and Natural Language Processing, and their generalizability allows for transfer learning and deployment in different scenarios. However, their large size means that reducing their computational and memory demands remains a challenge. Recent research proposes to achieve “layer collapse”, a condition where multiple layers can be combined due to the collapse of non-linearities to linear operators. While this is an important discovery, most studies remain theoretical, often replacing non-linearities with simple identity functions and not providing a real implementation of the more compact architecture. Our contribution is LayerFold, a library that studies and implements the merging of collapsed layers. We address typical cases, from fully connected to convolutional layers, discussing constraints and prospective challenges. Our tests on edge devices reveal that merely reducing network depth does not always result in faster computation, even when GPU-equipped. This work raises important warnings and opens the door to further advances in efficient model deployment.
    DOI : 10.1016/j.softx.2024.102030
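    The core algebra behind "layer collapse" is easy to state: once the non-linearity between two linear layers degenerates to the identity, the two layers compose into a single one. A minimal sketch of that folding step (this is the underlying identity, not LayerFold's actual API):

        # y = W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2): two linear layers
        # separated by a collapsed (identity) activation fold into one layer.
        import torch
        import torch.nn as nn

        def fold_linear(first: nn.Linear, second: nn.Linear) -> nn.Linear:
            fused = nn.Linear(first.in_features, second.out_features)
            with torch.no_grad():
                fused.weight.copy_(second.weight @ first.weight)
                fused.bias.copy_(second.weight @ first.bias + second.bias)
            return fused

        f, s = nn.Linear(16, 32), nn.Linear(32, 8)
        x = torch.randn(4, 16)
        assert torch.allclose(fold_linear(f, s)(x), s(f(x)), atol=1e-5)

    As the abstract warns, removing depth this way does not automatically translate into faster inference on every device.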
  • Learning on graphs: from algorithms to socio-technical analyses on AI
    • Delarue Simon
    , 2025. This thesis addresses the dual challenge of advancing Artificial Intelligence (AI) methods while critically assessing their societal impact. With AI technologies now embedded in high-stakes decision sectors like healthcare and justice, their growing influence demands thorough examination, reflected in emerging international regulations such as the AI Act in Europe. To address these challenges, this work leverages attributed-graph based methods and advocates for a shift from performance-focused AI models to approaches that also prioritise scalability, simplicity, and explainability. The first part of this thesis develops a toolkit of attributed graph-based methods and algorithms aimed at enhancing AI learning techniques. It includes a software contribution that leverages the sparsity of complex networks to reduce computational costs. Additionally, it introduces non-neural graph models for node classification and link prediction tasks, showing how these methods can outperform advanced neural networks while being more computationally efficient. Lastly, it presents a novel pattern mining algorithm that generates concise, human-readable summaries of large networks. Together, these contributions highlight the potential of these approaches to provide efficient and interpretable solutions to AI's technical challenges. The second part adopts an interdisciplinary approach to study AI as a socio-technical system. By framing AI as an ecosystem influenced by various stakeholders and societal concerns, it uses graph-based models to analyse interactions and tensions related to explainability, ethics, and environmental impact. A user study explores the influence of graph-based explanations on user perceptions of AI recommendations, while the building and analysis of a corpus of AI ethics charters and manifestos quantifies the roles of key actors in AI governance. A final study reveals that environmental concerns in AI are primarily framed technically, highlighting the need for a broader approach to the ecological implications of digitalisation.
  • Strong Converse for Classical-Quantum Degraded Broadcast Channels
    • Cheng Hao-Chung
    • Datta Nilanjana
    • Rouzé Cambyse
    , 2019. We consider the transmission of classical information through a degraded broadcast channel, whose outputs are two quantum systems, with the state of one being a degraded version of the other. Yard et al. proved that the capacity region of such a channel is contained in a region characterized by certain entropic quantities. We prove that this region satisfies the strong converse property, that is, the maximal probability of error incurred in transmitting information at rates lying outside this region converges to one exponentially in the number of uses of the channel. In establishing this result, we prove a second-order Fano-type inequality, which might be of independent interest. A powerful analytical tool which we employ in our proofs is the tensorization property of the quantum reverse hypercontractivity for the quantum depolarizing semigroup.
    DOI : 10.48550/arXiv.1905.00874
  • GraphRAG: Leveraging Graph-Based Efficiency to Minimize Hallucinations in LLM-Driven RAG for Finance Data
    • Barry Mariam
    • Caillaut Gaëtan
    • Halftermeyer Pierre
    • Qader Raheel
    • Mouayad Mehdi
    • Cariolaro Dimitri
    • Le Deit Fabrice
    • Gesnouin Joseph
    , 2025. This study explores the integration of graph-based methods into Retrieval-Augmented Generation (RAG) systems to enhance efficiency, reduce hallucinations, and improve explainability, with a particular focus on financial and regulatory document retrieval. We propose two strategies, FactRAG and HybridRAG, which leverage knowledge graphs to improve RAG performance. Experiments conducted using Finance Bench, a benchmark for AI in finance, demonstrate that these approaches achieve a 6% reduction in hallucinations and an 80% decrease in token usage compared to conventional RAG methods. Furthermore, we evaluate HybridRAG by comparing the Digital Operational Resilience Act (DORA) from the European Union with the Federal Financial Institutions Examination Council (FFIEC) guidelines from the United States. The results reveal a significant improvement in computational efficiency, reducing contradiction detection complexity from O(n^2) to O(k·n), where n is the number of chunks, and a remarkable 734-fold decrease in token consumption. Graph-based retrieval methods can improve the efficiency and cost-effectiveness of large language model (LLM) applications, though their performance and token usage depend on the dataset, knowledge graph design, and retrieval task.
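    The complexity claim above has a simple shape: instead of testing every pair of chunks for contradictions (quadratic in n), each chunk is tested only against the k chunks a knowledge graph links it to. A schematic sketch, where `contradicts` stands in for an LLM call and the graph edges are hypothetical:

        # Pairwise contradiction detection is quadratic in the number of chunks n;
        # restricting comparisons to the k neighbours the knowledge graph links a
        # chunk to makes it O(k*n) predicate calls.
        from itertools import combinations

        def naive_detection(chunks, contradicts):
            # O(n^2): every unordered pair is checked.
            return [(a, b) for a, b in combinations(chunks, 2) if contradicts(a, b)]

        def graph_detection(chunks, neighbours, contradicts):
            hits = []
            for chunk in chunks:                         # n chunks ...
                for other in neighbours.get(chunk, ()):  # ... times k linked chunks
                    if contradicts(chunk, other):
                        hits.append((chunk, other))
            return hits

        chunks = ["dora-art-5", "ffiec-ops-1", "dora-art-9"]
        graph = {"dora-art-5": ["ffiec-ops-1"]}          # hypothetical KG edges
        print(graph_detection(chunks, graph, lambda a, b: "dora" in a and "ffiec" in b))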
  • Rapid thermalization of dissipative many-body dynamics of commuting Hamiltonians
    • Kochanowski Jan
    • Alhambra Alvaro
    • Capel Angela
    • Rouzé Cambyse
    , 2024. Quantum systems typically reach thermal equilibrium rather quickly when coupled to a thermal environment. The usual way of bounding the speed of this process is by estimating the spectral gap of the dissipative generator. However, the gap, by itself, does not always yield a reasonable estimate for the thermalization time in many-body systems: without further structure, a uniform lower bound on it only constrains the thermalization time to grow polynomially with system size. Here, instead, we show that for a large class of geometrically-2-local models of Davies generators with commuting Hamiltonians, the thermalization time is much shorter than one would naïvely estimate from the gap: at most logarithmic in the system size. This yields the so-called rapid mixing of dissipative dynamics. The result is particularly relevant for 1D systems, for which we prove rapid thermalization with a system-size-independent decay rate only from a positive gap in the generator. We also prove that systems in hypercubic lattices of any dimension, and exponential graphs, such as trees, have rapid mixing at high enough temperatures. We do this by introducing a novel notion of clustering which we call "strong local indistinguishability" based on a max-relative entropy, and then proving that it implies a lower bound on the modified logarithmic Sobolev inequality (MLSI) for nearest neighbour commuting models. This has consequences for the rate of thermalization towards Gibbs states, and also for their relevant Wasserstein distances and transportation cost inequalities. Along the way, we show that several measures of decay of correlations on Gibbs states of commuting Hamiltonians are equivalent, a result of independent interest. At the technical level, we also show a direct relation between properties of Davies and Schmidt dynamics, which allows results of thermalization to be transferred between the two.
    DOI : 10.48550/arXiv.2404.16780
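    For context, the route from a modified logarithmic Sobolev inequality to rapid mixing is standard (constants vary with conventions): an MLSI constant α > 0 forces exponential decay of relative entropy, and Pinsker's inequality converts that into trace-distance convergence:

        % MLSI => entropy decay => trace-distance mixing (conventions vary):
        \partial_t D(\rho_t \,\|\, \sigma) \le -2\alpha\, D(\rho_t \,\|\, \sigma)
          \;\Longrightarrow\;
          D(\rho_t \,\|\, \sigma) \le e^{-2\alpha t}\, D(\rho_0 \,\|\, \sigma),
        \qquad
        \|\rho_t - \sigma\|_1 \le \sqrt{2\, D(\rho_t \,\|\, \sigma)}
          \le e^{-\alpha t} \sqrt{2 \log \tfrac{1}{\lambda_{\min}(\sigma)}} .

    Since log(1/λ_min(σ)) grows only linearly with the number of spins for Gibbs states, the right-hand side is small after a time t = O(log n), which is the logarithmic thermalization time referred to above.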
  • Quasi-optimal sampling from Gibbs states via non-commutative optimal transport metrics
    • Capel Ángela
    • Gondolf Paul
    • Kochanowski Jan
    • Rouzé Cambyse
    , 2024. We study the problem of sampling from and preparing quantum Gibbs states of local commuting Hamiltonians on hypercubic lattices of arbitrary dimension. We prove that any such Gibbs state which satisfies a clustering condition that we coin decay of matrix-valued quantum conditional mutual information (MCMI) can be quasi-optimally prepared on a quantum computer. We do this by controlling the mixing time of the corresponding Davies evolution in a normalized quantum Wasserstein distance of order one. To the best of our knowledge, this is the first time that such a non-commutative transport metric has been used in the study of quantum dynamics, and the first time quasi-rapid mixing is implied by solely an explicit clustering condition. Our result is based on a weak approximate tensorization and a weak modified logarithmic Sobolev inequality for such systems, as well as a new general weak transport cost inequality. If we furthermore assume a constraint on the local gap of the thermalizing dynamics, we obtain rapid mixing in trace distance for interactions beyond the range of two, thereby extending the state-of-the-art results that only cover the nearest neighbor case. We conclude by showing that systems that admit effective local Hamiltonians, like quantum CSS codes at high temperature, satisfy this MCMI decay and can thus be efficiently prepared and sampled from.
    DOI : 10.48550/arXiv.2412.01732
  • STanH: Parametric Quantization for Variable Rate Learned Image Compression
    • Presta Alberto
    • Tartaglione Enzo
    • Fiandrotti Attilio
    • Grangetto Marco
    IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers, 2025, 34, pp.639-651. In end-to-end learned image compression, encoder and decoder are jointly trained to minimize an R+λD cost function, where λ controls the trade-off between the rate of the quantized latent representation and image quality. Unfortunately, a distinct encoder-decoder pair with millions of parameters must be trained for each λ, hence the need to switch encoders and to store multiple encoders and decoders on the user device for every target rate. This paper proposes to exploit a differentiable quantizer designed around a parametric sum of hyperbolic tangents, called STanH, that relaxes the step-wise quantization function. STanH is implemented as a differentiable activation layer with learnable quantization parameters that can be plugged into a pre-trained fixed-rate model and refined to achieve different target bitrates. Experimental results show that our method enables variable rate coding with comparable efficiency to the state-of-the-art, yet with significant savings in terms of ease of deployment, training time, and storage costs.
    DOI : 10.1109/TIP.2025.3527883
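    The "parametric sum of hyperbolic tangents" can be pictured as a soft staircase: each tanh contributes one smoothed step, so the sum approximates hard rounding while remaining differentiable. A toy rendering (the parameterization and shapes are illustrative, not the paper's exact design):

        # A soft step-wise quantizer built from a parametric sum of tanh units:
        # with large sharpness the output approaches torch.round(x) while still
        # providing useful gradients for end-to-end training.
        import torch
        import torch.nn as nn

        class SoftStaircase(nn.Module):
            def __init__(self, num_steps=8, sharpness=10.0):
                super().__init__()
                # Learnable step centers and heights (assumed parameterization).
                self.centers = nn.Parameter(torch.arange(num_steps).float() - num_steps / 2 + 0.5)
                self.heights = nn.Parameter(torch.ones(num_steps) * 0.5)
                self.sharpness = sharpness

            def forward(self, x):
                steps = self.heights * torch.tanh(self.sharpness * (x.unsqueeze(-1) - self.centers))
                return steps.sum(dim=-1)

        q = SoftStaircase()
        x = torch.linspace(-4, 4, 9)
        print(q(x))            # approximately staircase-valued
        print(torch.round(x))  # the hard quantizer it relaxes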
  • Zero-Knowledge Proofs of Quantumness
    • Phan Duong Hieu
    • Wen Weiqiang
    • Yan Xingyu
    • Zheng Jinwei
    IACR Communications in Cryptology, International Association for Cryptologic Research (IACR), 2025, 1 (4), pp.1-19. With the rapid development of quantum computers, proofs of quantumness have recently become an interesting and intriguing research direction. However, in all current schemes for proofs of quantumness, quantum provers almost invariably face the risk of being maliciously exploited by classical verifiers. In fact, through malicious strategies in interaction with quantum provers, classical verifiers could solve some instances of hard problems that arise from the specific scheme in use. In other words, malicious verifiers can break some schemes (without the quantum provers being aware of it) through interaction with quantum provers. All this is due to the lack of a formalization that prevents malicious verifiers from extracting useful information in proofs of quantumness. To address this issue, we formalize zero-knowledge proofs of quantumness. Intuitively, the zero-knowledge property necessitates that the information gained by the classical verifier from interactions with the quantum prover should not surpass what can be simulated using a simulated classical prover interacting with the same verifier. As a result, the new zero-knowledge notion can prevent any malicious verifier from exploiting quantum advantage. Interestingly, we find that the classical zero-knowledge proof is sufficient to compile some existing proofs of quantumness schemes into zero-knowledge proofs of quantumness schemes. For technical reasons, it appears more general to require the zero-knowledge proof on the verifier side instead of the prover side. Intuitively, this helps to regulate the verifier's behavior from malicious to honest-but-curious. As a result, both parties will play not only one role in the proofs of quantumness but also the dual role in the classical zero-knowledge proof. Specifically, the two principal proofs of quantumness schemes, Shor's factoring-based scheme and the learning with errors-based scheme of [Brakerski et al., FOCS, 2018], can be transformed into zero-knowledge proofs of quantumness by requiring an extractable non-interactive zero-knowledge argument on the verifier side. Notably, zero-knowledge proofs of quantumness can be viewed as an enhanced security notion for proofs of quantumness. To prevent malicious verifiers from exploiting the quantum device's capabilities or knowledge, it is advisable to transition existing proofs of quantumness schemes to this framework whenever feasible.
    DOI : 10.62056/ayiv4fe-3
  • Masked Computation of the Floor Function and Its Application to the FALCON Signature
    • Berthet Pierre-Augustin
    • Paillet Justine
    • Tavernier Cédric
    • Colombier Brice
    • Bossuet Lilian
    IACR Communications in Cryptology, International Association for Cryptologic Research (IACR), 2025, 1 (4), pp.1-23. FALCON is a signature selected for standardisation of the new Post-Quantum Cryptography (PQC) primitives by the National Institute of Standards and Technology (NIST). However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature that relies on rational numbers, which is unusual in the cryptography field. Although recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most notably, how to protect the floor function. In this work, we propose to complete the first existing attempts at hardening FALCON against SCA. We perform the mathematical proofs of our methods as well as formal security proofs in the probing model by ensuring Multiple Input Multiple Output Strong Non-Interference (MIMO-SNI) security. We report the performance of our gadgets, as well as of a complete masked FALCON, on a laptop computer. We observe significant overhead in doing so and discuss the deployability of our method in a real-world context.
    DOI : 10.62056/ay73zl7s
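    Why the floor function is a roadblock can be seen in two lines: with additive masking, linear operations act share-wise, but floor does not distribute over shares. A toy numeric illustration (the parameters are arbitrary, and this demonstrates the obstacle, not the paper's MIMO-SNI gadget):

        # Why floor is hard to mask: with additive sharing x = (x1 + x2) mod q,
        # linear gadgets act share-wise, but floor(x / d) is not share-wise:
        # floor(x1/d) + floor(x2/d) generally differs from floor((x1+x2)/d).
        import random

        q, d = 2**16, 2**4
        random.seed(0)
        for _ in range(3):
            x = random.randrange(q)
            x1 = random.randrange(q)      # uniformly random mask share
            x2 = (x - x1) % q             # second share completes the secret
            sharewise = (x1 // d + x2 // d) % (q // d)
            correct = x // d
            print(f"x={x:5d}  share-wise floor={sharewise:5d}  true floor={correct:5d}")
        # A secure masked floor therefore needs a dedicated non-linear gadget.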
  • A Circus of Circuits: Connections Between Decision Diagrams, Circuits, and Automata
    • Amarilli Antoine
    • Arenas Marcelo
    • Choi Yoojung
    • Monet Mikaël
    • Van Den Broeck Guy
    • Wang Benjie
    , 2024. This document is an introduction to two related formalisms to define Boolean functions: binary decision diagrams, and Boolean circuits. It presents these formalisms and several of their variants studied in the setting of knowledge compilation. Last, it explains how these formalisms can be connected to the notions of automata over words and trees.
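    As a taste of the first formalism the document introduces, a binary decision diagram can be held in a few nested tuples and evaluated by following one branch per tested variable (the representation here is chosen for brevity; the document covers the formal variants):

        # A binary decision diagram as nested tuples: ('var', lo, hi) tests a
        # variable and follows lo on 0 / hi on 1; booleans are the sink nodes.
        def evaluate(node, assignment):
            while not isinstance(node, bool):
                var, lo, hi = node
                node = hi if assignment[var] else lo
            return node

        # Diagram for the Boolean function x AND (y OR z).
        bdd = ('x', False, ('y', ('z', False, True), True))
        print(evaluate(bdd, {'x': 1, 'y': 0, 'z': 1}))  # True
        print(evaluate(bdd, {'x': 0, 'y': 1, 'z': 1}))  # False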
  • Edge-Minimum Walk of Modular Length in Polynomial Time
    • Amarilli Antoine
    • Groz Benoit
    • Wein Nicole
    , 2024, 325, pp.5:1-5:23. We study the problem of finding, in a directed graph, an st-walk of length r mod q which is edge-minimum, i.e., uses the smallest number of distinct edges. Despite the vast literature on paths and cycles with modularity constraints, to the best of our knowledge we are the first to study this problem. Our main result is a polynomial-time algorithm that solves this task when r and q are constants. We also show how our proof technique gives an algorithm to solve a generalization of the well-known Directed Steiner Network problem, in which connections between endpoint pairs are required to satisfy modularity constraints on their length. Our algorithm is polynomial when the number of endpoint pairs and the modularity constraints on the pairs are constants.
    DOI : 10.4230/LIPIcs.ITCS.2025.5
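    The standard device behind modularity constraints is a product construction: search proceeds over states (vertex, length mod q). The sketch below finds some st-walk of length ≡ r (mod q); achieving the edge-minimum objective, i.e. fewest distinct edges, is the harder problem the paper's algorithm addresses.

        # BFS over the product of the graph with Z_q, tracking length mod q.
        from collections import deque

        def walk_mod(adj, s, t, r, q):
            start = (s, 0)
            seen, parents = {start}, {start: None}
            queue = deque([start])
            while queue:
                v, m = queue.popleft()
                if v == t and m == r % q:
                    path, node = [], (v, m)
                    while node is not None:      # rebuild the walk from parents
                        path.append(node[0])
                        node = parents[node]
                    return path[::-1]
                for w in adj.get(v, ()):
                    state = (w, (m + 1) % q)
                    if state not in seen:
                        seen.add(state)
                        parents[state] = (v, m)
                        queue.append(state)
            return None

        adj = {'s': ['a'], 'a': ['b'], 'b': ['a', 't'], 't': []}
        print(walk_mod(adj, 's', 't', r=1, q=2))  # an odd-length walk, if any exists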
  • Survey of Results on the ModPath and ModCycle Problems
    • Amarilli Antoine
    , 2024. This note summarizes the state of what is known about the tractability of the problem ModPath, which asks if an input undirected graph contains a simple st-path whose length satisfies modulo constraints. We also consider the problem ModCycle, which asks for the existence of a simple cycle subject to such constraints. We further discuss the status of these problems on directed graphs and on restricted classes of graphs. We explain connections to the problem variant asking for a constant number of vertex-disjoint such paths or cycles, and discuss links to other related work.
  • Tighter Bounds for Query Answering with Guarded TGDs
    • Amarilli Antoine
    • Benedikt Michael
    , 2025. We consider the complexity of the open-world query answering problem, where we wish to determine the certain answers to conjunctive queries over incomplete datasets specified by an initial set of facts and a set of guarded TGDs. This problem has been well studied in the literature and is decidable, but with a high complexity: namely, it is 2EXPTIME-complete. Further, the complexity shrinks by one exponential when the arity is fixed. We show in this paper how we can obtain better complexity bounds by separately considering the arity of the guard atom and that of the additional atoms, called the side signature. Our results make use of the technique of linearizing guarded TGDs introduced by Gottlob, Manna, and Pieris. Specifically, we present a variant of the linearization process, making use of a restricted version of the chase that we recently introduced. Our results imply that open-world query answering with guarded TGDs can be solved in EXPTIME with arbitrary-arity guard relations if we simply bound the arity of the side signature, and that the complexity drops to NP if we fix the side signature and bound the width of the dependencies.
  • Somewhat homomorphic encryption based on random codes
    • Aguilar-Melchor Carlos
    • Dyseryn Victor
    • Gaborit Philippe
    Designs, Codes and Cryptography, Springer Verlag, 2025, 93 (6), pp.1645-1669. We present a secret-key encryption scheme based on random rank metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext multiplications (i.e. the homomorphic multiplication of a clear plaintext with a ciphertext) as well as a fixed arbitrary number of homomorphic multiplications. We study a candidate bootstrapping algorithm that requires no multiplications but only additions and plaintext multiplications. This latter operation is therefore very efficient in our scheme, whereas bootstrapping is usually the main reason penalizing the performance of other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents securely evaluating the bootstrapping algorithm, as the number of ciphertexts in the key switching material is too large. Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees compared to existing constructions based on highly structured codes. We give concrete parameters for our scheme, showing that it achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed.
    DOI : 10.1007/s10623-024-01555-y
  • Small Yet Configurable: Unveiling Null Variability in Software
    • Tërnava Xhevahire
    • Randrianaina Georges Aaron
    • Lesoil Luc
    • Acher Mathieu
    , 2025. Many small-scale software systems, that is, systems with a limited codebase or binary size, are widely used in everyday tasks, yet their configurability remains largely unexplored. At the same time, studies on modern software systems show a trend toward increasing configurability, alongside growing interest in building immutable, specialized, and reproducible software. In this paper, we present the first empirical study on the extent of configurability in small-scale software systems. By analyzing 108 programs from GNU coreutils, we show that even small programs can exhibit significant compile-time and run-time variability, with up to 76 options per program. We also find a high correlation (0.78) between run-time variability and codebase size. Furthermore, an analysis of the 20 smallest programs across 85 releases reveals that variability tends to increase over time, primarily due to added compile-time variability. This suggests that shifting options between run-time and compile-time, removing unnecessary run-time variability, or resolving compile-time variability early, can help reduce codebase complexity and size. We also introduce, for the first time, the concept of a null-variable software system, one with no configurability beyond mandatory features. Our findings show that high configurability is not exclusive to large-scale systems and that reducing unnecessary variability can lead to lightweight, smaller, and more maintainable software. We hope this effort contributes to designing new software by understanding how to balance its configurability with codebase size.
  • Re-elaborating safety rules in interpersonal contexts: the case of touch in the time of Covid-19
    • Héron Robin
    • Safin Stéphane
    • Baker Michael J
    • Zhang Zhuoming
    • Alvina Jessalyn
    • Lecolinet Éric
    • Détienne Françoise
    Activités, Association Recherches et Pratiques sur les ACTivités, 2025, 22-1. In this article, we study how the health safety rules related to social touch that were established during the pandemic were re-elaborated in interpersonal interactions. In the adaptive safety framework, rules are viewed as resources that are adapted according to context. To better understand how safety rules are re-elaborated in interpersonal contexts, we conducted a study comprising an online questionnaire on social touch habits before and after lockdown in Europe, followed by in-depth interviews with a selection of participants living in France. Our results highlight (1) reduced touch practices, particularly in semi-intimate relationships such as colleagues, friends, close friends, and extended family; (2) two processes for re-elaborating pandemic-related safety rules, explicit deliberation and behavioural alignment, along with reflexive processes; and (3) two dimensions in the justifications given for re-elaborating these rules, preserving physical health / fear of vulnerability and maintaining social relationships, as well as their relative weight depending on the relationship between the interactants (family versus friends). While the recommended health and safety rules were largely followed, leading to a sharp decrease in touch behaviour, it appears that people re-elaborate the rules according to the relationship they have with one another. By re-elaborating the official health and safety rules, people strike a balance between prescribed health safety measures and specific interaction situations in order to preserve their health and/or the relationship.
    DOI : 10.4000/13ra9
  • Functional analysis of multivariate max-stable distributions
    • Costacèque-Cecchi Bruno
    • Decreusefond Laurent
    , 2025. We study the connections existing between max-infinitely divisible distributions and Poisson processes from the point of view of functional analysis. More precisely, we derive functional identities for the former by using well-known results of Poisson stochastic analysis. We also introduce a family of Markov semigroups whose stationary measures are the so-called multivariate max-stable distributions. Their generators thus provide a functional characterization of extreme value distributions in any dimension. Additionally, we give a few functional identities associated to those semigroups, namely a Poincaré identity and commutation relations. Finally, we present a stochastic process whose semigroup corresponds to the one we introduced and that can be expressed using extremal stochastic integrals.
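    For reference, the multivariate max-stability property that these semigroups characterize is the classical one (all operations componentwise on R^d):

        % F is max-stable if, for every n >= 1, there exist vectors a_n > 0 and
        % b_n in R^d such that
        F^{\,n}(a_n x + b_n) = F(x), \qquad x \in \mathbb{R}^d,
        % i.e. the normalized componentwise maximum of n i.i.d. copies has the
        % same law as a single copy; such F arise as limits of normalized maxima.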
  • From the Gradient-Step Denoiser to the Proximal Denoiser and their associated convergent Plug-and-Play algorithms
    • Herfeld Vincent
    • de Senneville Baudouin Denis
    • Leclaire Arthur
    • Papadakis Nicolas
    , 2025. In this paper, we analyze the Gradient-Step Denoiser and its usage in Plug-and-Play algorithms. The Plug-and-Play paradigm of optimization algorithms uses off-the-shelf denoisers to replace a proximity operator or a gradient descent operator of an image prior. Usually this image prior is implicit and cannot be written explicitly, but the Gradient-Step Denoiser is trained to be exactly the gradient descent operator or the proximity operator of an explicit functional while preserving state-of-the-art denoising capabilities.
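    In symbols, following the usual formulation in this line of work (notation ours): the denoiser is constrained to be an explicit gradient step, which is what makes the Plug-and-Play scheme analyzable.

        % Gradient-Step Denoiser: the network realizes the gradient step of an
        % explicit potential g_\sigma,
        D_\sigma = \mathrm{Id} - \nabla g_\sigma ,
        % so that plugging it into a proximal-gradient scheme for
        % min_x f(x) + g(x) yields the Plug-and-Play iteration
        x_{k+1} = D_\sigma\!\bigl(x_k - \tau \nabla f(x_k)\bigr),
        % whose convergence can be studied because g_\sigma is known explicitly.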
  • Extending InSAR2InSAR to Sentinel-1 Data
    • Geara Carla
    • Gelas Colette
    • De Vitry Louis
    • Colin Elise
    • Tupin Florence
    IEEE Geoscience and Remote Sensing Letters, IEEE - Institute of Electrical and Electronics Engineers, 2025. Interferometric SAR parameter estimation is an important and challenging problem. The previously proposed InSAR2InSAR method is one of the few self-supervised methods that aim to estimate InSAR parameters. This method has been shown to outperform state-of-the-art methods on simulated synthetic data, but it remains to be extended to real data. In this letter, we demonstrate that Sentinel-1 images acquired in the Interferometric Wide Swath mode possess the necessary properties to train and apply InSAR2InSAR effectively, and we demonstrate the ability of InSAR2InSAR to process across-track Sentinel-1 interferometric images with state-of-the-art performance.
  • Multi-view 3D surface reconstruction from SAR images by inverse rendering
    • Barbier-Renard Emile
    • Tupin Florence
    • Trouvé Nicolas
    • Denis Loïc
    IEEE Geoscience and Remote Sensing Letters, IEEE - Institute of Electrical and Electronics Engineers, 2025, 22, pp.4008805. 3D reconstruction of a scene from Synthetic Aperture Radar (SAR) images mainly relies on interferometric measurements, which involve strict constraints on the acquisition process. In recent years, progress in deep learning has significantly advanced 3D reconstruction from multiple views in optical imaging, mainly through reconstruction-by-synthesis approaches popularized by Neural Radiance Fields. In this paper, we propose a new inverse rendering method for 3D reconstruction from a few incoherent SAR views, drawing inspiration from optical approaches. First, we introduce a new simplified differentiable SAR rendering model, able to synthesize images from a Digital Surface Model (DSM) and a map of radar backscattering coefficients. Then, we introduce a coarse-to-fine strategy to reconstruct the DSM and the map of backscattering coefficients of a SAR scene starting from only a few SAR views. We use a neural field, i.e. a continuous parametric model based on a Multi-Layer Perceptron, to represent the SAR scene. Finally, we present preliminary results of DSM reconstruction from synthetic SAR images produced by ONERA's physically-based EMPRISE simulator, supporting the potential of applying inverse rendering approaches to SAR data in order to efficiently exploit geometric disparities in future applications such as multi-sensor data fusion.
    DOI : 10.1109/LGRS.2025.3572303
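    The "neural field" ingredient, in miniature: a small MLP maps a ground coordinate to a height and a backscattering coefficient, and a differentiable renderer would compare simulated views of this field with the observed SAR images. Dimensions and details below are illustrative assumptions, not the paper's architecture.

        # A minimal neural field for a SAR scene: (x, y) -> (height, backscatter).
        import torch
        import torch.nn as nn

        class SceneField(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(2, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 2),  # (height, log-backscatter)
                )

            def forward(self, xy):
                h, log_sigma0 = self.mlp(xy).unbind(-1)
                return h, torch.exp(log_sigma0)  # keep backscatter positive

        field = SceneField()
        coords = torch.rand(1024, 2)        # query points on the ground grid
        height, sigma0 = field(coords)
        print(height.shape, sigma0.shape)   # torch.Size([1024]) twice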
  • Efficient thermalization and universal quantum computing with quantum Gibbs samplers
    • Rouzé Cambyse
    • Stilck Franca Daniel
    • Alhambra Álvaro
    , 2024. The preparation of thermal states of matter is a crucial task in quantum simulation. In this work, we prove that a recently introduced, efficiently implementable dissipative evolution thermalizes to the Gibbs state in time scaling polynomially with system size at high enough temperatures for any Hamiltonian that satisfies a Lieb-Robinson bound, such as local Hamiltonians on a lattice. Furthermore, we show the efficient adiabatic preparation of the associated purifications or "thermofield double" states. To the best of our knowledge, these are the first results rigorously establishing the efficient preparation of high-temperature Gibbs states and their purifications. In the low-temperature regime, we show that implementing this family of dissipative evolutions for inverse temperatures polynomial in the system's size is computationally equivalent to standard quantum computations. On a technical level, for high temperatures, our proof makes use of the mapping of the generator of the evolution into a Hamiltonian, and then connecting its convergence to that of the infinite temperature limit. For low temperature, we instead perform a perturbation at zero temperature and resort to circuit-to-Hamiltonian mappings akin to the proof of universality of quantum adiabatic computing. Taken together, our results show that a family of quasi-local dissipative evolutions efficiently prepares a large class of quantum many-body states of interest, and has the potential to mirror the success of classical Monte Carlo methods for quantum many-body systems.
  • Did You Forkget It? Detecting One-Day Vulnerabilities in Open-source Forks With Global History Analysis
    • Lefeuvre Romain
    • Reux Charly
    • Zacchiroli Stefano
    • Barais Olivier
    • Combemale Benoit
    , 2025. Tracking vulnerabilities inherited from third-party open-source software is a well-known challenge, often addressed by tracing the threads of dependency information. However, vulnerabilities can also propagate through forking: a code repository forked after the introduction of a vulnerability, but before it is patched, may remain vulnerable long after the vulnerability has been fixed in the initial repository. History analysis approaches are used to track vulnerable software versions at scale. However, such approaches fail to track vulnerabilities in forks, leaving fork maintainers to identify them manually. This paper presents a global history analysis approach to help software developers identify one-day (known but unpatched) vulnerabilities in forked repositories. Leveraging the global graph of public code, as captured by the Software Heritage archive, our approach propagates vulnerability information at the commit level and performs automated impact analysis. Starting from 7162 repositories with vulnerable commits listed in OSV, we propagate vulnerability information to 2.2 million forks. We evaluate our approach by filtering forks with significant user bases whose latest commit is still potentially vulnerable, manually auditing the code, and contacting maintainers for confirmation and responsible disclosure. This process identified 135 high-severity one-day vulnerabilities, achieving a precision of 0.69, with 9 confirmed by maintainers.
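    The commit-level propagation can be pictured on a toy commit DAG: every commit reachable from the vulnerability-introducing commit is flagged, and the fixing commit stops the propagation, so fork heads that branched off before the fix stay flagged. This is a schematic stand-in for the Software Heritage-scale analysis, not the paper's tooling.

        # Flag descendants of the vulnerability-introducing commit; the fixing
        # commit stops propagation along its branch.
        from collections import deque

        def vulnerable_commits(children, introduced, fixed):
            flagged = set()
            queue = deque([introduced])
            while queue:
                commit = queue.popleft()
                if commit == fixed or commit in flagged:
                    continue                 # the fix (or a visited node) stops here
                flagged.add(commit)
                queue.extend(children.get(commit, ()))
            return flagged

        # main repo: a -> b(vuln) -> c -> d(fix) -> e ; a fork branched at c: c -> f
        children = {'a': ['b'], 'b': ['c'], 'c': ['d', 'f'], 'd': ['e'], 'f': []}
        print(sorted(vulnerable_commits(children, introduced='b', fixed='d')))
        # ['b', 'c', 'f'] -- the fork head f stays vulnerable after the fix lands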
  • Optimal quantum algorithm for Gibbs state preparation
    • Rouzé Cambyse
    • Stilck Franca Daniel
    • Alhambra Alvaro
    , 2024. It is of great interest to understand the thermalization of open quantum many-body systems, and how quantum computers are able to efficiently simulate that process. A recently introduced dissipative evolution, inspired by existing models of open system thermalization, has been shown to be efficiently implementable on a quantum computer. Here, we prove that, at high enough temperatures, this evolution reaches the Gibbs state in time scaling logarithmically with system size. The result holds for Hamiltonians that satisfy the Lieb-Robinson bound, such as local Hamiltonians on a lattice, and includes long-range systems. To the best of our knowledge, these are the first results rigorously establishing the rapid mixing property of high-temperature quantum Gibbs samplers, which is known to give the fastest possible speed for thermalization in the many-body setting. We then apply our result to the problem of estimating partition functions at high temperature, showing an improved performance over previous classical and quantum algorithms.
  • University Rents Enabling Corporate Innovation: Mapping Academic Researcher Coding and Discursive Labour in the R Language Ecosystem
    • Cai Xiaolan
    • O'Neil Mathieu
    • Zacchiroli Stefano
    Journal of Quantitative Description: Digital Media, University of Zurich, 2025, 5. This article explores the role of unrecognised labour in corporate innovation systems via an analysis of researcher coding and discursive contributions to R, one of the largest statistical software ecosystems. Studies of online platforms typically focus on how platform affordances constrain participants' actions, and profit from their labour. We innovate by connecting the labour performed inside digital platforms to the professional employment of participants. Our case study analyses 8,924 R package repositories on GitHub, examining commits and communications. Our quantitative findings show that researchers, alongside non-affiliated contributors, are the most frequent owners of R package repositories and their most active contributors. Researchers are more likely to hold official roles compared to the average, and to engage in collaborative problem-solving and support work during package development. This means there is, underneath the 'recognised' category of star researchers who transition between academia and industry and secure generous funding, an 'unrecognised' category of researchers who not only create and maintain key statistical infrastructure, but also provide support to industry employees, for no remuneration. Our qualitative findings show how this unrecognised labour affects practitioners. Finally, our analysis of the ideology and practice of free, libre and open source software (FLOSS) shows how this ideology and practice legitimate the use of 'university rents' by Big Tech.
    DOI : 10.51685/jqd.2025.025