
Publications

2025

  • Efficiently Solving Saddle Point Problems: Smoothed Duality Gap-Based Stopping Criterion for Convex Settings and a Nonconvex Coordinate Descent Algorithm
    • Walwil Iyad
    , 2025. Mathematical optimization lies at the heart of a vast range of scientific and engineering disciplines, from machine learning, artificial intelligence, and data science to transportation networks, resource allocation, and game theory. Regardless of the domain, such problems are ultimately formulated as optimization problems, with the aim of solving them efficiently, most often through a suitable algorithm. While this thesis does not focus on a specific real-world application, it concerns a more fundamental topic: algorithms. In particular, we study primal-dual algorithms for solving saddle point problems, with the overarching goal of making such algorithms more efficient in practice. Two elements are especially critical to the performance of any algorithm: the stopping criterion that decides when to halt, and the update steps that drive progress toward a solution. Accordingly, this work is divided into two main parts, each devoted to one of these key aspects. In the first part, we reduce the running time of primal-dual algorithms by optimizing their stopping criteria for convex optimization problems under affine equality constraints, which means terminating the algorithm earlier, with fewer iterations. We study the relations between four stopping criteria and show under which conditions they accurately detect optimal solutions: the uncomputable "optimality gap and feasibility error," and the computable "Karush-Kuhn-Tucker error," "projected duality gap," and "smoothed duality gap." Assuming metric sub-regularity or a quadratic error bound, we establish that all of the computable criteria provide practical upper bounds for the optimality gap and approximate it effectively. Furthermore, we establish comparability between some of the computable criteria under certain conditions. Numerical experiments on basis pursuit and quadratic programs with(out) non-negative weights corroborate these findings and show that the smoothed duality gap is more widely applicable than the rest. In the second part, we introduce two novel primal-dual algorithms for nonconvex, nonconcave, and nonsmooth saddle point problems characterized by the weak Minty Variational Inequality (MVI) assumption. The first algorithm, Nonconvex-Nonconcave Primal-Dual Hybrid Gradient (NC-PDHG), extends the well-known Primal-Dual Hybrid Gradient (PDHG) method to this challenging problem class. The second algorithm, Nonconvex-Nonconcave Stochastic Primal-Dual Hybrid Gradient (NC-SPDHG), incorporates a randomly extrapolated primal-dual coordinate descent approach, extending the Stochastic Primal-Dual Hybrid Gradient (SPDHG) algorithm. To our knowledge, designing a coordinate-based algorithm for nonconvex-nonconcave saddle point problems is unprecedented, and proving its convergence posed significant difficulties. This challenge motivated us to use PEPit, a Python-based tool for computer-assisted worst-case analysis of first-order optimization methods. By combining PEPit with automated Lyapunov function techniques, we derived the NC-SPDHG algorithm. Both methods are effective under a mild condition on the weak MVI parameter, achieving convergence with constant step sizes that adapt to the structure of the problem. Numerical experiments on sigmoid regression with squared loss and perceptron-regression problems validate our theoretical findings and show their efficiency compared to existing state-of-the-art algorithms, with linear convergence observed. Additionally, a convex-concave least-squares experiment shows that NC-SPDHG performs competitively with SAGA, a leading algorithm in the smooth convex setting.
  • Energy and Memory-Efficient Artificial Intelligence for On-Device Learning
    • Quélennec Aël
    , 2025. On-device learning enables neural networks to continuously adapt on edge devices, offering enhanced privacy, reduced latency, and improved energy efficiency. However, limited memory and computational resources pose significant challenges, particularly during backpropagation. This thesis addresses these bottlenecks through two complementary approaches: strategic subnetwork selection for efficient fine-tuning and activation map compression for memory-efficient training. The first line of work introduces dynamic methods that adaptively identify important network components for updating. We develop Training Dynamics (TraDy), a framework incorporating heavy-tailed gradient theory for dynamic subnetwork selection under strict memory constraints, and Memory-constrained Dynamic Update (MeDyate), an adaptive channel selection strategy with stochasticity and sampling mechanisms. Experimental validation demonstrates state-of-the-art performance in memory-constrained fine-tuning. The second line of work addresses the activation memory bottleneck in backpropagation through tensor decomposition-based compression. We propose compression using High-Order Singular Value Decomposition (HOSVD) with controlled information loss and convergence guarantees. To overcome HOSVD's computational overhead, we develop Activation Subspace Iteration (ASI), which leverages activation map stability. By performing rank selection once before training and using single subspace iterations with warm starts, ASI achieves significant memory reduction (up to 120× compression) and speedup (up to 91× faster) while maintaining comparable performance. Theoretical contributions include formal analysis of fine-tuning dynamics, convergence guarantees for compressed activation training, and complexity analysis of tensor decomposition methods. Extensive validation across diverse architectures, datasets, and real-world scenarios, including Raspberry Pi implementations, demonstrates practical effectiveness.
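The HOSVD at the core of the compression approach in the entry above can be sketched in a few lines of NumPy: one factor matrix per mode from the SVD of the corresponding unfolding, then a multilinear projection to obtain the core. This is a generic truncated HOSVD under illustrative shapes and ranks, not the thesis' implementation; it omits the one-shot rank selection and warm-started subspace iterations that ASI adds.

```python
import numpy as np

def hosvd_compress(X, ranks):
    """Truncated HOSVD: per-mode factor from the SVD of each unfolding of X,
    core obtained by projecting X onto those factors mode by mode."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def hosvd_reconstruct(core, factors):
    """Multiply the core back by the factor matrices along each mode."""
    X = core
    for mode, U in enumerate(factors):
        X = np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X

rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((n, 2)) for n in (6, 5, 4))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)       # exactly multilinear rank (2, 2, 2)
core, factors = hosvd_compress(X, (2, 2, 2))
Xhat = hosvd_reconstruct(core, factors)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # ~0: exact for a rank-(2,2,2) tensor
```

For activation maps, the truncation ranks would instead be chosen below the true ranks, trading reconstruction error for memory.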
  • Rethinking AI Deployment in IoT Architectures: Granular AI
    • N’kouka Thierry Isaac
    • Aubonnet Tatiana
    • Lemoine Frédéric
    • Simoni Noëmie
    , 2025, pp.752-757. Executing Artificial Intelligence (AI) on Internet of Things (IoT) devices is constrained by limited computation, memory, and energy resources. Techniques such as pruning, quantization, and compression enable lightweight deployment but often compromise model accuracy and responsiveness. Existing distributed AI frameworks also suffer from static workload mapping, bandwidth dependency, and orchestration overhead. To overcome these issues, this work introduces Granular AI, a paradigm that redefines AI deployment across the IoT-edge-fog-cloud continuum. A federated execution model enables adaptive placement, low-latency inference, and optimized resource utilization through an energy-aware scheduling framework. By aligning AI deployment with distributed systems principles, Granular AI and its Functionality-as-a-Service (FaaS) component enable Quality of Service (QoS)-aware deployments, achieving coordinated task placement, controlled execution, and resilient, scalable, real-time intelligence at the extreme edge of IoT ecosystems. A smart thermostat use case illustrates full model decomposition across a smart home setup.
    DOI : 10.1109/AIoT66900.2025.00117
  • A Theoretical Framework for Grokking: Interpolation followed by Riemannian Norm Minimisation
    • Boursier Etienne
    • Pesme Scott
    • Dragomir Radu-Alexandru
    , 2025, 38. We study the dynamics of gradient flow with small weight decay on general training losses F : R^d → R. Under mild regularity assumptions and assuming convergence of the unregularised gradient flow, we show that the trajectory with weight decay λ exhibits a two-phase behaviour as λ → 0. During the initial fast phase, the trajectory follows the unregularised gradient flow and converges to a manifold of critical points of F. Then, at a time of order 1/λ, the trajectory enters a slow drift phase and follows a Riemannian gradient flow minimising the ℓ2-norm of the parameters. This purely optimisation-based phenomenon offers a natural explanation for the grokking effect observed in deep learning, where the training loss rapidly reaches zero while the test loss plateaus for an extended period before suddenly improving. We argue that this generalisation jump can be attributed to the slow norm reduction induced by weight decay, as explained by our analysis. We validate this mechanism empirically on several synthetic regression tasks.
  • Continuous Simplicial Neural Networks
    • Einizade Aref
    • Thanou Dorina
    • Malliaros Fragkiskos D
    • Giraldo Zuluaga Jhony Heriberto
    , 2025. Simplicial complexes provide a powerful framework for modeling higher-order interactions in structured data, making them particularly suitable for applications such as trajectory prediction and mesh processing. However, existing simplicial neural networks (SNNs), whether convolutional or attention-based, rely primarily on discrete filtering techniques, which can be restrictive. In contrast, partial differential equations (PDEs) on simplicial complexes offer a principled approach to capture continuous dynamics in such structures. In this work, we introduce the continuous simplicial neural network (COSIMO), a novel SNN architecture derived from PDEs on simplicial complexes. We provide theoretical and experimental justifications of COSIMO's stability under simplicial perturbations. Furthermore, we investigate the over-smoothing phenomenon, a common issue in geometric deep learning, demonstrating that COSIMO offers better control over this effect than discrete SNNs. Our experiments on real-world datasets demonstrate that COSIMO achieves competitive performance compared to state-of-the-art SNNs in complex and noisy environments. The implementation code is available at https://github.com/ArefEinizade2/COSIMO.
  • The quest for the GRAph Level autoEncoder (GRALE)
    • Krzakala Paul
    • Melo Gabriel
    • Laclau Charlotte
    • d'Alché-Buc Florence
    • Flamary Rémi
    , 2025. Although graph-based learning has attracted a lot of attention, graph representation learning is still a challenging task whose resolution may impact key application fields such as chemistry or biology. To this end, we introduce GRALE, a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. GRALE is trained using an Optimal Transport-inspired loss that compares the original and reconstructed graphs and leverages a differentiable node matching module, which is trained jointly with the encoder and decoder. The proposed attention-based architecture relies on Evoformer, the core component of AlphaFold, which we extend to support both graph encoding and decoding. We show, in numerical experiments on simulated and molecular data, that GRALE enables a highly general form of pre-training, applicable to a wide range of downstream tasks, from classification and regression to more complex tasks such as graph interpolation, editing, matching, and prediction.
  • From stability of Langevin diffusion to convergence of proximal MCMC for non-log-concave sampling
    • Renaud Marien
    • de Bortoli Valentin
    • Leclaire Arthur
    • Papadakis Nicolas
    , 2025. We consider the problem of sampling distributions stemming from non-convex potentials with the Unadjusted Langevin Algorithm (ULA). We prove the stability of the discrete-time ULA to drift approximations under the assumption that the potential is strongly convex at infinity. In many contexts, e.g., imaging inverse problems, potentials are non-convex and non-smooth. The Proximal Stochastic Gradient Langevin Algorithm (PSGLA) is a popular algorithm to handle such potentials. It combines the forward-backward optimization algorithm with a ULA step. Our main stability result, combined with properties of the Moreau envelope, allows us to derive the first proof of convergence of the PSGLA for non-convex potentials. We empirically validate our methodology on synthetic data and in the context of imaging inverse problems. In particular, we observe that PSGLA exhibits faster convergence rates than the Stochastic Gradient Langevin Algorithm for posterior sampling while preserving its restoration properties.
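For reference, the ULA iteration underlying the entry above is x_{k+1} = x_k - γ∇U(x_k) + sqrt(2γ) ξ_k with ξ_k standard Gaussian. A minimal sketch on a toy target, the standard Gaussian (an illustrative strongly convex choice, not the paper's non-convex setting; the step size is also illustrative):

```python
import numpy as np

def ula(grad_U, x0, step, n_iter, rng):
    """Unadjusted Langevin Algorithm: x_{k+1} = x_k - step*grad_U(x_k) + sqrt(2*step)*xi_k."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Target: standard Gaussian, U(x) = ||x||^2 / 2, so grad_U(x) = x.
rng = np.random.default_rng(0)
samples = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iter=20000, rng=rng)
print(samples[5000:].mean(axis=0))  # close to [0, 0]
print(samples[5000:].var(axis=0))   # close to [1, 1], up to the step-size discretization bias
```

PSGLA additionally replaces part of the drift by a proximal (forward-backward) step to handle the non-smooth part of the potential.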
  • Fair Text Classification via Transferable Representations
    • Leteno Thibaud
    • Perrot Michael
    • Laclau Charlotte
    • Gourru Antoine
    • Gravier Christophe
    Journal of Machine Learning Research, Microtome Publishing, 2025, 26 (239), pp.1--47. Group fairness is a central research topic in text classification, where reaching fair treatment between sensitive groups (e.g., women and men) remains an open challenge. We propose an approach that extends the use of the Wasserstein Dependency Measure for learning unbiased neural text classifiers. Given the challenge of distinguishing fair from unfair information in a text encoder, we draw inspiration from adversarial training by inducing independence between representations learned for the target label and those for a sensitive attribute. We further show that Domain Adaptation can be efficiently leveraged to remove the need for access to the sensitive attributes in the dataset we cure. We provide both theoretical and empirical evidence that our approach is well-founded.
  • Assessing the Security of Software Supply Chains: Software Bill of Materials, Threat Propagation, and Logical Attack Graphs
    • Soeiro Luís Fernando de Oliveira
    , 2025. The Software Supply Chain (SSC) is becoming more complex and vulnerable due to the growing diversity of software products and the challenges in tracking their dependencies. The Software Bill of Materials (SBOM), an inventory of components, is proposed as a solution to this complexity. Yet, comprehensive studies on SBOM practices using real-world files are lacking. To facilitate such research, we present the largest SBOM dataset to date, with over 78,000 unique SBOM files deduplicated from more than 94 million public repositories. To leverage SBOMs effectively, especially at scale, industry stakeholders need reliable automated tools for their analysis. We perform an empirical analysis of real-world SBOMs to benchmark eight state-of-the-art tools designed for validating and scoring SBOM quality. We establish independent metrics to evaluate the suitability of SBOMs for specific applications and compare tool outputs with our metrics. Our findings indicate that most SBOMs are not adequately prepared for use, and there is significant disagreement among the tools. Current Software Composition Analysis (SCA) tools, attack trees, and graphs fail to account for the interactions that impact software security within the SSC. We propose a novel method for assessing threat levels in supply chains with the Log Model. This approach identifies the key elements that propagate or are targeted by attacks. A set of rules allows for deducing the threat level of the core elements based on the initial state, SSC interactions, and assumptions about the attackers. MulVal is a state-of-the-art open-source tool for generating logical attack graphs in networked systems. However, it does not adequately address SSC threat propagation, making it less effective against modern SSC attacks like the XZ compromise and the 3CX double SSC attack. We introduce a new MulVal extension that addresses this limitation, featuring a new set of predicates in MulVal syntax to model SSC interactions and integrate them with existing rules, along with 20 example scenarios and a test-case framework.
  • A Bayesian Approach Integrating Optical Flow for Multiple Object Tracking in Biological Imaging
    • Reme Raphael
    , 2025. Understanding neuronal activity in behaving animals requires accurately tracking large neuronal populations over time. In small model organisms such as Hydra vulgaris, calcium imaging makes it possible to observe and track hundreds of neurons simultaneously. However, reliable tracking remains a major challenge due to several factors: strong body deformations, dense neuronal populations, imaging noise, and the intermittent visibility of neurons in calcium indicators (GCaMP). Current algorithms often fail under these conditions, producing fragmented trajectories or incorrect associations that compromise biological interpretation. The goal of this thesis is to develop robust computational tools for multiple object tracking (MOT) adapted to these biological challenges. Our contributions are organized along three complementary axes. (1) Simulation framework (SINETRA): to compensate for the lack of annotated experimental data, we developed SINETRA, a versatile simulator of neurons embedded in deformable tissues. It generates realistic 2D/3D fluorescence videos with ground-truth annotations, reproducing motion and noise similar to those observed in neuronal imaging of Hydra. This synthetic benchmark enables objective evaluation and comparison of tracking algorithms. (2) Tracking with optical-flow-enhanced motion models (KOFT / KOFT-MHT): building on Kalman filters, we introduced trackers that integrate optical flow as a second sensor. These methods improve tracking by better capturing sudden tissue deformations. We then extended this approach to a fully Bayesian Multiple Hypothesis Tracking (MHT) framework that incorporates priors on target dynamics and detection statistics. This probabilistic formalism further increases robustness to missed and false detections, considerably improving accuracy over the state of the art. (3) Tracklet stitching with visual features (VEMC2): in calcium imaging, neurons are detectable only when they are active, leading to fragmented trajectories (tracklets). To address this, we proposed a tracklet stitching method combining positional distances with self-supervised visual features extracted from fluorescent patches. By training a convolutional network with contrastive learning, our method learns discriminative representations that help distinguish neurons. Stitching is solved via a linear assignment problem minimizing a hybrid positional and visual cost, halving stitching errors compared to previous approaches. The methods were extensively validated both on data generated by SINETRA and on real fluorescence recordings of Hydra vulgaris. The results demonstrate improvements in tracking accuracy, robustness to noise, and continuity of the reconstructed neuronal trajectories. Beyond Hydra, the proposed approaches are applicable to other small model organisms such as Caenorhabditis elegans or zebrafish, and more generally to any biological system in which dense, deformable, and intermittently detectable targets must be tracked over time. In addition, we successfully adapted our method to cell tracking, where one of the main challenges is detecting and handling mitosis events. Our method ranked first in the Cell Linking Benchmark, outperforming supervised approaches for this task. This thesis contributes both methodological advances and practical tools to the object tracking community, paving the way for more reliable analysis of biological videos. In particular, it enables efficient tracking of neuronal activity, opening new possibilities for studying neuronal function in behaving animals.
  • Near-sensor testing method for observing radiation-induced soft errors in modern sensors
    • Minelli de Carvalho Matheus
    • Cheymol Benjamin
    • Naviner Lirida
    • Possamai Bastos Rodrigo
    , 2025.
  • Genuine Multipartite Entanglement is Not Necessary for Standard Device-Independent Conference Key Agreement
    • Wooltorton Lewis
    • Brown Peter
    • Colbeck Roger
    Physical Review Letters, American Physical Society, 2025, 135 (22), pp.220803. Conference key agreement (CKA) aims to establish shared, private randomness among many separated parties in a network. Device-independent (DI) CKA is a variant in which no assumptions are placed on the nature of the source, or the measurements performed by each party. So far, DICKA protocols largely fall into two categories: those that rely on violating a joint Bell inequality using genuinely multi-partite entangled states, and those that concatenate many bipartite protocols. The question of whether a hybrid protocol exists, where a multi-partite Bell inequality can be violated using only bipartite entanglement, was asked by Grasselli et al. in [Quantum 7, 980, (2023)]. We answer this question affirmatively, by constructing an asymptotically secure DICKA protocol achieving the same rate as the concatenation of bipartite DIQKD, yet relying on a single joint Bell violation. Our results prompt further discussion on the benefits of multi-partite entanglement for DICKA over its bipartite alternative, and we give an overview of different arguments for near-term devices.
    DOI : 10.48550/arXiv.2503.21290
  • Finding Software Supply Chain Attack Paths with Logical Attack Graphs
    • Soeiro Luís
    • Robert Thomas
    • Zacchiroli Stefano
    , 2025. Cyberattacks are becoming increasingly frequent and sophisticated, often exploiting the software supply chain (SSC) as an attack vector. Attack graphs provide a detailed representation of the sequence of events and vulnerabilities that could lead to a successful security breach in a system. MulVal is a widely used open-source tool for logical attack graph generation in networked systems. However, its current lack of support for capturing and reasoning about SSC threat propagation makes it unsuitable for addressing modern SSC attacks, such as the XZ compromise or the 3CX double SSC attack. To address this limitation, we propose an extension to MulVal that integrates SSC threat propagation analysis with existing network-based threat analysis. This extension introduces a new set of predicates within the familiar MulVal syntax, enabling seamless integration. The new facts and interaction rules model SSC assets, their dependencies, interactions, compromises, additional security mechanisms, initial system states, and known threats. We explain how this integration operates in both directions and demonstrate the practical application of the extension.
  • The Economic and Environmental Sustainability of Digital Commons. Lessons from the 2023 Debian project survey
    • Broca Sébastien
    • O'Neil Mathieu
    • Cai Xiaolan
    • Daly Angela
    • Rikap Cecilia
    • Shulz Sebastien
    • Zacchiroli Stefano
    , 2025. Digital commons are shared information and knowledge resources such as data, software and cultural content. They are produced and managed for collective use, and to be modified and redistributed as needed. Using results from a 2023 survey of the Debian community, this new DCPC report addresses questions such as: Is the free, libre and open source software (FLOSS) development model economically sustainable? What role can FLOSS play in the transition to more environmentally sustainable production and consumption? With the benefit of hindsight, the first survey of the Debian community, carried out in 2016, whose preliminary results were published in 2017 in the Journal of Peer Production, represented the first manifestation of a core DCPC program of work: to empirically map which categories of workers (e.g., firm employees, foundation employees, researchers, volunteers) were performing which duties in the digital commons production process, with a view to increasing the recognition of these commons and these voluntary workers by industry and society. The 2023 edition had a more clearly militant intent than its predecessor, as it also sought to investigate community opinion about the predatory practices and environmental sustainability of large IT firms as well as about alternative models. A key finding is the rejection by the community of the use of restrictive licences to incentivise environmental action (e.g., preventing entities that engage in unsustainable environmental practices from using Debian). Other interesting findings can be drawn from comments responding to the question ‘Are the main obstacles to reducing environmental impacts in your workplace economic (for example, their cost), organisational (for example, lack of support from management), or technical (for example, difficulty of implementation)?’ These detailed responses allow us to understand the extent and ramifications of the obstacles.
  • Equivariant Denoisers for Plug and Play Image Restoration
    • Renaud Marien
    • Guez Eliot
    • Leclaire Arthur
    • Papadakis Nicolas
    , 2025. One key ingredient of image restoration is to define a realistic prior on clean images to complete the missing information in the observation. State-of-the-art restoration methods rely on a neural network to encode this prior. Typical image distributions are invariant to some set of transformations, such as rotations or flips. However, most deep architectures are not designed to represent an invariant image distribution. Recent works have proposed to overcome this difficulty by including equivariance properties within a Plug-and-Play paradigm. In this work, we propose two unified frameworks named Equivariant Regularization by Denoising (ERED) and Equivariant Plug-and-Play (EPnP) based on equivariant denoisers and stochastic optimization. We analyze the convergence of the proposed algorithms and discuss their practical benefit.
  • Consensus-Based Optimization Beyond Finite-Time Analysis
    • Bianchi Pascal
    • Dragomir Radu-Alexandru
    • Priser Victor
    , 2025. We analyze a zeroth-order particle algorithm for the global optimization of a non-convex function, focusing on a variant of Consensus-Based Optimization (CBO) with small but fixed noise intensity. Unlike most previous studies, which are restricted to finite horizons, we investigate its long-time behavior with fixed parameters. In the mean-field limit, a quantitative Laplace principle shows exponential convergence to a neighborhood of the minimizer x*. For finitely many particles, a block-wise analysis yields explicit error bounds: individual particles achieve long-time consistency near x*, and the global best particle converges to x*. The proof technique combines a quantitative Laplace principle with block-wise control of Wasserstein distances, avoiding the exponential blow-up typical of Grönwall-based estimates.
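For context, the basic CBO dynamics analyzed in the entry above move each particle toward a Gibbs-weighted consensus point, with noise scaled by the distance to that point. A minimal sketch with a small, fixed noise intensity on an illustrative tilted double well (the parameters and test function are ours, not the paper's):

```python
import numpy as np

def cbo_minimize(f, n_particles, dim, n_steps, dt=0.05, lam=1.0, sigma=0.3, alpha=30.0, seed=0):
    """Consensus-Based Optimization with small but fixed noise intensity sigma.
    Each particle drifts toward a Gibbs-weighted consensus point; the noise is
    proportional to the particle's distance from that point."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, size=(n_particles, dim))
    for _ in range(n_steps):
        fx = f(X)
        w = np.exp(-alpha * (fx - fx.min()))        # stabilized Gibbs weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point
        diff = X - v
        noise = rng.standard_normal(X.shape)
        X = X - lam * dt * diff + sigma * np.sqrt(dt) * np.linalg.norm(diff, axis=1, keepdims=True) * noise
    return v

# Tilted double well: local minima near x = +1 and x = -1; the global minimizer is near x = -1.
f = lambda X: (X[:, 0] ** 2 - 1) ** 2 + 0.3 * X[:, 0]
v = cbo_minimize(f, n_particles=200, dim=1, n_steps=400)
print(v)  # a point in the basin of the global minimizer near x = -1
```

The paper's contribution concerns precisely this regime: what happens to such dynamics in the long run when sigma stays fixed instead of being sent to zero.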
  • A Maximum Length Sequence-Based Method for Robust Round-Trip Latency Estimation in Online Digital Audio Workstations
    • Gil Panal J M
    • Richard Gaël
    • David Aurélien
    , 2025. Accurate estimation of latency when working with digital audio equipment is critical for the precise operation of certain applications. This is particularly true for Digital Audio Workstations (DAWs) and other tools used in the creation and editing of audio, especially music. These systems require exact synchronization or alignment of tracks, which is essential for the mixing process. Latency is an inherent phenomenon in audio capture and playback. Although it may sometimes be minimal, it always varies with the device, operating system, and audio configuration. The undesired effect introduced by latency, specifically referred to in this context as round-trip latency, manifests as a delay between the audio input and the corresponding output. The most effective way to address this issue is through prior measurement to enable proper compensation. Various methods exist for performing this measurement, generally based on the playback and recording of acoustic signals. This article presents an existing method applied in a novel way within the domain of audio and web browsers, based on the use of a Maximum Length Sequence (MLS) signal, which is commonly used in room impulse response characterization. To validate its effectiveness and identify the limitations of the proposed approach, multiple tests and experiments were conducted on different devices. Results were compared across various browsers and operating systems, and the proposed solution was benchmarked against the methods employed by existing online DAWs. The implementation of the proposed method is available as part of the Hi-Audio online platform, an open-source, browser-based DAW, providing a practical demonstration of its applicability and integration in real-world web audio environments.
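The measurement principle behind the entry above can be sketched as follows: generate an MLS with a linear-feedback shift register, play it through the loop, and read the round-trip latency off the peak of the circular cross-correlation, which is sharp because an MLS has a nearly ideal autocorrelation. This is a minimal sketch, not the Hi-Audio implementation; the sequence length, taps, delay, and noise level are illustrative.

```python
import numpy as np

def mls(n_bits, taps=(7, 6)):
    """Maximum length sequence of length 2**n_bits - 1 from a Fibonacci LFSR,
    mapped to +/-1. The default taps (x^7 + x^6 + 1) are maximal for n_bits = 7."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        seq.append(state[-1])
        state = [fb] + state[:-1]
    return 2 * np.array(seq) - 1  # map {0, 1} -> {-1, +1}

def estimate_latency(played, recorded):
    """Round-trip latency in samples: lag of the circular cross-correlation peak."""
    n = len(played)
    corr = np.fft.ifft(np.fft.fft(recorded, n) * np.conj(np.fft.fft(played, n))).real
    return int(np.argmax(corr))

sig = mls(7)  # length 127
delay = 23
rng = np.random.default_rng(0)
rec = np.roll(sig, delay) + 0.5 * rng.standard_normal(len(sig))  # delayed, noisy loopback
print(estimate_latency(sig, rec))  # 23
```

In a real browser measurement the recorded signal is an acoustic or loopback capture rather than a synthetic roll, but the correlation-peak readout is the same.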
  • Is Phase Really Needed for Weakly-Supervised Dereverberation?
    • Rodrigues Marius
    • Bahrman Louis
    • Badeau Roland
    • Richard Gaël
    , 2025. In unsupervised or weakly-supervised approaches for speech dereverberation, the target clean (dry) signals are considered to be unknown during training. In that context, evaluating to what extent information can be retrieved from the sole knowledge of reverberant (wet) speech becomes critical. This work investigates the role of the reverberant (wet) phase in the time-frequency domain. Based on Statistical Wave Field Theory, we show that late reverberation perturbs phase components with white, uniformly distributed noise, except at low frequencies. Consequently, the wet phase carries limited useful information and is not essential for weakly supervised dereverberation. To validate this finding, we train dereverberation models under a recent weak supervision framework and demonstrate that performance can be significantly improved by excluding the reverberant phase from the loss function.
  • Extending Timed Automata with Clock Derivatives
    • Cortés David
    • Leneutre Jean
    • Malvone Vadim
    • Ortiz James
    • Schobbens Pierre-Yves
    , 2025, 16194, pp.99-119. The increasing complexity of safety-critical systems in domains like aerospace, robotics, and industrial control demands precise modeling and verification methods. While Timed Automata (TA) and Distributed Timed Automata (DTA) are standard formalisms for real-time systems, they assume synchronized clocks or lack the expressiveness to capture clock drift and indirect timing dependencies. To overcome these limits, we propose Timed Automata with Clock Derivatives (idTA), extending TA with rate constraints to model independent clock evolution. We also introduce DLν, a temporal logic over Multi-Timed Labeled Transition Systems (MLTS), capturing properties of systems with unsynchronized clocks. We show that model checking for DLν is EXPTIME-complete. Finally, we present MIMETIC, a model checking tool supporting idTA and DLν, providing a platform for analyzing clock interactions and for the verification of Distributed Real-time Systems (DRTS).
    DOI : 10.1007/978-3-032-10794-7_6
  • Huygens' Metasurfaces Analysis Through Electric and Magnetic Surface Impedance Modeling
    • Medrar Ghiles
    • Lepage Anne Claire
    • Begaud Xavier
    , 2025.
  • Generation of frequency entanglement with an effective quantum dot-waveguide two-photon quadratic interaction
    • Meguebel Mohamed
    • Federico Maxime
    • Felicetti Simone
    • Belabas Nadia
    • Fabre Nicolas
    Optica Quantum, Optica Publishing Group, 2025, 3 (6), pp.617-633. Light-matter interactions with quantum dots have been extensively studied to harness key quantum properties of photons, such as indistinguishability and entanglement. In this theoretical work, we exploit the atomic-like four-level structure of a quantum dot coupled to a waveguide to model a shaping frequency entangling gate (FrEnGATE) for single photons. Our approach is based on the identification of input frequencies and an atomic level structure for which frequency-dependent one-photon transitions are adiabatically eliminated, while frequency-dependent two-photon transitions are resonantly enhanced. The frequency entanglement performance of the gate is analyzed using a Schmidt decomposition for continuous variables, revealing a trade-off between entanglement generation efficiency and entanglement quality. We further demonstrate the use of the FrEnGATE for the generation of entangled frequency qudit states.
    DOI : 10.1364/OPTICAQ.571592
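    The Schmidt decomposition mentioned in the abstract above has a standard numerical counterpart: discretizing a two-photon joint spectral amplitude on a frequency grid and taking its singular value decomposition yields the Schmidt coefficients. The sketch below is a generic illustration using a hypothetical correlated-Gaussian amplitude, not the model or parameters of the paper.

    ```python
    import numpy as np

    # Hypothetical discretized joint spectral amplitude f(w_s, w_i) on a grid;
    # a correlated Gaussian (narrow along w_s + w_i, broad along w_s - w_i)
    # is used purely for illustration.
    n = 200
    w = np.linspace(-3.0, 3.0, n)
    ws, wi = np.meshgrid(w, w, indexing="ij")
    jsa = np.exp(-((ws + wi) ** 2) / 0.1 - ((ws - wi) ** 2) / 2.0)

    # Schmidt decomposition of a bipartite amplitude = SVD of its matrix.
    sv = np.linalg.svd(jsa, compute_uv=False)
    schmidt = (sv / np.linalg.norm(sv)) ** 2  # normalized coefficients, sum to 1

    # Schmidt number K: effective number of entangled frequency modes.
    K = 1.0 / np.sum(schmidt ** 2)
    print(f"Schmidt number K = {K:.2f}")  # K > 1 signals frequency entanglement
    ```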
  • Altered Histories in Version Control System Repositories: Evidence from the Trenches
    • Rapaport Solal
    • Pautet Laurent
    • Tardieu Samuel
    • Zacchiroli Stefano
    , 2025. Version Control Systems (VCS) like Git allow developers to locally rewrite recorded history, e.g., to reorder and suppress commits or specific data in them. These alterations have legitimate use cases, but become problematic when performed on public branches that have downstream users: they break push/pull workflows, challenge the integrity and reproducibility of repositories, and create opportunities for supply chain attackers to sneak nefarious changes into them. We conduct the first large-scale investigation of Git history alterations in public code repositories. We analyze 111 million repositories archived by Software Heritage, which preserves VCS histories even across alterations. We find history alterations in 1.22 million repositories, for a total of 8.7 million rewritten histories. We categorize changes by where they happen (which repositories, which branches) and what is changed in them (files or commit metadata). Conducting two targeted case studies, we show that altered histories recurrently change licenses retroactively, or are used to remove "secrets" (e.g., private keys) committed by mistake. As these behaviors correspond to bad practices (in project governance and security management, respectively) that software recipients might want to avoid, we introduce GitHistorian, an automated tool that developers can use to spot and describe history alterations in public Git repositories.
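    As context for the kind of alteration the paper studies: a downstream user can detect that a public branch's history was rewritten by checking whether a previously fetched head is still an ancestor of the current head. The sketch below is a generic illustration using standard Git plumbing, not the paper's GitHistorian tool.

    ```python
    import subprocess

    def history_rewritten(repo: str, old_head: str, new_head: str) -> bool:
        """Return True if old_head is no longer an ancestor of new_head,
        i.e. the branch was force-pushed with a rewritten history."""
        # `git merge-base --is-ancestor A B` exits 0 iff A is an ancestor of B.
        result = subprocess.run(
            ["git", "-C", repo, "merge-base", "--is-ancestor", old_head, new_head]
        )
        return result.returncode != 0
    ```

    In practice, `old_head` would come from a previously recorded reference, for instance an archived copy of the repository such as the Software Heritage snapshots the paper analyzes.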
  • Temporal change of outdoor RF-EMF levels in four European countries: a microenvironmental measurement study
    • Beláčková Lea
    • Veludo Adriana Fernandes
    • Aminzadeh Reza
    • van Bladel Han
    • Griffon Vincent
    • Cardis Elisabeth
    • Dongus Stefan
    • Eeftens Marloes
    • Guxens Mònica
    • Joseph Wout
    • de Llobet Patricia
    • Mazet Paul
    • van Torre Patrick
    • Thielens Arno
    • Vermeulen Roel
    • Wiart Joe
    • Röösli Martin
    • Huss Anke
    Environmental Research, Elsevier, 2025, 285, pp.122315 (1-8). Introduction: Over the past two decades, the amount of transmitted mobile data has increased rapidly. It is unknown whether the implementation of the new technologies enabling this has resulted in changes in outdoor radio-frequency electromagnetic field (RF-EMF) exposure. Therefore, microenvironmental measurements were used to investigate temporal trends in RF-EMF exposure between 2016 and 2023 in the Netherlands, Switzerland, Belgium and Spain, following a similar protocol across campaigns. Microenvironmental measurements refer to exposure measurements performed in predefined small areas, each distinguished by a specific function. This allowed us to compare exposure trends between countries and years. Methods: The data was collected as part of the ACCEDERA (2016–2018), ETAIN (2023), and GOLIAT (2023) projects, by repeatedly walking the same routes with RF-EMF exposimeters. Identical microenvironments were identified in each country, and measurements of the exposure from mobile base stations, mobile phones and the total exposure were compared across years. Results: Comparing measurements from 6 to 14 unique microenvironments in each country, our data did not suggest significant changes in the exposure from mobile base stations (total downlink exposure) between baseline measurements in 2016 and follow-up measurements in 2023 for the four countries. Across all countries and years, the median values of the mobile base station exposure ranged from 0.11 mW/m² (Switzerland, 2023) to 0.62 mW/m² (Netherlands, 2018). There was no consistent trend in the individual microenvironments across the countries. Conclusions: Our measurements of RF-EMF outdoor exposure levels across the included microenvironment groups do not indicate a change in exposure levels between 2016 and 2023, despite an increase in mobile data traffic by a factor of 8 in Western Europe. (10.1016/j.envres.2025.122315)
    DOI : 10.1016/j.envres.2025.122315
  • Comprehensive Measurement‐Based Assessment of Downlink RF‐EMF Exposure in Urban Environments: Multi‐Method Analysis and Intercomparison
    • Wang Shanshan
    • Zhang Yarui
    • Liu Yukun
    • Liu Jiang
    • Conil Emmanuelle
    • Jawad Ourouk
    • Samaras Theodoros
    • Ourak Lamine
    • Wiart Joe
    Bioelectromagnetics, Wiley, 2025, 46 (8), pp.1-12. This paper presents a comprehensive measurement-based assessment of radio-frequency (RF) electromagnetic field (EMF) exposure levels in a French city. Three types of assessment methods were used to collect measurement data: drive tests (DT), spot measurements, and sensor networks. The DT measurements were conducted with a portable spectrum analyzer (Tektronix RSA 306B) connected to a 3-axis antenna mounted on the roof of a vehicle; the DT system continuously recorded frequency-dependent electric field (E-field) values along a pre-defined outdoor route. The spot measurements were done in the same region, covered by DT, with both broadband and frequency-selective systems. Additionally, 19 sensors were installed on streetlamps in the same part of the city to measure the broadband E-field level. The overall statistical analysis of the raw data shows good agreement on RF-EMF exposure levels across the three types of measurements. A distance-based moving average was then applied to remove random noise from the DT data, with the optimized window size explored using the Kolmogorov-Smirnov test. The smoothed DT data show a good correlation with nearby spot measurement values, as well as with base station antenna (BSA) density. Specific fifth-generation (5G) spot measurements, performed with and without traffic-attracting downloads, demonstrate the impact of beamforming on exposure levels in 5G new radio (NR) bands. Spot measurements were then used to build an exposure map with the kriging method, and the kriging prediction from the trained model was further compared with DT. Furthermore, the temporal variations observed in the sensor network were analyzed in relation to distance from the nearest BSA, revealing an inverse proportional relationship between E-field level and proximity to the nearest BSA. This study shows good reliability in assessing RF-EMF exposure levels using different systems. The advantages and limitations of the different systems are also demonstrated by intercomparison between the data sets. (10.1002/bem.70033)
    DOI : 10.1002/bem.70033
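    The distance-based moving average described in the abstract above can be sketched as follows. Function and parameter names are illustrative only, and the window size (here in meters) would in practice be tuned, e.g., via the Kolmogorov-Smirnov comparison the authors describe.

    ```python
    import numpy as np

    def distance_moving_average(dist_m, field_vpm, window_m):
        """Smooth field samples taken along a route by replacing each sample
        with the mean of all samples whose route distance lies within
        +/- window_m / 2 of it."""
        dist_m = np.asarray(dist_m, dtype=float)
        field_vpm = np.asarray(field_vpm, dtype=float)
        half = window_m / 2.0
        out = np.empty_like(field_vpm)
        for i, d in enumerate(dist_m):
            mask = np.abs(dist_m - d) <= half  # samples inside the window
            out[i] = field_vpm[mask].mean()
        return out
    ```

    Averaging over a distance window rather than a fixed number of samples keeps the smoothing scale physically meaningful when the vehicle speed, and hence the sample spacing, varies along the route.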
  • Potentially Problematic Word Usages and How to Detect Them: A Survey
    • Garí Soler Aina
    • Labeau Matthieu
    • Clavel Chloé
    , 2025, pp.1-24. We introduce and frame the concept of potentially problematic word usages (PPWUs): word occurrences that are likely to cause communication breakdowns of a semantic nature. While much research has been devoted to lexical complexity, ambiguity, vagueness and related issues, no work has attempted to fully capture the intricate nature of PPWUs. We review linguistic factors, datasets and metrics that can be helpful for PPWU detection. We also discuss challenges to their study, such as their complexity and subjectivity, and highlight the need for future work on this phenomenon. (10.18653/v1/2025.starsem-1.35)
    DOI : 10.18653/v1/2025.starsem-1.35