
Publications

2025

  • DreamPet: Text Driven Controllable 3D Animal Generation using Gaussian Splatting
    • Ramakrishnan Vysakh
    • Nag Sauradip
    • Parakkat Amal Dev
    • Zhu Xiatian
    • Dutta Anjan
    , 2024. Realistic 3D animal generation from text prompts is a significant yet challenging task. Traditional approaches, which use score distillation sampling to optimize 3D formats like meshes or neural fields, often suffer from a lack of detail and are designed for fixed shapes. To address both limitations, in this work, we introduce DreamPet, a novel framework that explores a retrieval-augmented approach tailored for score distillation and efficiently produces high-quality 3D animal models featuring fine-grained geometry and lifelike textures. Our key insight is that both the expressiveness of 2D diffusion models and the geometric consistency of 3D animal assets can be fully leveraged by employing semantically relevant assets directly within the optimization process. Specifically, our method features 1) a Shape-Aware SDS for optimizing appearance and geometry to ensure structural consistency per category, and 2) a Category-Aware refinement module that addresses the over-saturation issue and further eliminates floating artefacts based on the animal category to produce realistic textures. Extensive experiments demonstrate the competitive quality of our method, rendering 3D animals under diverse scenarios.
  • Quantized Precoding Under ACLR Constraints with FIR-DACs for Downlink MU-MIMO
    • Schlegel Nicolas
    • Jabbour Chadi
    • Valcarce Alvaro
    • Wantiez Eric
    , 2025, pp.548-553. Scaling massive multiuser multiple-input multiple-output (MIMO) to larger antenna numbers constrains each chain in terms of power consumption and implementation. In this paper, the use of low-resolution digital-to-analog converters (DACs) enabling these large-scale arrays in downlink multiuser massive MIMO is studied with a focus on reaching adjacent channel leakage ratio (ACLR) targets. A quantization-aware precoder with constraints on out-of-band (OOB) emissions is designed based on the multi-block alternating direction method of multipliers (ADMM) algorithm. With conventional DACs, simulation results indicate that these constraints do not provide sufficient ACLR. In response, finite impulse response (FIR) DACs are introduced as a reconfigurable alternative to analog filters. Numerical results show that ACLR targets can be reached, even at 1 bit, at a lower hardware cost than high-resolution DACs. (10.1109/EuCNC/6GSummit63408.2025.11036903)
    DOI : 10.1109/EuCNC/6GSummit63408.2025.11036903
  • Tail Index Estimation for Discrete Heavy-Tailed Distributions with Application to Statistical Inference for Regular Markov Chains
    • Bertail Patrice
    • Clémençon Stephan
    • Fernández Carlos
    Test, Spanish Society of Statistics and Operations Research/Springer, 2025, 34, pp.691-713. It is the purpose of this paper to investigate the issue of estimating the regularity index $\beta>0$ of a discrete heavy-tailed r.v. $S$, \textit{i.e.} a r.v. $S$ valued in $\mathbb{N}^*$ such that $\mathbb{P}(S>n)=L(n)\cdot n^{-\beta}$ for all $n\geq 1$, where $L:\mathbb{R}^*_+\to \mathbb{R}_+$ is a slowly varying function. Such discrete probability laws, sometimes referred to as generalized Zipf's laws, are commonly used to model rank-size distributions after a preliminary range segmentation in a wide variety of areas such as \textit{e.g.} quantitative linguistics, social sciences or information theory. As a first go, we consider the situation where inference is based on independent copies $S_1,\; \ldots,\; S_n$ of the generic variable $S$. The estimator $\widehat{\beta}$ we propose can be derived by means of a suitable reformulation of the regularly varying condition, replacing $S$'s survivor function by its empirical counterpart. Under mild assumptions, a non-asymptotic bound for the deviation between $\widehat{\beta}$ and $\beta$ is established, as well as limit results (consistency and asymptotic normality). Beyond the i.i.d. case, the inference method proposed is extended to the estimation of the regularity index of a regenerative $\beta$-null recurrent Markov chain. Since the parameter $\beta$ can then be viewed as the tail index of the (regularly varying) distribution of the return time of the chain $X$ to any (pseudo-) regenerative set, in this case, the estimator is constructed from the successive regeneration times. Because the durations between consecutive regeneration times are asymptotically independent, we can prove that the consistency of the estimator promoted is preserved. In addition to the theoretical analysis carried out, simulation results provide empirical evidence of the relevance of the inference technique proposed. (10.1007/s11749-025-00975-9)
    DOI : 10.1007/s11749-025-00975-9
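The regular-variation condition quoted in the abstract, P(S > n) = L(n) · n^(-β), already suggests a simple plug-in estimate: the slowly varying factor roughly cancels in ratios of the survivor function, so log2(F̄(n)/F̄(2n)) ≈ β once F̄ is replaced by its empirical counterpart. The sketch below illustrates that idea only; it is not the authors' exact estimator, and the function name and parameters are placeholders:

```python
import numpy as np

def tail_index_estimate(samples, n0):
    """Plug-in estimate of beta from P(S > n) = L(n) * n**(-beta):
    the slowly varying factor roughly cancels in the ratio, so
    beta ~ log2(Fbar(n0) / Fbar(2 * n0)), with Fbar the empirical
    survivor function. Illustrative only, not the paper's estimator."""
    s = np.asarray(samples)
    fbar = lambda t: np.mean(s > t)   # empirical survivor function
    return float(np.log2(fbar(n0) / fbar(2 * n0)))

# Zipf-like sample with beta = 1.5 via inverse transform: P(S > n) = n**(-beta)
rng = np.random.default_rng(0)
beta = 1.5
samples = np.ceil(rng.uniform(size=200_000) ** (-1.0 / beta)).astype(int)
print(tail_index_estimate(samples, n0=20))  # should be near 1.5
```

The choice of the threshold n0 trades bias (too small: the slowly varying part does not cancel) against variance (too large: few exceedances), which is part of what the paper's non-asymptotic analysis makes precise.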
  • Estimation de la consommation énergétique de la 5G en France basée sur des données réelles et des modèles analytiques
    • Ghali Meriem
    • Busson Anthony
    • Coupechoux Marceau
    , 2025. Climate change has prompted various sectors, including the digital sector, to assess their environmental impact, with energy consumption being a key factor. With the deployment of 5G networks, understanding their energy consumption is essential for designing more sustainable infrastructures. This study proposes a model for estimating the energy consumption of 5G networks, integrating both fixed and load-dependent components. We apply this model to the current 5G deployment in France. Unlike previous studies, we use open, public data from ARCEP and INSEE, which makes the parameterization of our model more realistic and reproducible. We highlight how energy consumption is influenced by factors such as traffic load, network densification, and regional differences between urban and rural areas. Our results reveal that: i) although the number of 5G users grows on average by 1.5 million per quarter, 5G network load remains low for now; ii) 5G energy consumption is closely tied to infrastructure deployment, with base stations and AAUs currently overprovisioned relative to network load in France; iii) rural areas exhibit higher energy consumption per user due to their low population density; iv) the baseline energy consumption of a 5G base station is significantly higher than its transmission energy consumption, underscoring the importance of improving this component.
  • NickPay, an Auditable, Privacy-Preserving, Nickname-Based Payment System
    • Quispe Guillaume
    • Jouvelot Pierre
    • Memmi Gerard
    , 2025. In this paper, we describe the motivation, design, security properties, and a prototype implementation of NickPay, a new privacy-preserving yet auditable payment system built on top of the Ethereum blockchain platform. NickPay offers a strong level of privacy to participants and prevents successive payment transfers from being linked to their actual owners. It provides the transparency that blockchains ensure while preserving the possibility for a trusted authority to access sensitive information, e.g., for audit purposes or compliance with financial regulations. NickPay builds upon the Nicknames for Group Signatures (NGS) scheme, a new signing system based on dynamic "nicknames" for signers that extends the schemes of group signatures and signatures with flexible public keys. NGS enables identified group members to expose their flexible public keys, thus allowing direct and natural applications such as auditable private payment systems, NickPay being a blockchain-based prototype of these. (10.1109/ICBC64466.2025.11114708)
    DOI : 10.1109/ICBC64466.2025.11114708
  • Une approche unifiée des activités de conception système et conception d’architecture pour intégrer la cybersécurité au tout début des phases de conception
    • Cincilla Pierpaolo
    • Guitton-Ouhamou Patricia
    • Guillot Bertrand
    • Barki Amira
    • Mangé Jean-Baptiste
    • Apvrille Ludovic
    • Chevalier Pascal
    MISC - Multi-System & Internet Security Cookbook, Diamond Connect, 2025, Hors-série Numéro 32 (32). https://connect.ed-diamond.com/misc/mischs-032/vers-une-integration-harmonisee-des-activites-cybersecurite-dans-l-ingenierie-systeme
  • EU Digital Technologies and Policy Conference (EUDTP 2025) Abstracts and Contributions
    • Cordero-Fuertes Juan-Antonio
    • Alam Mehwish
    • Blazy Olivier
    • Alombert Anne
    • Díaz-Rodríguez Natalia
    • Curelariu Teodora
    • Ashok Pratiksha
    • Ciuhu Calina
    • de Luca Stefano
    • Feijóo Claudio
    • Gaubiene Neringa
    • Ghaddar Bissan
    • Giglietto Fabio
    • Gomà Rafael
    • González-Fuster Gloria
    • Grumulaitis Arturas
    • Guintchev Petia
    • Jacob Romain
    • Janciute Laima
    • Kalogeiton Vicky
    • Kariniotakis Georges
    • Knaster Juan
    • Koch Luise
    • Kreer Philipp
    • Krüger Kim
    • Leblanc-Albarel Diane
    • Manner Jukka
    • Mcstay Andrew
    • Ortiz de Zúñiga María
    • Nivaggioli Patrice
    • Popovic Ivanka
    • Ramos Simona
    • Roth Markus
    • Spangenberg Jochen
    • Cripps Christopher
    , 2025.
  • Large Language Models as Search Engines: Societal Challenges
    • Sadeddine Zacchary
    • Maxwell Winston
    • Varoquaux Gaël
    • Suchanek Fabian M.
    Sigir Forum, Association for Computing Machinery (ACM), 2025, 59 (1), pp.1-35. Large Language Models (LLMs) may one day replace search engines as the primary portal to information on the Web. In this article, we investigate the societal challenges that such a change could bring. We focus on the roles of LLM Providers, Content Creators, and End Users, and identify 15 types of challenges. With each, we show current mitigation strategies, both from the technical perspective and the legal perspective. We also discuss the impact of each challenge and point out future research opportunities. Large Language Models (LLMs) are increasingly used as portals to information on the Web. Google is rolling out AI overviews above its search results, building upon its language models; Microsoft's Bing search engine allows sending the query to Microsoft's Co-pilot; DuckDuckGo and Brave Search offer AI-assisted answers; and browsers such as Opera, Brave, and Edge have built-in AI-plugins for query answering. These developments are changing the way users access information: instead of querying the Web with a search engine, reading one or several result pages, and finding the information, people can now ask their question to the AI assistant, which will synthesize an answer for the user from Web sources. This means that LLMs have the potential to severely disrupt the search engine ecosystem, which has been comparatively stable for the last 25 years, and to completely change the way the Web is used. (10.1145/3769733.376974)
    DOI : 10.1145/3769733.376974
  • A Novel Mixture Model for Characterizing Human Aiming Performance Data
    • Li Yanxi
    • Young Derek S
    • Rioul Olivier
    • Gori Julien
    Statistical Modelling, SAGE Publications, 2025, 25 (3), pp.236-254. Fitts’ law is often employed as a predictive model for human movement, especially in the field of human-computer interaction. Models with an assumed Gaussian error structure are usually adequate when applied to data collected from controlled studies. However, observational data (often referred to as data gathered ‘in the wild’) typically display noticeable positive skewness relative to a mean trend, as users do not routinely try to minimize their task completion time. As such, the exponentially modified Gaussian (EMG) regression model has been applied to aimed-movement data. However, it is also of interest to reasonably characterize those regions where a user likely was not trying to minimize their task completion time. In this article, we propose a novel model with a two-component mixture structure—one Gaussian and one exponential—on the errors to identify such a region. An expectation-conditional-maximization (ECM) algorithm is developed for estimation of such a model and some properties of the algorithm are established. The efficacy of the proposed model, as well as its ability to inform model-based clustering, are addressed in this work through extensive simulations and an insightful analysis of a human aiming performance study. (10.1177/1471082X241234139)
    DOI : 10.1177/1471082X241234139
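The skew structure this abstract describes is easy to reproduce numerically: draw movement times from a Fitts-style mean trend plus errors that are Gaussian with some probability and exponential otherwise. The sketch below uses made-up parameters (a, b, the mixing weight, and the error scales are illustrative assumptions, not the paper's fitted values) and is not the ECM estimation procedure itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fitts-style mean trend MT = a + b * ID; a and b are illustrative values.
a, b = 0.2, 0.15
ids = rng.uniform(1, 6, size=5000)   # index of difficulty per trial
mean_mt = a + b * ids

# Two-component error mixture: Gaussian errors when the user minimizes
# task completion time, exponential (positively skewed) errors otherwise.
p_gauss = 0.8
is_gauss = rng.uniform(size=ids.size) < p_gauss
errors = np.where(is_gauss,
                  rng.normal(0.0, 0.05, size=ids.size),
                  rng.exponential(0.4, size=ids.size))
mt = mean_mt + errors

# The mixture reproduces the positive skew seen in data gathered 'in the wild'.
skew = ((mt - mt.mean()) ** 3).mean() / mt.std() ** 3
print(f"sample skewness: {skew:.2f}")
```

In the paper's setting, the estimation task runs in the other direction: given mt and ids, the ECM algorithm recovers the trend, the error parameters, and the per-trial component memberships used for model-based clustering.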
  • Character recognition in Byzantine seals with deep neural networks
    • Rageau Théophile
    • Likforman-Sulem Laurence
    • Fiandrotti Attilio
    • Eyharabide Victoria
    • Caseau Béatrice
    • Cheynet Jean-Claude
    Digital Applications in Archaeology and Cultural Heritage, Elsevier, 2025, 37, pp.e00403-1:e00403-11. Seals are small coin-shaped artifacts, mostly made of lead, held with strings to seal letters. This work presents the first attempt towards automatic reading of inscribed text on Byzantine seal images. Byzantine seals are generally decorated with iconography on the obverse side and Greek text on the reverse side. Text may include the sender's name, position in the Byzantine aristocracy, and elements of prayers. Both text and iconography are precious literary sources that wait to be exploited electronically, so the development of computerized systems for interpreting seal images is of paramount importance. This work's contribution is hence a deep, two-stage character-reading pipeline for transcribing Byzantine seal images. A first deep convolutional neural network (CNN) detects characters in the seal (character localization). A second convolutional network reads the localized characters (character classification). Finally, a diplomatic transcription of the seal is provided by post-processing the two network outputs. We provide an experimental evaluation of each CNN in isolation and of both CNNs in combination. All performances are evaluated by cross-validation. Character localization achieves a mean average precision (mAP) greater than 0.9 at an intersection-over-union threshold of 0.5. Classification of characters achieves an accuracy greater than 0.92. Such performance compares favorably to similar tasks such as the recognition of inscribed characters on ancient coins. At the transcription level, we provide novel performance results in terms of Character Error Rate. This is novel for seal images and differs from results on isolated character recognition. (10.1016/j.daach.2025.e00403)
    DOI : 10.1016/j.daach.2025.e00403
  • Exposing Go Hidden Bugs: A Novel Concolic Framework
    • Gorna Karolina
    • Iooss Nicolas
    • Seurin Yannick
    • Khatoun Rida
    , 2025. The widespread adoption of the Go programming language in infrastructure backends and blockchain projects has heightened the need for improved security measures. Established techniques such as unit testing, static analysis, and program fuzzing provide foundational protection mechanisms. Although symbolic execution tools have made significant contributions, opportunities remain to address the complexities of Go's runtime and concurrency model. In this work, we present Zorya, a novel methodology leveraging concrete and symbolic (concolic) execution to evaluate Go programs comprehensively. By systematically exploring execution paths to uncover vulnerabilities beyond conventional testing, symbolic execution offers distinct advantages, and coupling it with concrete execution mitigates the path explosion problem. Our solution employs Ghidra's P-Code as an intermediate representation (IR). This implementation detects runtime panics in the TinyGo compiler and supports both generic and custom invariants. Furthermore, P-Code's generic IR nature enables analysis of programs written in other languages such as C. Future enhancements may include intelligent classification of concolic execution logs to identify vulnerability patterns.
  • Deliverable D2.1: Study of Timing Anomalies Documented in the Literature
    • Brandner Florian
    • Asăvoae Mihail
    • Bechennec Jean-Luc
    • Carle Thomas
    • Cassé Hugues
    • Faucou Sébastien
    • Rieg Lionel
    , 2025, pp.1-32. In this report, we study a phenomenon that may have a considerable impact on the computation of the WCET of real-time tasks: Timing Anomalies (TAs). These phenomena may make WCET analysis much harder, or even impossible. Even worse, they also threaten the validity of schedulability tests, which often manipulate WCET values under the hypothesis that no TAs may occur. In the following, we provide a brief introduction to the aspects needed to understand TAs, notably architecture, WCET analysis, and an intuitive definition of TAs. The remaining sections of the report detail related work on the subject of TAs, with a specific focus on formal definitions of the phenomenon.
  • Rate of Convergence in the Functional Central Limit Theorem for Stable Processes
    • Coutin Laure
    • Decreusefond Laurent
    • Huang Lorick
    Potential Analysis, Springer Verlag, 2025. In this article, we quantify the functional convergence of the rescaled random walk with heavy tails to a stable process. This generalizes the Generalized Central Limit Theorem for stable random variables in finite dimension. We show that provided we have a control between the random walk or the limiting stable process and their respective affine interpolation, we can lift the rate of convergence obtained for multivariate distributions to a rate of convergence in some functional spaces. (10.1007/s11118-025-10215-2)
    DOI : 10.1007/s11118-025-10215-2
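The convergence being quantified can be illustrated numerically: with i.i.d. symmetric steps whose tails decay like x^(-alpha) for alpha in (0, 2), the rescaled walk S_n / n^(1/alpha) converges to an alpha-stable law rather than a Gaussian. The small simulation below is illustrative only; the value of alpha and the Pareto-type step law are assumptions, not the paper's setting, and no rates are computed here:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5  # tail index in (0, 2): the Gaussian CLT fails, a stable limit applies

def scaled_walk_endpoints(n, trials):
    """Endpoints S_n / n**(1/alpha) of random walks whose steps are
    symmetric Pareto: P(|X| > x) = x**(-alpha) for x >= 1."""
    magnitudes = rng.uniform(size=(trials, n)) ** (-1.0 / alpha)
    signs = np.sign(rng.uniform(size=(trials, n)) - 0.5)
    return (signs * magnitudes).sum(axis=1) / n ** (1.0 / alpha)

# The distribution of the rescaled endpoint should stabilize as n grows,
# reflecting convergence to the alpha-stable limit.
for n in (100, 1000):
    x = scaled_walk_endpoints(n, trials=5_000)
    print(n, np.round(np.percentile(x, [25, 50, 75]), 2))
```

The paper's contribution is the functional version of this picture: lifting such finite-dimensional convergence to a rate of convergence for the whole path, via a control between the walk, the stable process, and their affine interpolations.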
  • Efficient 5G Resource Block Scheduling Using Action Branching and Transformer Networks
    • Nérondat Sylvain
    • Leturc Xavier
    • Ciblat Philippe
    • Le Martret Christophe
    , 2025, pp.1-6. This paper presents a deep reinforcement learning-based scheduling solution tailored for 5G networks. The proposed neural network architecture, utilizing an encoder-only transformer and action branching, is designed to handle large action spaces for resource block allocation in wireless environments. By training on scenarios with a variable number of user equipments, the solution generalizes well across different configurations. Experimental results in Nokia's wireless suite environment demonstrate superior performance in packet loss, compared to heuristics. (10.1109/ICMLCN64995.2025.11140453)
    DOI : 10.1109/ICMLCN64995.2025.11140453
  • Railway track monitoring using distributed acoustic sensing (DAS) with standard telecom cable
    • Chedid Alex
    • Kabalan Ali
    • Hammi Tarik
    • Garbini Gabriel Papaiz
    • Gabet Renaud
    , 2025, 13639, pp.348. We demonstrate the ability to detect ground vibrations in a railway environment using two Distributed Acoustic Sensing (DAS) configurations. The study employs the standard deviation of the differential phase over time (STDv) as a metric to evaluate the detection capabilities and spatiotemporal localization accuracy of both systems. A demonstration of train tracking is presented using a standard telecom optical fiber cable with a PEHD (HDPE) sheath, with a detection range extending up to 40 km. (10.1117/12.3062236)
    DOI : 10.1117/12.3062236
  • 6G FR3 Band-limited Based DPD using Low-Resolution Σ∆ Feedback Receiver
    • Zeng Haoyang
    • Ghonaim Ahmed
    • Pham Dang-Kièn Germain
    • Vasilevski Michel
    • Mohellebi Reda
    • Aboushady Hassan
    • Jabbour Chadi
    , 2025, pp.1-5. This paper presents the integration of a band-limited memory polynomial (BL-MP) digital pre-distortion (DPD) model with low-resolution Σ∆-based feedback receivers, specifically targeting 6G FR3 carrier aggregation using a 400 MHz OFDM 64QAM signal. The performance is evaluated using two power amplifiers (PAs)—Doherty and Class AB—with distinct nonlinearity profiles. The study compares the Generalized Memory Polynomial (GMP) model with the BL-MP model. Significant improvements in error vector magnitude (EVM), approximately 0.7 dBm at the 3% threshold, are observed for both PAs relative to the GMP-LS model. Additionally, adjacent channel leakage ratio (ACLR) enhancements of around 7 dB are achieved, further surpassing GMP-LS performance. These findings demonstrate the adaptability and effectiveness of the BL-MP model, delivering substantial performance gains over conventional pre-distortion techniques across varied PA architectures. The results highlight the potential of employing a BL-MP DPD with a low-resolution feedback receiver, offering an optimal solution for DPD applications in 6G FR3. (10.1109/ISCAS56072.2025.11043783)
    DOI : 10.1109/ISCAS56072.2025.11043783
  • The Smoothed Duality Gap as a Stopping Criterion
    • Walwil Iyad
    • Fercoq Olivier
    Mathematical Programming Computation, Springer, 2025. We reduce the running time of primal-dual algorithms for solving convex optimization problems under affine equality constraints by improving their stopping criteria, i.e., terminating the algorithm earlier, with fewer iterations. We study the relations between four stopping criteria and show under which conditions they accurately detect optimal solutions: the uncomputable "optimality gap and feasibility error", and the computable "Karush-Kuhn-Tucker error", "projected duality gap", and "smoothed duality gap". Assuming metric sub-regularity or a quadratic error bound, we establish that all of the computable criteria provide practical upper bounds for the optimality gap and approximate it effectively. Furthermore, we establish comparability between some of the computable criteria under certain conditions. Numerical experiments on basis pursuit and on quadratic programs with and without non-negative weights corroborate these findings and show the superior stability of the smoothed duality gap over the rest. (10.1007/s12532-025-00284-0)
    DOI : 10.1007/s12532-025-00284-0
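To make the notion of a computable stopping criterion concrete, the sketch below runs a basic primal-dual (Arrow-Hurwicz-style) iteration on a toy equality-constrained quadratic program and stops on the Karush-Kuhn-Tucker error, one of the computable criteria the abstract mentions. This illustrates the setting only; it is not the paper's smoothed duality gap, its algorithms, or its step-size analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy instance of the setting: min 0.5 * ||x - c||^2  s.t.  A x = b
m, d = 5, 20
A = rng.normal(size=(m, d))
b = A @ rng.normal(size=d)        # guarantees a feasible point exists
c = rng.normal(size=d)

# Basic primal-dual iteration, stopped with the computable
# Karush-Kuhn-Tucker error: ||grad_x L(x, y)|| + ||A x - b||.
x, y = np.zeros(d), np.zeros(m)
tau, sigma = 0.1, 0.1
for k in range(100_000):
    x = x - tau * ((x - c) + A.T @ y)   # gradient step on the Lagrangian
    y = y + sigma * (A @ x - b)         # ascent step on the multiplier
    kkt_error = np.linalg.norm((x - c) + A.T @ y) + np.linalg.norm(A @ x - b)
    if kkt_error < 1e-8:                # computable stopping criterion
        break
print(f"stopped after {k + 1} iterations, feasibility {np.linalg.norm(A @ x - b):.1e}")
```

The "optimality gap and feasibility error" criterion would need the unknown optimal value, which is exactly why the paper focuses on computable surrogates such as this KKT error and the smoothed duality gap.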
  • Efficient adaptation of deep neural networks for semantic segmentation in space applications
    • Olivi Leonardo
    • Santero Mormile Edoardo
    • Tartaglione Enzo
    Scientific Reports, Nature Publishing Group, 2025, 15 (1), pp.18046 (1-14). In recent years, the application of Deep Learning techniques has shown remarkable success in various computer vision tasks, paving the way for their deployment in extraterrestrial exploration. Transfer learning has emerged as a powerful strategy for addressing the scarcity of labeled data in these novel environments. This paper represents one of the first efforts in evaluating the feasibility of employing adapters toward efficient transfer learning for rock segmentation in extraterrestrial landscapes, mainly focusing on lunar and Martian terrains. Our work suggests that the use of adapters, strategically integrated into a pre-trained backbone model, can be successful in reducing both bandwidth and memory requirements for the target extraterrestrial device. In this study, we considered two memory-saving strategies: layer fusion (to reduce the inference overhead to zero) and an “adapter ranking” (to also reduce the transmission cost). Finally, we evaluate these results in terms of task performance, memory, and computation on embedded devices, evidencing trade-offs that open the road to more research in the field. The code will be open-sourced upon acceptance of the article. (10.1038/s41598-025-99192-5)
    DOI : 10.1038/s41598-025-99192-5
  • "Two Means to an End Goal": Connecting Explainability and Contestability in the Regulation of Public Sector AI
    • Schmude Timothée
    • Yurrita Mireia
    • Alfrink Kars
    • Le Goff Thomas
    • Viard Tiphaine
    , 2025. Explainability and its emerging counterpart contestability have become important normative and design principles for the trustworthy use of AI as they enable users and subjects to understand and challenge AI decisions. However, the regulation of AI systems spans technical, legal, and organizational dimensions, producing a multiplicity in meaning that complicates the implementation of explainability and contestability due to the difficulty of defining them. Resolving this conceptual ambiguity requires specifying and comparing the meaning of both principles across regulation dimensions, disciplines, and actors. This process, here defined as translation, is essential to provide guidance on the principles' realization. To this end, we present the findings of a semi-structured interview study with 14 interdisciplinary AI regulation experts. We report on the experts' understanding of the intersection between explainability and contestability in public AI regulation, their advice for a decision subject and a public agency in a welfare allocation AI use case, and their perspectives on the connections and gaps within the research landscape. We provide differentiations between descriptive and normative explainability, judicial and non-judicial channels of contestation, and individual and collective contestation action. We further outline three main translation processes pertaining to the alignment of top-down and bottom-up regulation, the assignment of responsibility for interpreting regulations, and the establishment of interdisciplinary collaboration. Our contributions include an empirically grounded conceptualization of the intersection between explainability and contestability and recommendations on implementing these principles in public institutions. We believe our contributions can inform policy-making and regulation of these core principles and enable more effective and equitable design, development, and deployment of trustworthy public AI systems.
  • A gem5-Based Framework for RISC-V Security Analysis
    • Khan Mahreen
    • Mushtaq Maria
    • Pacalet Renaud
    • Apvrille Ludovic
    , 2025. Evict+Spec+Time: Exploiting Out-of-Order Execution to Improve Cache Attacks [cheng2024] is an enhanced variant of the Evict+Time attack combining eviction, speculation, and timing, originally tested on x86. We tested it on the RISC-V architecture.
  • Annealed Winner-Takes-All for Motion Forecasting
    • Xu Yihong
    • Letzelter Victor
    • Chen Mickaël
    • Zablocki Éloi
    • Cord Matthieu
    , 2025. In autonomous driving, motion prediction aims at forecasting the future trajectories of nearby agents, helping the ego vehicle to anticipate behaviors and drive safely. A key challenge is generating a diverse set of future predictions, commonly addressed using data-driven models with Multiple Choice Learning (MCL) architectures and Winner-Takes-All (WTA) training objectives. However, these methods face initialization sensitivity and training instabilities. Additionally, to compensate for limited performance, some approaches rely on training with a large set of hypotheses, requiring a post-selection step during inference to significantly reduce the number of predictions. To tackle these issues, we take inspiration from annealed MCL, a recently introduced technique that improves the convergence properties of MCL methods through an annealed Winner-Takes-All loss (aWTA). In this paper, we demonstrate how the aWTA loss can be integrated with state-of-the-art motion forecasting models to enhance their performance using only a minimal set of hypotheses, eliminating the need for the cumbersome post-selection step. Our approach can be easily incorporated into any trajectory prediction model normally trained using WTA and yields significant improvements. To facilitate the application of our approach to future motion forecasting models, the code is made publicly available: https://github.com/valeoai/MF_aWTA.
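The core idea of an annealed Winner-Takes-All loss can be sketched in a few lines: replace the hard minimum over hypotheses with a temperature-controlled softmin, so that high temperatures spread the training signal across hypotheses and the hard WTA loss is recovered as the temperature goes to zero. The NumPy toy below is one plausible reading of that mechanism, not the authors' implementation (which is available at the linked repository); the hypothesis values and temperatures are made up:

```python
import numpy as np

def awta_loss(hyps, target, T):
    """Annealed WTA (sketch): weight each hypothesis's squared error by a
    softmin of temperature T. As T -> 0 only the best hypothesis keeps
    weight, recovering the hard Winner-Takes-All loss."""
    errs = np.sum((hyps - target) ** 2, axis=-1)   # (K,) per-hypothesis error
    w = np.exp(-(errs - errs.min()) / T)           # shift by min for stability
    w = w / w.sum()                                # annealed assignment weights
    return float(np.sum(w * errs))

hyps = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.0]])  # K = 3 hypotheses
target = np.array([0.9, 1.1])
print(awta_loss(hyps, target, T=1.0))    # high T: error spread over hypotheses
print(awta_loss(hyps, target, T=1e-3))   # low T: ~ min squared error (0.02)
```

During training, the temperature is annealed from high to low, which is what mitigates the initialization sensitivity of plain WTA: early on, every hypothesis receives gradient; later, specialization takes over.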
  • From trustworthy AI to technical standards - The distinctive European approach to artificial intelligence regulation
    • Gornet Mélanie
    , 2025. Europe has been at the forefront of Artificial Intelligence (AI) ethics, developing non-binding charters and principles on "trustworthy" AI. The term "trustworthiness" is used by Europe to designate AI systems that are "ethical", "legal" and "technically robust". Europe has supplemented these non-binding principles with a binding regulation on AI, known as the AI Act. The AI Act is one of the world's first comprehensive frameworks for regulating AI systems across different industries and use cases, focusing on safety and protection of fundamental rights. The AI Act relies, for operational questions, mostly on technical standards that are in the course of development. The European approach thus combines three layers of regulatory instruments: AI ethics charters, the AI Act and technical standards. The standardisation approach is traditional in product safety, but under the AI Act, standards are also expected to address fundamental rights concerns. To avoid making hard normative choices, standardisation organisations are playing it safe, developing standards which remain at a high level. Moreover, under the AI Act, the responsibility for developing technical standards is delegated to private standardisation bodies, where large multinational companies are over-represented and hold significant influence. These standards are also often locked behind paywalls, although the situation may evolve in the coming years after recent case law from the Court of Justice of the European Union. Standardisation experts therefore face pressures to deliver standards on time and of good quality.
  • Equivariant Denoisers for Image Restoration
    • Renaud Marien
    • Leclaire Arthur
    • Papadakis Nicolas
    , 2025, pp.227 - 240. One key ingredient of image restoration is to define a realistic prior on clean images to complete the missing information in the observation. State-of-the-art restoration methods rely on a neural network to encode this prior. Moreover, typical image distributions are invariant to some set of transformations, such as rotations or flips. However, most deep architectures are not designed to represent an invariant image distribution. Recent works have proposed to overcome this difficulty by including equivariance properties within a Plug-and-Play paradigm. In this work, we propose a unified framework named Equivariant Regularization by Denoising (ERED) based on equivariant denoisers and stochastic optimization. We analyze the convergence of this algorithm and discuss its practical benefit. (10.1007/978-3-031-92366-1_18)
    DOI : 10.1007/978-3-031-92366-1_18
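A standard way to obtain an equivariant denoiser from an ordinary one, in the spirit of the equivariant Plug-and-Play line of work this abstract builds on, is to average the denoiser over a finite transformation group such as the rotations and flips it mentions. The sketch below shows that generic averaging construction; it is not necessarily the exact ERED symmetrization, and `blur` is a hypothetical stand-in for a learned denoiser:

```python
import numpy as np

def equivariant_denoiser(denoise, img):
    """Average a denoiser over the 8 symmetries of the square (4 rotations
    x optional horizontal flip) to enforce equivariance to that group."""
    outs = []
    for k in range(4):
        for flip in (False, True):
            t = np.rot90(img, k)                    # apply group element g
            t = np.fliplr(t) if flip else t
            out = denoise(t)                        # denoise transformed image
            out = np.fliplr(out) if flip else out   # apply g^{-1} to the output
            outs.append(np.rot90(out, -k))
    return np.mean(outs, axis=0)

# With a toy denoiser that is already equivariant (a scalar shrinkage),
# the averaging is a no-op, as expected.
blur = lambda im: 0.5 * im   # hypothetical stand-in for a learned denoiser
img = np.random.default_rng(4).normal(size=(8, 8))
print(np.allclose(equivariant_denoiser(blur, img), blur(img)))  # True
```

For a non-equivariant network the averaged output differs from the raw one, and it is this symmetrized operator that the ERED framework plugs into its stochastic optimization scheme with convergence guarantees.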
  • Generation of frequency entanglement with an effective quantum dot-waveguide two-photon quadratic interaction
    • Meguebel Mohamed
    • Federico Maxime
    • Felicetti Simone
    • Belabas Nadia
    • Fabre Nicolas
    , 2025. Light–matter interactions with quantum dots have been extensively studied to harness key quantum properties of photons, such as indistinguishability and entanglement. In this theoretical work, we exploit the atomic-like four-level structure of a quantum dot coupled to a waveguide to model a shaping frequency entangling gate (ShaFrEnGa) for single photons. Our approach is based on the identification of input frequencies and an atomic level structure for which frequency-dependent one-photon transitions are adiabatically eliminated, while frequency-dependent two-photon transitions are resonantly enhanced. The frequency entanglement performance of the gate is analyzed using a Schmidt decomposition for continuous variables, revealing a trade-off between entanglement generation efficiency and entanglement quality. We further demonstrate the use of the ShaFrEnGa for the generation of entangled frequency qudit states.
  • Computer vision-based foot contact detection for long jump using a monocular normal-speed camera
    • Fang Yangtao
    • Gan Qi
    • Nguyen Sao Mai
    , 2025.