
Publications

2025

  • Cartographier l'intelligence artificielle : collecte et analyse d'un corpus de chartes et manifestes sur l'éthique de l'IA
    • Viard Tiphaine
    • Delarue Simon
    • Gornet Mélanie
    • Boritchev Maria
    , 2025. We present a corpus of 436 charters and manifestos on the ethics of artificial intelligence, together with a discursive analysis of it, which sheds light on the issues around which the social world of AI ethics is structured. We make the corpus, its documentation, and the code for its analysis publicly available.
  • Clustering cell nuclei on microgrooves for disease diagnosis using deep learning
    • Roellinger Bettina
    • Thenier Francois
    • Leclech Claire
    • Coirault Catherine
    • Angelini Elsa
    • Barakat Abdul I
    Scientific Reports, Nature Publishing Group, 2025, 15 (1), pp.22476 (1-13). Various diseases including laminopathies and certain types of cancer are associated with abnormal nuclear mechanical properties that influence cellular and nuclear deformations in complex environments. Recently, microgroove substrates designed to mimic the anisotropic topography of the basement membrane have been shown to induce 3D nuclear deformations in various adherent cell types. Importantly, these deformations are different in myoblasts derived from laminopathy patients from those in cells derived from normal individuals. Here we assess the ability of a Variational Autoencoder (VAE) and a Gaussian Mixture Model (GMM) to cluster patches of nuclei of both wildtype myoblasts and myoblasts with laminopathy-associated mutations cultured on microgroove substrates, and we explore the impact of image processing parameters on clustering performance. We show that a standard VAE with GMM is able to cluster nuclei based on their morphologies and degrees of deformations and that these clusters correspond to either wildtype myoblasts or myoblasts with LMNA mutations. The current results suggest that combining deep learning techniques with microgroove substrates enables automatic classification of nuclear deformations and thus provides a promising approach for easy and rapid diagnosis of pathologies that involve abnormalities in nuclear deformation. (10.1038/s41598-025-05788-2)
    DOI : 10.1038/s41598-025-05788-2
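A rough sketch of the clustering step described above, a Gaussian Mixture Model fitted on VAE latent codes, might look as follows; the synthetic latent vectors stand in for a trained VAE encoder, and all sizes and parameters are illustrative rather than the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for VAE latent codes of nucleus patches: two synthetic groups
# (e.g. wildtype vs. LMNA-mutant morphologies), well separated in latent space.
rng = np.random.default_rng(0)
latents = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 8)),   # "wildtype-like" codes
    rng.normal(loc=3.0, scale=0.3, size=(50, 8)),   # "mutant-like" codes
])

# Fit a two-component Gaussian Mixture Model on the latent codes and
# read off the cluster assignment for each patch.
gmm = GaussianMixture(n_components=2, random_state=0).fit(latents)
labels = gmm.predict(latents)
```

Unlike k-means, the GMM yields soft posterior probabilities (`gmm.predict_proba`), which is useful when nuclear deformations vary continuously between the two morphologies.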
  • Sustainable Social Building Assistant (SSoBA): Transforming Energy Use via Digital Twins, Environmental Sensing and Occupant Behaviour Change
    • Ortner Frederick Peter
    • Yalcinkaya Sezgi
    • Chewon Han
    • Perrault Simon
    • Lek Kwan Wei
    • Govindarajan Praveen
    , 2025. Reduction of building energy consumption is an urgent priority in the effort to eliminate greenhouse gas (GHG) emissions and mitigate the effects of the climate crisis. Improvements in building operations are crucial to reducing building energy use, and intelligent building systems increasingly provide centralised real-time energy evaluation and optimisation in support of this goal. Centralised building management systems (BMS), however, are poorly suited to user-controlled buildings where occupants are enabled to determine ventilation and temperature levels in an effort to improve comfort and wellbeing. Building digital twins, if user-friendly and accessible, can help non-expert occupants understand building functions and unseen environmental factors and support them to improve building energy efficiency. This paper shares results of the design development and test-bedding of a building digital twin system, the Smart Social Building Assistant (SSoBA), created for use by non-expert building occupants. To evaluate SSoBA, a six-week field study was conducted in a university living lab: three weeks without interface interaction (baseline) and three weeks with access to SSoBA visualizations and nudges. The design science research (DSR) method was implemented to respond to the research question: ‘what design elements and functions must a building digital twin advance to effectively engage occupants in modifying their behaviour to support sustainable operations?’ A detailed analysis of design element prototypes permitted an interrogation of the hierarchical and functional relationship between, on one hand, the presentation of spatial information in the form of maps and building 3D models, and on the other hand, temporal data representing building function, environmental data, and social relationships among occupants. While more testing is required, preliminary results comparing user behaviours before and after use of the SSoBA interface reveal promising improvements in energy efficiency. On the basis of the presented results, recommendations are made for future work in the area of building digital twins and sustainable behaviour change for the built environment. (10.25442/hku.29365262)
    DOI : 10.25442/hku.29365262
  • Neurosymbolic Methods for Dynamic Knowledge Graphs
    • Alam Mehwish
    • Gesese Genet Asefa
    • Paris Pierre-Henri
    , 2025. Knowledge graphs (KGs) have recently been used for many tools and applications, making them rich resources in structured format. However, in the real world, KGs grow due to the additions of new knowledge in the form of entities and relations, making these KGs dynamic. This chapter formally defines several types of dynamic KGs and summarizes how these KGs can be represented. Additionally, many neurosymbolic methods have been proposed for learning representations over static KGs for several tasks such as KG completion and entity alignment. This chapter further focuses on neurosymbolic methods for dynamic KGs with or without temporal information. More specifically, it provides an insight into neurosymbolic methods for dynamic (temporal or non-temporal) KG completion and entity alignment tasks. It further discusses the challenges of current approaches and provides some future directions. (10.3233/FAIA250223)
    DOI : 10.3233/FAIA250223
  • Neurosymbolic Methods for Rule Mining
    • Ławrynowicz Agnieszka
    • Galárraga Luis
    • Alam Mehwish
    • Jaulmes Bérénice
    • Zeman Václav
    • Kliegr Tomáš
    , 2025, pp.1-22. In this chapter, we address the problem of rule mining, beginning with essential background information, including measures of rule quality. We then explore various rule mining methodologies, categorized into three groups: inductive logic programming, path sampling and generalization, and linear programming. Following this, we delve into neurosymbolic methods, covering topics such as the integration of deep learning with rules, the use of embeddings for rule learning, and the application of large language models in rule learning. (10.3233/FAIA250224)
    DOI : 10.3233/FAIA250224
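The measures of rule quality mentioned above can be illustrated with two standard ones, support and standard confidence, on a toy fact set; the rule and all facts below are invented for illustration.

```python
# Toy knowledge base of (subject, relation, object) facts.
facts = {
    ("alice", "livesIn", "uk"), ("alice", "speaks", "english"),
    ("bob", "livesIn", "uk"), ("bob", "speaks", "english"),
    ("carol", "livesIn", "uk"),                       # no language fact known
    ("dave", "livesIn", "fr"), ("dave", "speaks", "french"),
}

def rule_quality(facts, body_rel, body_obj, head_rel, head_obj):
    """Support and standard confidence of the rule
    head_rel(x, head_obj) <- body_rel(x, body_obj)."""
    body = {s for s, r, o in facts if r == body_rel and o == body_obj}
    head = {s for s, r, o in facts if r == head_rel and o == head_obj}
    support = len(body & head)        # groundings satisfying body and head
    confidence = support / len(body)  # share of body matches that also match head
    return support, confidence

# speaks(x, english) <- livesIn(x, uk): 2 of the 3 UK residents match the head.
sup, conf = rule_quality(facts, "livesIn", "uk", "speaks", "english")
```

Systems such as AMIE refine confidence with variants like PCA confidence to cope with the open-world assumption, but the counting pattern is the same.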
  • Prototyping Custom Hardware Performance Counters in gem5 Simulator: A Framework for RISC-V Side-Channel Attack Assessment
    • Khan Mahreen
    • Mushtaq Maria
    • Pacalet Renaud
    • Apvrille Ludovic
    , 2025. Microarchitectural side-channel attacks exploit performance-centric optimizations such as caches and branch predictors to leak sensitive data. While RISC-V processors are increasingly adopted, tools to evaluate their resilience to these attacks remain limited. This paper presents the first framework for designing custom hardware performance counters (HPCs) in the gem5 simulator for RISC-V, targeting the assessment of side-channel attacks. Although gem5 itself can provide valuable statistics to assess attacks, real-world systems do not have access to such detailed insights. To address this gap, we extend gem5 by creating a new component to prototype HPCs for RISC-V. We modify gem5 to create three HPCs to evaluate branch mispredictions, and L1 instruction and data cache misses. These parameters are critical for identifying microarchitectural side-channel attack patterns. To validate our framework, we test it under various workload conditions on Flush+Fault [7], a recent RISC-V attack. Our simulations detect a 491.4% increase in branch mispredictions and a 231.4% rise in instruction cache misses during Flush+Fault execution compared to baseline benign OS processes. These deviations directly align with its exploitation mechanisms of branch mistraining and cache interference. The framework correlates these HPC deviations with attack attempts, enabling vulnerability analysis of components such as cache and branch predictors. Furthermore, it lays the foundation for a prototype evaluation platform that integrates custom HPCs into gem5 for testing realistic detection mechanisms and countermeasures. To our knowledge, no prior work implements custom HPCs in gem5 for RISC-V security analysis. Our goal is to establish a RISC-V security evaluation platform for analyzing attack patterns, testing detection mechanisms, and guiding secure hardware design.
  • Neural Velocity for hyperparameter tuning
    • Dalmasso Gianluca
    • Bragagnolo Andrea
    • Tartaglione Enzo
    • Fiandrotti Attilio
    • Grangetto Marco
    , 2025, pp.1-8. Hyperparameter tuning, such as learning rate decay and defining a stopping criterion, often relies on monitoring the validation loss. This paper presents NeVe, a dynamic training approach that adjusts the learning rate and defines the stopping criterion based on the novel notion of "neural velocity". The neural velocity measures the rate of change of each neuron’s transfer function and is an indicator of model convergence: sampling neural velocity can be performed even by forwarding noise in the network, reducing the need for a held-out dataset. Our findings show the potential of neural velocity as a key metric for optimizing neural network training efficiently. (10.1109/IJCNN64981.2025.11229080)
    DOI : 10.1109/IJCNN64981.2025.11229080
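The idea of sampling neural velocity by forwarding noise can be sketched roughly as follows; the toy one-layer network and the shrinking weight updates are illustrative stand-ins for real training, not the paper's NeVe implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # weights of a toy 4-neuron layer
noise = rng.normal(size=(32, 8))     # fixed noise batch: velocity can be
                                     # sampled without a held-out dataset

def responses(W):
    # Each column is one neuron's transfer function sampled on the noise batch.
    return np.tanh(noise @ W)

velocities = []
prev = responses(W)
for epoch in range(1, 6):
    # Shrinking weight updates mimic a network approaching convergence.
    W = W + rng.normal(size=W.shape) / 10.0 ** epoch
    cur = responses(W)
    # Neural velocity: mean rate of change of the sampled neuron responses.
    velocities.append(float(np.abs(cur - prev).mean()))
    prev = cur
```

A threshold on this decaying velocity could then drive learning-rate decay or early stopping in place of validation-loss monitoring.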
  • Denoising Diffusion Probabilistic Model for Point Cloud Compression at Low Bit-Rates
    • Spadaro Gabriele
    • Presta Alberto
    • Giraldo Zuluaga Jhony Heriberto
    • Grangetto Marco
    • Hu Wei
    • Valenzise Giuseppe
    • Fiandrotti Attilio
    • Tartaglione Enzo
    , 2025. Efficient compression of low-bit-rate point clouds is critical for bandwidth-constrained applications. However, existing techniques mainly focus on high-fidelity reconstruction, requiring many bits for compression. This paper proposes a "Denoising Diffusion Probabilistic Model" (DDPM) architecture for point cloud compression (DDPM-PCC) at low bit-rates. A PointNet encoder produces the condition vector for the generation, which is then quantized via a learnable vector quantizer. This configuration achieves low bit-rates while preserving quality. Experiments on ShapeNet and ModelNet40 show improved rate-distortion at low rates compared to standardized and state-of-the-art approaches. We publicly released the code at https://github.com/EIDOSLAB/DDPM-PCC.
  • Décoder le pouvoir de persuasion dans les concours d'éloquence : une étude sur la capacité des modèles de langues à évaluer la prise de parole en public
    • Barkar Alisa
    • Chollet Mathieu
    • Labeau Matthieu
    • Biancardi Beatrice
    • Clavel Chloé
    , 2025, pp.77-90. The importance of public speaking (PS) skills drives the development of automated assessment systems, but the integration of large language models (LLMs) remains under-explored. We propose a framework in which LLMs assess criteria drawn from the literature and from trainers' feedback. We test three approaches: direct zero-shot LLM predictions (RMSE 0.8) versus persuasion predictions based either on hand-crafted lexical features (RMSE 0.51) or on LLM-scored criteria (RMSE 0.6) fed into ElasticNet. An analysis of the links between criteria and lexical features shows that only the LLM-scored language-level criterion is predictable (F1 score of 0.56), highlighting the current limitations of LLMs for PS analysis. Source code and data are available on GitHub.
  • Power-Aware digital twin of coherent optical receiver
    • Purkayastha Ambashri
    • Delezoide Camille
    • Boitier Fabien
    • Bajaj Vinod
    • Lourdiane Mounia
    • Ware Cédric
    • Layec Patricia
    , 2025. We propose a digital twin of coherent receivers based on an extended physical model for predicting quality of transmission under receiver power variations. We experimentally validate it, demonstrating an accuracy improved by up to 1.5 dB.
  • Deep reinforcement learning for wind farm flow control
    • Kadoche Elie
    , 2025. Within wind farms, wake interactions between turbines can significantly reduce overall energy production. Wind farm flow control encompasses methods designed to mitigate these effects through coordinated turbine control. Wake steering, for example, involves redirecting the wake of certain machines to optimize airflow and increase power output. This is achieved via yaw control, by intentionally misaligning certain turbines with the incoming wind. However, designing robust wake steering controllers is challenging: they must adapt to dynamic and uncertain wind conditions, respect turbine actuation constraints, and capture the complexity of physical interactions in large arrays. Reinforcement learning offers a powerful framework for developing more effective controllers. By leveraging principles from artificial intelligence, it enables wind farms to adapt and respond intelligently to diverse wind conditions. A reinforcement learning agent learns from past experience and continuously refines control strategies as the wind changes. This Ph.D. thesis further studies and develops deep reinforcement learning-based wake steering controllers for wind farm flow control. More specifically: 1) it studies the impact of wind dynamics on yaw control and highlights the importance of optimizing strategies over a certain time horizon when variations in wind direction are significant; 2) it investigates how multi-agent reinforcement learning approaches can control large-scale wind farms and capture complex wake interactions in time-varying wind conditions; 3) it leverages self-attention mechanisms within a single-agent setting to build more capable controllers, demonstrating significant improvements in sample efficiency, learning performance and generalization. Overall, this Ph.D. thesis contributes to narrowing the gap between algorithmic advances in deep reinforcement learning and their practical deployment for large-scale wind farm flow control.
  • Bias in Face Recognition : Assessment and Post-Processing Mitigation
    • Conti Jean-Rémy
    , 2025. Face Recognition (FR) systems have reached unprecedented accuracy due to deep learning advances, enabling widespread deployment. Yet, concerns about fairness—especially across demographic subgroups defined by sensitive attributes like gender and ethnicity—persist. This thesis addresses algorithmic bias in FR through two complementary lenses: evaluation and mitigation. The first part introduces a statistically rigorous framework for assessing FR performance and fairness. Focusing on similarity-based ROC analysis, we apply U-statistics theory to derive asymptotically valid confidence intervals for key metrics. We also propose a scalar uncertainty score to quantify the statistical reliability of fairness assessments. This enables practitioners to identify disparities that are not only large but statistically significant. The second part tackles post-processing bias mitigation for frozen, pre-trained FR models. We begin with gender bias, proposing a novel mitigation approach using two operationally relevant fairness metrics: BFAR and BFRR, which capture disparities in false acceptance/rejection rates under fixed-security constraints. We introduce the Ethical Module, a shallow neural network that transforms embeddings using a Fair von Mises-Fisher loss. This loss models group-specific intra-class variance, enabling fairness control without access to sensitive attributes at deployment. The method improves fairness with minimal performance loss and low computational cost. Next, we address racial bias via a post-processing strategy aligned with FR objectives. Using centroid-based pseudo-scores as proxies for similarity, we develop the Centroid Fairness loss, implemented through a lightweight Fairness Module. This module, trained on top of a frozen model, facilitates subgroup alignment and effectively mitigates racial disparities while preserving overall accuracy—challenging the conventional fairness-performance trade-off. Together, these contributions offer a statistically grounded and practically effective toolbox for evaluating and mitigating bias in face recognition. By integrating uncertainty quantification, fairness-aware representation learning, and scalable post-hoc debiasing, this thesis is an additional step towards the development of equitable and trustworthy biometric technologies.
  • Efficient Thermalization and Universal Quantum Computing with Quantum Gibbs Samplers
    • Rouzé Cambyse
    • França Daniel Stilck
    • Alhambra Álvaro
    , 2025, pp.1488-1495. The preparation of quantum Gibbs states is a crucial task in quantum computing. In this work, we prove that a recently introduced, efficiently implementable dissipative evolution thermalizes to the Gibbs state in time scaling polynomially or even logarithmically with system size at high enough temperatures for any Hamiltonian that satisfies a Lieb-Robinson bound, such as local Hamiltonians on a lattice. Furthermore, we show the efficient adiabatic preparation of the associated purifications or "thermofield double" states. To the best of our knowledge, these are the first results rigorously establishing the efficient preparation of high-temperature Gibbs states and their purifications. In the low-temperature regime, we show that implementing this family of dissipative evolutions for inverse temperatures polynomial in the system's size is computationally equivalent to standard quantum computations. On a technical level, for high temperatures, our proof makes use of the mapping of the generator of the evolution into a Hamiltonian, and then connecting its convergence to that of the infinite temperature limit. We further present an alternative proof that is based on showing the exponential decay of the so-called oscillator norm, yielding convergence in logarithmic times. For low temperature, we instead perform a perturbation at zero temperature and resort to circuit-to-Hamiltonian mappings akin to the proof of universality of quantum adiabatic computing. Taken together, our results show that a family of quasi-local dissipative evolutions efficiently prepares a large class of quantum many-body states of interest, and has the potential to mirror the success of classical Monte Carlo methods for quantum many-body systems. (10.1145/3717823.3718268)
    DOI : 10.1145/3717823.3718268
  • Home Spaces and Semiflows for the Analysis of Parameterized Petri Nets
    • Memmi Gerard
    , 2025, Vol-3998, pp.97-113. After briefly recalling basic notations of Petri Nets, home spaces, and semiflows, we focus on ℱ+, the set of semiflows with non-negative coordinates, where the notions of minimality of semiflows and minimality of supports are particularly critical to develop an effective analysis of invariants and behavioral properties of Petri Nets such as boundedness or even liveness. We recall known behavioral properties attached to the notion of semiflows that we associate with extremums. We also recall three known decomposition theorems considering N, Q+, and Q respectively, where the decomposition over N is improved with a necessary and sufficient condition. Then, we regroup a number of properties (old and new), especially around the notions of home spaces and home states which, in combination with semiflows, are used to efficiently support the analysis of behavioral properties. We introduce a new result on the decidability of liveness under the existence of a home state. We introduce new results on the structure and behavioral properties of Petri Nets, illustrating again the importance of considering generating sets of semiflows with non-negative coordinates. As examples, we present two related Petri Nets modeling arithmetic operations (one of which represents a Euclidean division), illustrating how results on semiflows and home spaces can be methodically used in analyzing the liveness of the parameterized model and underlining the efficiency brought by the combination of these results to the verification engineer.
  • Approximate Hypothesis Testing
    • Le Gouic Nicolas
    • Graczyk Robert
    • Moser Stefan M
    , 2025. We establish the sample complexity of Approximate Hypothesis Testing (AHT) where—unlike in classical hypothesis testing—we need only approximate the hypothesis governing the observed samples rather than recover it exactly. We show that the AHT sample complexity scales inversely with the multivariate Bhattacharyya distance evaluated on a “maximally confusable” subset of hypotheses that is characterized by the chosen distance measure and approximation accuracy. Index terms—hypothesis testing, sample complexity, learning, Bhattacharyya distance, Hellinger distance.
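For two discrete distributions the pairwise Bhattacharyya distance invoked above is D_B(P, Q) = -ln Σ_i √(p_i q_i); the multivariate version in the paper generalizes this to sets of hypotheses. A few lines compute the pairwise case (the example distributions are invented):

```python
import math

def bhattacharyya(p, q):
    # Bhattacharyya coefficient BC = sum_i sqrt(p_i * q_i); distance = -ln(BC).
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)

uniform = [0.5, 0.5]
skewed = [0.9, 0.1]

d_same = bhattacharyya(uniform, uniform)  # identical hypotheses: distance 0
d_diff = bhattacharyya(uniform, skewed)   # distinct but confusable hypotheses
```

The closer the distance is to 0, the more samples are needed to tell the hypotheses apart, consistent with the inverse scaling of the sample complexity.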
  • Optimized Co-Design of Delta Sigma Modulators and Fir-DACs for High Speed Transmitters
    • Lima Evelyn
    • Schlegel Nicolas
    • Mathieu Yves
    • Jabbour Chadi
    , 2025, pp.246-250. Delta sigma modulators (DSMs) are a very attractive solution to build flexible high-resolution transmitters needed for future massive multiple-input multiple-output (mMIMO) systems. The main challenge in this architecture is its out-of-band (OOB) noise, which can be addressed by using FIR-DACs. This paper proposes a joint design of the DSM and FIR-DACs. The coefficients of the latter are chosen using a multiobjective optimization approach which takes into account the DAC and the DSM noise. The proposed solution is simulated for a 400 MHz 4-channel signal. Compared to a classical design approach with similar complexity, it achieves improvements of 2.86 dB and 16.19 dB for the adjacent and the alternate channel power ratio, respectively, and an error vector magnitude (EVM) of 1.02. (10.1109/NewCAS64648.2025.11107152)
    DOI : 10.1109/NewCAS64648.2025.11107152
  • Side-Channel Attack Detection using gem5 and HPCs
    • Khan Mahreen
    • Mushtaq Maria
    • Pacalet Renaud
    • Apvrille Ludovic
    , 2025.
  • A 24 GHz Rectifier and its Applications in Energy Harvesting and Wireless Power Transfer Systems
    • Wang Yibo
    • Niotaki Kyriaki
    • Lepage Anne Claire
    • Begaud Xavier
    , 2025. This paper presents the design and characterization of a millimeter wave 24 GHz rectifier. A prototype was fabricated on a Rogers 5880 substrate and was characterized over input power and frequency. The rectifier is able to collect power from the FR3 band reserved for Integrated Sensing and Communication. The rectifier exhibits a measured RF-to-dc power conversion efficiency of 21.5% for an input power of about 5 dBm and for a 1.5 kΩ load at 22.6 GHz, while its peak measured efficiency is 34% for an input power of 15 dBm. The potential of the rectifier in energy harvesting and wireless power transfer systems is analysed, showing that the proposed rectifier has the potential to drive low-power devices at a distance of a few meters from a base station or an access point.
  • Circulator-Based RF Energy Harvester with Wide Input Power Dynamic Range
    • Kibiwott Albert
    • Mohellebi Reda
    • Niotaki Kyriaki
    • Jabbour Chadi
    , 2025, pp.440-444. Radio frequency energy harvesting (RFEH) is a promising solution for powering ultra-low-power (ULP) devices. However, capturing ambient energy is challenging due to low power densities and unpredictable RF power available in the surrounding environment. Moreover, conventional rectifiers exhibit limited power conversion efficiency (PCE) and a narrow input power dynamic range (PDR) due to impedance mismatch and power reflection losses. This paper presents a novel circulator-based multi-branch RF energy harvester designed to efficiently recycle reflected RF power across multiple rectifier branches, significantly enhancing overall energy conversion efficiency and extending the operational power range. Simulation results at 900 MHz reveal that the proposed architecture achieves a PCE of 81% at an input power of 4 dBm and a PDR of 23 dB (spanning from −15 dBm to 8 dBm). These results demonstrate an 8 dB and 26% improvement in dynamic range and efficiency respectively compared to traditional single-rectifier designs, highlighting the effectiveness of the proposed solution. (10.1109/NewCAS64648.2025.11106982)
    DOI : 10.1109/NewCAS64648.2025.11106982
  • Contributions to explainable anomaly detection using data depth
    • Valla Romain
    , 2025. Abnormal events are subjects of interest in various fields of application, such as industry, finance and medicine, when they provide information that differs from the general trend. The arrival of new interconnected sensors, more powerful and in greater numbers, is leading to an increase in the mass of data available for analysis, requiring innovative methods to solve modern challenges. On the one hand, the growing volume of samples requires faster algorithms, while potential data contamination calls for reliable and robust techniques. On the other hand, it is becoming complicated but necessary to interpret certain decisions made by these new tools, as their complexity has increased when coupled with a large quantity of data. The first part of this thesis focuses on the use of data depths, which measure the centrality of data, in the context of anomaly detection. The development of these methods in recent years, particularly their implementation via numerical approximations coupled with optimisation procedures, has made it possible to extend their use to large-scale datasets. We carry out several comparisons with recognised and widely applied methods in this field, while providing experimental (visual) tools to facilitate parameterisation and thus their use by practitioners. In the second part, we propose a new method for visualising multivariate data based on reducing the dimensionality of the input space with the aim of better visual separation of abnormal observations (anomalies) in the sense of data depths. Abnormal Component Analysis (ACA) consists in searching for a new orthogonal basis in an unsupervised way to represent the data while providing interpretation opportunities on the input variables responsible for anomalies of interest. The detailed algorithm, together with practical guidance for its use, is followed by applications on real data sets to demonstrate the relevance of the technique. Finally, the last part extends the ACA reasoning to functional data in order to take advantage of the continuous nature of many observed phenomena (time series, frequency analysis, etc.). This extension makes use of dictionaries of functions with various properties to extract significant information from different types of anomalies, which are also described using real data sets. This extension proves to be effective for obtaining a "summary" visualisation of the functional data, complemented by an interpretation obtained using the functions employed.
  • Exploiting Heterogeneous Labels in Deep Learning for Medical Image Analysis : Application to the Automated Diagnosis of Liver Diseases
    • Sarfati Emma
    , 2025. Liver cancer is one of the deadliest cancers worldwide, with a high mortality-to-incidence ratio and major health and economic consequences. Main risk factors include chronic hepatitis B and C infections, non-alcoholic steatohepatitis (NASH), alcohol-related liver disease, and exposure to hepatotoxic substances. Cirrhosis is the leading predisposing factor for hepatocellular carcinoma (HCC), present in the majority of cases. Cirrhosis and HCC diagnoses are typically based on liver biopsy, an invasive and costly procedure. Biomedical imaging is a less invasive alternative but suffers from high inter-radiologist variability. Radiological annotations, while more accessible, are considered weak labels compared to histological standards. This creates a common imbalance: many weakly annotated images and few strongly labeled ones. In this context, pretraining models using weak supervision can improve performance on rare but critical classification tasks. Contrastive learning, a self-supervised method that structures the representation space by pulling similar samples together, enables the use of unlabeled or partially labeled data. However, the integration of heterogeneous annotations remains underexplored. This thesis proposes strategies combining contrastive learning with supervised training via hybrid loss functions, as well as fully contrastive frameworks incorporating both discrete and continuous clinical metadata. Another approach leverages weak radiological predictors directly to guide the learning process. These methods were applied to cirrhosis classification and small HCC detection from CT scans. Results show improved diagnostic accuracy, reduced inter-expert variability, and potential for earlier and more reliable detection of liver diseases.
  • Primal-Dual Coordinate Descent for Nonconvex-Nonconcave Saddle Point Problems Under the Weak MVI Assumption
    • Walwil Iyad
    • Fercoq Olivier
    , 2025. We introduce two novel primal-dual algorithms for addressing nonconvex, nonconcave, and nonsmooth saddle point problems characterized by the weak Minty Variational Inequality (MVI). The first algorithm, Nonconvex-Nonconcave Primal-Dual Hybrid Gradient (NC-PDHG), extends the well-known Primal-Dual Hybrid Gradient (PDHG) method to this challenging problem class. The second algorithm, Nonconvex-Nonconcave Stochastic Primal-Dual Hybrid Gradient (NC-SPDHG), incorporates a randomly extrapolated primal-dual coordinate descent approach, extending the Stochastic Primal-Dual Hybrid Gradient (SPDHG) algorithm. To our knowledge, designing a coordinate-based algorithm to solve nonconvex-nonconcave saddle point problems is unprecedented, and proving its convergence posed significant difficulties. This challenge motivated us to utilize PEPit, a Python-based tool for computer-assisted worst-case analysis of first-order optimization methods. By integrating PEPit with automated Lyapunov function techniques, we successfully derived the NC-SPDHG algorithm. Both methods are effective under a mild condition on the weak MVI parameter, achieving convergence with constant step sizes that adapt to the structure of the problem. Numerical experiments on logistic regression with squared loss and perceptron regression problems validate our theoretical findings and show their efficiency compared to existing state-of-the-art algorithms, where linear convergence is observed. Additionally, we conduct a convex-concave least-squares experiment to show that NC-SPDHG performs competitively with SAGA, a leading algorithm in the smooth convex setting.
  • Evaluating KASLR Break on RISC-V using gem5: Microarchitectural Side-Channel Analysis of Page-Table Walks
    • Khan Mahreen
    • Mushtaq Maria
    • Pacalet Renaud
    • Apvrille Ludovic
    , 2025, 2500, pp.229–235. This paper leverages the gem5 simulator to analyze a microarchitectural KASLR break on RISC-V systems. Previous research [2] demonstrated the feasibility of KASLR breaks on RISC-V hardware platforms (C906 and U74). Our paper aims to provide insights that are not easily attainable through traditional hardware experiments. By employing gem5, we gain access to fine-grained metrics such as cycle counts, cache behavior, branch prediction statistics, and TLB accesses, among others. These detailed insights give a deeper analysis of the KASLR bypass and help understand the attack mechanics better. (10.1007/978-3-031-94855-8_15)
    DOI : 10.1007/978-3-031-94855-8_15
  • Traffic Prediction Improvement in 5G and beyond: AI and Self-Controlled Components
    • N’kouka Thierry Isaac
    • Aubonnet Tatiana
    • Lemoine Frédéric
    • Kellil Mounir
    • Simoni Noëmie
    , 2025, pp.213-216. The advent of 5G and Beyond 5G (B5G) networks requires novel network management strategies to mitigate potential congestion. Traditional reactive approaches are inadequate as they address issues only post-occurrence, whereas proactive Artificial Intelligence (AI) powered methods can predict and optimize resource allocation. This paper leverages AI on 5G emulated datasets to forecast network traffic, facilitating proactive resource allocation. The experimental results however indicate suboptimal model performance due to the high variability, irregular patterns, sudden traffic bursts, noise, and inconsistent data distributions in the datasets. Our analysis revealed that these issues arise from uncoordinated background traffic, system operations, and random traffic-consuming activities, leading to underperforming model outcomes. Given these challenges, we have proposed a Self-Controlled Component (SCC)-based approach to ensure that high-quality data is fed into the selected AI models, thereby improving prediction accuracy and enhancing performance. (10.1109/NTMS65597.2025.11076981)
    DOI : 10.1109/NTMS65597.2025.11076981
  • A Secure and Cooperative Departure Protocol for Connected Automated Platoons
    • Braiteh Farah-Emma
    • Tse Davy
    • Yhia Ounas
    • Bassi Francesca
    • Khatoun Rida
    , 2025. Cooperative and automated vehicular platoons enhance road safety and reduce traffic congestion by enabling vehicles to travel closely together and maneuver in a synchronized manner. This synchronization relies on vehicle-to-vehicle (V2V) communications, which, while beneficial, also introduce vulnerabilities to potential cyberattacks. In this paper, we introduce a new cooperative and secure protocol for platoon departures, focusing specifically on the departure phase. We demonstrate that, without security measures, a vehicle attempting to leave the platoon could exploit the leave messages of the platoon protocol and introduce attacks that may disrupt the formation of the platoon or even jeopardize its stability. To mitigate this risk, we propose data consistency measures that protect both the stability and integrity of the platoon. Simulations conducted using the Plexe simulator validate the security of the proposed protocol through rigorous security assessments.