
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The thesis publications of LTCI PhD graduates are available on the HAL platform:

 

Find the publications available in the HAL open archive by year:

2024

  • Covert distributed detection over discrete memoryless channels
    • Bounhar Abdelaziz
    • Sarkiss Mireille
    • Wigger Michèle
    , 2024, pp.172-177. This paper studies the problem of distributed detection (binary hypothesis testing) over a discrete memoryless channel (DMC) under the constraint that an eavesdropping adversary should not be able to determine whether communication is ongoing or not, i.e., communication over the DMC has to remain covert. The main contribution of the paper is an upper bound on the largest possible Stein exponent, showing that it cannot exceed the largest exponent achievable under zero-rate communication over a noise-free link. In interesting special cases, the upper bound is achieved by a local test at the decision center that completely ignores the communication. In these cases, the covertness constraint renders communication useless for improving the Stein exponent. (10.1109/ISIT57864.2024.10619455)
    DOI : 10.1109/ISIT57864.2024.10619455
  • OMM: an OpenMusic library for Mathematical Morphology
    • Agon Carlos
    • Romero-García Gonzalo
    • Bloch Isabelle
    , 2024. This article introduces OMM, a new OpenMusic library based on Mathematical Morphology. After conducting several years of research on the potential usages of Mathematical Morphology operators in composition, musical analysis, and music education, we are convinced that it is now opportune to develop computer tools for music creation, analysis, and education based on Mathematical Morphology concepts. Our primary objective is to demonstrate the relevance of these techniques in the musical domain. In pursuit of this goal, we present one illustrative example based on rhythmical transformations. OMM represents an initiative aimed at bridging the gap between theoretical ideas and practical applications in the musical domain. This contribution reflects our commitment to democratizing access to advanced musical tools, thereby promoting a more inclusive musical landscape.
  • Sample Complexity of Locally Differentially Private Quantum Hypothesis Testing
    • Cheng Hao-Chung
    • Hirche Christoph
    • Rouzé Cambyse
    , 2024, pp.2921-2926. Quantum state discrimination is an important problem in many information processing tasks. In this work we are concerned with finding the best possible sample complexity when the states are preprocessed by a quantum channel that is required to be locally differentially private. We give achievability and converse bounds that nearly match the best known classical bounds. On the way, we prove several novel inequalities between quantum divergences that should be of independent interest. (10.1109/ISIT57864.2024.10619433)
    DOI : 10.1109/ISIT57864.2024.10619433
  • What can Information Guess? Guessing Advantage vs. Rényi Entropy for Small Leakages
    • Béguinot Julien
    • Rioul Olivier
    , 2024, pp.2963-2968. We leverage the Gibbs inequality and its natural generalization to Rényi entropies to derive closed-form parametric expressions of the optimal lower bounds of ρth-order guessing entropy (guessing moment) of a secret taking values on a finite set, in terms of the Rényi-Arimoto α-entropy. This is carried out in a non-asymptotic regime when side information may be available. The resulting bounds yield a theoretical solution to a fundamental problem in side-channel analysis: ensure that an adversary will not gain much guessing advantage when the leakage information is sufficiently weakened by proper countermeasures in a given cryptographic implementation. Practical evaluations for classical leakage models show that the proposed bounds greatly improve previous ones for analyzing the capability of an adversary to perform side-channel attacks. (10.1109/ISIT57864.2024.10619150)
    DOI : 10.1109/ISIT57864.2024.10619150
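As background for the entry above, the two quantities it relates are standard information-theoretic definitions; a minimal unconditional statement (the paper itself works with conditional, Arimoto-type versions and side information):

```latex
% Guessing moment of order \rho: the best achievable \rho-th moment of the
% number of sequential guesses G(X) needed to identify the secret X.
G_\rho(X) \;=\; \min_{G} \, \mathbb{E}\!\left[ G(X)^{\rho} \right]

% Rényi entropy of order \alpha (\alpha > 0, \alpha \neq 1):
H_\alpha(X) \;=\; \frac{1}{1-\alpha} \, \log \sum_{x} P_X(x)^{\alpha}
```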
  • SELF-SUPERVISED LEARNING OF MULTI-MODAL COOPERATION FOR SAR DESPECKLING
    • Gaya Victor
    • Dalsasso Emanuele
    • Denis Loïc
    • Tupin Florence
    • Pinel-Puysségur Béatrice
    • Guérin Cyrielle
    , 2024. Synthetic aperture radar (SAR) is a widely used modality for Earth observation, as it provides weather-independent imaging capabilities. However, interpretation of SAR images is difficult due to the speckle phenomenon: fluctuations appear in the image, which are stronger in areas with high radar reflectivity. As a result, many speckle reduction methods have been developed, with deep learning approaches standing out as particularly effective. This article presents a deep learning approach with two novel features: the use of an optical image to improve the restoration of a SAR image, and a self-supervised neural-network training strategy.
  • Statistical Higher-Order Correlation Attacks Against Code-Based Masking
    • Cheng Wei
    • Ming Jingdian
    • Guilley Sylvain
    • Danger Jean-Luc
    IEEE Transactions on Computers, Institute of Electrical and Electronics Engineers, 2024, 73 (10), pp.2364-2377. Masking is one of the most well-established methods to thwart side-channel attacks. Many masking schemes have been proposed in the literature, and code-based masking has emerged to unify several of them in a coding-theoretic framework. In this work, we investigate the side-channel resistance of code-based masking from a non-profiling perspective by utilizing correlation-based side-channel attacks. We present a systematic evaluation of correlation attacks with various higher-order (centered) moments and then present the form of optimal correlation attacks. Interestingly, the Pearson correlation coefficient between the hypothetical leakage and the measured traces is connected to the signal-to-noise ratio in higher-order moments, which turns out to be easier to evaluate than launching repeated attacks. We also identify some ineffective higher-order correlation attacks at certain orders when the device leaks under the Hamming weight leakage model. Our theoretical findings are verified through both simulated and real-world measurements. (10.1109/TC.2024.3424208)
    DOI : 10.1109/TC.2024.3424208
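To make the notion of a higher-order (centered-moment) correlation attack concrete, here is a small illustrative sketch, not the authors' code: traces are centered per sample point, raised to the moment order d, and then Pearson-correlated with a hypothetical leakage such as a Hamming-weight prediction.

```python
import numpy as np

def higher_order_cpa(traces, hypotheses, d=2):
    """Pearson correlation between a hypothetical leakage and the d-th
    centered power of the measured traces (one column per sample point).

    traces: (n_traces, n_samples) array of measurements
    hypotheses: (n_traces,) hypothetical leakage values (e.g. Hamming weights)
    """
    centered = traces - traces.mean(axis=0)   # remove the mean, per sample point
    processed = centered ** d                 # d-th order preprocessing
    h = hypotheses - hypotheses.mean()
    num = (processed * h[:, None]).mean(axis=0)
    den = processed.std(axis=0) * h.std()
    return num / den                          # one correlation per sample point
```

For d = 1 this reduces to classical first-order CPA; for d = 2 it targets second-order (variance-based) leakage as in attacks against masked implementations.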
  • Assessment of EMF Exposure Induced by Wireless Cellular Phones in Various Usage Scenarios in France
    • Liu Jiang
    • Zhang Yarui
    • Chikha Wassim Ben
    • Wang Shanshan
    • Samaras Theodoros
    • Jawad Ourouk
    • Ourak Lamine
    • Conil Emmanuelle
    • Wiart Joe
    IEEE Access, IEEE, 2024, 4, pp.1-15. With the ever-increasing demand for high-speed and low-latency wireless communications, a new generation of base stations, i.e., the fifth generation (5G), has been deployed recently. However, this deployment has raised growing concerns regarding the potential public health implications associated with exposure to radio-frequency electromagnetic fields (RF-EMF). According to the type of source, RF-EMF exposure can be categorized into uplink (device-to-base-station) and downlink (base-station-to-device) transmissions. In this paper, we aim to analyze uplink exposure induced by mobile devices operating on 5G and previous generations. A total of 380 measurement points were selected for evaluating three types of services, i.e., voice call, video call, and file uploading, in the Greater Paris region. Data collected from those measurements cover 2G, 3G, 4G, and 5G technologies, where 5G refers to non-standalone (NSA) in the current paper. We compare the average transmitted (TX) power, throughput, connection duration, and energy utilization efficiency for different services and frequency bands. To assess the impact of 5G, we compared measurements with the 5G functionality enabled and disabled, respectively. The results show that mobile phones emit higher power during data-intensive services. Additionally, mobile phones operating on 5G emit lower TX power than on 4G. Furthermore, the uplink exposure level also depends on the duration of emission. For a given file size to upload, the duration depends on the throughput. Consequently, we also investigated the emitted energy per bit transmitted, referred to as radiated energy per bit transmitted (REBT). The comparison reveals that the REBT in file uploading is 9.65 mJ/Mb for 4G and 5.1 mJ/Mb for 5G, indicating a significant decrease in radiated energy consumption with the adoption of 5G technology. This research enhances our understanding of uplink RF-EMF exposure associated with 5G and provides crucial insights for public health risk assessments. (10.1109/ACCESS.2024.3424305)
    DOI : 10.1109/ACCESS.2024.3424305
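The REBT metric quoted above is simply radiated energy divided by the number of bits transmitted; a quick sanity check of the reported 4G-to-5G gain, taking the two published values as given:

```python
# Radiated Energy per Bit Transmitted (REBT) values reported in the abstract.
rebt_4g_mj_per_mb = 9.65  # 4G, file uploading
rebt_5g_mj_per_mb = 5.1   # 5G, file uploading

# Relative reduction in radiated energy per bit when moving to 5G.
reduction = (rebt_4g_mj_per_mb - rebt_5g_mj_per_mb) / rebt_4g_mj_per_mb
print(f"{reduction:.0%}")  # roughly a 47% reduction
```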
  • Evaluating and Improving the Reasoning Abilities of Language Models
    • Helwe Chadi
    , 2024. This thesis focuses on evaluating and improving the reasoning abilities of Smaller Language Models (SLMs) and Large Language Models (LLMs). It explores SLMs’ performance on complex tasks and their limitations with simpler ones. This thesis introduces LogiTorch, a Python library that facilitates the training of models on various reasoning tasks with minimal coding. It also presents TINA, a negated data augmentation technique that improves SLMs’ robustness to negation in textual entailment tasks. Further, this thesis explores LLMs’ capabilities through MAFALDA, a new benchmark for identifying and classifying reasoning fallacies, proposing a new annotation scheme and an evaluation metric that considers subjectivity in reasoning. The findings indicate that humans outperform both SLMs and LLMs in this reasoning task. We propose several research directions that merit further investigation, such as Neuro-symbolic AI and improving the reasoning abilities of low-resource LLMs.
  • EM Fault Injection-Induced Clock Glitches: From Mechanism Analysis to Novel Sensor Design
    • Nabhan Roukoz
    • Dutertre Jean-Max
    • Rigaud Jean-Baptiste
    • Danger Jean-Luc
    • Sauvage Laurent
    , 2024. This paper introduces a novel sensor that is capable of detecting faults injected by electromagnetic disturbances. The sensor has been designed from an understanding of the physical mechanisms of ElectroMagnetic Fault Injection (EMFI). A recent study identified an EMFI mechanism based on the timing violation fault model, which highlights the coexistence of two distinct mechanisms: electromagnetic disturbances that couple to the target’s power distribution network, which can cause timing faults by extending the propagation time of logic gates beyond the clock period, and disturbances that couple to the target’s clock distribution network, which can cause timing constraint violations due to EMFI-induced voltage glitches within the target’s clock tree. Building on this work, we have investigated the mechanism of EMFI-induced clock glitches, providing useful insights for designing a new sensor. The sensor incorporates two dummy clock paths that are maintained in a frozen state within the circuit, which respectively detect positive and negative EMFI-induced glitches along these paths. The proposed sensor offers significant advantages, including full digitization, ease of implementation, low cost in terms of silicon area, low power consumption, and a high fault detection rate. The design and experimental tests were performed on an FPGA board. Validation experiments were supported by spatial and temporal sensitivity maps covering the full frequency spectrum of the target, which confirmed the effectiveness of the sensor. (10.1109/IOLTS60994.2024.10616074)
    DOI : 10.1109/IOLTS60994.2024.10616074
  • SepVAE: a contrastive VAE to separate pathological patterns from healthy ones
    • Louiset Robin
    • Duchesnay Edouard
    • Grigis A
    • Dufumier Benoit
    • Gori Pietro
    , 2024. Contrastive Analysis VAEs (CA-VAEs) are a family of variational auto-encoders (VAEs) that aim at separating the common factors of variation between a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the ones that only exist in the target dataset. To do so, these methods separate the latent space into a set of salient features (i.e., specific to the target dataset) and a set of common features (i.e., existing in both datasets). Currently, all CA-VAE models fail to prevent the sharing of information between the latent spaces and to capture all salient factors of variation. To this end, we introduce two crucial regularization losses: a disentangling term between common and salient representations and a classification term between background and target samples in the salient space. We show better performance than previous CA-VAE methods on three medical applications and a natural images dataset (CelebA). (10.48550/arXiv.2307.06206)
    DOI : 10.48550/arXiv.2307.06206
  • Dense Self-Supervised Learning for Medical Image Segmentation
    • Seince Maxime
    • Le Folgoc Loic
    • Facury de Souza Luiz Augusto
    • Angelini Elsa
    , 2024, 250, pp.1371-1386. Deep learning has revolutionized medical image segmentation, but it relies heavily on high-quality annotations. The time, cost, and expertise required to label images at the pixel level for each new task have slowed down widespread adoption of the paradigm. We propose Pix2Rep, a self-supervised learning (SSL) approach for few-shot segmentation that reduces the manual annotation burden by learning powerful pixel-level representations directly from unlabeled images. Pix2Rep is a novel pixel-level loss and pre-training paradigm for contrastive SSL on whole images. It is applied to generic encoder-decoder deep learning backbones (e.g., U-Net). Whereas most SSL methods enforce invariance of the learned image-level representations under intensity and spatial image augmentations, Pix2Rep enforces equivariance of the pixel-level representations. We demonstrate the framework on a task of cardiac MRI segmentation. Results show improved performance compared to existing semi- and self-supervised approaches, and a 5-fold reduction in the annotation burden for equivalent performance versus a fully supervised U-Net baseline. This includes a 30% (resp. 31%) DICE improvement for one-shot segmentation under linear probing (resp. fine-tuning). Finally, we also integrate the novel Pix2Rep concept with the Barlow Twins non-contrastive SSL, which leads to even better segmentation performance. (10.48550/arXiv.2407.20395)
    DOI : 10.48550/arXiv.2407.20395
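The key distinction in the abstract above, invariance of image-level features versus equivariance of pixel-level features, can be illustrated with a toy check (purely illustrative, not Pix2Rep code): an encoder f is equivariant to a spatial transform T when f(T(x)) = T(f(x)).

```python
import numpy as np

def per_pixel_features(img):
    # Toy pixel-level "encoder": a pointwise nonlinearity, which is trivially
    # equivariant to any spatial rearrangement. Real encoders (e.g. a U-Net)
    # must be trained toward this property with a pixel-level loss.
    return np.tanh(img)

img = np.random.default_rng(0).standard_normal((8, 8))
flip = np.fliplr  # the spatial augmentation T

# Equivariance: transform-then-encode equals encode-then-transform.
assert np.allclose(per_pixel_features(flip(img)), flip(per_pixel_features(img)))
```

An invariance-trained encoder would instead be pushed toward f(T(x)) = f(x), discarding exactly the spatial localization a segmenter needs.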
  • Formal Analysis of C-ITS PKI protocols
    • Msahli Mounira
    • Lafourcade Pascal
    • Mahmoud Dhekra
    , 2024. Vehicular networking has gained a lot of popularity and attention among industry and academic research communities in the last decade. Communication between vehicles will lead to more efficient and more secure roads, because information about traffic and road conditions can be provided to drivers. However, ensuring the security of these networks and devices remains a major concern in guaranteeing the expected services. A secure Public Key Infrastructure (PKI) represents a common solution to achieve many security and privacy requirements. Unfortunately, current Cooperative Intelligent Transport Systems (C-ITS) PKI protocols have not been verified in terms of security and privacy. In this paper, we propose a security analysis of C-ITS PKI protocols in the symbolic model using ProVerif. We formally modeled C-ITS PKI protocols based on the specifications given in the ETSI standard and formalized their security properties in the applied Pi-calculus. We used the automatic privacy verifier UKano to analyse the Enrolment protocol. We found attacks on authentication properties in the Authorization and Validation protocols when considering a dishonest Authorization Authority (AA). We analysed the proof results and fixed the identified attacks by introducing new parameters in the protocol requests.
  • How Does Distributed Denial of Service Affect the Connected Cars Environment?
    • Wehby Ayoub
    • Zeadally Sherali
    • Khatoun Rida
    • Bouchouia Mohammed
    • Fadlallah Ahmad
    , 2024.
  • Neural Decoding with Log-Ratio Images
    • Wang Xiaolin
    • Boutros Joseph J.
    • Rioul Olivier
    , 2024.
  • Collective Deliberation with Physicalizations
    • Bressa Nathalie
    • Huron Samuel
    , 2024. Data physicalization is a promising approach to support co-located collaborative deliberation processes by visualizing information relevant to the deliberation. In this position paper, we first explain the potential of physicalizations for collective deliberation. Then, we identify a set of considerations for designing new physicalization systems for deliberation.
  • Nonlinear Tolerant Sphere Shaping with Dispersion-Aware Sequence Selection
    • Liu Jingtian
    • Awwad Élie
    • Jaouën Yves
    , 2024, pp.1-3. Energy dispersion index of dispersed sequences (DEDI) with sequence selection demonstrates nonlinear shaping gains of 0.3 and 0.03 bits/4D-symbol over single-span and multi-span links respectively against conventional sphere shaping. (10.1109/OECC54135.2024.10975346)
    DOI : 10.1109/OECC54135.2024.10975346
  • Information-theoretic generalization bounds for learning from quantum data
    • Caro Matthias
    • Gur Tom
    • Rouzé Cambyse
    • Stilck Franca Daniel
    • Subramanian Sathyawageeswar
    , 2024. Learning tasks play an increasingly prominent role in quantum information and computation. They range from fundamental problems such as state discrimination and metrology over the framework of quantum probably approximately correct (PAC) learning, to the recently proposed shadow variants of state tomography. However, the many directions of quantum learning theory have so far evolved separately. We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data and then testing how well the learned hypothesis generalizes to new data. In this framework, we prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities measuring how strongly the learner's hypothesis depends on the specific data seen during training. To achieve this, we use tools from quantum optimal transport and quantum concentration inequalities to establish non-commutative versions of decoupling lemmas that underlie recent information-theoretic generalization bounds for classical machine learning. Our framework encompasses and gives intuitively accessible generalization bounds for a variety of quantum learning scenarios such as quantum state discrimination, PAC learning quantum states, quantum parameter estimation, and quantumly PAC learning classical functions. Thereby, our work lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
  • Experimental Assessment and Biaffine Modeling of the Impact of Ambient Temperature on SoC Power Requirements
    • Rao Vaddina Kameswar
    • Brandner Florian
    • Memmi Gérard
    • Jouvelot Pierre
    , 2024. Based on fundamental physics-based considerations, we introduce the Biaffine Temperature-Voltage power model (BiTV) for SoC systems, which takes the influence of dynamic voltage, frequency, and ambient temperature conditions into account. Using an ARM-Cortexbased AM572x system operating in a temperature-controlled oven, we provide experimental evidence of the validity of the BiTV power model over a significant range of ambient temperatures (25 to 55 °C), voltages (0.98 to 1.23 V) and frequencies (100 to 1,500 MHz). These experiments and the BiTV model provide quantitative elements to assess the impact of ambient temperature on systems’ performance. Such insights could be of use to system designers and compiler writers, in particular when dealing with embedded systems operating in harsh conditions or under energy-critical constraints.
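The "biaffine" form can be read as: affine in ambient temperature at fixed voltage, affine in voltage at fixed temperature, plus a cross term coupling the two. A sketch under that reading (coefficient names and values are hypothetical placeholders, not the fitted BiTV parameters):

```python
def bitv_power(temp_c, voltage, a=0.10, b=0.002, c=0.50, d=0.004):
    """Biaffine temperature-voltage power model: P(T, V) = a + b*T + c*V + d*T*V.
    The coefficients here are illustrative placeholders, not fitted values."""
    return a + b * temp_c + c * voltage + d * temp_c * voltage

# At fixed voltage the model is affine in temperature (and vice versa),
# so equally spaced inputs along either axis produce equal power increments.
```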
  • Modeling UAV Swarm Deployment Based on Sibuya Process
    • Liu Bin
    • Hu Haifeng
    • Decreusefond Laurent
    • Zhao Haitao
    IEEE Communications Letters, Institute of Electrical and Electronics Engineers, 2024, 28 (8), pp.1959 - 1963. Utilizing Unmanned Aerial Vehicles (UAVs) to provide coverage service has a variety of advantages over terrestrial cellular networks. A typical one is that multiple adjacent UAVs can form swarms dynamically to provide a stable connection for highly populated areas. Due to their random locations, stochastic geometry tools are used to evaluate their coverage performance. Prior work leveraged Poisson/Binomial and Poisson cluster processes to characterize single-swarm and multi-swarm deployments respectively. These models simplify the analytical procedure at the price of failing to capture the heavy-tailed property of the number of UAVs in swarms induced by population density. This leads to underestimated interference impact, inaccurate performance evaluation, and insufficient UAV deployments. For this reason, for the first time, we leverage Sibuya and discrete α-stable processes to characterize single-swarm and multi-swarm networks respectively. These two processes have tractable probability generating functionals and capture the heavy-tail property as well. In addition, we derive the coverage probability under the maximal instantaneous signal-to-interference ratio association policy for both deployments. Finally, simulation results validate our analytical models. (10.1109/LCOMM.2024.3419912)
    DOI : 10.1109/LCOMM.2024.3419912
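The Sibuya distribution mentioned above is the discrete heavy-tailed law with probability generating function 1 − (1 − s)^α, 0 < α < 1. A standard sequential-trials sampler (an illustrative sketch, not the paper's code) makes it easy to simulate swarm sizes and observe the heavy tail:

```python
import random

def sample_sibuya(alpha, rng=random):
    """Sample from the Sibuya(alpha) distribution, 0 < alpha < 1.

    Sequential-trials method: return the index of the first 'success',
    where trial k succeeds with probability alpha / k. This gives
    P(N = 1) = alpha, P(N = 2) = alpha * (1 - alpha) / 2, and so on,
    with the heavy tail P(N > n) decaying like n ** (-alpha)."""
    k = 1
    while rng.random() > alpha / k:
        k += 1
    return k
```

Because the tail index α is below 1, the mean swarm size is infinite, which is exactly the behaviour that light-tailed Poisson/Binomial swarm-size models cannot reproduce.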
  • Dimension reduction for fluid simulation and animation
    • Paliard Chloé
    , 2024. Despite tremendous improvements in graphics hardware performance as well as key algorithmic advancements since the beginning of the 2000s, some natural phenomena remain extremely costly to simulate. For instance, several tracks have been proposed to improve the performance of fluid simulations, which are animated by solving partial differential equations (PDEs), more specifically the highly non-linear Navier-Stokes equations. In this thesis, we first explore the use of deep learning to create a reduced space in which a solver can operate with lower costs, while still outputting high-quality solutions. We propose a model that enables the simulation of turbulent flows at a resolution four times higher than that of the given input in each dimension, with improved runtime performance compared to a high-resolution solver. Secondly, we build on contributions on intrinsic operators for simulating fluids on 3D surfaces with reduced costs. We focus on the smoothed-particle hydrodynamics (SPH) model, which we adapt to 3D surfaces by gathering the particles' neighborhoods thanks to shortest-path geodesics, and by displacing the particles in an intrinsic manner on the surface. All of this is straightforward to implement on the GPU, enabling the simulation of tens of thousands of particles on various triangle meshes at interactive speed.
  • Monchi: Multi-scheme Optimization For Collaborative Homomorphic Identification
    • Ibarrondo Luis Alberto
    • Kerenciler Ismet
    • Chabanne Hervé
    • Despiegel Vincent
    • Önen Melek
    , 2024. This paper introduces a novel protocol for privacy-preserving biometric identification, named Monchi, that combines the use of homomorphic encryption for the computation of the identification score with function secret sharing to obliviously compare this score with a given threshold and finally output the binary result. Given the cost of homomorphic encryption, BFV in this solution, we study and evaluate the integration of two packing solutions that enable the regrouping of multiple templates in one ciphertext to improve efficiency meaningfully. We propose an end-to-end protocol, prove it secure and implement it. Our experimental results attest to Monchi’s applicability to the real-life use case of an airplane boarding scenario with 1000 passengers, taking less than one second to authorize/deny access to the plane to each passenger via biometric identification while maintaining the privacy of all passengers. (10.1145/3658664.3659633)
    DOI : 10.1145/3658664.3659633
  • Laser Shield: a Physical Defense with Polarizer against Laser Attacks on Autonomous Driving Systems
    • Zhang Qingjie
    • Chi Lijun
    • Wang Di
    • Msahli Mounira
    • Memmi Gerard
    • Zhang Tianwei
    • Zhang Chao
    • Qiu Han
    , 2024, pp.1-6. Autonomous driving systems (ADS) are boosted with deep neural networks (DNNs) to perceive environments, but their security is called into question by DNNs' vulnerability to adversarial attacks. Among these, a diversity of laser attacks has emerged as a new threat due to their minimal requirements and high attack success rate in the physical world. Nevertheless, current defense methods exhibit either a low defense success rate or a high computation cost against laser attacks. To fill this gap, we propose Laser Shield, which leverages a polarizer along with a min-energy rotation mechanism to eliminate adversarial lasers from ADS scenes. We also provide a physical-world dataset, LAPA, to evaluate its performance. Through exhaustive experiments with three baselines, four metrics, and three settings, Laser Shield is shown to surpass state-of-the-art (SOTA) performance. (10.1145/3649329.3657358)
    DOI : 10.1145/3649329.3657358
  • Electroencephalography Response during an Incremental Test According to the V̇O2max Plateau Incidence
    • Billat Véronique
    • Berthomier Christian
    • Clémençon Michel
    • Brandewinder Marie
    • Essid Slim
    • Damon Cécilia
    • Rigaud François
    • Bénichoux Alexis
    • Maby Emmanuel
    • Fornoni Lesly
    • Bouchet Patrick
    • van Beers Pascal
    • Massot Bertrand
    • Revol Patrice
    • Poinsard Luc
    • Creveaux Thomas
    • Collet Christian
    • Mattout Jérémie
    • Pialoux Vincent
    Applied Sciences, Multidisciplinary digital publishing institute (MDPI), 2024, 14 (13), pp.5411. V̇O2max is recognized as a key measure in exercise physiology and sports medicine. However, only 20–50% of maximal incremental exercise tests (IET) result in a plateau of V̇O2 (V̇O2pl). To our knowledge, no study has yet examined the possible difference in brain activity during an IET, in V̇O2pl and non-plateau athletes with the same V̇O2max and age. This study aimed to shed light on the central governor hypothesis, namely that the inability to reach a V̇O2pl may be dictated by the brain rather than by a peripheral physical limit. This hypothesis can now be explored using electroencephalography (EEG) during IET, measuring concomitant power in specific frequency bands. Forty-two athletes were divided into two groups: those who practiced endurance sports and those who did not, and were asked to perform an IET. EEG signals and gas exchange were recorded. A V̇O2pl was observed in twenty-two subjects (52%). EEG power increased in all subjects during IET, except in the alpha band, which showed variability, but not significantly (64% increase, 34% decrease, p = 0.07). No differences were found between endurance athletes and non-endurance athletes, except for V̇O2max (60.10 ± 6.16 vs. 51.77 ± 6.41, p < 0.001). However, the baseline-corrected ratio of EEG power to V̇O2 was found to decrease in all subjects during IET, in the alpha, beta and theta bands. In conclusion, the presence or absence of a V̇O2pl is not related to the type of EEG response during an IET. Nevertheless, the decline in brain and V̇O2 powers/ratios in all frequency bands suggests that aerobic power may be constrained by brain mobilization. (10.3390/app14135411)
    DOI : 10.3390/app14135411
  • CoqDRAM - A Foundation for Designing Formally Proven Memory Controllers
    • Lisboa Malaquias Felipe
    , 2024. Recently proposed real-time memory controllers tackle the performance-predictability tradeoff by trying to offer the best of both worlds. However, as a consequence, designs have become complex and often present mathematical developments that are lengthy, hard to read and review, incomplete, and rely on unclear assumptions. Given that such components are often designed as part of microarchitectures that are used in safety-critical real-time systems, a high degree of confidence that systems behave correctly is required in order to meet certification goals. To address that problem, we propose a new framework written in the Coq theorem prover named CoqDRAM, in which we model DRAM devices and controllers and their expected behaviour as a formal specification. The framework is intended to aid the design of correct-by-construction, trustworthy DRAM scheduling algorithms. The CoqDRAM specification captures correctness criteria according to the JEDEC standards and states other high-level properties, such as fairness and sequential consistency. Following this approach, paper-and-pencil mathematical developments are replaced by machine-checked proofs, which increases confidence that the design is indeed correct. We showcase CoqDRAM's usability by modelling and proving two proof-of-concept scheduling algorithms: one based on the First-In First-Out (FIFO) arbitration policy and the other on Time-Division Multiplexing (TDM). Moreover, using CoqDRAM, we propose a new DRAM scheduling algorithm called TDMShelve, which extends and improves previous work on work-conserving dynamic TDM arbitration. More specifically, TDMShelve exploits information about the internal state of the memory at request scheduling level, thus providing a good balance between predictability and average-case latency for mixed-criticality real-time systems. Finally, we connect the algorithms written in CoqDRAM to software and hardware simulation environments. These environments are used to perform simulation runs that further validate the correctness of the CoqDRAM model.