
Publications

2024

  • Weibull mixture estimation based on censored data with applications to clustering in reliability engineering
    • Lamalle Florian
    • Feuillard Vincent
    • Sabourin Anne
    • Clémençon Stephan
    Quality and Reliability Engineering International, Wiley, 2024, 40 (8), pp.4247-4261. It is the purpose of this paper to propose a novel clustering technique tailored to randomly censored data in reliability/survival analysis. It is based on an underlying mixture model of Weibull distributions and consists in estimating its parameters by means of a variant of the Expectation–Maximization method in the presence of random censorship. Beyond the description of the algorithm, model selection issues are addressed and we investigate its performance from an empirical perspective by applying it to possibly strongly censored (synthetic and real) survival data. The experiments carried out provide strong empirical evidence that our algorithm performs better than alternative methods standing as natural competitors in this framework. (10.1002/qre.3647)
    DOI : 10.1002/qre.3647
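The core computation in such a fit, maximizing a likelihood in which censored observations contribute their survival probability rather than their density, can be sketched as follows. This is a minimal direct maximum-likelihood illustration on hypothetical synthetic data, not the authors' EM variant:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Hypothetical synthetic lifetimes from two sub-populations, right-censored at t = 8.
t_true = np.concatenate([
    weibull_min.rvs(1.5, scale=2.0, size=300, random_state=rng),
    weibull_min.rvs(3.0, scale=6.0, size=300, random_state=rng),
])
censor = 8.0
t = np.minimum(t_true, censor)
observed = t_true <= censor           # event indicator (False = right-censored)

def neg_log_lik(theta):
    """Censored log-likelihood of a 2-component Weibull mixture:
    observed points contribute the mixture density f(t),
    censored points contribute the survival function S(t)."""
    k1, s1, k2, s2, logit_pi = theta
    if min(k1, s1, k2, s2) <= 0:      # keep the search in the valid region
        return np.inf
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    f = pi * weibull_min.pdf(t, k1, scale=s1) + (1 - pi) * weibull_min.pdf(t, k2, scale=s2)
    S = pi * weibull_min.sf(t, k1, scale=s1) + (1 - pi) * weibull_min.sf(t, k2, scale=s2)
    return -np.where(observed, np.log(f + 1e-300), np.log(S + 1e-300)).sum()

x0 = [1.0, 1.0, 2.0, 5.0, 0.0]       # shapes, scales, mixture weight (logit)
res = minimize(neg_log_lik, x0, method="Nelder-Mead", options={"maxiter": 5000})
```

Once the mixture parameters are estimated, clustering follows by assigning each unit to the component with the highest posterior responsibility.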
  • Le Soleil se lèvera-t-il demain ?
    • Zayana Karim
    • Rioul Olivier
    , 2024, 88 (88), pp.112. Should we live each day as if it were our last? The question is a fair one when we know that, in 1814, Pierre-Simon de Laplace put the probability that the Sun would rise again the next day at 0.99999945...
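Laplace's figure comes from his rule of succession: after n consecutive successes, the probability of one more is (n + 1)/(n + 2). A one-line check, assuming his count of 5000 years of recorded history in days:

```python
# Laplace's rule of succession: after n successes in n trials, the
# probability of success on trial n + 1 is (n + 1) / (n + 2).
# Laplace counted 5000 years of recorded history, i.e. 1,826,213 sunrises.
n = round(5000 * 365.2426)            # = 1_826_213 days
p_tomorrow = (n + 1) / (n + 2)
print(f"{p_tomorrow:.8f}")            # prints 0.99999945
```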
  • Performance analysis of a RIS-assisted communications
    • Adrat Hamza
    • Decreusefond Laurent
    • Martins Philippe
    , 2024. Reconfigurable Intelligent Surfaces (RIS) are currently considered for adoption in future 6G standards. ETSI and 3GPP have started feasibility and performance investigations of such a technology. This work proposes an analytical model to analyze RIS performance. It relies on a simple street model where obstacles and mobile units are all aligned. The RIS is positioned on a building parallel to the road. The coverage probability in the presence of obstacles and concurrent communications is then computed as a performance criterion.
  • WaterMAS: sharpness-aware maximization for neural network watermarking
    • de Sousa Trias Carl
    • Mitrea Mihai
    • Fiandrotti Attilio
    • Cagnazzo Marco
    • Chaudhuri Sumanta
    • Tartaglione Enzo
    , 2024, 15305, pp.301-317. Nowadays, deep neural networks are used for solving complex tasks in several critical applications and protecting both their integrity and intellectual property rights (IPR) has become of utmost importance. To this end, we advance WaterMAS, a substitutive, white-box neural network watermarking method that improves the trade-off among robustness, imperceptibility, and computational complexity, while making provisions for increased data payload and security. WaterMAS insertion keeps the watermarked weights unchanged while sharpening their underlying gradient space. The robustness is thus ensured by limiting the attack’s strength: even small alterations of the watermarked weights would impact the model’s performance. The imperceptibility is ensured by inserting the watermark during the training process. The relationship among the WaterMAS data payload, imperceptibility, and robustness properties is discussed. The secret key is represented by the positions of the weights conveying the watermark, randomly chosen through multiple layers of the model. The security is evaluated by investigating the case in which an attacker would intercept the key. The experimental validations consider 5 models and 2 tasks (VGG16, ResNet18, MobileNetV3, SwinT for CIFAR10 image classification, and DeepLabV3 for Cityscapes image segmentation) as well as 4 types of attacks (Gaussian noise addition, pruning, fine-tuning, and quantization). The code will be released open-source upon acceptance of the article. (10.1007/978-3-031-78169-8_20)
    DOI : 10.1007/978-3-031-78169-8_20
  • OVOSE: Open-Vocabulary Semantic Segmentation in Event-Based Cameras
    • Rahman Muhammad Rameez Ur
    • Giraldo Jhony H.
    • Spinelli Indro
    • Lathuilière Stéphane
    • Galasso Fabio
    , 2024, 15316, pp.18–33. Event cameras, known for low-latency operation and superior performance in challenging lighting conditions, are suitable for sensitive computer vision tasks such as semantic segmentation in autonomous driving. However, challenges arise due to limited event-based data and the absence of large-scale segmentation benchmarks. Current works are confined to closed-set semantic segmentation, limiting their adaptability to other applications. In this paper, we introduce OVOSE, the first Open-Vocabulary Semantic Segmentation algorithm for Event cameras. OVOSE leverages synthetic event data and knowledge distillation from a pre-trained image-based foundation model to an event-based counterpart, effectively preserving spatial context and transferring open-vocabulary semantic segmentation capabilities. We evaluate the performance of OVOSE on two driving semantic segmentation datasets, DDD17 and DSEC-Semantic, comparing it with existing conventional image open-vocabulary models adapted for event-based data. Similarly, we compare OVOSE with state-of-the-art methods designed for closed-set settings in unsupervised domain adaptation for event-based semantic segmentation. OVOSE demonstrates superior performance, showcasing its potential for real-world applications. The code is available at https://github.com/ram95d/OVOSE. (10.1007/978-3-031-78444-6_2)
    DOI : 10.1007/978-3-031-78444-6_2
  • Efficient adaptive equalization for PMD mitigation in next-generation optical access networks
    • Nwakamma Peter
    , 2024. As the world transitions fully into the fourth industrial revolution, the demand for connectivity will continue to increase. For optical networks, this increase in demand will originate mostly from the optical access network (OAN), which must accommodate these demands in a flexible and cost-effective way. Passive optical networks (PONs) are currently the most deployed OAN because of their cost-effectiveness, thanks to the passive splitting architecture and the use of intensity modulation and direct detection (IM-DD) technology that requires relatively cheap optics. Several evolutions of the IM-DD PON have been standardized to meet growing single-channel capacity requirements: 10 Gigabit/second (Gbps or G)-PON, 25G-PON, and recently 50G-PON. However, this increase in capacity comes at the expense of cost, to the point that coherent detection (CohD), a costly DSP-enabled core-network technology, is being considered for PON (CPON). Scaling up IM-DD PON to support single-channel capacities of 100G and beyond brings it closer in cost to CohD, motivating the counter-strategy of scaling down the cost of a potential CPON that will easily permit 100G and beyond. However, considering the asymmetric architecture of PON, which requires burst-mode operation, the DSP operations in CPON must also be burst-mode compatible. This translates to a strict requirement on latency. Considering also that the polarization mode dispersion (PMD) impairment in the OAN can be high and that it impacts DSP equalization, careful analysis of the convergence properties of equalization DSP is necessary. In this thesis, we address the issue of latency, focusing on adaptive equalization. Firstly, we propose an adaptive equalizer that can overcome the limitations induced by the potentially wide-range PMD environment of the OAN. Secondly, we enhance the proposed equalizer to self-reconfigure depending on the PMD level. Finally, in the context of overall DSP complexity reduction, we investigate the potential of the proposed algorithm to mitigate chromatic dispersion (CD), thereby reducing the requirement on a separate CD equalization DSP block.
  • A First Look at the Impact of Measurement on Orchestrating Digital Twin Network
    • Tao Weichen
    • Linguaglossa Leonardo
    , 2024, pp.1-6. Digital twin networks are emerging as key drivers for future automated and high-performance networks. Digital twin networks create virtual representations of physical networks, enabling real-time monitoring, simulation, and optimization. The accuracy and timeliness of data are crucial for building a precise digital twin network. However, constructing an accurate digital twin network faces significant challenges due to the difficulties in data collection. In network measurement and data collection, data uncertainty is often unavoidable due to observer effects, where the act of measurement itself imposes an impact on the system. This phenomenon can introduce biases or perturbations that compromise the accuracy of digital twin network models, leading to less precise representations of the network's actual state and behavior. This paper systematically reviews existing measurement schemes and proposes a new classification. We evaluate these measurement methods within a simple network environment, analyzing their impact on system performance and delving into the underlying causes of performance degradation. These insights contribute to the development of a more accurate and efficient digital twin network. (10.1109/CloudNet62863.2024.10815817)
    DOI : 10.1109/CloudNet62863.2024.10815817
  • Enriching GNNs with Text Contextual Representations for Detecting Disinformation Campaigns on Social Media
    • Croso Cunha da Silva Bruno
    • Palmeira Ferraz Thomas
    • Deus Lopes Roseli De
    , 2024. Disinformation on social media poses both societal and technical challenges, requiring robust detection systems. While previous studies have integrated textual information into propagation networks, they have yet to fully leverage the advancements in Transformer-based language models for high-quality contextual text representations. This work addresses this gap by incorporating Transformer-based textual features into Graph Neural Networks (GNNs) for fake news detection. We demonstrate that contextual text representations enhance GNN performance, achieving a 33.8% relative improvement in Macro F1 over models without textual features and 9.3% over static text representations. We further investigate the impact of different feature sources and the effects of noisy data augmentation. We expect our methodology to open avenues for further research, and we make our code publicly available.
  • Domain Gap and Privacy in Person Re-Identification
    • Rami Hamza
    , 2024. Person Re-Identification (Re-ID) aims to recognize individuals across non-overlapping surveillance cameras. Despite its potential for security, Re-ID models suffer from the domain gap—the discrepancy between training (source domain) and real-world deployment (target domain). Unsupervised Domain Adaptation (UDA) mitigates this issue, enabling adaptation without labeled target data. However, privacy regulations like GDPR and the AI Act impose strict limits on data storage and transfer, making traditional UDA methods, reliant on centralized data, legally and ethically problematic. To address this, we introduce Online UDA (OUDA-Rid), which adapts models from continuous data streams without storing past data, and Distributed UDA (DUDA-Rid), which decentralizes adaptation across multiple cameras to prevent data transfer. We propose Source-Guided Similarity Preservation (S2P) and Fed-Protoid to meet these constraints. S2P mitigates catastrophic forgetting in OUDA-Rid by preserving critical feature similarities from the source domain, ensuring privacy-preserving continual adaptation. Fed-Protoid leverages federated learning to address data transfer restrictions in DUDA-Rid, allowing distributed adaptation without sharing sensitive images. Our frameworks offer a privacy-preserving solution for Person Re-ID while bridging the domain gap. We validate them across multiple scenarios, including real-to-real and synthetic-to-real adaptation, on datasets such as Market-1501, MSMT17, CUHK03, and RandPerson. The results confirm that S2P and Fed-Protoid achieve strong performance under real-world constraints.
  • Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning
    • Almuhammad Alali Salman
    • Qamar Maryam
    • Bae Sung-Ho
    • Tartaglione Enzo
    , 2024. In recent times, the utilization of 3D models has gained traction, owing to the capacity for end-to-end training initially offered by Neural Radiance Fields and more recently by 3D Gaussian Splatting (3DGS) models. The latter holds a significant advantage by inherently easing rapid convergence during training and offering extensive editability. However, despite rapid advancements, the literature is still in its infancy regarding the scalability of these models. In this study, we take some initial steps in addressing this gap, showing an approach that enables both the memory and computational scalability of such models. Specifically, we propose “Trimming the fat”, a post-hoc gradient-informed iterative pruning technique to eliminate redundant information encoded in the model. Our experimental findings on widely acknowledged benchmarks attest to the effectiveness of our approach, revealing that up to 75% of the Gaussians can be removed while maintaining or even improving upon baseline performance. Our approach achieves around 50× compression while preserving performance similar to the baseline model, and is able to speed up computation to 600 FPS.
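As a rough illustration of post-hoc iterative pruning driven by a per-Gaussian importance score (the paper's exact gradient-informed criterion may differ; the parameter layout and the synthetic scores below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained 3DGS model: each row holds one Gaussian's parameters,
# and grad_norms mimic accumulated gradient magnitudes from forward/backward
# passes (synthetic here; a real implementation would collect them from training).
n_gaussians = 10_000
params = rng.normal(size=(n_gaussians, 8))
grad_norms = rng.gamma(shape=2.0, scale=1.0, size=n_gaussians)

def iterative_prune(params, scores, target_keep=0.25, rounds=5):
    """Remove the lowest-scoring Gaussians over several rounds,
    keeping roughly `target_keep` of the original count at the end."""
    keep_per_round = target_keep ** (1.0 / rounds)
    for _ in range(rounds):
        k = int(len(scores) * keep_per_round)
        idx = np.argsort(scores)[-k:]        # indices of the k highest scores
        params, scores = params[idx], scores[idx]
    return params, scores

pruned, _ = iterative_prune(params, grad_norms)
print(len(pruned) / n_gaussians)  # ≈ 0.25: 75% of Gaussians removed
```

In the paper's setting, each pruning round would be followed by a few fine-tuning steps so the remaining Gaussians can absorb the removed capacity.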
  • Addressing Input/Output composition through Open Automata representation
    • Ameur-Boulifa Rabéa
    • Mechiouri Sarah Chabane
    , 2025, 2462, pp.203-218. Compositional design is a highly convenient approach for specifying and verifying large-scale systems, particularly distributed systems. Automata are widely employed as the basic formalism in this approach. These systems, which are state machines, allow for modelling individual components, their interactions, and their behaviours, but also provide standard modelling methods such as parallel composition and levels of abstraction. In this paper, we focus on distributed systems composed of reactive components. We will examine a new formalism that builds upon existing formalisms, allowing for the modelling of the components and their interactions. The aim of this formalism is to provide a theoretical tool for reasoning about behavioural equivalence among reactive systems. (10.1007/978-3-031-88226-5_14)
    DOI : 10.1007/978-3-031-88226-5_14
  • Modeling of micro-architecture for security with gem5
    • Forcioli Quentin
    , 2024. Embedded systems are the target of a wide variety of attacks, at both the software and hardware levels. Microarchitectural attacks are particularly difficult to study. By taking advantage of the specific behaviors of systems-on-a-chip, these attacks enable an attacker to take control of a system or protected resources, bypassing process isolation mechanisms. These attacks can target all elements in an SoC: CPU, caches, memory, accelerators (FPGA, GPU), interfaces, etc. The Trusted Execution Environment (TEE), a key element of SoC security involved in securing banking applications, is also the target of micro-architectural attacks. In this thesis, I adopt a simulation-based approach to security: through a virtual platform based on gem5, I reproduce and study micro-architectural attacks against TEEs. To achieve this, I improved gem5’s support for TEEs, allowing the use of an open-source TEE (OP-TEE). I also augmented the GDB debugger present in gem5 to allow the study of attack scenarios, leveraging the simulator environment. With this interface, I created TEE-Time, a tool to analyze cache-timing weaknesses. Thanks to TEE-Time, I found vulnerabilities in standard RSA implementations in OP-TEE, and validated these vulnerabilities with cache-timing attacks simulated using my virtual platform. To further validate these attacks on a real system, I developed a virtual platform reproducing the RockPi4 board. To simulate the Rockchip RK3399 SoC on the RockPi4, I developed PyDevices, fast-prototyping tools for system devices using gem5’s Python interface. Through cache-timing simulation, I discovered that the RK3399 uses AutoLock, an ARM-specific cache protocol. Compiling AutoLock into gem5, I ran my attack scenario targeting OP-TEE’s RSA implementation on the RK3399 simulation. By executing this same attack without any modification on a RockPi4, I managed to leak an average of 30% of the RSA key bits, thus making the link between cache attacks and their exploitation in a real system.
  • Deep learning in medical image analysis: introduction to underlying principles and reviewer guide using diagnostic case studies in paediatrics
    • Dubois Constance
    • Eigen David
    • Delmas Emmanuel
    • Einfalt Margot
    • Lemaçon Clara
    • Berteloot Laureline
    • Bossuyt Patrick
    • Drummond David
    • Scherdel Pauline
    • Simon François
    • Torchin Héloïse
    • Vali Yasaman
    • Bloch Isabelle
    • Cohen Jérémie
    BMJ - British Medical Journal, BMJ, 2024, 387, pp.e076703. Deep learning, a subset of artificial intelligence, has gained attention in recent years for its ability to achieve human level performance in medical image analysis. As deep learning is increasingly being studied in medical image analysis, it is essential that clinicians are familiar with its underlying principles, strengths, and possible pitfalls in their evaluation. This article aims to clarify deep learning techniques applied in medical image analysis and to help frontline clinicians understand how to read and appraise studies about this new and rapidly advancing technology. While image analysis using deep learning has the potential to enhance the diagnosis of various medical conditions, clinicians, policy makers, and patients should exercise caution when evaluating the available evidence. (10.1136/bmj-2023-076703)
    DOI : 10.1136/bmj-2023-076703
  • Kernel‐Based Bootstrap Synthetic Data to Estimate Measurement Uncertainty in Analytical Sciences
    • Feinberg Max
    • Clémençon Stéphan
    • Rudaz Serge
    • Boccard Julien
    Journal of Chemometrics, Wiley, 2024, 38 (12), pp.1-15. Measurement uncertainty (MU) is becoming a key figure of merit for analytical methods, and estimating MU from method validation data is cost‐effective and practical. Since MU can be defined as a coverage interval of a given result, the computation of statistical prediction intervals is a possible approach, but the quality of the intervals is questionable when the number of available data is reduced. In this context, the bootstrap procedure constitutes an efficient strategy to increase the observed data variability. While applying naive bootstrap to validation data raises some computational challenges, the use of smooth bootstrap is much more interesting when synthetic data are generated using an adapted kernel density estimation algorithm. MU can be directly obtained in a very convenient way as an uncertainty function applicable to any unknown future measurement. This publication presents the advantages and disadvantages of this new method illustrated using diverse in‐house and interlaboratory validation data. (10.1002/cem.3628)
    DOI : 10.1002/cem.3628
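The smooth-bootstrap idea, resampling observed validation data and adding kernel noise before reading off a coverage interval, can be sketched as follows. The data, the Gaussian kernel, and Silverman's bandwidth are illustrative assumptions, not the paper's adapted KDE algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical validation-style data: replicate measurements of one sample.
x = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7])

# Smooth (kernel) bootstrap: resample with replacement, then add Gaussian
# kernel noise whose bandwidth follows Silverman's rule of thumb.
n = len(x)
h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
B = 20_000
boot = rng.choice(x, size=B) + rng.normal(0.0, h, size=B)

# A 95% coverage interval from the synthetic data, in the spirit of an
# uncertainty interval around a future measurement.
lo, hi = np.quantile(boot, [0.025, 0.975])
```

Compared with a naive bootstrap, the added kernel noise smooths the discrete resampling distribution, which matters precisely when the number of validation points is small.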
  • Stylometry for real-world expert coders: a zero-shot approach
    • Gurioli Andrea
    • Gabbrielli Maurizio
    • Zacchiroli Stefano
    PeerJ Computer Science, PeerJ, 2024, 10, pp.e2429. Code stylometry is the application of stylometry techniques to determine the authorship of software source code snippets. It is used in the industry to address use cases like plagiarism detection, code audits, and code review assignments. Most works in the code stylometry literature use machine learning techniques and (1) rely on datasets coming from in vitro coding competition for training, and (2) only attempt to recognize authors present in the training dataset (in-distribution authors). In this work we give a fresh look at code stylometry and challenge both these assumptions: (1) we recognize expert authors who contribute to real-world open-source projects, and (2) we show how to accurately recognize authors not present in the training set (out-distribution authors). We assemble a novel open dataset of code snippets for code stylometry tasks consisting of 114,400 code snippets, authored by 104 authors having contributed 1,100 snippets each. We develop a K-nearest neighbors algorithm (k-NN) classifier for the code stylometry task and train it on the dataset. Our system achieves a top accuracy of 69% among five randomly selected in-distribution authors, thus improving state of the art by more than 20%. We also show that when moving from in-distribution to out-distribution authors, the classification performances of the k-NN classifier remain the same, achieving a top accuracy of 71% among five randomly selected out-distribution authors. (10.7717/peerj-cs.2429)
    DOI : 10.7717/peerj-cs.2429
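A toy version of such a pipeline, style features plus a k-NN classifier, might look like the sketch below. The snippets, authors, and character n-gram features are illustrative assumptions; the paper's actual feature extraction differs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the task: attribute snippets to authors from
# character n-gram style features.
snippets = [
    "for (int i = 0; i < n; ++i) sum += a[i];",
    "for (int j = 0; j < len; ++j) total += v[j];",
    "result = [x * x for x in values if x > 0]",
    "squares = [y * y for y in data if y > 0]",
]
authors = ["alice", "alice", "bob", "bob"]

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vec.fit_transform(snippets)
clf = KNeighborsClassifier(n_neighbors=1).fit(X, authors)

# A new snippet in alice's C-style idiom is attributed to her.
query = vec.transform(["for (int k = 0; k < m; ++k) acc += b[k];"])
print(clf.predict(query)[0])  # prints alice
```

The out-distribution setting in the paper goes further: there, k-NN votes over snippet similarity are used to decide among candidate authors never seen at training time.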
  • Higher-Order GNNs Meet Efficiency: Sparse Sobolev Graph Neural Networks
    • Giraldo Jhony
    • Einizade Aref
    • Todorovic Andjela
    • Castro-Correa Jhon
    • Badiey Mohsen
    • Bouwmans Thierry
    • Malliaros Fragkiskos D.
    IEEE Transactions on Signal and Information Processing over Networks, IEEE, 2024, 11, pp.11-22. Graph Neural Networks (GNNs) have shown great promise in modeling relationships between nodes in a graph, but capturing higher-order relationships remains a challenge for large-scale networks. Previous studies have primarily attempted to utilize the information from higher-order neighbors in the graph, involving the incorporation of powers of the shift operator, such as the graph Laplacian or adjacency matrix. This approach comes with a trade-off in terms of increased computational and memory demands. Relying on graph spectral theory, we make a fundamental observation: the regular and the Hadamard power of the Laplacian matrix behave similarly in the spectrum. This observation has significant implications for capturing higher-order information in GNNs for various tasks such as node classification and semi-supervised learning. Consequently, we propose a novel graph convolutional operator based on the sparse Sobolev norm of graph signals. Our approach, known as Sparse Sobolev GNN (S2-GNN), employs Hadamard products between matrices to maintain the sparsity level in graph representations. S2-GNN utilizes a cascade of filters with increasing Hadamard powers to generate a diverse set of functions. We theoretically analyze the stability of S2-GNN to show the robustness of the model against possible graph perturbations. We also conduct a comprehensive evaluation of S2-GNN across various graph mining, semi-supervised node classification, and computer vision tasks. In particular use cases, our algorithm demonstrates competitive performance compared to state-of-the-art GNNs in terms of performance and running time. (10.1109/TSIPN.2024.3503416)
    DOI : 10.1109/TSIPN.2024.3503416
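The sparsity argument is easy to verify numerically: the Hadamard (element-wise) power of a sparse Laplacian keeps exactly its sparsity pattern, while the regular matrix power fills in 2-hop entries. A small sketch with a random graph (the size and density are arbitrary choices for illustration):

```python
import numpy as np
from scipy import sparse
from scipy.sparse import csgraph
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# A sparse undirected graph and its combinatorial Laplacian L = D - A.
A = sparse_random(200, 200, density=0.02, random_state=rng, format="csr")
A = ((A + A.T) > 0).astype(float)
A.setdiag(0)
A.eliminate_zeros()
L = sparse.csr_matrix(csgraph.laplacian(A))

# Regular power L @ L densifies (2-hop fill-in); the Hadamard power
# L.multiply(L) keeps exactly the sparsity pattern of L.
dense_nnz = (L @ L).nnz
hadamard_nnz = L.multiply(L).nnz

print(hadamard_nnz == L.nnz)   # True: same pattern as L
print(dense_nnz > L.nnz)       # True: fill-in from the matrix power
```

This is what lets a cascade of Hadamard powers emulate higher-order filters at the memory cost of a first-order one.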
  • Meta-Evaluation Methodology and Benchmark for Automatic Story Generation
    • Chhun Cyril
    , 2024. Storytelling is a central component of human culture. Multiple approaches have been proposed to explore computational storytelling, despite the inherent challenges posed by the tasks of generating stories and assessing their quality. In this thesis, we design a meta-evaluation methodology and benchmark for Automatic Story Generation (ASG). First, we lay the groundwork for conducting our meta-evaluation: we describe our chosen setting, provide definitions for the ASG and Automatic Story Evaluation (ASE) tasks, and propose an original set of six criteria for story evaluation. Then, we introduce HANNA, our corpus of Human ANnotated NArratives, which contains 1,056 stories annotated w.r.t. our six criteria, and show that those criteria allow for a standardized human evaluation. We use Large Language Models (LLMs) to augment HANNA with 480 new stories and 150k+ rating annotations. We observe that LLMs obtain better grades than humans, as rated by selected LLMs. After that, we perform our meta-evaluation benchmark on HANNA. We mainly observe that specific measures for ASE are needed, and that commonly-used measures (e.g. BLEU) are sub-optimal. We then show our analysis of LLM performance at ASE: we find that LLMs are currently the best proxy for human evaluation of ASG and that, in our specific setting, providing detailed guidelines does not improve correlations between LLM and human ratings. Those results prompt us to study whether the performance displayed by LLMs at ASE and ASG can be explained through different factors. We perform a three-part study on LLM-generated explanations, and an analysis of pretraining data on LLM performance. Notably, we find that LLMs struggle to explain their answers with substantiated claims. Finally, we outline three main research perspectives: designing specific ASE measures, further investigating LLM performance at ASG and ASE, and assessing and mitigating the impact of LLMs on society.
  • 50 Shades of Delta Sigma
    • Jabbour Chadi
    • Frappé Antoine
    • Schlegel Nicolas
    , 2024. Delta Sigma modulators (DSM) have been around for more than 60 years. In 1962, Inose and his colleagues introduced one of the first implementations of DSM for a code modulation communication for a telemetering system. Since then, for more than six decades, this remarkable architecture has been employed to design most of the building blocks of audio systems, sensors, or wireless transceivers. When we talk about DSM, we think first about Analogue to Digital Converters (ADC), but DSM are also excellent candidates to build high-precision classical Digital to Analogue Converters (DAC) and also RF DACs with the signal directly centered at the LO frequency. DSM are also more and more used in hybrid ADCs (SAR DSM or noise-shaping SAR) or hybrid DACs (combination of Nyquist DAC and DSM DACs). Other innovative uses of DSM were also proposed, such as building RF-to-digital receivers based on Band Pass DSM or Direct Delta Sigma Receivers. DSM principles are also used for massive MIMO systems to perform noise shaping with respect to transmission angles instead of frequency. Despite the diversity of these applications, their design methodologies share many aspects. In this tutorial, we will present the latest trends with Delta Sigma Modulators, from the design methodology that was not spared by Artificial Intelligence, to system innovations, performance of the latest implementations, and future applications. All along the presentation, we will share our experience with this architecture, our successes, and also the mistakes we have made and problems we have faced. We will present our current and future projects in the field, as well as the expected trends.
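The principle behind all of these variants is the same noise-shaping loop. A minimal first-order modulator, sketched in Python purely for illustration (a DC input is reproduced by the average of the 1-bit output stream):

```python
import numpy as np

def dsm_first_order(x):
    """First-order delta-sigma modulator: a 1-bit quantizer inside an
    integrator loop pushes quantization noise to high frequencies."""
    v, y_prev = 0.0, 0.0
    y = np.empty(len(x))
    for i, s in enumerate(x):
        v += s - y_prev               # integrate the quantization error
        y_prev = 1.0 if v >= 0 else -1.0
        y[i] = y_prev
    return y

# A DC input: the 1-bit stream's running average reproduces the input level.
x = np.full(5000, 0.3)
bits = dsm_first_order(x)
print(bits.mean())  # ≈ 0.3
```

Because the integrator state stays bounded, the mean of the output tracks the input to within O(1/N), which is the essence of the oversampling gain exploited by every DSM-based converter listed above.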
  • A Sampling Based Clock Calibration Technique for Low Power Systems
    • Jabbour Chadi
    , 2024, pp.1-4. This paper presents a novel approach for oscillator calibration suited mainly for IoT and other low-power devices. As a matter of fact, oscillators in integrated circuits can see their resonance frequency altered due to process, voltage, and temperature variations. Using an external reference, often available in IoT devices, the proposed solution uses sampling theory to estimate the actual oscillator frequency and to perform the correction. An electrical implementation of the approach in a 0.18 μm CMOS technology is presented for a leadless pacemaker scenario. The calibration of the 4 MHz oscillator has a precision of 0.5% and consumes only 0.69 nJ. (10.1109/ICECS61496.2024.10848589)
    DOI : 10.1109/ICECS61496.2024.10848589
  • Defining Lyapunov functions as the solution of a performance estimation saddle point problem
    • Fercoq Olivier
    , 2024. In this paper, we reinterpret quadratic Lyapunov functions as solutions to a performance estimation saddle point problem. This allows us to automatically detect the existence of such a Lyapunov function and thus numerically check that a given algorithm converges. The novelty of this work is that we show how to define the saddle point problem using the PEPit software and then solve it with DSP-CVXPY. This combination gives us very strong modeling power because defining new points and their relations across iterates is very easy in PEPit. We can effortlessly define auxiliary points used for the sole purpose of designing more complex Lyapunov functions, define complex functional classes like the class of convex-concave saddle point problems whose smoothed duality gap has the quadratic error bound property, or study complex algorithms like the primal-dual coordinate descent method.
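The object being searched for automatically here is the classical certificate: a quadratic V(x) = xᵀPx that decreases along iterates. For a linear iteration it can be found in closed form; the sketch below uses SciPy's discrete Lyapunov solver rather than PEPit/DSP-CVXPY, and the matrix A is an arbitrary stable example:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A linear iteration x_{k+1} = A x_k (e.g. gradient descent on a quadratic).
# A quadratic Lyapunov function V(x) = x^T P x certifies convergence when
# P > 0 (positive definite) solves  A^T P A - P = -Q  for some Q > 0.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)   # solves A^T P A - P = -Q

# Check the certificate: P is positive definite and V decreases by x^T Q x.
eigs = np.linalg.eigvalsh(P)
decrease = A.T @ P @ A - P            # should equal -Q
print(eigs.min() > 0)                 # True: P is positive definite
```

The performance estimation viewpoint of the paper generalizes this search to nonlinear operators and functional classes, where no closed-form solver exists and the saddle point formulation takes over.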
  • An Action Language-Based Formalisation of an Abstract Argumentation Framework
    • Munro Yann
    • Sarmiento Camilo
    • Bloch Isabelle
    • Bourgne Gauvain
    • Pelachaud Catherine
    • Lesot Marie-Jeanne
    , 2025, 15395, pp.155-171. An abstract argumentation framework is a commonly used formalism to provide a static representation of a dialogue. However, the order of enunciation of the arguments in an argumentative dialogue is very important and can affect the outcome of this dialogue. In this paper, we propose a new framework for modelling abstract argumentation graphs, a model that incorporates the order of enunciation of arguments. By taking this order into account, we have the means to deduce a unique outcome for each dialogue, called an extension. We also establish several properties, such as termination and correctness, and discuss two notions of completeness. In particular, we propose a modification of the previous transformation based on a "last enunciated last updated" strategy, which verifies the second form of completeness. (10.1007/978-3-031-77367-9_13)
    DOI : 10.1007/978-3-031-77367-9_13
  • Neural Networks for multi user Beam Management in mmWave Massive MIMO
    • Ktari Aymen
    • Rekaya Ghaya
    , 2024, pp.94-99. In this paper, we propose a new approach for Machine Learning (ML)-based multi-user Beam Alignment (BA) for an uplink scenario using mmWave Massive MIMO. We propose to sound the smallest possible subset of beams and compute their corresponding Signal to Noise and Interference Ratio (SINR). These sounded SINR values are then fed into several neural networks in order to predict values for the remaining non-sounded beam pairs, benchmarking against the de facto method for the alignment procedure, the Exhaustive BA. We propose two methods: generalized point-to-point matrix completion using a Multi-Layer Perceptron (MLP) and an Auto-Encoder (AE) on the one hand, and tensor completion using a Convolutional Neural Network (CNN) on the other hand. Our extensive numerical simulations illustrate how the large pilot signaling overhead problem can be countered with high prediction quality using only 10% of the total beam samples. (10.1109/MECOM61498.2024.10881791)
    DOI : 10.1109/MECOM61498.2024.10881791
  • Population Density and DL EMF Exposure Levels by Region in Korea
    • Lee Ae-Kyoung
    • Jeon Sangbong
    • Wang Shanshan
    • Wiart Joe
    • Choi Hyung-Do
    • Moon Jung Ick
    , 2024, pp.781-783. In 2023, the electric field strength within mobile communication bands was measured in the largest city (Seoul), a small city (Gwangju, Gyeonggi province), and a rural area (Yangpyeong, Gyeonggi province) in South Korea. The three measurement regions were selected based on population density: the population densities of Seoul, Gwangju, and Yangpyeong are about 15,550/km2, 856/km2, and 133/km2, respectively. Measurements were performed by mounting the SRM3006 antenna on the roof of a vehicle and driving for approximately 40 km in each region. In this paper, the authors report the results of analyzing downlink RF-EMF levels in mobile communication networks currently in operation by frequency, time, and region. (10.1109/APMC60911.2024.10867416)
    DOI : 10.1109/APMC60911.2024.10867416
  • Superselection rules and bosonic quantum computational resources
    • Descamps Eloi
    • Fabre Nicolas
    • Saharyan Astghik
    • Keller Arne
    • Milman Pérola
    , 2024. We present a method to systematically identify and classify quantum optical non-classical states as classical/non-classical based on the resources they create on a bosonic quantum computer. This is achieved by converting arbitrary bosonic states into multiple modes, each occupied by a single photon, thereby defining qubits of a bosonic quantum computer. Starting from a bosonic classical-like state in a representation that explicitly respects particle number super-selection rules, we apply universal gates to create arbitrary superpositions of states with the same total particle number. The non-classicality of the corresponding states can then be associated to the operations they induce in the quantum computer. We also provide a correspondence between the adopted representation and the more conventional one in quantum optics, where superpositions of Fock states describe quantum optical states, and we identify how multi-mode states can lead to quantum advantage. Our work contributes to establish a seamless transition from continuous to discrete properties of quantum optics while laying the grounds for a description of non-classicality and quantum computational advantage that is applicable to spin systems as well. (10.1103/PhysRevLett.133.260605)
    DOI : 10.1103/PhysRevLett.133.260605
  • Gaps or Hallucinations? Scrutinizing Machine-Generated Legal Analysis for Fine-grained Text Evaluations
    • Hou Abe
    • Jurayj William
    • Holzenberger Nils
    • Blair-Stanek Andrew
    • van Durme Benjamin
    , 2024, 2024, pp.280-302. Large Language Models (LLMs) show promise as a writing aid for professionals performing legal analyses. However, LLMs can often hallucinate in this setting, in ways difficult to recognize by non-professionals and existing text evaluation metrics. In this work, we pose the question: when can machine-generated legal analysis be evaluated as acceptable? We introduce the neutral notion of gaps – as opposed to hallucinations in a strict erroneous sense – to refer to the difference between human-written and machine-generated legal analysis. Gaps do not always equate to invalid generation. Working with legal experts, we consider the CLERC generation task proposed in Hou et al. (2024b), leading to a taxonomy, a fine-grained detector for predicting gap categories, and an annotated dataset for automatic evaluation. Our best detector achieves 67% F1 score and 80% precision on the test set. Employing this detector as an automated metric on legal analysis generated by SOTA LLMs, we find around 80% contain hallucinations of different kinds. (10.18653/v1/2024.nllp-1.24)
    DOI : 10.18653/v1/2024.nllp-1.24