Publications

2024

  • Proactive VNF Redeployment and Traffic Routing for modern telco networks
    • Liu Qiong
    • Zhang Tianzhu
    • Cerroni Walter
    • Linguaglossa Leonardo
    , 2024. The last decade has witnessed the rise of Network Function Virtualization (NFV). Despite its benefits, resource allocation and traffic scheduling are still challenging. A practical issue is where to place Virtual Network Functions (VNFs) in the network to sustain long-term optimal objectives and when to reallocate resources given the dynamics of the substrate network. Most prior works either consider static settings or operate in a reactive fashion. This paper proposes a dual-window algorithm for proactive service redeployment and traffic routing. Specifically, our algorithm employs an entropy measure to gauge the uncertainty in the substrate network and cognitively updates the service redeployment interval to avoid unnecessary data collection overhead. Our algorithm is lightweight, intuitive, requires no offline training, and achieves the best overall effectiveness and efficiency compared with three state-of-the-art solutions. (An illustrative sketch of the entropy-driven interval update appears after this list.)
  • Tight side-channel security bounds on hardware cryptographic engines
    • Béguinot Julien
    • Cheng Wei
    • Guilley Sylvain
    • Rioul Olivier
    , 2024. In this presentation, we provide an illustration of why open source is paramount in secure hardware systems. Namely, we pedagogically recall that "random masking" is currently the most effective hardware countermeasure against side-channel attacks. However, such protection cannot be considered sound unless its implementation can be checked for correctness. We start by exhibiting several open implementations of cryptographic algorithms leveraging the random masking countermeasure at several orders. We also claim that trustworthy hardware should be based on (easily verifiable) mathematically proven security. The state of the art is paradoxical in this respect. On the one hand, it is often considered that security against side-channel attacks increases exponentially with the masking order. On the other hand, the circuit size grows quadratically with the masking order for "ISW-like" implementations. There exist some attempts to reduce this overhead with the so-called quasi-linear implementations. Those are compositional, meaning that they can be built bottom-up from gadgets. In this respect, they are amenable to being generated by open-source / free "high-level synthesis" tools. We deliver two main take-away points to guide open implementations of masking schemes. The first point is that there exists a finite optimal masking order with respect to the security proofs; thus, increasing the masking order indefinitely may actually be detrimental to security. The second point is that quasi-linear implementations lead to better security guarantees. This shows that developing such innovative countermeasures is of interest for both efficiency _and_ security.
  • A Countermeasure to Side Channel Message Recovery Attacks Using Chosen Ciphertexts Against ML-KEM
    • Berthet Pierre-Augustin
    , 2024.
  • What Are "Good" Rhythms? Generating Rhythms Based on the Properties Set Out in The Geometry of Musical Rhythm
    • Lascabettes Paul
    • Bloch Isabelle
    , 2024, 14639, pp.45–57. We propose in this article to generate good rhythms from geometric properties. This approach is based on the work by Toussaint, who investigated the properties that make a “good” rhythm good in his book The Geometry of Musical Rhythm. To do this, he analyzed the shapes of polygons corresponding to certain rhythms to derive geometric properties of a good rhythm. In this article, we propose to quantify these properties using original mathematical formulas. Each rhythm is thus scored against several properties to measure how good it is, which enables the generation of rhythms with k onsets and n pulses. When k = 5 and n = 16, we reveal that the son rhythm obtains the best score, as predicted by Toussaint at the end of his book. Applying the method to other values of k and n, we demonstrate that some of the rhythms with the highest scores are musically important rhythms, such as the tresillo rhythm with k = 3 and n = 8, the fume-fume rhythm with k = 5 and n = 12, the samba rhythm with k = 7 and n = 16, or Steve Reich’s signature rhythm with k = 8 and n = 12. (A toy search-and-score sketch appears after this list.)
  • Query-Guided Resolution in Uncertain Databases
    • Drien Osnat
    • Freiman Matanya
    • Amarilli Antoine
    • Amsterdamer Yael
    , 2023, 1 (2), Article No. 180, pp.1-27. We present a novel framework for uncertain data management. We start with a database whose tuple correctness is uncertain and an oracle that can resolve the uncertainty, i.e., decide if a tuple is correct or not. Such an oracle may correspond, e.g., to a data expert or to a crowdsourcing platform. We wish to use the oracle to clean the database with the goal of ensuring the correct answer for specific mission-critical queries. To avoid the prohibitive cost of cleaning the entire database and to minimize the expected number of calls to the oracle, we must carefully select tuples whose resolution would suffice to resolve the uncertainty in query results. In other words, we need a query-guided process for the resolution of uncertain data. We develop an end-to-end solution to this problem, based on the derivation of query answers and on correctness probabilities for the uncertain data. At a high level, we first track Boolean provenance to identify which input tuples contribute to the derivation of each output tuple, and in what ways. We then design an active learning solution for iteratively choosing tuples to resolve, based on the provenance structure and on an evolving estimation of tuple correctness probabilities. We conduct an extensive experimental study to validate our framework in different use cases. (A toy provenance-guided selection sketch appears after this list.) (10.1145/3589325)
    DOI : 10.1145/3589325
  • Multimode Quantum Communications and Hybrid Cryptography
    • Mazzoncini Francesco
    , 2024. Quantum cryptography has been largely defined as a novel form of cryptography that would not rely on any computational hardness assumption. However, as the field progresses, and in particular as Quantum Key Distribution (QKD) reaches high technological readiness levels, it appears that there might be a critical balance to strike. On the one hand, we have the quest for the highest theoretical security level. On the other, a second direction consists in optimizing security and performance for real-world use, while still providing an edge over classical cryptography. In this thesis, we have explored new paths towards this second direction, namely real-world quantum cryptography. In the first project, we promote a simple yet powerful message: the most dangerous attacks against QKD, for which the development of countermeasures is crucial, are the easiest ones to implement. Hence, we perform a vulnerability assessment of a Continuous-Variable QKD system, proposing a novel methodology for security certification based on attack rating. In the second project, we introduce an explicit construction for a key distribution protocol in the Quantum Computational Timelock (QCT) security model, where one assumes that computationally secure encryption may only be broken after a time much longer than the coherence time of available quantum memories. Taking advantage of the QCT assumptions, we build a key distribution protocol on top of the Hidden Matching problem, for which there exists an exponential gap in one-way communication complexity between classical and quantum strategies. In particular, by exploiting this exponential gap, we unlock the possibility of sending multiple copies of the same state to perform everlasting secure key establishment with performance beyond that of standard QKD. Building on our theoretical work on key establishment, whose security and effectiveness hinge on the ability of two parties to address a quantum communication complexity problem more efficiently than is possible classically, in the last, experimental project we investigate the feasibility of demonstrating a quantum advantage in communication complexity. In particular, we leverage the intricate mode mixing inherent in multimode fibers by employing wavefront shaping techniques to tackle quantum communication complexity problems.
  • A Model of Scores as Abstract Syntactic Trees
    • Romero-García Gonzalo
    • Agón Carlos
    • Bloch Isabelle
    , 2024, 14639, pp.268-279. This paper deals with the structure of a musical piece. The score is modeled as an Abstract Syntactic Tree (AST) to account for the hierarchy of its elements. Formal definitions of harmony, texture and instrumentation are proposed and constitute the main components of the model. Concatenation and parallelization operators are then proposed to combine these components and organize them in a tree structure. This approach is illustrated on some examples. (10.1007/978-3-031-60638-0_21)
    DOI : 10.1007/978-3-031-60638-0_21
  • Source Code Archiving to the Rescue of Reproducible Deployment
    • Courtès Ludovic
    • Sample Timothy
    • Tournier Simon
    • Zacchiroli Stefano
    , 2024. The ability to verify research results and to experiment with methodologies are core tenets of science. As research results are increasingly the outcome of computational processes, software plays a central role. GNU Guix is a software deployment tool that supports reproducible software deployment, making it a foundation for computational research workflows. To achieve reproducibility, we must first ensure that the source code of the software packages Guix deploys remains available. We describe our work connecting Guix with Software Heritage, the universal source code archive, making Guix the first free software distribution and tool backed by a stable archive. Our contribution is twofold: first, we explain the rationale and present the design and implementation we came up with; second, we report on the archival coverage for package source code with data collected over five years and discuss remaining challenges. (10.1145/3641525.3663622)
    DOI : 10.1145/3641525.3663622
  • Developing interactive artificial intelligence tools to assist pathologists with histology annotation
    • Habis Antoine Aurélien
    , 2024. Histopathology on Whole Slide Images (WSI) represents a very valuable field of medicine, since the study of biopsies with microscopes can reveal several diseases that are sometimes difficult or impossible to diagnose with the naked eye or other imaging techniques. With the advent of deep learning, which requires a large number of annotated images to be effective, the need to quickly obtain high-quality annotations became clear. The purpose of this thesis is to develop artificial intelligence algorithms for fast interactive annotations and corrections to facilitate user supervision in histopathology image segmentation. This thesis presents our contributions using three different interaction strategies and underlying deep-learning mathematical formalisms. Together, our contributions cover a wide range of use cases: (1) The first tool is completely supervised and tackles the task of correcting nuclei segmentation. Nuclei are biological structures that can be observed distinctly at ×40 magnification and which are essential for several diagnosis tasks. In fact, markers such as the density of nuclei or the ratio between the area of the nucleus and that of the cytoplasm are indicative of certain conditions. The tool implements a Click and Refine pipeline, exploiting novel metrics on patch similarities and novel architecture training designs to refine four types of segmentation errors specific to nuclei. (2) The second tool consists of a weakly supervised segmentation method tested on tumoral regions in lymph node metastatic breast cancer. These tumoral regions are biological structures clearly visible at low magnification (×5 or ×10). The first part of our algorithm provides an initial coarse segmentation of the entire WSI based on scribbles, which can then be corrected using fast interactive and non-local segmentation correction inputs. (3) Finally, the third tool proposes a completely unsupervised segmentation tool and a one-shot variant to segment complex heterogeneous biological structures on whole WSIs. The one-shot learning version is evaluated on a dataset of kidney dilated tubules. Dilated tubules are medium-sized biological structures that can be observed at an average magnification of ×10-20. They are indicative of some diseases such as urinary tract obstruction. The underlying proposed Deep ContourFlow method translates concepts of active contours into differentiable loss functions exploited in deep-learning architectures.
  • NARX Neural Network Bandwidth Generalization Capability in Power Amplifier Modeling
    • Pham Thuy T
    • Pham Dang-Kièn Germain
    • C. Bouazza Tayeb H
    • Almairac Pierre
    • Desgreys Patricia
    , 2024, pp.363-367. The Nonlinear AutoRegressive with eXogenous input Neural Network (NARXNN) can exhibit strong generalization capabilities and rapid convergence by combining the features of a recurrent neural network (RNN) with the ability to be trained as a purely feedforward architecture. As a result, it can be used to simulate the dynamic nonlinear behavior of radio frequency (RF) power amplifiers (PAs). In this paper, a comparative study of NARXNN and real-valued focused time-delay neural networks (RVFTDNN) has been carried out to demonstrate that NARXNN is a more suitable option for describing the behavior of PAs in terms of bandwidth generalization. The validation of NARXNN is carried out in two scenarios: i) single bandwidth and ii) bandwidth generalization, using MATLAB-provided measured signals. Evaluating the accuracy of the predicted output spectrum, the normalized mean square error, and the complexity of the models shows that NARXNN requires significantly fewer coefficients and achieves higher precision than RVFTDNN. (A generic feedforward NARX training sketch appears after this list.) (10.1109/NewCAS58973.2024.10666110)
    DOI : 10.1109/NewCAS58973.2024.10666110
  • Collaborating Foundation Models for Domain Generalized Semantic Segmentation
    • Benigmim Yasser
    • Roy Subhankar
    • Essid Slim
    • Kalogeiton Vicky
    • Lathuilière Stéphane
    , 2024, pp.3108-3119. Domain Generalized Semantic Segmentation (DGSS) deals with training a model on a labeled source domain with the aim of generalizing to unseen domains during inference. Existing DGSS methods typically effectuate robust features by means of Domain Randomization (DR). Such an approach is often limited as it can only account for style diversification and not content. In this work, we take an orthogonal approach to DGSS and propose to use an assembly of CoLlaborative FOUndation models for Domain Generalized Semantic Segmentation (CLOUDS). In detail, CLOUDS is a framework that integrates FMs of various kinds: (i) CLIP backbone for its robust feature representation, (ii) generative models to diversify the content, thereby covering various modes of the possible target distribution, and (iii) Segment Anything Model (SAM) for iteratively refining the predictions of the segmentation model. Extensive experiments show that our CLOUDS excels in adapting from synthetic to real DGSS benchmarks and under varying weather conditions, notably outperforming prior methods by 5.6% and 6.7% averaged mIoU, respectively. The code is available at: https://github.com/yasserben/CLOUDS (10.1109/CVPR52733.2024.00300)
    DOI : 10.1109/CVPR52733.2024.00300
  • The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text
    • Guo Yanzhu
    • Shang Guokan
    • Vazirgiannis Michalis
    • Clavel Chloé
    , 2024. This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive finetuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations, a decline that is especially marked for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models. (A generic lexical-diversity sketch appears after this list.)
  • MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification
    • Helwe Chadi
    • Calamai Tom
    • Paris Pierre-Henri
    • Clavel Chloé
    • Suchanek Fabian M
    , 2024, 1: Long Papers, pp.4810-4845. We introduce MAFALDA, a benchmark for fallacy classification that merges and unites previous fallacy datasets. It comes with a taxonomy that aligns, refines, and unifies existing classifications of fallacies. We further provide a manual annotation of a part of the dataset together with manual explanations for each annotation. We propose a new annotation scheme tailored for subjective NLP tasks, and a new evaluation method designed to handle subjectivity. We then evaluate several language models under a zero-shot learning setting and human performances on MAFALDA to assess their capability to detect and classify fallacies. (10.18653/v1/2024.naacl-long.270)
    DOI : 10.18653/v1/2024.naacl-long.270
  • A Survey of Meaning Representations - From Theory to Practical Utility
    • Sadeddine Zacchary
    • Opitz Juri
    • Suchanek Fabian
    , 2024, pp.2877-2892. Symbolic meaning representations of natural language text have been studied since at least the 1960s. With the availability of large annotated corpora, and more powerful machine learning tools, the field has recently seen several new developments. In this survey, we study today's most prominent Meaning Representation Frameworks. We shed light on their theoretical properties, as well as on their practical research environment, i.e., on datasets, parsers, applications, and future challenges. (10.18653/v1/2024.naacl-long.159)
    DOI : 10.18653/v1/2024.naacl-long.159
  • Assessment of RF-EMF Exposure in Shopping Centers and Covered Markets in Paris
    • Zhang Yarui
    • Wang Shanshan
    • Liu Jiang
    • Zheng Ce
    • Samaras Theodoros
    • Wiart Joe
    , 2024.
  • Convergent plug-and-play with proximal denoiser and unconstrained regularization parameter
    • Hurault Samuel
    • Chambolle Antonin
    • Leclaire Arthur
    • Papadakis Nicolas
    Journal of Mathematical Imaging and Vision, Springer Verlag, 2024, 66, pp.616–638. In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms. PnP methods are efficient iterative algorithms for solving image inverse problems where regularization is performed by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD) or Douglas-Rachford Splitting (DRS). Recent research has explored convergence by incorporating a denoiser that can be written exactly as a proximal operator. However, the corresponding PnP algorithm then has to be run with a stepsize equal to 1. The stepsize condition for nonconvex convergence of the proximal algorithm in use then translates into restrictive conditions on the regularization parameter of the inverse problem. This can severely degrade the restoration capacity of the algorithm. In this paper, we present two remedies for this limitation. First, we provide a novel convergence proof for PnP-DRS that does not impose any restrictions on the regularization parameter. Second, we examine a relaxed version of the PGD algorithm that converges across a broader range of regularization parameters. Our experimental study, conducted on deblurring and super-resolution problems, demonstrates that both of these solutions enhance the accuracy of image restoration. (A generic relaxed PnP-PGD sketch appears after this list.) (10.1007/s10851-024-01195-w)
    DOI : 10.1007/s10851-024-01195-w
  • Multimodal models of repair in social human-agent interactions
    • Ngo Anh
    • Clavel Chloé
    • Pelachaud Catherine
    • Rollet Nicolas
    , 2024. People often encounter troubles in everyday conversations, prompting them to initiate repairs, which are various approaches employed to recognize and resolve those problems, fostering mutual understanding across conversational turns. However, maintaining a smooth interaction remains challenging for Conversational Agents (CAs), which are dialogue systems designed to simulate conversation with humans (including chatbots, social robots, and virtual assistants). To foster seamless human-agent interaction, the CA should be able to recognize repairs initiated by humans, utilize multimodal cues, and participate in the repair process. This article, which is an overview of our thesis research project, outlines our ongoing efforts to accomplish this objective. The initial phase involves analyzing repair phenomena in human-human interactions.
  • The Blockwise Rank Syndrome Learning Problem and Its Applications to Cryptography
    • Aragon Nicolas
    • Briaud Pierre
    • Dyseryn Victor
    • Gaborit Philippe
    • Vinçotte Adrien
    , 2024, 14771, pp.75-106. This paper is an extended version of [8] published in PQCrypto 2024, in which we combine two approaches, blockwise errors and multi-syndromes, in a unique approach which leads to very efficient generalized RQC and LRPC schemes. The notion of blockwise error in a context of rank-based cryptography has been recently introduced in [31]. This notion of error, very close to the notion of sum-rank metric [27], permits, by decreasing the weight of the decoded error, greatly improving parameters for the LRPC and RQC cryptographic schemes. A little before, the multi-syndromes approach introduced for LRPC and RQC schemes in [3, 18] also allowed parameter sizes for LRPC and RQC schemes to be considerably decreased, in particular through the introduction of Augmented Gabidulin codes. In order to combine these approaches, we introduced in [8] the Blockwise Rank Support Learning problem. It consists of guessing the support of the errors when several syndromes are given as input, with blockwise structured errors. The new schemes we introduced have very interesting features since, for 128-bit security, they yield generalized schemes for which the sum of public key and ciphertext is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme. In this extended version we give the following new features. First, we propose a new optimization on the main protocol which consists in considering 1 in the support of an error, which allows a subspace of the error to decode to be deduced and improves the decoding capacity of our LRPC code, while maintaining an equal level of security. The approach of the original paper permits reaching a 40% gain in terms of parameter size compared to previous results [18, 31], and this optimization reduces the parameters by another 4% for higher security levels. We obtain better results in terms of size than the KYBER scheme, whose total sum is 1.5 kB. Second, we give a more detailed analysis of the algebraic attacks on the ℓ-RD problem we proposed in [8], which allowed us to cryptanalyze all blockwise LRPC parameters proposed in [31] (with an improvement of more than 40 bits in the case of structural attacks). Third, we propose a more detailed introduction to the historical background on the rank metric, especially on the RQC and LRPC cryptosystems and their recent improvements, and we add some parameters for the case of classical RQC (the case of only one given syndrome, a special case of our scheme, for which we achieve 1.5 kB for the sum of the public key and the ciphertext), which compares very well to the previous version of classical RQC. (10.1007/978-3-031-62743-9_3)
    DOI : 10.1007/978-3-031-62743-9_3
  • Robust Convergence Technique against Multilevel Random Effects in Stochastic Modeling of Wearable Antennas' Far-Field
    • Du Jinxin
    • Wang Ruimeng
    • Roblin Christophe
    • Yang Xue-Xia
    • Han Bin
    IEEE Antennas and Wireless Propagation Letters, Institute of Electrical and Electronics Engineers, 2024, pp.1 - 5. Stochastic modeling is widely employed to characterize uncertainty propagation in fluctuating wearable antenna systems. A major challenge that hinders the convergence of stochastic models is the multilevel random effects on the antenna’s far-field caused by random disturbances, which exacerbate the already difficult inherent issue tied to high dimensionality and nonlinearity. This paper proposes to separately model the “global” random effect, depending mainly on frequency, and the “fine” random effect, depending mainly on the antenna’s directional characteristics. The “decoupling” of global and fine effects is obtained by separately modeling the reflection coefficient S11 and a newly defined “desensitized” far-field, which is insensitive to detuning (or mismatch) phenomena. A “centering” technique based on cross-correlation is used to reduce the sensitivity of S11 to the randomness. The whole strategy significantly accelerates the convergence of the modeling process, resulting in a “bi-level” surrogate model that exhibits enhanced robustness and accuracy. Comparative tests on a flexible textile patch antenna demonstrate that the proposed technique can reduce modeling costs by 57% while maintaining the same level of model accuracy. The proposed solution could expand the application of stochastic modeling to a broader spectrum of antenna characterization and optimization. (10.1109/lawp.2024.3412984)
    DOI : 10.1109/lawp.2024.3412984
  • The Price of Smart Contract Privacy
    • Massoni Sguerra Lucas
    • Jouvelot Pierre
    • Coelho Fabien
    • Gallego Arias Emilio Jesús
    • Memmi Gérard
    , 2024. Smart contracts face a significant challenge regarding the data transparency inherent to the blockchain-based decentralized systems on which they run. This transparency can limit the potential applications and use cases of smart contracts, especially when privacy and confidentiality are paramount. Presently, blockchain applications that require a certain level of privacy tend to rely on off-chain, centralized solutions. However, this approach introduces trade-offs, potentially compromising the trust and security provided by blockchain technology. In this article, we advocate for the integration of cryptographic tools into smart contracts, aiming to enhance privacy and address transparency concerns in applications. We introduce the notion of a Privacy Framework (PF) as the general building block that addresses privacy issues in smart contracts by linking privacy requirements and adequate implementations. Since auctions are important applications that strongly rely on privacy to reach their full potential, we adopt the Vickrey-Clarke-Groves (VCG) Auction for Sponsored Search as a use case to develop the notion of PFs. In practice, we provide three PF instances, of increasing complexity, to improve the privacy assurances of specific auction smart contracts. Our experimental assessment of these PF instances suggests they are efficient, not only in terms of privacy preservation, but also in gas and monetary cost, two crucial factors for the viability of smart contracts.
  • Estimation of the Permittivity and Conductivity of Low-Dielectric-Loss Building Materials in the 2-260 GHz Band and Extension of the ITU-R P.2040 Model to Frequencies Above 100 GHz
    • Conrat Jean-Marc
    • Aliouane Mohamed
    • Cousin Jean-Christophe
    • Begaud Xavier
    , 2024. In this article, we estimate the electromagnetic characteristics (permittivity and conductivity) of common building materials, assumed to have low losses, as a function of frequency, from measurements of the reflection and transmission factors in the 2-260 GHz band. The results are compared with the ITU (International Telecommunication Union) P.2040 model, defined for frequencies below 100 GHz. As the ITU model suggests, the permittivity does not vary and the conductivity increases with frequency. However, the reflection and transmission factors for frequencies above 100 GHz can be strongly affected by the inhomogeneity and/or surface roughness of the materials, which generate small-scale frequency fading not predicted by the ITU model.
  • Multiband and Wideband Antenna with a Double Metasurface and Beam Steering in One of the Bands
    • Gonçalves Licursi de Mello Rafael
    • Lepage Anne Claire
    • Begaud Xavier
    , 2024. In this communication, we combine a wideband radiating element with two metasurfaces to obtain a multiband and wideband antenna. The radiation patterns in several 4G and Wi-Fi 2.4/5/6E bands (2.40–2.70 GHz, 5.17–5.83 GHz and 5.93–6.45 GHz) are directive, and the beam can be dynamically steered by up to ±51° from 3.50 to 3.65 GHz in the European 5G band (3.40 to 3.80 GHz) without disturbing the other operating bands. The radiating element is of the bow-tie type. Directive and stable radiation is obtained in the 4G/5G and Wi-Fi 2.4/5/6E bands thanks to an artificial magnetic conductor reflector, and the beam steering in the 5G band is achieved with a Huygens metasurface.
  • Statistical Analysis of RF-EMF Exposure Induced by Cellular Wireless Networks in Public Transportation Facilities of the Paris Region
    • Zhang Yarui
    • Wang Shanshan
    • Ben Chikha Wassim
    • Liu Jiang
    • Zheng Ce
    • Samaras Theodoros
    • Wiart Joe
    IEEE Access, IEEE, 2024, 12, pp.79741-79753. Wireless communications are increasingly used today. Despite this, there remains a significant perception of risk, which makes exposure monitoring an important concern. The work described in this article was carried out within the framework of the European SEAWave project and the French Beyond5G project. The exposure was evaluated using a personal exposimeter (MVG EMF Spy), whose compactness and ease of use make it more suitable and portable than a system combining measuring probes and spectrum analyzers. Measurements were carried out on the cellular frequency bands used by 2G, 3G, 4G, and 5G, as well as that of Wi-Fi, in different modes of public transportation (RER, metro, tramway, bus, and train) circulating in the Paris region. The measurements have been analyzed by frequency band, type of public transportation, and type of environment encountered. For each set of measurements (e.g., metro lines, tramways), the mean, standard deviation, skewness, and kurtosis were evaluated and analyzed. For all exposure measurements taken in the 700, 800, 900, 1800, 2100, 2600, and 3500 MHz frequency bands, the overall average values are 0.39, 0.43, 0.30, 0.21, 0.18, 0.24 and 0.18 V/m, respectively. These measurements have, in all cases, a significant dispersion, as shown by the ratios of standard deviations to mean values. The well-known K-means clustering technique was applied to these four parameters for different subsets of data. The number of clusters k = 3 was chosen based on the analysis of the optimal value of k for the current dataset. Our analysis indicates that the first group’s members display the highest mean values with moderate variance and the lowest values for the third and fourth moments. The second cluster is distinguished by points with large mean and variance, accompanied by moderate skewness and kurtosis. Conversely, the third group comprises points with the smallest mean and variance values, yet the largest measurements for the third and fourth moments. (A generic clustering sketch appears after this list.) (10.1109/ACCESS.2024.3410090)
    DOI : 10.1109/ACCESS.2024.3410090
  • Characterization of Retinal Arteries by Adaptive Optics Ophthalmoscopy Image Analysis
    • Rossant F.
    • Bloch Isabelle
    • Trimèche I.
    • de Regnault de Bellescize J.-B.
    • Castro Farias D.
    • Krivosic V.
    • Chabriat H.
    • Paques M.
    IEEE Transactions on Biomedical Engineering, Institute of Electrical and Electronics Engineers, 2024, 71 (11), pp.3085-3097. Objective: This paper aims at quantifying biomarkers from the segmentation of retinal arteries in adaptive optics ophthalmoscopy images (AOO). Methods: The segmentation is based on the combination of deep learning and knowledge-driven deformable models to achieve a precise segmentation of the vessel walls, with specific attention to bifurcations. Biomarkers (junction coefficient, branching coefficient, wall-to-lumen ratio (WLR)) are derived from the resulting segmentation. Results: Reliable and accurate segmentations (MSE = 1.75 ± 1.24 pixels) and measurements are obtained, with high reproducibility with respect to image acquisition and users, and without bias. Significance: In a preliminary clinical study of patients with a genetic small vessel disease, some of them with vascular risk factors, an increased WLR was found in comparison to a control population. Conclusion: The WLR estimated in AOO images with our method (AOV, Adaptive Optics Vessel analysis) seems to be a very robust biomarker as long as the wall is well contrasted. (10.1109/TBME.2024.3408232)
    DOI : 10.1109/TBME.2024.3408232
  • Mapping AI ethics: a meso-scale analysis of its charters and manifestos
    • Gornet Mélanie
    • Delarue Simon
    • Boritchev Maria
    • Viard Tiphaine
    , 2024, pp.127-140. The recent years have seen a surge of initiatives with the goal of defining what “ethical” artificial intelligence would or should entail, resulting in the publication of various charters and manifestos discussing AI ethics; these documents originate from academia, AI industry companies, non-profits, regulatory institutions, and the civil society. The contents of such documents vary wildly, from short, vague position statements to verbatims of democratic debates or impact assessment studies. As such, they are a marker of the social world of artificial intelligence, outlining the tenets of different actors, the consensus and dissensus on important goals, and so on. Multiple meta-analyses have focused on qualitatively identifying recurring themes in these documents, highlighting the high polysemy of themes such as transparency or trust, among others. The broad term of “AI ethics” and its guiding principles hide multiple disparities, shaped by our collective imaginations, economic and regulatory incentives, and the pre-existing social and structural power asymmetries; through quantitative analyses, we validate or invalidate previous qualitative results. In this paper, we collect and present a corpus of charters and manifestos discussing AI ethics, and we analyze it quantitatively using text-analysis methods to shed light on common and distinct vocabularies. Through frequency analysis, hierarchical topic clustering and semantic graph modelling, we show that the charters and manifestos discuss AI ethics along three broad axes: technical documents, regulatory ones, and innovation and business ones. We use our quantitative analysis to back up and nuance previous qualitative results, showing how some themes remain specific while others have fully permeated the space of AI ethics. We release our corpus of 436 charters and manifestos discussing AI ethics, together with its datasheet and our analysis, to open the way to further studies and discussions around vocabulary, principles and their evolution, as well as interactions among actors of AI ethics. (10.1145/3630106.3658545)
    DOI : 10.1145/3630106.3658545
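
Illustrative code sketches

The sketches below accompany some of the publications above. They are toy illustrations written for this page, not the authors' implementations; every parameter value, threshold, and dataset in them is an assumption made for illustration.

The dual-window algorithm in "Proactive VNF Redeployment and Traffic Routing for modern telco networks" gauges substrate uncertainty with an entropy measure and adapts the redeployment interval accordingly. The following is a minimal sketch of that idea; the bin count, entropy thresholds, and doubling/halving rule are hypothetical choices, not the paper's algorithm.

```python
import math
from collections import Counter

def entropy(samples, bins=10):
    """Shannon entropy (bits) of load samples in [0, 1], discretized into bins."""
    counts = Counter(min(int(s * bins), bins - 1) for s in samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def update_interval(interval, samples, low=1.0, high=2.5,
                    min_interval=30, max_interval=600):
    """Hypothetical rule: a stable substrate (low entropy) lets us lengthen the
    redeployment window and save data-collection overhead; a volatile substrate
    (high entropy) shortens it so redeployment stays proactive."""
    h = entropy(samples)
    if h < low:
        interval = min(interval * 2, max_interval)
    elif h > high:
        interval = max(interval // 2, min_interval)
    return interval, h

# usage: link-utilization samples observed during the last monitoring window
stable = [0.48, 0.50, 0.49, 0.51, 0.50, 0.49, 0.50, 0.52]
volatile = [0.05, 0.90, 0.30, 0.75, 0.10, 0.60, 0.95, 0.20]
print(update_interval(120, stable))    # interval grows
print(update_interval(120, volatile))  # interval shrinks
```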
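"What Are "Good" Rhythms?" scores every rhythm with k onsets among n pulses against quantified geometric properties and keeps the highest-scoring candidates. The sketch below only illustrates that enumerate-and-score structure; the single "evenness" score used here (sum of pairwise chord lengths on the rhythm circle) is a generic stand-in, not one of the paper's formulas, so it favors maximally even rhythms rather than reproducing the paper's ranking (e.g., the son rhythm for k = 5 and n = 16).

```python
import math
from itertools import combinations

def evenness_score(onsets, n):
    """Stand-in property: sum of pairwise chord lengths when the n pulses are
    placed on the unit circle; well-spread onsets give larger sums."""
    pts = [(math.cos(2 * math.pi * o / n), math.sin(2 * math.pi * o / n))
           for o in onsets]
    return sum(math.dist(p, q) for p, q in combinations(pts, 2))

def best_rhythms(k, n, top=3):
    """Enumerate all C(n, k) rhythms with k onsets among n pulses, score each
    one, and return the highest-scoring candidates."""
    scored = [(evenness_score(c, n), c) for c in combinations(range(n), k)]
    scored.sort(reverse=True)
    return scored[:top]

for score, rhythm in best_rhythms(5, 16):
    print(f"{score:.3f}  {rhythm}")
```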
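"Query-Guided Resolution in Uncertain Databases" tracks the Boolean provenance of query answers and then chooses which uncertain tuples to send to the oracle. The following is a toy version of that pipeline, assuming tuple-independent prior probabilities, a single output tuple, brute-force probability computation, and a greedy expected-entropy-reduction criterion; the paper's active-learning policy and probability estimation are more elaborate.

```python
import math
from itertools import product

# Boolean provenance of one query answer over uncertain input tuples t1..t3:
# the answer holds iff (t1 AND t2) OR t3. Each tuple has a prior probability
# of being correct; an oracle (expert or crowd) can resolve any tuple.
provenance = lambda v: (v["t1"] and v["t2"]) or v["t3"]
prior = {"t1": 0.6, "t2": 0.9, "t3": 0.3}

def prob_true(formula, prior, fixed=None):
    """P(formula is true), enumerating all assignments of unresolved tuples."""
    fixed = fixed or {}
    free = [t for t in prior if t not in fixed]
    p = 0.0
    for values in product([False, True], repeat=len(free)):
        v = dict(fixed, **dict(zip(free, values)))
        weight = math.prod(prior[t] if v[t] else 1 - prior[t] for t in free)
        if formula(v):
            p += weight
    return p

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def next_tuple_to_resolve(formula, prior):
    """Greedy choice: ask the oracle about the tuple whose answer is expected
    to reduce the uncertainty of the query result the most."""
    h0 = entropy(prob_true(formula, prior))
    best, best_gain = None, -1.0
    for t, pt in prior.items():
        h_yes = entropy(prob_true(formula, prior, {t: True}))
        h_no = entropy(prob_true(formula, prior, {t: False}))
        gain = h0 - (pt * h_yes + (1 - pt) * h_no)
        if gain > best_gain:
            best, best_gain = t, gain
    return best, best_gain

print(next_tuple_to_resolve(provenance, prior))  # which tuple should be resolved first?
```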
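"NARX Neural Network Bandwidth Generalization Capability in Power Amplifier Modeling" relies on the fact that a NARX model can be trained as a purely feedforward network when its regressors are tapped delays of the input and of the measured output (the series-parallel form). The sketch below shows that feature construction on a synthetic, real-valued "PA-like" system; the delay depths, the tiny MLP, and the toy data are illustrative assumptions, not the paper's setup, which uses measured RF signals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(x, y, nx=3, ny=3):
    """Series-parallel NARX regressors: [x(n), ..., x(n-nx), y(n-1), ..., y(n-ny)].
    Feeding measured past outputs lets the model be trained as a plain
    feedforward network."""
    d = max(nx, ny)
    rows, targets = [], []
    for n in range(d, len(x)):
        rows.append(np.r_[x[n - nx:n + 1], y[n - ny:n]])
        targets.append(y[n])
    return np.array(rows), np.array(targets)

# toy "PA-like" data: a mildly nonlinear system with memory (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = np.zeros_like(x)
for n in range(2, len(x)):
    y[n] = 0.8 * x[n] - 0.3 * x[n] ** 3 + 0.2 * x[n - 1] + 0.1 * y[n - 1]

X, t = narx_features(x, y)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], t[:1500])
pred = model.predict(X[1500:])
nmse = np.mean((pred - t[1500:]) ** 2) / np.mean(t[1500:] ** 2)
print(f"NMSE on held-out samples: {nmse:.2e}")
```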
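"The Curious Decline of Linguistic Diversity" adapts lexical, syntactic, and semantic diversity metrics to track how model outputs degrade over recursive finetuning generations. The function below is only a generic lexical example of such a metric (a distinct-n ratio), not one of the metrics developed in the paper.

```python
def distinct_n(texts, n=2):
    """Fraction of unique n-grams among all n-grams produced by a model;
    lower values indicate less lexically diverse output."""
    ngrams, unique = 0, set()
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            unique.add(tuple(tokens[i:i + n]))
            ngrams += 1
    return len(unique) / ngrams if ngrams else 0.0

# illustrative check: a repetitive "late-generation" sample scores lower than
# a varied "generation-0" sample
gen0 = ["the storm scattered the fleet across the strait",
        "merchants bartered silk and amber at dawn"]
gen5 = ["the market was busy and the market was loud",
        "the market was busy and the market was crowded"]
print(distinct_n(gen0), distinct_n(gen5))
```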
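"Convergent plug-and-play with proximal denoiser and unconstrained regularization parameter" studies, among other things, a relaxed PnP proximal gradient scheme: a gradient step on the data-fidelity term, a plugged-in denoiser acting as a proximal step, and a relaxation mixing the result with the previous iterate. The sketch below is a generic 1-D illustration of that iteration shape; the quadratic data term, the moving-average "blur", the averaging stand-in "denoiser", and the parameter values are placeholders, not the paper's proven algorithm.

```python
import numpy as np

def relaxed_pnp_pgd(y, A, At, denoise, sigma, stepsize=1.0, alpha=0.5, n_iter=100):
    """Generic relaxed Plug-and-Play proximal gradient sketch for
    0.5 * ||A x - y||^2 plus an implicit regularizer carried by `denoise`.
    alpha = 1 recovers the plain PnP-PGD update."""
    x = At(y)
    for _ in range(n_iter):
        grad = At(A(x) - y)                      # gradient of the data-fidelity term
        z = denoise(x - stepsize * grad, sigma)  # plugged-in denoiser as a prox step
        x = alpha * z + (1 - alpha) * x          # relaxation step
    return x

# toy usage: deblurring-like problem where A is a moving-average "blur"
rng = np.random.default_rng(0)
kernel = np.ones(5) / 5
A = lambda v: np.convolve(v, kernel, mode="same")
At = A  # the blur kernel is symmetric, so A^T acts like A here
x_true = np.clip(np.cumsum(rng.standard_normal(256)) / 10, -1, 1)
y = A(x_true) + 0.01 * rng.standard_normal(256)

# stand-in denoiser: a small local average (a pre-trained network in practice)
denoise = lambda v, s: np.convolve(v, np.ones(3) / 3, mode="same")

x_hat = relaxed_pnp_pgd(y, A, At, denoise, sigma=0.01, stepsize=0.9, alpha=0.5)
print("data-fit residual:", np.linalg.norm(A(x_hat) - y))
```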
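"Statistical Analysis of RF-EMF Exposure Induced by Cellular Wireless Networks in Public Transportation Facilities of the Paris Region" summarizes each set of exposimeter measurements by its mean, standard deviation, skewness, and kurtosis, then clusters the sets with K-means and k = 3. The snippet below reproduces that generic recipe; the synthetic lognormal samples only stand in for the real exposure data.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# stand-in for per-line / per-band measurement sets (V/m); the study itself
# uses exposimeter recordings from RER, metro, tramway, bus, and train trips
measurement_sets = [rng.lognormal(mean=m, sigma=s, size=500)
                    for m, s in [(-1.2, 0.4), (-0.9, 0.8), (-1.6, 0.3),
                                 (-0.8, 0.6), (-1.4, 0.5), (-1.0, 0.7)]]

# four summary statistics per measurement set, as in the paper
features = np.array([[np.mean(v), np.std(v), skew(v), kurtosis(v)]
                     for v in measurement_sets])

# K-means with k = 3 on standardized features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print(labels)
```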