
Publications

2023

  • A Differentiable Entropy Model for Learned Image Compression
    • Presta Alberto
    • Fiandrotti Attilio
    • Tartaglione Enzo
    • Grangetto Marco
    , 2023, 14233, pp.328-339. In an end-to-end learned image compression framework, an encoder projects the image onto a low-dimensional, quantized latent space while a decoder recovers the original image. The encoder and decoder are jointly trained with standard gradient backpropagation to minimize a rate-distortion (RD) cost function accounting for both the distortion between the original and reconstructed images and the rate of the quantized latent space. State-of-the-art methods rely on an auxiliary neural network to estimate the rate R of the latent space. We propose a non-parametric entropy model that estimates the statistical frequencies of the quantized latent space during training. The proposed model is differentiable, so it can be plugged into the cost function to be minimized as a rate proxy, and it can be adapted to a given context without retraining. Our experiments show comparable performance with a learned rate estimator and better performance when the model is adapted over a temporal context.
    DOI : 10.1007/978-3-031-43148-7_28
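The rate-proxy idea above can be sketched numerically: a differentiable (soft) histogram estimates the frequency of each quantization bin, and its entropy serves as the rate term in the RD cost. This is a minimal illustration under assumed details (softmax-weighted binning over a uniform grid of bin centers), not the authors' exact formulation.

```python
import numpy as np

def soft_histogram(latent, centers, temperature=0.1):
    """Differentiable frequency estimate: each latent value casts a
    softmax-weighted vote towards every quantization bin center
    (hypothetical scheme, for illustration only)."""
    d = -np.abs(latent[:, None] - centers[None, :]) / temperature
    w = np.exp(d - d.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w.mean(axis=0)               # estimated bin probabilities

def rate_proxy(latent, centers):
    """Entropy of the estimated frequencies, usable as a rate term R."""
    p = np.clip(soft_histogram(latent, centers), 1e-12, 1.0)
    return -np.sum(p * np.log2(p))      # bits per latent symbol
```

A uniformly spread latent yields a rate near log2(number of bins), while a constant latent yields a rate near zero, matching the intuition of a rate penalty.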
  • All Predictions Matter: an Online Video Prediction Approach
    • Vijayaratnam Melan
    • Cagnazzo Marco
    • Valenzise Giuseppe
    • Tartaglione Enzo
    , 2023. To effectively manage and utilize the massive amount of visual data generated by the surging number of videos, decision-making systems must predict and reason about future outcomes. This paper proposes a novel online approach for video prediction that enables continual learning in the presence of new data, as periodic training of neural networks may not be practical. We utilize all predictions, including intermediate computations obtained during the inference process, to improve the performance of video prediction. To achieve this, we incorporate a weighting scheme in the loss that accounts for all the predictions during the learning process. Additionally, we leverage semantic segmentation to assess the performance of extrapolated frames by focusing on the position of the objects in the scene. Our approach stands out from state-of-the-art methods as it uses intermediate predictions, which are available due to the iterative nature of forecasting future frames. Our method improves the offline counterpart for the same network by 1.45 dB for predicting five steps in the future.
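The weighting scheme described above, in which all intermediate predictions contribute to the loss, can be sketched as follows. The exponential-decay weights and the MSE criterion are illustrative assumptions; the abstract does not specify the exact weighting.

```python
import numpy as np

def weighted_prediction_loss(predictions, targets, gamma=0.5):
    """Combine the losses of all intermediate predictions with
    exponentially decaying, normalized weights (hypothetical choice).
    predictions[k] is the frame predicted k+1 steps into the future."""
    weights = np.array([gamma ** k for k in range(len(predictions))])
    weights /= weights.sum()                    # weights sum to 1
    per_step = [np.mean((p - t) ** 2) for p, t in zip(predictions, targets)]
    return float(np.dot(weights, per_step))     # weighted MSE
```

With two prediction steps and gamma = 0.5, the first step carries twice the weight of the second, so near-term accuracy dominates the loss.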
  • A Tale of Two Models: Discussing the Timing and Sampling EM Fault Injection Models
    • Nabhan Roukoz
    • Dutertre Jean-Max
    • Rigaud Jean-Baptiste
    • Danger Jean-Luc
    • Sauvage Laurent
    , 2023. Investigating the dynamics and mechanisms of Electromagnetic Fault Injection (EMFI) attacks, which expose an active circuit to electromagnetic disturbances, remains challenging due to the diverse and complex fault mechanisms involved. An improved understanding of EMFI modeling is paramount for developing proficient on-chip detection sensors serving as countermeasures to these attacks. In light of this, our research evaluated the effectiveness of the EMFI detection sensors introduced by Elbaze et al., which rest on the premise that the sampling fault model accounts for EMFI. To assess the functionality of these sensors, we integrated them into an Advanced Encryption Standard (AES) accelerator on a Field-Programmable Gate Array (FPGA) and performed a series of experiments. The resulting evidence suggests that EMFI is explained not by a single fault model but by two underlying mechanisms. At high frequencies, which correspond to low slack, electromagnetic disturbances, in tandem with the target's Power Distribution Network (PDN), initiated timing constraint violations: the disturbances increased logic propagation times beyond the clock period. Conversely, at low to moderate frequencies, the induced faults generally aligned with the sampling fault model. However, certain deviations from the theoretical framework called the model's validity into question. Upon deeper examination of the results, we determined that these faults, rather than being sampling faults, were tied to a different mechanism: electromagnetic disturbances, when coupled with the target's Clock Distribution Network (CDN), can cause timing constraint violations through EMFI-induced voltage glitches within the target's clock tree. By integrating EMFI-induced clock glitches and timing faults into the timing violations fault model, we attain a holistic comprehension of EMFI mechanisms that encapsulates both mechanisms induced by EMFI across the full frequency spectrum of the target.
  • Sparse Double Descent in Vision Transformers: Real or Phantom Threat?
    • Quétu Victor
    • Milovanović Marta
    • Tartaglione Enzo
    , 2023, 14234, pp.490-502. Vision transformers (ViTs) have attracted broad interest in recent theoretical and empirical works. They achieve state-of-the-art results thanks to their attention-based approach, which boosts the identification of key features and patterns within images while avoiding inductive bias, resulting in highly accurate image analysis. Meanwhile, recent studies have reported a “sparse double descent” phenomenon that can occur in modern deep-learning models, where extremely over-parametrized models can generalize well. This raises practical questions about the optimal size of the model and launches the quest for the best trade-off between sparsity and performance: are Vision Transformers also prone to sparse double descent? Can we find a way to avoid such a phenomenon? Our work tackles the occurrence of sparse double descent on ViTs. While some works have shown that traditional architectures, like ResNet, are condemned to the sparse double descent phenomenon, for ViTs we observe that an optimally-tuned ℓ2 regularization relieves it. However, everything comes at a cost: the optimal λ sacrifices the potential compression of the ViT.
    DOI : 10.1007/978-3-031-43153-1_41
  • Two is Better than One: Achieving High-Quality 3D Scene Modeling with a NeRF Ensemble
    • Di Sario Francesco
    • Renzulli Riccardo
    • Tartaglione Enzo
    • Grangetto Marco
    , 2023, 14234, pp.320-331. Neural Radiance Fields (NeRF) are a popular method for synthesizing novel views of a scene from a set of input images. While NeRF has demonstrated state-of-the-art performance in several applications, it suffers from high computational requirements. Recent works have attempted to address these issues by including explicit volumetric information, which makes the optimization process difficult when fine-graining the voxel grids. In this paper, we propose an ensemble approach that combines the strengths of two NeRF models to achieve superior results compared to state-of-the-art architectures with a similar number of parameters. Experimental results show that our ensemble approach is a promising strategy for performance enhancement and beats vanilla approaches under the same parameter-count constraint.
    DOI : 10.1007/978-3-031-43153-1_27
  • Metasurface for Enhanced Millimeter-Wave Communications under Imperfect Beam Alignment
    • Cumana-Morales Jesus A.
    • Coupechoux Marceau
    • Cordero Fuertes Juan Antonio
    , 2023. In this work, we investigate the impact of beam misalignment on the performance of a wireless system employing a metasurface to improve coverage in a non-line-of-sight (NLOS) scenario. The metasurface is modeled as an array of small radiating elements, each terminated with a complex load. An equivalent array factor is defined, which allows visualizing the beamsteering properties of the metasurface in far-field conditions. Angular misalignment is modeled using a truncated Gaussian distribution, and an expression for evaluating the signal-to-noise ratio (SNR) in the presence of misalignment is derived. Numerical results show an SNR degradation close to 8 dB for a 5° error magnitude, and up to 14 dB if high-gain unit cells are used. Three mechanisms that can reduce SNR degradation are explored: increasing the metasurface dimensions recovers 7.4 dB of SNR; a low unit cell gain improves SNR by close to 10.5 dB compared to a high-gain cell; and decreasing the base station beamwidth from 25.6° to 12.7° recovers 4 dB of SNR thanks to the higher BS beam gain.
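The truncated-Gaussian misalignment model lends itself to a quick numerical check. The sketch below uses a generic Gaussian-shaped beam pattern (3 dB loss at the half-power beamwidth edges), a textbook assumption rather than the paper's metasurface array factor; it only illustrates how average SNR degradation grows with pointing error.

```python
import numpy as np

rng = np.random.default_rng(0)

def gain_db(theta_deg, beamwidth_deg):
    """Gaussian-shaped beam pattern: -3 dB at +/- beamwidth/2
    (standard textbook model, assumed here)."""
    return -3.0 * (2.0 * theta_deg / beamwidth_deg) ** 2

def mean_snr_loss_db(sigma_deg, bound_deg, beamwidth_deg, n=100_000):
    """Average SNR degradation under truncated-Gaussian misalignment:
    draw pointing errors, truncate their support, average linear loss."""
    theta = rng.normal(0.0, sigma_deg, size=4 * n)
    theta = theta[np.abs(theta) <= bound_deg][:n]   # truncate support
    loss_lin = 10 ** (gain_db(theta, beamwidth_deg) / 10)
    return -10 * np.log10(loss_lin.mean())
```

As expected, a near-zero pointing error gives essentially no degradation, and degradation increases monotonically with the error spread.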
  • Signal Inpainting from Fourier Magnitudes
    • Bahrman Louis
    • Krémé Marina
    • Magron Paul
    • Deleforge Antoine
    , 2023. Signal inpainting is the task of restoring degraded or missing samples in a signal. In this paper, we address signal inpainting when Fourier magnitudes are observed. We propose a mathematical formulation of the problem that highlights its connection with phase retrieval, and we introduce two methods for solving it. First, we derive an alternating minimization scheme, which shares similarities with the Gerchberg-Saxton algorithm, a classical phase retrieval method. Second, we propose a convex relaxation of the problem, inspired by recent approaches that reformulate phase retrieval as a semidefinite program. We assess the potential of these methods for the task of inpainting gaps in speech signals. Our methods exhibit both a high probability of recovering the original signals and robustness to magnitude noise.
    DOI : 10.23919/EUSIPCO58844.2023.10289727
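The alternating-minimization idea, with its Gerchberg-Saxton flavor, can be sketched as two projections applied in turn: onto the set of signals having the observed Fourier magnitudes, and onto the set of signals agreeing with the known samples. This is a simplified sketch of that family of methods, not the authors' exact algorithm.

```python
import numpy as np

def inpaint_from_magnitudes(x_obs, mask, mag, n_iter=500):
    """Gerchberg-Saxton-style alternating projections (sketch):
    x_obs  -- signal with missing samples set to 0
    mask   -- boolean array, True where samples are observed
    mag    -- observed Fourier magnitudes of the full signal
    """
    x = x_obs.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        # project onto the magnitude constraint, keeping the phase
        X = mag * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft(X))
        # re-impose the known samples
        x[mask] = x_obs[mask]
    return x
```

By construction the output agrees exactly with the observed samples; the missing samples are filled in by the magnitude constraint.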
  • Balancing Performance and Energy Consumption of Bagging Ensembles for the Classification of Data Streams in Edge Computing
    • Cassales Guilherme Weigert
    • Gomes Heitor Murilo
    • Bifet Albert
    • Pfahringer Bernhard
    • Senger Hermes
    IEEE Transactions on Network and Service Management, IEEE, 2023, 20 (3), pp.3038-3054. In recent years, the Edge Computing (EC) paradigm has emerged as an enabling factor for technologies like the Internet of Things (IoT) and 5G networks, bridging the gap between Cloud Computing services and end-users and supporting low latency, mobility, and location awareness for delay-sensitive applications. An increasing number of solutions in EC have employed machine learning (ML) methods to perform data classification and other information processing tasks on continuous and evolving data streams. Usually, such solutions have to cope with vast amounts of data arriving as streams while balancing energy consumption, latency, and the predictive performance of the algorithms. Ensemble methods achieve remarkable predictive performance when applied to evolving data streams thanks to their combination of several models and the possibility of selective resets. This work investigates a strategy that introduces short intervals to defer the processing of mini-batches. When well balanced, our strategy can improve performance (i.e., delay, throughput) and reduce the energy consumption of bagging ensembles used to classify data streams. The experimental evaluation involved six state-of-the-art ensemble algorithms (OzaBag, OzaBag Adaptive Size Hoeffding Tree, Online Bagging ADWIN, Leveraging Bagging, Adaptive Random Forest, and Streaming Random Patches) applied to five widely used machine learning benchmark datasets with varied characteristics on three computer platforms. As a result, our strategy significantly reduces energy consumption in 96% of the experimental scenarios evaluated. Despite the trade-offs, it is possible to balance them to avoid significant loss in predictive performance.
    DOI : 10.1109/TNSM.2022.3226505
  • Procédure de diffusion des publications de l'ATALA sur les archives ouvertes
    • Parmentier Yannick
    • Pogodalla Sylvain
    • Bawden Rachel
    • Labeau Matthieu
    • Eshkol-Taravella Iris
    , 2023, pp.17.
  • Calcul sans peine
    • Rioul Olivier
    , 2023.
  • Fast Kernel Methods for Generic Lipschitz Losses via p-Sparsified Sketches
    • Ahmad Tamim El
    • Laforgue Pierre
    • d'Alché-Buc Florence
    Transactions on Machine Learning Research Journal, [Amherst Massachusetts]: OpenReview.net, 2022, 2023. Kernel methods are learning algorithms that enjoy solid theoretical foundations while suffering from important computational limitations. Sketching, which consists in looking for solutions within a subspace of reduced dimension, is a well-studied approach to alleviate these computational burdens. However, statistically accurate sketches, such as the Gaussian one, usually contain few null entries, so that their application to kernel methods and their non-sparse Gram matrices remains slow in practice. In this paper, we show that sparsified Gaussian (and Rademacher) sketches still produce theoretically valid approximations while allowing for important time and space savings thanks to an efficient decomposition trick. To support our method, we derive excess risk bounds for both single and multiple output kernel problems with generic Lipschitz losses, thereby providing new guarantees for a wide range of applications, from robust regression to multiple quantile regression. Our theoretical results are complemented with experiments showing the empirical superiority of our approach over SOTA sketching methods.
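The core object, a sparsified Gaussian sketch, is easy to exhibit: a random matrix whose entries are Gaussian but mostly zeroed out, scaled so that it remains an approximate isometry in expectation. The scaling below is a generic construction for illustration and may differ from the paper's exact p-sparsified definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_sparsified_gaussian_sketch(s, n, p=0.1):
    """Sketch matrix S in R^{s x n} with i.i.d. entries B*G/sqrt(s*p),
    where B ~ Bernoulli(p) and G ~ N(0, 1): most entries are exactly
    zero, yet E[S.T @ S] = I_n (generic scaling, assumed here)."""
    B = rng.random((s, n)) < p          # sparsity pattern
    G = rng.standard_normal((s, n))     # Gaussian values
    return B * G / np.sqrt(s * p)
```

Sparsity is what makes applying the sketch to a non-sparse Gram matrix cheap, while the scaling preserves the statistical validity of the approximation.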
  • L'aiguille de Buffon, encore et encore
    • Zayana Karim
    • Boyer Ivan
    Au fil des maths, APMEP, 2023. In the lineage of Bertrand's paradox and Monte Carlo methods, Buffon's problem belongs to the so-called geometric branch of probability. Stated in 1777, it has been the subject of several proofs, often technical, sometimes tortuous, occasionally incomplete or incorrect. Through a change of paradigm, we treat it here using only the mathematical background of a final-year high-school student. Our rather intuitive approach is complemented by simulations written in Python.
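The abstract mentions Python simulations; a minimal Monte Carlo version of Buffon's needle (for a needle no longer than the line spacing, the classical case) might look like the sketch below. The specific sampling scheme is an illustrative assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def buffon_pi_estimate(n_throws, needle_len=1.0, line_gap=1.0):
    """Monte Carlo simulation of Buffon's needle: a needle crosses a
    line when the distance from its center to the nearest line is
    below (L/2) * sin(theta)."""
    theta = rng.uniform(0.0, np.pi / 2, n_throws)   # needle angle
    d = rng.uniform(0.0, line_gap / 2, n_throws)    # center-to-line distance
    crossings = np.sum(d <= (needle_len / 2) * np.sin(theta))
    # P(cross) = 2L / (pi * gap)  =>  pi ~ 2L * n / (gap * crossings)
    return 2 * needle_len * n_throws / (line_gap * crossings)
```

With a million throws the estimate lands within a few hundredths of pi, a direct numerical counterpart of the probability 2L/(pi·d) derived in the classical treatment.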
  • Uncertainty clustering internal validity assessment using Fréchet distance for unsupervised learning
    • Rendon Nestor
    • Giraldo Jhony H.
    • Bouwmans Thierry
    • Rodríguez-Buritica Susana
    • Ramirez Edison
    • Isaza Claudia
    Engineering Applications of Artificial Intelligence, Elsevier, 2023, 124, pp.106635. Knowing the number of clusters a priori is one of the most challenging aspects of unsupervised learning. Clustering Internal Validity Indices (CIVIs) evaluate partitions in unsupervised algorithms based on metrics like compactness, separation, and density. However, specialized CIVIs have been designed for specific applications, and there is no general CIVI that works in all scenarios. The absence of CIVIs based on crisp uncertainty metrics is especially critical in decision-making processes that involve ambiguity, non-convex distributions, outliers, and overlapping data. To address this problem, we propose a novel Uncertainty Fréchet (UF) CIVI that assesses the certainty of a well-defined partition. UF leverages uncertainty fingerprints based on Type-2 fuzzy Gaussian Mixture Models (T2FGMM) and the Fréchet distance between clusters to introduce a metric that evaluates partition quality. We integrate UF into a merging methodology that combines similar clusters within a partition, allowing us to determine the number of clusters without running the clustering algorithms iteratively as other CIVIs require. We undertake a comprehensive evaluation of our proposal on 5,250 convex and 36 non-convex synthetic datasets, as well as five benchmark real datasets. In addition, we apply UF in a real-world scenario that involves high uncertainty: Passive Acoustic Monitoring (PAM) of ecosystems, which aims to study ecological transformations through acoustic recordings. The results show that UF exhibits notable performance in synthetic and real-world scenarios, obtaining an Adjusted Mutual Information (AMI) score higher than 0.88 for normal, uniform, gamma, and triangular distribution datasets. In the PAM application, UF identifies the transformation of ecosystems through sound using clustering algorithms, achieving an F1 score of 0.84. These results show that the UF index is a suitable tool for researchers and practitioners working with highly uncertain data.
    DOI : 10.1016/j.engappai.2023.106635
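For reference, when clusters are summarized by Gaussians (as in the T2FGMM setting above), the Fréchet distance between two Gaussians has a standard closed form, coinciding with the 2-Wasserstein distance; this is a classical result, not a formula taken from the paper:

```latex
d_F^2\bigl(\mathcal{N}(m_1, C_1), \mathcal{N}(m_2, C_2)\bigr)
  = \lVert m_1 - m_2 \rVert^2
  + \operatorname{tr}\!\Bigl(C_1 + C_2
      - 2\bigl(C_1^{1/2} C_2\, C_1^{1/2}\bigr)^{1/2}\Bigr)
```

The first term compares cluster centers, the trace term compares their spreads, which is what makes the distance sensitive to both separation and shape.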
  • Refinements for Open Automata (Extended Version)
    • Henrio Ludovic
    • Madelaine Eric
    • Ameur-Boulifa Rabéa
    • Corradi Quentin
    , 2023. Establishing equivalence and refinement relations between programs is an important means of verifying their correctness. By establishing that the behaviours of a modified program simulate those of the source one, simulation relations formalise the desired relationship between a specification and an implementation, between two equivalent implementations, or between a program and its optimised implementation. This article discusses a notion of simulation between open automata, which are symbolic behavioural models for communicating systems. Open automata may have holes modelling elements of their context, and can be composed by instantiation of the holes. This allows for a compositional approach to the verification of their behaviour. We define a simulation between open automata that may or may not have the same holes, and show under which conditions these refinements are preserved by composition of open automata.
  • Weakly stationary stochastic processes valued in a separable Hilbert space: Gramian-Cramér representations and applications
    • Durand Amaury
    • Roueff François
    ESAIM: Probability and Statistics, EDP Sciences, 2023. The spectral theory of weakly stationary processes valued in a separable Hilbert space has seen renewed interest in the past decade. Here we follow earlier approaches which fully exploit the normal Hilbert module property of the time domain. The key point is to build the Gramian-Cramér representation as an isomorphic mapping from the modular spectral domain to the modular time domain. We also discuss the general Bochner theorem and provide useful results on the composition and inversion of lag-invariant linear filters. Finally, we derive the Cramér-Karhunen-Loève decomposition and harmonic functional principal component analysis, which are established without relying on additional assumptions.
    DOI : 10.1051/ps/2023014
  • A historical perspective on Schützenberger-Pinsker inequalities
    • Rioul Olivier
    , 2023. This paper presents a tutorial overview of the so-called Pinsker inequalities, which establish a precise relationship between information and statistics and whose use has become ubiquitous in many information-theoretic applications. According to Stigler's law of eponymy, no scientific discovery is named after its original discoverer. Pinsker's inequality is no exception: years before the publication of Pinsker's book in 1960, the French medical doctor, geneticist, epidemiologist, and mathematician Marcel-Paul (Marco) Schützenberger, in his 1953 doctoral thesis, not only proved what is now called Pinsker's inequality (with the optimal constant that Pinsker himself did not establish) but also the optimal second-order improvement, more than a decade before Kullback's derivation of the same inequality. We review Schützenberger's and Pinsker's contributions as well as those of Volkonskii & Rozanov, Sakaguchi, McKean, Csiszár, Kullback, Kemperman, Vajda, Bretagnolle & Huber, Krafft & Schmitz, Toussaint, Reid & Williamson, and Gilardoni, as well as the optimal derivation of Fedotov, Harremoës, & Topsøe.
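In modern notation, the inequality under discussion, with the optimal constant and one standard form of the second-order refinement (stated here under the common conventions of divergence in nats and V the total variation; the exact form proved in the thesis may differ), reads:

```latex
D(P \Vert Q) \;\ge\; \frac{V^2}{2} + \frac{V^4}{36},
\qquad V = \lVert P - Q \rVert_1 = \sum_i \lvert p_i - q_i \rvert .
```

Dropping the V^4/36 term gives the familiar Pinsker inequality with optimal constant 1/2.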
  • Reliability of ring oscillator PUFs with reduced helper data
    • Béguinot Julien
    • Cheng Wei
    • Danger Jean-Luc
    • Guilley Sylvain
    • Rioul Olivier
    • Yli-Mayry Ville
    , 2023. Enhancing the reliability of natively unstable Physically Unclonable Functions (PUFs) is a major requirement when the PUF is to generate secret identifiers like cryptographic keys. One traditional method is to rely on the addition of a public word: the helper data. However, it involves extra complexity and constitutes a vulnerability against attacks that manipulate it. In this work, we show that for PUFs based on oscillations, such as Loop-PUFs (LPUFs), we can simultaneously increase the stability of the PUF responses and reduce the required amount of helper data, decreasing complexity and increasing security. We proceed in two steps. First, we improve the reliability of the LPUF using dynamically determined repeated measurements and a decision process; the number of repetitions per challenge is automatically tuned according to its reliability level and measurement window. Second, we investigate lightweight helper data (less than one byte). Experimental validation of our approach is carried out on 640 LPUFs to characterize PUF reliability under different temperatures. This provides an assessment of the probability that a given Key Error Rate (KER) is achieved, which in turn yields the probability that there is an oscillator with arbitrarily low KER among any given number of oscillators. Performance remains notably stable under increasing temperature.
  • A Quadratic Speedup in the Optimization of Noisy Quantum Optical Circuits
    • de Prins Robbe
    • Yao Yuan
    • Apte Anuj
    • Miatto Filippo
    Quantum, Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften, 2023, 7, pp.1097. Linear optical quantum circuits with photon number resolving (PNR) detectors are used both for Gaussian Boson Sampling (GBS) and for the preparation of non-Gaussian states such as Gottesman-Kitaev-Preskill (GKP), cat, and NOON states. They are crucial in many schemes of quantum computing and quantum metrology. Classically optimizing circuits with PNR detectors is challenging due to their exponentially large Hilbert space, and quadratically more challenging in the presence of decoherence, as state vectors are replaced by density matrices. To tackle this problem, we introduce a family of algorithms that calculate detection probabilities and conditional states (as well as their gradients with respect to circuit parametrizations) with a complexity comparable to the noiseless case. As a consequence, we can simulate and optimize circuits with twice the number of modes as before, using the same resources. More precisely, for an M-mode noisy circuit with detected modes D and undetected modes U, the complexity of our algorithm is O(M² ∏_{i∈U} C_i² ∏_{i∈D} C_i), rather than O(M² ∏_{i∈D∪U} C_i²), where C_i is the Fock cutoff of mode i. As a particular case, our approach offers a full quadratic speedup for calculating detection probabilities, since in that case all modes are detected. Finally, these algorithms are implemented and ready to use in the open-source photonic optimization library MrMustard.
    DOI : 10.22331/q-2023-08-29-1097
  • Débruitage multi-modal d'images radar à synthèse d'ouverture par apprentissage profond auto-supervisé
    • Gaya Victor
    • Dalsasso Emanuele
    • Denis Loïc
    • Tupin Florence
    • Pinel-Puysségur Béatrice
    • Guérin Cyrielle
    , 2023. Earth observation has for many years been greatly facilitated by synthetic aperture radar (SAR) imaging satellites, which offer imaging capabilities independent of weather conditions. However, the interpretation of SAR images is complex because of the noise inherent to coherent imaging: fluctuations appear in the images, especially where the radar reflectivity is high. Many methods have therefore been developed to reduce the noise present in these images, in particular highly effective neural methods. In this article, we study how adding auxiliary data, such as an optical image, can improve the restoration of reflectivity in this framework.
  • Confidential Truth Finding with Multi-Party Computation
    • Saadeh Angelo
    • Senellart Pierre
    • Bressan Stéphane
    , 2023. Federated knowledge discovery and data mining are challenged to assess the trustworthiness of data originating from autonomous sources while protecting confidentiality and privacy. Truth-finding algorithms help corroborate data from disagreeing sources. For each query it receives, a truth-finding algorithm predicts a truth value for the answer, possibly updating the trustworthiness factor of each source. Few works, however, address the issues of confidentiality and privacy. We devise and present a secure secret-sharing-based multi-party computation protocol for the pseudo-equality tests that truth-finding algorithms use to compute additions depending on a condition. The protocol guarantees the confidentiality of the data and the privacy of the sources. We also present variants of a truth-finding algorithm that make the computation faster when executed using secure multi-party computation. We empirically evaluate the performance of the proposed protocol on a state-of-the-art truth-finding algorithm, 3-Estimates, and compare it with that of the baseline plain algorithm. The results confirm that the secret-sharing-based secure multi-party algorithms are as accurate as the corresponding baselines, up to the proposed numerical approximations, which significantly reduce the efficiency loss incurred.
  • Compensation de la latence Glass-to-Glass via extrapolation du flux vidéo : faisabilité et cas d'usage
    • Kanj Hind
    • Trioux Anthony
    • Cagnazzo Marco
    • Coudoux François-Xavier
    • Corlay Patrick
    • Kieffer Michel
    , 2023, pp.133-136. Applications such as remote driving and telepresence that rely on video services must guarantee real-time interaction with a satisfactory quality of experience. Reducing the Glass-to-Glass (G2G) delay, i.e. the delay between the acquisition of a frame and its display on a remote terminal, is essential for these applications. Deep-learning-based video frame extrapolation has recently been considered as a way to reduce the G2G delay. In this article, we examine the effectiveness of this technique for reducing overall latency in a point-to-point video transmission system. The goal is to determine the operating range, advantages, and drawbacks of this approach. To this end, we compare the latency-quality trade-off of two latency compensation methods: reducing the coding rate and extrapolation. The results show that extrapolation methods can provide a significant reduction in G2G delay with an acceptable loss of quality, especially for applications with video content of low temporal information.
  • Modèle de diffusion frugal pour l'inpainting d'images
    • Cherel Nicolas
    • Almansa Andrés
    • Gousseau Yann
    • Newson Alasdair
    , 2023. Very recently, probabilistic diffusion models have significantly improved the state of the art in image synthesis. These generative models have proved particularly effective for solving inverse problems such as super-resolution, deblurring, and inpainting. However, many of these diffusion models rely on hundreds of millions of parameters, and their training time is measured in tens of GPU-days. These hardware constraints make their use particularly heavy and costly. We propose a lightweight diffusion model for inpainting, suited to training on one or a few images and requiring only relatively fast training. We precisely analyze the contribution of diffusion, at constant architecture, and compare our lightweight network with the state of the art in image inpainting.
  • Common Objects for Programming Workshops in Non-Formal Learning Contexts
    • Bressa Nathalie
    • Bødker Susanne
    • Klokmose Clemens
    • Eriksson Eva
    , 2023, 14142, pp.275-296. We investigate common objects as material support for programming workshops for children and adolescents in non-formal learning contexts. To this end, we engaged in a one-year participatory design process with a facilitator of programming workshops. Based on observations of workshops and interviews with the facilitator, we mapped out their artifact ecologies to investigate how the multiple artifacts and common objects were orchestrated by the facilitator and then adopted by the participants of the workshops. Building on these findings, we explored the development of a collaborative teaching tool, MicroTinker, through a participatory design process with the facilitator. This paper presents the results of our analyses and shows their constructive use to design technology in a non-formal learning setting.
    DOI : 10.1007/978-3-031-42280-5_16
  • Schéma Plug-and-Play robuste à la corrélation spatiale du speckle pour le filtrage des données SAR polarimétriques
    • Ulondu-Mendes Cristiano
    • Denis Loïc
    • Tupin Florence
    , 2023. PolSAR (polarimetric Synthetic Aperture Radar) data exhibit strong speckle noise related to coherent image acquisition. Because of the apodization applied during image synthesis, this noise is spatially correlated. Applying denoising algorithms that assume white noise produces artifacts. In this article, we propose to improve the restoration of PolSAR images by introducing a denoiser robust to the spatial correlations of the noise within a plug-and-play framework.
  • Une approche pour la tomographie SAR des zones forestières par apprentissage profond supervisé
    • Berenger Zoé
    • Denis Loïc
    • Tupin Florence
    • Ferro-Famil Laurent
    , 2023. Tomographic synthetic aperture radar (SAR) imaging reconstructs the three-dimensional reflectivity information of a scene from a set of coherent acquisitions performed in an interferometric configuration. In forested areas, this reflectivity is modeled as a vertical profile at arbitrary range-azimuth coordinates. To reconstruct this profile, either low-resolution non-parametric spectral estimation techniques or a regularized inversion implemented as iterative minimization algorithms, which are very time-consuming, are used. We present here a supervised deep learning approach trained on simulated data, proposed in [3]. In this article we study the inter-training variability of the developed neural network on simulated and real data.