
Publications

2019

  • On q-ary plateaued functions over F_q and their explicit characterizations
    • Mesnager Sihem
    • Özbudak Ferruh
    • Sınak Ahmet
    • Cohen Gérard
    European Journal of Combinatorics, Elsevier, 2019, 80, pp.71-81. Plateaued and bent functions play a significant role in cryptography, sequence theory, coding theory and combinatorics. In 1997, Coulter and Matthews redefined bent functions over any finite field F_q where q is a prime power, and established their properties. The objective of this work is to redefine the notion of plateaued functions over F_q, and to present several explicit characterizations of those functions. We first give, over F_q, the notion of q-ary plateaued functions, which relies on the concept of the Walsh–Hadamard transform in terms of the canonical additive character of F_q. We then give a concrete example of a q-ary plateaued function that is not a vectorial p-ary plateaued function. This suggests that the study of plateaued-ness is also significant for q-ary functions over F_q. We finally characterize q-ary plateaued functions in terms of derivatives, Walsh power moments and autocorrelation functions. (10.1016/j.ejc.2018.02.025)
    DOI : 10.1016/j.ejc.2018.02.025
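The Walsh–Hadamard transform with a canonical additive character, which underlies these characterizations, is easy to compute for small parameters. A minimal sketch (my own illustration, not the authors' code): a p-ary function f on F_p^n is bent — the extreme, 0-plateaued case — exactly when every squared Walsh amplitude equals p^n. Here p = 3, n = 2, and f(x1, x2) = x1·x2 mod 3 is a classical bent example.

```python
import cmath
import itertools

p, n = 3, 2
omega = cmath.exp(2j * cmath.pi / p)  # generator of the canonical additive character

def walsh_spectrum(f):
    """Squared Walsh-Hadamard amplitudes |W_f(a)|^2 of f: F_p^n -> F_p."""
    spectrum = {}
    for a in itertools.product(range(p), repeat=n):
        w = sum(omega ** ((f(x) - sum(ai * xi for ai, xi in zip(a, x))) % p)
                for x in itertools.product(range(p), repeat=n))
        spectrum[a] = abs(w) ** 2
    return spectrum

# f(x1, x2) = x1*x2 is bent, hence plateaued: the spectrum is flat at p^n = 9
f = lambda x: (x[0] * x[1]) % p
amplitudes = {round(v, 6) for v in walsh_spectrum(f).values()}
print(amplitudes)  # {9.0}
```

A plateaued (non-bent) function would instead show amplitudes in {0, p^(n+s)} for a single s > 0.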
  • VHR satellite image time series analysis for illegal building monitoring using multi-dimensional histogram earth mover's distance
    • Chaabane Ferdaous
    • Rejichi Safa
    • Kefi Chayma
    • Haythem Ismail
    • Tupin Florence
    , 2019.
  • Factorisation Matricielle Semi Non-Négative : Application à la Décomposition de Consommations Electriques
    • Henriet Simon
    • Şimşekli Umut
    • Santos Sérgio F.
    • Fuentes Benoît
    • Richard Gael
    , 2019. In recent years, there has been increasing academic and industrial interest in analysing the electrical consumption of commercial buildings. One approach to enable energy efficiency is to disaggregate total energy consumption into individual ones. This problem is also called Non-Intrusive Load Monitoring (NILM). While several approaches have been studied to solve it for residential buildings using high-frequency current and voltage measurements, none of them seems efficient when applied to commercial buildings. Amongst the NILM methods for residential buildings, matrix factorization approaches showed promising results. In this paper, we propose a novel method as an extension of factorization techniques based on Semi Non-Negative Matrix Factorization constrained with a total variation penalization (TV-SNMF). To solve this constrained optimization problem, we rely on an alternating minimization strategy involving a quasi-Newton algorithm. The experiments on a simulated commercial building dataset demonstrate clear improvements compared to other approaches such as Independent Component Analysis (ICA) and classic SNMF.
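The core factorization fits in a few lines. The following is a simplified sketch of plain semi-NMF with the multiplicative updates of Ding et al. — the paper's TV penalty and quasi-Newton solver are omitted, and all names are my own:

```python
import numpy as np

def semi_nmf(X, k, iters=200, seed=0):
    """Plain semi-NMF: X ~ F @ G.T with G >= 0 and F unconstrained.
    Multiplicative updates in the style of Ding et al.; the TV-SNMF
    penalty of the paper is NOT included in this sketch."""
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k))               # nonnegative activations
    pos = lambda A: (np.abs(A) + A) / 2.0         # positive part
    neg = lambda A: (np.abs(A) - A) / 2.0         # negative part
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)       # least-squares F for fixed G
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + 1e-12))  # keeps G nonnegative
    return F, G

# Toy data with an exact semi-NMF structure (random mixed-sign F, nonneg G)
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3)) @ rng.random((30, 3)).T
F, G = semi_nmf(X, 3)
rel_err = np.linalg.norm(X - F @ G.T) / np.linalg.norm(X)
```

The alternation mirrors the paper's strategy: an exact solve for the unconstrained factor, then a nonnegativity-preserving update for the other.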
  • Best information is most successful: Mutual information and success rate in side-channel analysis
    • de Chérisey Eloi
    • Guilley Sylvain
    • Rioul Olivier
    • Piantanida Pablo
    IACR Transactions on Cryptographic Hardware and Embedded Systems, IACR, 2019, 2019 (2), pp.49-79. Using information-theoretic tools, this paper establishes a mathematical link between the probability of success of a side-channel attack and the minimum number of queries to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side. This link is a lower bound on the number of queries that highly depends on Shannon's mutual information between the traces and the secret key. This leads us to derive upper bounds on the mutual information that are as tight as possible and can be easily calculated. It turns out that, in the case of an additive white Gaussian noise, the bound on the probability of success of any attack is directly related to the signal-to-noise ratio. This leads to very easy computations and predictions of the success rate in any leakage model. (10.13154/tches.v2019.i2.49-79)
    DOI : 10.13154/tches.v2019.i2.49-79
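The flavor of such SNR-based predictions can be conveyed with a crude Fano-style computation. This is emphatically not the paper's bound — just a back-of-envelope illustration, under the assumption that per-query leakage is capped by the AWGN capacity 0.5·log2(1+SNR), with hypothetical numbers:

```python
import math

def min_queries_fano(success_rate, key_bits, snr):
    """Fano-style illustration (NOT the paper's exact bound): if each query
    leaks at most the AWGN capacity 0.5*log2(1+SNR) bits about a key_bits-bit
    key, then reaching the given success rate needs at least roughly
    (SR*key_bits - h(SR)) / capacity queries."""
    h = lambda q: 0.0 if q in (0.0, 1.0) else \
        -q * math.log2(q) - (1 - q) * math.log2(1 - q)  # binary entropy
    capacity = 0.5 * math.log2(1 + snr)
    return max(0.0, (success_rate * key_bits - h(success_rate)) / capacity)

# Hypothetical example: 90% success on an 8-bit subkey at SNR = 1
print(min_queries_fano(0.9, 8, 1.0))  # about 13.5 queries
```

The qualitative behavior matches the abstract: the required number of queries grows as mutual information (here, capacity) shrinks.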
  • Secrecy capacity-memory tradeoff of erasure broadcast channels
    • Kamel Sarah
    • Sarkiss Mireille
    • Wigger Michèle
    • Rekaya-Ben Othman Ghaya
    IEEE Transactions on Information Theory, Institute of Electrical and Electronics Engineers, 2019, 65 (8), pp.5094-5124. This paper derives upper and lower bounds on the secrecy capacity-memory tradeoff of a wiretap erasure broadcast channel (BC) with K_w weak receivers and K_s strong receivers, where weak receivers and strong receivers have the same erasure probabilities and cache sizes, respectively. The lower bounds are achieved by the schemes that meticulously combine joint cache-channel coding with wiretap coding and key-aided one-time pads. The presented upper bound holds more generally for arbitrary degraded BCs and arbitrary cache sizes. When only weak receivers have cache memories, upper and lower bounds coincide for small and large cache memories, thus providing the exact secrecy capacity-memory tradeoff for this setup. The derived bounds further allow us to conclude that the secrecy capacity is positive even when the eavesdropper is stronger than all the legitimate receivers with cache memories. Moreover, they show that the secrecy capacity-memory tradeoff can be significantly smaller than its non-secure counterpart, but it grows much faster when cache memories are small. This paper also presents a lower bound on the global secrecy capacity-memory tradeoff where one is allowed to optimize the cache assignment subject to a total cache budget. It is close to the best known lower bound without secrecy constraint. For small total cache budget, the global secrecy capacity-memory tradeoff is achieved by assigning all the available cache memory uniformly over all the receivers if the eavesdropper is stronger than all the legitimate receivers, and it is achieved by assigning the cache memory uniformly only over the weak receivers if the eavesdropper is weaker than the strong receivers. (10.1109/TIT.2019.2902578)
    DOI : 10.1109/TIT.2019.2902578
  • Channel Model and Optimal Core Scrambling for Multi-Core Fiber Transmission System
    • Abouseif Akram
    • Rekaya-Ben Othman Ghaya
    • Jaouën Yves
    Optics Communications, Elsevier, 2019, pp.55. Space division multiplexing (SDM) is a potential candidate to increase the capacity of conventional single-mode-fiber based transmission systems. Several multi-core fiber (MCF) structures have been proposed, each one impaired by a different core-dependent loss (CDL) resulting from the fiber structure, crosstalk, splicing in the optical fiber link and inline components. One of the solutions to mitigate the CDL is core scrambling. In this paper, we introduce three deterministic core scrambling strategies for different MCFs. These strategies prove more efficient at reducing the CDL than random scrambling. Moreover, in order to estimate the CDL level and predict the system performance for any MCF structure, we propose a theoretical channel model that depends on the system configuration and the number of core scramblers installed in the transmission link. Lastly, the optimal deterministic core scrambler is obtained to further reduce the number of scramblers.
  • Ten Years of Patch-Based Approaches for SAR Imaging: A Review
    • Tupin Florence
    • Denis Loïc
    • Deledalle Charles-Alban
    • Ferraioli Giampaolo
    , 2019. Speckle reduction is a major issue for many SAR imaging applications using amplitude, interferometric, polarimetric or tomographic data. This subject has been widely investigated using various approaches. Over the past decade, breakthrough methods based on patches have brought unprecedented results to improve the estimation of radar properties. In this paper, we give a review of the different adaptations which have been proposed in the past years for different SAR modalities (mono-channel data like intensity images, multi-channel data like interferometric, tomographic or polarimetric data, or multi-modalities combining optic and SAR images), and discuss the new trends on this subject.
  • The Exploitation of the Non-Local Paradigm for SAR 3D Reconstruction
    • Ferraioli Giampaolo
    • Denis Loïc
    • Deledalle Charles-Alban
    • Tupin Florence
    , 2019. In the last decades, several approaches for solving the Phase Unwrapping (PhU) problem using multi-channel Interferometric Synthetic Aperture Radar (InSAR) data have been developed. Many of the proposed approaches are based on statistical estimation theory, both classical and Bayesian. In particular, the statistical approaches based on the use of the whole complex multi-channel dataset have turned out to be effective. The latter are based on the exploitation of the covariance matrix, which contains the parameters of interest. In this paper, the added value of the Non Local (NL) paradigm within the InSAR multi-channel PhU framework is investigated. The analysis of the impact of the NL technique is performed using multi-channel realistic simulated data and X-band data. (10.1109/IGARSS.2019.8900595)
    DOI : 10.1109/IGARSS.2019.8900595
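For readers unfamiliar with PhU: the task is recovering absolute phase from values observed only modulo 2π. A one-dimensional toy version using numpy's built-in `unwrap` (the paper's multi-channel statistical approach is far more involved):

```python
import numpy as np

# Ground-truth absolute phase: a smooth ramp with per-sample increments < pi
phi = np.linspace(0.0, 20.0, 200)

# An interferometer only observes the phase wrapped into (-pi, pi]
wrapped = np.angle(np.exp(1j * phi))

# 1-D unwrapping: add multiples of 2*pi whenever a jump exceeds pi
recovered = np.unwrap(wrapped)

print(np.allclose(recovered, phi))  # True: small increments make 1-D PhU easy
```

The hard cases motivating the paper arise in 2-D with noise, where jumps larger than π occur and simple integration of phase differences fails.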
  • The Consensus Number of a Cryptocurrency
    • Guerraoui Rachid
    • Kuznetsov Petr
    • Monti Matteo
    • Pavlovič Matej
    • Seredinschi Dragos-Adrian
    , 2019, pp.307-316. (10.1145/3293611.3331589)
    DOI : 10.1145/3293611.3331589
  • Learning to Understand Earth Observation Images with Weak and Unreliable Ground Truth
    • Daudt Rodrigo Caye
    • Chan-Hon-Tong Adrien
    • Le Saux Bertrand
    • Boulch Alexandre
    , 2019, pp.5602-5605. In this paper we discuss the issues of using inexact and inaccurate ground truth in the context of supervised learning. To leverage large amounts of Earth observation data for training algorithms, one often has to use ground truth which has not been carefully assessed. We address both the problems of training and evaluation. We first propose a weakly supervised approach for training change classifiers which is able to detect pixel-level changes in aerial images. We then propose a data poisoning approach to get a reliable estimate of the accuracy that can be expected from a classifier, even when the only ground truth available does not match the reality. Both are assessed on practical land use and land cover applications. (10.1109/IGARSS.2019.8898563)
    DOI : 10.1109/IGARSS.2019.8898563
  • From Patches to Deep Learning: Combining Self-Similarity and Neural Networks for Sar Image Despeckling
    • Denis Loïc
    • Deledalle Charles-Alban
    • Tupin Florence
    , 2019, pp.5113-5116. Speckle reduction has benefited from the recent progress in image processing, in particular patch-based non-local filtering and deep learning techniques. These two families of methods offer complementary characteristics but have not yet been combined. We explore strategies to make the most of each approach. (10.1109/IGARSS.2019.8898473)
    DOI : 10.1109/IGARSS.2019.8898473
  • MULTI-TEMPORAL SPECKLE REDUCTION OF POLARIMETRIC SAR IMAGES: A RATIO-BASED APPROACH
    • Deledalle Charles-Alban
    • Denis Loïc
    • Ferro-Famil Laurent
    • Nicolas Jean-Marie
    • Tupin Florence
    , 2019, pp.899-902. The availability of multi-temporal stacks of SAR images opens the way to new speckle reduction methods. Beyond mere spatial filtering, the time series can be used to improve the signal-to-noise ratio of structures that persist for several dates. Among multi-temporal filtering strategies to reduce speckle fluctuations, a recent approach has proved to be very effective: ratio-based filtering (RABASAR). This method, developed to reduce the speckle in multi-temporal intensity images, first computes a "mean image" with a high signal-to-noise ratio (a so-called super-image), and then processes the ratio between the multi-temporal stack and the super-image. In this paper, we propose an extension of this approach to polarimetric SAR images. We illustrate its potential on a stack of fully-polarimetric images from RADARSAT-2 satellite. (10.1109/IGARSS.2019.8898998)
    DOI : 10.1109/IGARSS.2019.8898998
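The intensity-only version of the ratio idea fits in a few lines. A toy sketch (my own illustration, with a simple box filter standing in for the actual ratio denoiser; the polarimetric extension proposed here is more involved):

```python
import numpy as np

def box3(img):
    """3x3 box filter with circular boundaries (stand-in spatial denoiser)."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def ratio_filter(stack):
    """RABASAR-style ratio-based multi-temporal despeckling of an
    intensity stack of shape (T, H, W)."""
    super_image = stack.mean(axis=0)           # high-SNR "super-image"
    ratio = stack / (super_image + 1e-12)      # speckle-dominated ratio images
    denoised_ratio = np.stack([box3(r) for r in ratio])
    return denoised_ratio * super_image        # recombine with the super-image

# Constant-reflectivity scene corrupted by single-look exponential speckle
rng = np.random.default_rng(0)
truth = np.full((16, 16), 5.0)
stack = truth * rng.exponential(1.0, size=(8, 16, 16))
filtered = ratio_filter(stack)
err_raw = np.abs(stack - truth).mean()
err_filt = np.abs(filtered - truth).mean()
```

The ratio images carry mostly speckle, so a crude spatial filter suffices there, while scene structure is preserved in the super-image.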
  • Resolution-Preserving Speckle Reduction of SAR Images: the Benefits of Speckle Decorrelation and Targets Extraction
    • Abergel Rémy
    • Denis Loïc
    • Tupin Florence
    • Ladjal Saïd
    • Deledalle Charles-Alban
    • Almansa Andrés
    , 2019. Speckle reduction is a necessary step for many applications. Very effective methods have been developed in recent years for single-image speckle reduction and multi-temporal speckle filtering. However, to reduce the presence of sidelobes around bright targets, SAR images are spectrally weighted and this processing impacts the speckle statistics by introducing spatial correlations. These correlations severely impact speckle reduction methods that require uncorrelated speckle as input. Thus, spatial down-sampling is typically applied to reduce the speckle spatial correlations prior to speckle filtering. To better preserve the spatial resolution, we describe how to correctly resample SAR images and extract bright targets in order to process full-resolution images with speckle-reduction methods. (10.1109/IGARSS.2019.8900036)
    DOI : 10.1109/IGARSS.2019.8900036
  • Verifying complex software control systems from test objectives: application to the ETCS system
    • Ameur Boulifa Rabéa
    • Cavalli Ana Rosa
    • Maag Stephane
    , 2019. Ensuring the correctness of complex distributed software systems is a challenging task, and building frameworks for developing such safe and correct systems remains difficult. Where test coverage is unsatisfactory, formal analysis has much higher potential to discover bugs during the development phase. This paper presents a framework for the formal verification of complex systems based on standardized test objectives. The framework integrates a transformation of test objectives into formal properties that are verified on the system by model checking. The overall proposed approach for formal verification is evaluated by applying it to the standard European Train Control System (ETCS). Some critical safety properties have been proved on the model, ensuring that the model is correct and reliable. (10.5220/0007918203970406)
    DOI : 10.5220/0007918203970406
  • Work-conserving dynamic time-division multiplexing for multi-criticality systems
    • Hebbache Farouk
    • Brandner Florian
    • Jan Mathieu
    • Pautet Laurent
    Real-Time Systems, Springer Verlag, 2019. (10.1007/s11241-019-09336-w)
    DOI : 10.1007/s11241-019-09336-w
  • Size influence of checkerboard-like wideband metamaterial absorbers
    • Barka André
    • Begaud Xavier
    • Lepage Anne Claire
    • Varault Stefan
    • Soiron Michel
    • Rance Olivier
    , 2019. Recently, the performance of radar-absorbing materials has been extended by designing new thin structures with wideband properties and large angles of incidence. In this paper, the design and performance of such an ultra-wideband, low-thickness microwave absorber developed within the framework of the SAFAS project (self-complementary surface with low signature) are confirmed with complementary quasi-monostatic measurements and finite array simulations using Finite Element Tearing and Interconnecting (FETI) domain decomposition methods.
  • Linear codes with small hulls in semi-primitive case
    • Mesnager Sihem
    • Carlet Claude
    • Li Chengju
    Designs, Codes and Cryptography, Springer Verlag, 2019, 87 (12), pp.3063–3075. The hull of a linear code is defined to be the intersection of the code and its dual, and was originally introduced to classify finite projective planes. The hull plays an important role in determining the complexity of algorithms for checking permutation equivalence of two linear codes and computing the automorphism group of a linear code. It has been shown that these algorithms are very effective in general if the size of the hull is small. It is clear that the linear codes with the smallest hull are LCD codes and with the second smallest hull are those with one-dimensional hull. In this paper, we employ character sums in semi-primitive case to construct LCD codes and linear codes with one-dimensional hull from cyclotomic fields and multiplicative subgroups of finite fields. Some sufficient and necessary conditions for these codes are obtained, where prime ideal decompositions of prime p in cyclotomic fields play a key role. In addition, we show the non-existence of these codes in some cases. (10.1007/s10623-019-00663-4)
    DOI : 10.1007/s10623-019-00663-4
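The hull dimension of a binary code is directly computable from a generator matrix: over GF(2), dim(Hull(C)) = k − rank(G·Gᵀ), a standard fact (the code is LCD exactly when G·Gᵀ is nonsingular). A small sketch of my own on the [7,4] Hamming code, which contains its dual, so its hull is the whole 3-dimensional dual:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = (np.array(M) % 2).astype(np.uint8)
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]           # bring a pivot into row r
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                # eliminate column c elsewhere
        r += 1
    return r

def hull_dimension(G):
    """dim(C ∩ C_perp) = k - rank(G @ G.T) over GF(2), for G of full row rank k."""
    G = np.array(G)
    return G.shape[0] - rank_gf2(G @ G.T)

G_hamming = [[1, 0, 0, 0, 0, 1, 1],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 1, 1, 0],
             [0, 0, 0, 1, 1, 1, 1]]
print(hull_dimension(G_hamming))        # 3: hull is the dual simplex code
print(hull_dimension([[1, 0], [0, 1]])) # 0: the trivial [2,2] code is LCD
```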
  • Promoting and characterizing the menu to keyboard shortcuts transition
    • Giannisakis Emmanouil
    , 2019. Users frequently select commands on personal computers, tablets or smartphones. They can use novice methods such as linear menus, ribbons and toolbars. They can also use expert methods such as keyboard shortcuts and stroke shortcuts that are more efficient. However, we observe that many users, even experienced ones, continue to use novice methods because they fail to make the transition from novice to expert methods. This can have a strong impact on productivity. In this thesis, I investigate the transition from menus to keyboard shortcuts on personal computers and focus on two complementary objectives. The first objective consists of promoting keyboard shortcut usage by designing novel interaction techniques. My approach consists of (1) identifying the key elements of a command (name, icon and shortcut), (2) describing how these elements are represented in traditional interfaces and their impact on performance and (3) proposing alternative designs. In particular, I design, implement and evaluate two novel interaction techniques exploring different locations of the keyboard shortcut cues: the first one blends the keyboard shortcut cues into command icons; the second one explores the relative position of the keyboard shortcut cues with their corresponding command labels. We show that both interaction techniques are promising for promoting keyboard shortcut usage. In addition, I investigate how the mapping between command names and keyboard shortcuts impacts keyboard shortcut usage. The second objective consists of better characterizing the transition from menus to keyboard shortcuts. I highlight inconsistencies between theoretical and empirical characterizations of this transition, making it difficult to evaluate and compare interaction techniques. I present a novel methodology to estimate theoretical behavioral markers on empirical data. In particular, I design and evaluate an algorithm to automatically identify the beginning and the end of the transition. I also provide new insights regarding users' behaviors before, during and after the transition. This thesis offers (1) a better understanding of the transition from menus to keyboard shortcuts and the design factors involved in this transition, (2) novel methods to characterize this transition and (3) two novel interaction techniques to promote keyboard shortcuts.
  • Comprehensive Pulse Shape Induced Failure Analysis in Voltage-Controlled MRAM
    • Liu Mingyue
    • Cai Hao
    • Han Menglin
    • Xie Lei
    • Yang Jun
    • Naviner Lirida
    , 2019, pp.1-6. Voltage-controlled (VC) magnetic random access memory (MRAM) is considered an alternative magnetic free-layer switching mechanism, owing to its reduced bit-cell size and writing energy consumption compared to spin-transfer-torque (STT)-MRAM. Unlike current-driven mechanisms, a voltage pulse is applied to modulate the magnetic anisotropy of the magnetic tunnel junction (MTJ), so the write operation of the VC-MTJ is sensitive to pulse shape. This paper investigates reliability-aware VC-MTJ switching in the 1T-1M bit-cell structure and write failures mainly caused by pulse shape uncertainties and amplitude fluctuations. We propose a write-after-read scheme combined with a reverse bias at the array bit-line (BL) to alleviate unsuccessful MTJ switching, which achieves a 52%-68.7% failure rate reduction in parallel/antiparallel state changes of the MTJ free layer. The design trade-off comes as additional control units, 1.71× write latency and 3.27× energy consumption, realized with a 28nm CMOS process. (10.1109/NANOARCH47378.2019.181292)
    DOI : 10.1109/NANOARCH47378.2019.181292
  • Nonverbal behavior in multimodal performances
    • Cafaro Angelo
    • Pelachaud Catherine
    • Marsella Stacy
    , 2019. (10.1145/3233795.3233803)
    DOI : 10.1145/3233795.3233803
  • Deep Learning for Agricultural Land Detection in Insular Areas
    • Charou Eleni
    • Felekis George
    • Stavroulopoulou Danai Bournou
    • Koutsoukou Maria
    • Panagiotopoulou Antigoni
    • Voutos Yorghos
    • Bratsolis Emmanuel
    • Mylonas Phivos
    • Likforman-Sulem Laurence
    , 2019, pp.1-4. Nowadays, governmental programs like ESA's Copernicus provide freely available data that can be easily utilized for earth observation. In the present work, the problem of detecting agricultural and non-agricultural land cover is addressed. The methodology is based on classification with convolutional neural networks (CNNs) and transfer learning using AlexNet. The study area is located at the Ionian Islands, which include several land cover classes according to Copernicus CORINE Land Cover 2018 (CLC 2018). Furthermore, the dataset consists of natural color images acquired by the Sentinel-2A multi-spectral instrument. Experiments show that adding training data from foreign regions, dissimilar to the Greek data, mainly acts as a confusing agent and degrades network performance. (10.1109/IISA.2019.8900670)
    DOI : 10.1109/IISA.2019.8900670
  • Standard Lattices of Compatibly Embedded Finite Fields
    • de Feo Luca
    • Randriambololona Hugues
    • Rousseau Édouard
    , 2019. Lattices of compatibly embedded finite fields are useful in computer algebra systems for managing many extensions of a finite field F_p at once. They can also be used to represent the algebraic closure F̄_p, and to represent all finite fields in a standard manner. The best-known constructions are Conway polynomials, and the Bosma–Cannon–Steel framework used in Magma. In this work, leveraging the theory of the Lenstra–Allombert isomorphism algorithm, we generalize both at the same time. Compared to Conway polynomials, our construction defines a much larger set of field extensions from a small pre-computed table; however it is provably as inefficient as Conway polynomials if one wants to represent all field extensions, and thus yields no asymptotic improvement for representing F̄_p. Compared to Bosma–Cannon–Steel lattices, it is considerably more efficient both in computation time and storage: all algorithms have at worst quadratic complexity, and storage is linear in the number of represented field extensions and their degrees. Our implementation written in C/Flint/Julia/Nemo shows that our construction is indeed practical. (10.1145/3326229.3326251)
    DOI : 10.1145/3326229.3326251
  • Adaptive Random Forests with Resampling for Imbalanced data Streams
    • Boiko Ferreira Luis Eduardo
    • Murilo Gomes Heitor
    • Bifet Albert
    • Oliveira Luiz F.L.
    , 2019, pp.1-6. (10.1109/IJCNN.2019.8852027)
    DOI : 10.1109/IJCNN.2019.8852027
  • Arabic Cyberbullying Detection: Enhancing Performance by Using Ensemble Machine Learning
    • Haidar Batoul
    • Chamoun Maroun
    • Serhrouchni Ahmed
    , 2019, pp.323-327. Cyberbullying is a prevalent threat facing Arab adolescents and youth around the world. As the internet and smart devices take up more and more space in the lives of youth, interest in finding solutions for hindering cyberbullying is rising. Interest in Arabic Natural Language Processing techniques also takes a big part in current research work. A lot of research work nowadays presents solutions for the automatic detection of cyberbullying. However, very few solutions exist for cyberbullying in Arabic-language content. Cyberbullying detection solutions employ Machine Learning and Natural Language Processing techniques. Lately, Ensemble Machine Learning techniques have been contemplated as a means of enhancing the classification of machine learners. Thus, this paper presents a solution for Arabic cyberbullying detection. The solution presented hereby uses Ensemble Machine Learning techniques to achieve an enhancement over previous work presented by the authors in the realm of Arabic cyberbullying detection. The enhancements are assessed by means of performance measures, namely Precision, Recall and F-Measure. (10.1109/iThings/GreenCom/CPSCom/SmartData.2019.00074)
    DOI : 10.1109/iThings/GreenCom/CPSCom/SmartData.2019.00074
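The simplest ensemble mechanism in this family — hard majority voting over several classifiers — reduces to one line of array arithmetic. A toy sketch, unrelated to the authors' actual features or models:

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: predictions is a (n_classifiers, n_samples)
    array of 0/1 labels; each sample gets the label chosen by the majority
    of classifiers (use an odd number of classifiers to avoid ties)."""
    return (predictions.mean(axis=0) > 0.5).astype(int)

# Three classifiers disagreeing on three samples:
preds = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 0, 1]])
print(majority_vote(preds))  # [1 0 0]
```

With independent-ish base classifiers that are each better than chance, the voted prediction is more accurate than any single one — the usual motivation for ensembling.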
  • Benchmarking GNN-CMA-ES on the BBOB noiseless testbed
    • Faury Louis
    • Calauzènes Clément
    • Fercoq Olivier
    , 2019, pp.1928-1936. Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit the techniques of approximating the regularization path up to predefined tolerance $\epsilon$ in a unified framework and show that its complexity is $O(1/\sqrt[d]{\epsilon})$ for uniformly convex loss of order $d \geq 2$ and $O(1/\sqrt{\epsilon})$ for Generalized Self-Concordant functions. This framework encompasses least-squares but also logistic regression, a case that as far as we know was not handled as precisely in previous works. We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach helps relieving the practitioner from the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates this criterion based on the targeted accuracy on the validation set.