
Publications

 

The publications of our faculty researchers are available on the HAL platform:

 

The thesis publications of LTCI doctoral graduates are available on the HAL platform:

 

Browse the publications in the HAL open archive by year:

2021

  • Fast and lightweight binary and multi-branch Hoeffding Tree Regressors
    • Mastelini Saulo Martiello
    • Montiel Jacob
    • Gomes Heitor Murilo
    • Bifet Albert
    • Pfahringer Bernhard
    • Carvalho André C. P. L. F. De
    , 2021, pp.380--388. Incremental Hoeffding Tree Regressors (HTR) are powerful non-linear online learning tools. However, the commonly used strategy to build such structures limits their applicability to real-time scenarios. In this paper, we expand and evaluate Quantization Observer (QO), a feature-discretization-based tool to speed up incremental regression tree construction and save memory resources. We enhance the original QO proposal to create multi-branch trees when dealing with numerical attributes, producing a mix of interval and binary splits rather than binary splits only. We evaluate the multi-branch and strictly binary QO-based HTRs against other tree-building strategies in an extensive experimental setup of 15 data streams. In general, the QO-based HTRs are as accurate as traditional HTRs while incurring one-third of the training time and only a fraction of the memory usage. The obtained numerical multi-branch HTRs are shallower than the strictly binary ones, significantly faster to train, and retain predictive performance similar to traditional incremental trees. (10.1109/ICDMW53433.2021.00053)
    DOI : 10.1109/ICDMW53433.2021.00053
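To make the discretization idea concrete, here is a minimal sketch of a quantization-based numeric attribute observer: values are hashed into fixed-radius bins whose boundaries serve as candidate split points. The actual QO in the paper also tracks per-bin target statistics for variance-reduction split scoring; this toy version (class name and radius parameter are our own) keeps counts only.

```python
class QuantizationObserver:
    """Toy quantization-based observer for a numeric attribute: each value is
    mapped to a fixed-radius bin, and bin boundaries become candidate split
    points for an incremental regression tree.  The paper's QO additionally
    stores per-bin target statistics; this sketch only counts occurrences."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.bins = {}  # bin index -> count of observed values

    def update(self, x):
        idx = int(x // self.radius)
        self.bins[idx] = self.bins.get(idx, 0) + 1

    def split_candidates(self):
        # upper boundary of each non-empty bin, in increasing order
        return [(i + 1) * self.radius for i in sorted(self.bins)]

qo = QuantizationObserver(radius=0.5)
for x in [0.1, 0.2, 0.7, 1.4, 1.6]:
    qo.update(x)
print(qo.split_candidates())  # [0.5, 1.0, 1.5, 2.0]
```

Because only non-empty bins are stored, memory grows with the data's range divided by the radius rather than with the number of observed values, which is the source of the savings the abstract reports.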
  • Nonlinear impairments aware resource allocation for cognitive satellite systems
    • Louchart Arthur
    , 2021. This thesis addresses the problem of resource allocation for cognitive satellite communications. In a context of increasing throughput demands, satellite communication systems are required to use frequencies already used by terrestrial systems. The underlay cognitive radio paradigm allows a secondary network to use the same frequency band as a primary network. However, the interference created by the secondary network must not exceed a certain limit, which is set by the primary network. We consider a cognitive satellite communication system in which terrestrial users transmit information to a satellite, and the satellite forwards it to a terrestrial gateway in a relay-like fashion. In this way, the terrestrial users of the secondary satellite network generate interference on the primary network through the side lobes of their transmitting antennas. Managing the transmission power of the secondary satellite users thus becomes essential to limit the interference on the primary terrestrial network while reaching the maximum throughput of the system. Moreover, taking into account the nonlinearities coming from the satellite, especially the high-power amplifier, becomes crucial when the satellite is used at its maximum capabilities, because the nonlinear effects produced by the amplifier, modeled by Volterra series, degrade the system throughput. In this context, the thesis proposes resource allocation algorithms that optimize the sum-rate of the satellite system while respecting the interference limits set by the primary terrestrial network and taking into account the nonlinear effects generated by the satellite components.
  • Scaling A Blockchain System For 5G-based Vehicular Networks Using Heuristic Sharding
    • Gu Pengwenlong
    • Zhong Dingjie
    • Hua Cunqing
    • Nait-Abdesselam Farid
    • Serhrouchni Ahmed
    • Khatoun Rida
    , 2021, 76 (9-10), pp.1-6. (10.1109/GLOBECOM46510.2021.9685234)
    DOI : 10.1109/GLOBECOM46510.2021.9685234
  • Contributions to non-convex stochastic optimization and reinforcement learning
    • Barakat Anas
    , 2021. This thesis focuses on the convergence analysis of some popular stochastic approximation methods used in the machine learning community, with applications to optimization and reinforcement learning. The first part of the thesis is devoted to a popular algorithm in deep learning called ADAM, used for training neural networks. This variant of stochastic gradient descent is more generally useful for finding a local minimizer of a function. Assuming that the objective function is differentiable and non-convex, we establish the convergence of the iterates in the long run to the set of critical points under a stability condition in the constant-stepsize regime. Then, we introduce a novel decreasing-stepsize version of ADAM. Under mild assumptions, it is shown that the iterates are almost surely bounded and converge almost surely to critical points of the objective function. Finally, we analyze the fluctuations of the algorithm by means of a conditional central limit theorem. In the second part of the thesis, in the vanishing-stepsize regime, we generalize our convergence and fluctuation results to a stochastic optimization procedure unifying several variants of stochastic gradient descent, such as the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algorithm, and the widely used ADAM algorithm, among others. We conclude this second part with an avoidance-of-traps result establishing the non-convergence of the general algorithm to undesired critical points, such as local maxima or saddle points. Here, the main ingredient is a new avoidance-of-traps result for non-autonomous settings, which is of independent interest. Finally, the last part of this thesis, independent of the two previous parts, is concerned with the analysis of a stochastic approximation algorithm for reinforcement learning.
In this last part, we propose an analysis of an online target-based actor-critic algorithm with linear function approximation in the discounted-reward setting. Our algorithm uses three different timescales: one for the actor and two for the critic. Instead of using the standard single-timescale temporal difference (TD) learning algorithm as a critic, we use a two-timescale target-based version of TD learning, closely inspired by practical actor-critic algorithms implementing target networks. First, we establish asymptotic convergence results for both the critic and the actor under Markovian sampling. Then, we provide a finite-time analysis showing the impact of incorporating a target network into actor-critic methods.
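For readers unfamiliar with the update being analyzed, a single textbook ADAM step looks as follows. This is the standard bias-corrected form, not the thesis's constant- or decreasing-stepsize variants; the quadratic test function and all constants are our own choices for illustration.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One bias-corrected ADAM update (standard textbook form)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# sanity check: minimize f(theta) = (theta - 3)^2
theta, m, v = np.zeros(1), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # should approach 3
```

The stability questions the thesis studies arise precisely because, unlike plain gradient descent, the effective step here depends on the random history through m and v.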
  • Deep Reinforcement Learning for Scheduling Uplink IoT Traffic with Strict Deadlines
    • Robaglia Benoît-Marie
    • Destounis Apostolos
    • Coupechoux Marceau
    • Tsilimantos Dimitrios
    , 2021, pp.1-6. This paper considers the multiple access problem where N Internet of Things (IoT) devices share a common wireless medium towards a central Base Station (BS). We propose a Reinforcement Learning (RL) method in which the BS is the agent and the devices are part of the environment. A device is allowed to transmit only when the BS schedules it. Besides the information packets, devices send additional messages such as the delay or the number of discarded packets since their last transmission. This information is used to design the RL reward function and constitutes the next observation the agent can use to schedule the next device. Leveraging RL allows us to learn the sporadic and heterogeneous traffic patterns of the IoT devices and an optimal scheduling policy that maximizes channel throughput. We adapt the Proximal Policy Optimization (PPO) algorithm with a Recurrent Neural Network (RNN) to handle the partial observability of our problem and exploit the temporal correlations of the users' traffic. We demonstrate the performance of our model through simulations on different numbers of heterogeneous devices with periodic traffic and individual latency constraints. We show that our RL algorithm outperforms traditional scheduling schemes and distributed medium access algorithms. (10.1109/GLOBECOM46510.2021.9685561)
    DOI : 10.1109/GLOBECOM46510.2021.9685561
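A toy version of the environment step described above may help fix ideas: the BS picks one device, is rewarded for a successful transmission, and penalized when a waiting packet's deadline expires. The reward shaping, observation layout, and deadline handling here are our own simplifications, not the paper's exact design.

```python
def schedule_step(queues, deadlines, action):
    """One step of a toy uplink scheduler.  The BS schedules device `action`:
    +1 reward if it had a packet to send; then all waiting packets age by one
    slot, and expired packets are dropped with a -1 penalty.  Illustrative
    stand-in only, not the paper's reward function."""
    reward = 0
    if queues[action] > 0:
        queues[action] -= 1
        reward += 1
    for i in range(len(queues)):          # age remaining packets
        if queues[i] > 0:
            deadlines[i] -= 1
            if deadlines[i] <= 0:         # deadline missed: drop and penalize
                queues[i] = 0
                reward -= 1
    obs = list(queues) + list(deadlines)  # next observation for the agent
    return obs, reward

queues, deadlines = [1, 0, 2], [3, 3, 1]
obs, r = schedule_step(queues, deadlines, action=0)
print(obs, r)
```

Because the agent only sees the per-device reports, not the devices' internal traffic processes, the problem is partially observable, which is what motivates the recurrent (RNN) policy in the paper.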
  • Mixture weights optimisation for alpha-divergence variational inference
    • Daudel Kamélia
    • Douc Randal
    , 2021, 34, pp.4397--4408. This paper focuses on $\alpha$-divergence minimisation methods for Variational Inference. We consider the case where the posterior density is approximated by a mixture model and we investigate algorithms optimising the mixture weights of this mixture model by $\alpha$-divergence minimisation, without any information on the underlying distribution of its mixture components parameters. The Power Descent, defined for all $\alpha \neq 1$, is one such algorithm and we establish in our work the full proof of its convergence towards the optimal mixture weights when $\alpha < 1$. Since the $\alpha$-divergence recovers the widely-used exclusive Kullback-Leibler when $\alpha \to 1$, we then extend the Power Descent to the case $\alpha = 1$ and show that we obtain an Entropic Mirror Descent. This leads us to investigate the link between Power Descent and Entropic Mirror Descent: first-order approximations allow us to introduce the Rényi Descent, a novel algorithm for which we prove an $O(1/N)$ convergence rate. Lastly, we compare numerically the behavior of the unbiased Power Descent and of the biased Rényi Descent and we discuss the potential advantages of one algorithm over the other. (10.48550/arXiv.2106.05114)
    DOI : 10.48550/arXiv.2106.05114
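The Entropic Mirror Descent update mentioned above is easy to state: a multiplicative update on the simplex followed by renormalization. The sketch below shows this generic EMD iteration on mixture weights; the paper's actual Power Descent and Rényi Descent updates differ, and the per-component "gradients" here are invented for illustration.

```python
import numpy as np

def entropic_mirror_descent_step(w, grad, eta=0.1):
    """One entropic mirror descent step on the probability simplex:
    multiplicative update, then renormalization.  Generic EMD update used to
    illustrate mixture-weight iterations of the kind the paper studies."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

w = np.full(4, 0.25)                      # uniform initial mixture weights
grad = np.array([1.0, 0.5, 0.2, 0.0])    # pretend per-component loss gradients
for _ in range(100):
    w = entropic_mirror_descent_step(w, grad)
print(w)  # mass concentrates on the lowest-gradient component
```

Note that the iterates stay on the simplex by construction, which is why mirror descent (rather than projected gradient descent) is the natural tool for optimizing mixture weights.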
  • The above-threshold linewidth enhancement factor of silicon-based quantum dot lasers
    • Ding Shihao
    • Dong Bozhang
    • Huang Heming
    • Bowers John
    • Grillot Frederic
    , 2021, pp.1-2. The spectral dependence of the linewidth enhancement factor of silicon-based InAs/GaAs quantum dot lasers is investigated above threshold using a phase modulation method. Low values of the linewidth enhancement factor, measured at twice the threshold, are reported. Such results confirm the high quality of the quantum dot material. (10.1109/GFP51802.2021.9673822)
    DOI : 10.1109/GFP51802.2021.9673822
  • Expected smoothness for stochastic variance-reduced methods and sketch-and-project methods for structured linear systems
    • Gazagnadou Nidham
    , 2021. The considerable increase in the number of samples and features complicates the learning phase, which requires minimizing a loss function. Stochastic gradient descent (SGD) and variance-reduced variants (SAGA, SVRG, MISO) are widely used to solve this problem. In practice, these methods are accelerated by computing stochastic gradients on a "mini-batch": a small group of randomly drawn samples. Indeed, recent technological improvements allowing the parallelization of these computations have generalized the use of mini-batches. In this thesis, we study variants of variance-reduced stochastic gradient algorithms and seek their optimal hyperparameters: the step size and the mini-batch size. Our study yields convergence results interpolating between stochastic methods drawing a single sample per iteration and so-called "full-batch" gradient descent using all samples at each iteration. Our analysis is based on the expected smoothness constant, which captures the regularity of the random function whose gradient is computed. We also study another class of optimization algorithms: "sketch-and-project" methods. These methods can be applied whenever the learning problem boils down to solving a linear system, as is the case for ridge regression. We analyze variants of this method that use different momentum and acceleration strategies. These methods also depend on the sketching strategy used to compress the information of the system to be solved at each iteration. Finally, we show that these methods can be extended to numerical analysis problems: extending sketch-and-project methods to Alternating-Direction Implicit (ADI) methods makes it possible to apply them to large-scale problems when so-called "direct" solvers are too slow.
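The interpolation between single-sample SGD and full-batch gradient descent can be seen directly in a plain mini-batch SGD loop: setting batch_size=1 gives one extreme and batch_size=n the other. The least-squares problem, step size, and batch size below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_minibatch(X, y, stepsize=0.1, batch_size=8, epochs=200):
    """Plain mini-batch SGD on least squares.  batch_size=1 recovers
    single-sample SGD; batch_size=n recovers full-batch gradient descent,
    the two regimes the thesis's analysis interpolates between."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                  # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= stepsize * grad
    return w

X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                                    # noiseless targets
w = sgd_minibatch(X, y)
print(w)  # close to w_true
```

The expected smoothness constant studied in the thesis is exactly what determines how large a constant step size such a loop tolerates for a given batch size.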
  • Conditional Alignment and Uniformity for Contrastive Learning with Continuous Proxy Labels
    • Dufumier Benoit
    • Gori Pietro
    • Victor Julie
    • Grigis Antoine
    • Duchesnay Edouard
    , 2021. Contrastive Learning has shown impressive results on natural and medical images, without requiring annotated data. However, a particularity of medical images is the availability of meta-data (such as age or sex) that can be exploited for learning representations. Here, we show that the recently proposed contrastive y-Aware InfoNCE loss, which integrates multi-dimensional meta-data, asymptotically optimizes two properties: conditional alignment and global uniformity. Similarly to [33], conditional alignment means that similar samples should have similar features, but conditionally on the meta-data. In contrast, global uniformity means that the (normalized) features should be uniformly distributed on the unit hypersphere, independently of the meta-data. Here, we propose conditional uniformity, which relies on the meta-data and repels only samples with dissimilar meta-data. We show that directly optimizing both conditional alignment and conditional uniformity improves the representations, in terms of linear evaluation, on both CIFAR-100 and a brain MRI dataset.
  • A Framework to Learn with Interpretation
    • Parekh Jayneel
    • Mozharovskyi Pavlo
    • d'Alché-Buc Florence
    , 2021. To tackle interpretability in deep learning, we present a novel framework to jointly learn a predictive model and its associated interpretation model. The interpreter provides both local and global interpretability about the predictive model in terms of human-understandable high-level attribute functions, with minimal loss of accuracy. This is achieved by a dedicated architecture and well-chosen regularization penalties. We seek a small dictionary of high-level attribute functions that take as inputs the outputs of selected hidden layers and whose outputs feed a linear classifier. We impose strong conciseness on the activation of attributes with an entropy-based criterion while enforcing fidelity to both inputs and outputs of the predictive model. A detailed pipeline to visualize the learnt features is also developed. Moreover, besides generating interpretable models by design, our approach can be specialized to provide post-hoc interpretations for a pre-trained neural network. We validate our approach against several state-of-the-art methods on multiple datasets and show its efficacy on both kinds of tasks.
  • Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections
    • Nadjahi Kimia
    • Durmus Alain
    • Jacob Pierre E.
    • Badeau Roland
    • Şimşekli Umut
    , 2021. The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications as an alternative to the Wasserstein distance and offers significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by making use of the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation for SW. Our method does not require sampling a number of random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive non-asymptotic guarantees for our approach, and show that the approximation error goes to zero as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem. (10.5555/3540261.3541211)
    DOI : 10.5555/3540261.3541211
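For context, this is the standard Monte Carlo estimator whose projection-sampling cost the paper's deterministic approximation removes: average the 1-D Wasserstein distances over random directions. The test distributions and the number of projections below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sliced_wasserstein_mc(X, Y, n_proj=500):
    """Monte Carlo estimate of the Sliced-Wasserstein-2 distance between two
    empirical measures with the same number of points: average squared 1-D
    Wasserstein distances over random projection directions, then take the
    square root.  This is the usual MC baseline, not the paper's method."""
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on sphere
    px, py = X @ theta.T, Y @ theta.T                      # 1-D projections
    # W2 between 1-D empirical measures with equal weights: match sorted samples
    w2_sq = np.mean((np.sort(px, axis=0) - np.sort(py, axis=0)) ** 2)
    return np.sqrt(w2_sq)

X = rng.normal(size=(200, 5))
Y = rng.normal(size=(200, 5)) + 2.0   # copy shifted by (2, ..., 2)
sw = sliced_wasserstein_mc(X, Y)
print(sw)
```

For a shift of norm squared 20 in dimension 5, the expected squared projection of the shift is 20/5 = 4, so the estimate should be near 2, up to sampling noise.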
  • Mersenne et la conjecture de Collatz
    • Prado Jacques
    , 2021. By giving a different interpretation of the Collatz conjecture, also known as the Syracuse conjecture or the 3n + 1 conjecture, the problem can be approached differently. This new formulation also makes it possible to replace the notion of convergence to 1 with convergence to the divisors Q_{2m} of the Mersenne numbers, defined by M_{2m} = 2^(2m) − 1 = 3 · Q_{2m}. The aim of this submission is to present the conjecture in a form that does not appear to have been explored to date. This approach makes it possible to construct the convergent trajectories by inverting the Collatz relation, leading to a complete partition of the odd numbers, which would imply the truth of the conjecture.
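Two facts from this abstract are easy to verify computationally: M_{2m} = 2^(2m) − 1 is always divisible by 3 (since 4 ≡ 1 mod 3), so Q_{2m} = M_{2m}/3 is an integer, and odd-to-odd Collatz trajectories can be iterated directly. The sketch below checks both; it illustrates the objects involved, not the paper's reformulation itself.

```python
def collatz_odd(n):
    """Map an odd n to the next odd term of its Collatz trajectory."""
    assert n % 2 == 1
    n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

# M_{2m} = 2^(2m) - 1 is divisible by 3 because 4 = 1 (mod 3),
# so Q_{2m} = M_{2m} / 3 is an integer (the target of the reformulated
# convergence described in the abstract).
for m in range(1, 6):
    M = 2 ** (2 * m) - 1
    assert M % 3 == 0
    print(m, M // 3)   # Q_{2m}: 1, 5, 21, 85, 341

# Example odd trajectory starting from 7: 7 -> 11 -> 17 -> 13 -> 5 -> 1
n, steps = 7, []
while n != 1:
    steps.append(n)
    n = collatz_odd(n)
print(steps)  # [7, 11, 17, 13, 5]
```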
  • Survey on recent trends towards generalized differential and boomerang uniformities
    • Mesnager Sihem
    • Mandal Bimal
    • Msahli Mounira
    Cryptography and Communications - Discrete Structures, Boolean Functions and Sequences, Springer, 2021. (10.1007/s12095-021-00551-6)
    DOI : 10.1007/s12095-021-00551-6
  • Machine learning approaches for the prediction of credit risk
    • Ausset Guillaume
    , 2021. Predicting the possible occurrence of a future event, which may never happen, is a fundamental problem that naturally occurs in most scientific as well as industrial fields. This problem, commonly referred to as survival analysis after its canonical application in epidemiology, has long been one of the classical problems in statistics, whose exceptional contributions have enabled immeasurable advances in the natural sciences. More recently, through advances in machine learning, those same scientific fields and industrial applications have achieved significant leaps forward by exploiting large amounts of high-dimensional data with highly flexible estimators. In this thesis we try to reconcile both approaches and show how to best use highly flexible machine learning approaches in the survival analysis setting in a principled and motivated way. We show how the classical ERM framework can be adapted to the survival analysis setting by introducing a reweighted objective called the Kaplan-Meier ERM, and we derive non-asymptotic error bounds without parametric assumptions on the true generating process, effectively bringing the results one has come to expect in machine learning to survival analysis. We also show how to construct highly flexible estimators of the survival function, one of the key building blocks of our Kaplan-Meier ERM framework.
We formulate survival estimation as a normalizing-flow problem and introduce a novel conditional normalizing-flow estimator of the survival density, giving a tractable, easy-to-sample-from, yet highly expressive estimator of the survival density. To reduce the complexity of the two previous approaches, we introduce an estimator of the gradient of a black-box function and show how to use it for variable selection, a simple yet highly effective method for dimensionality reduction. Finally, we apply the methods developed here to a particular instance of the survival problem: predicting company defaults. We show how to use estimators of the probability of default to build optimal portfolios, as well as how to make efficient use of small data through hierarchical methods.
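The classical Kaplan-Meier estimator that underlies the reweighting in the Kaplan-Meier ERM objective is short enough to state in full. This is the textbook estimator for right-censored data (distinct event times assumed for simplicity); the example dataset is our own.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimator of the survival function S(t) from possibly
    right-censored data (events[i] = 1 if the event was observed at times[i],
    0 if the subject was censored).  Assumes distinct times for simplicity."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, at_risk = 1.0, len(times)
    curve = []
    for t, e in zip(times, events):
        if e == 1:                            # observed event: step down
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1                          # subject leaves the risk set
        curve.append((t, surv))
    return curve

# 5 subjects: event at t=2, censored at t=3, events at t=5 and t=7, censored at t=8
curve = kaplan_meier([2, 3, 5, 7, 8], [1, 0, 1, 1, 0])
print(curve)  # S steps: 0.8 after t=2, 8/15 after t=5, 4/15 after t=7
```

The censored subjects (t=3 and t=8) do not step the curve down, but they do shrink the risk set, which is exactly the information the Kaplan-Meier ERM reweighting exploits.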
  • Complete solution over F_{p^n} of the equation X^{p^k+1} + X + a = 0
    • Kim Kwang Ho
    • Choe Jong Hyok
    • Mesnager Sihem
    Finite Fields and Their Applications, Elsevier, 2021, 76, pp.101902. (10.1016/j.ffa.2021.101902)
    DOI : 10.1016/j.ffa.2021.101902
  • Provably robust blind source separation of linear-quadratic near-separable mixtures
    • Kervazo Christophe
    • Gillis Nicolas
    • Dobigeon Nicolas
    SIAM Journal on Imaging Sciences, Society for Industrial and Applied Mathematics, 2021, 14 (4), pp.1848-1889. In this work, we consider the problem of blind source separation (BSS) by departing from the usual linear model and focusing on the linear-quadratic (LQ) model. We propose two provably robust and computationally tractable algorithms to tackle this problem under separability assumptions which require the sources to appear as samples in the data set. The first algorithm generalizes the successive nonnegative projection algorithm (SNPA), designed for linear BSS, and is referred to as SNPALQ. By explicitly modeling the product terms inherent to the LQ model along the iterations of the SNPA scheme, the nonlinear contributions of the mixing are mitigated, thus improving the separation quality. SNPALQ is shown to be able to recover the ground-truth factors that generated the data, even in the presence of noise. The second algorithm is a brute-force (BF) algorithm, used as a post-processing step for SNPALQ. It makes it possible to discard the spurious (mixed) samples extracted by SNPALQ, thus broadening its applicability. The BF is in turn shown to be robust to noise under easier-to-check and milder conditions than SNPALQ. We show that SNPALQ, with and without the BF post-processing, is relevant in realistic numerical experiments. (10.1137/20M1382878)
    DOI : 10.1137/20M1382878
  • Reducing Aging Impacts in Digital Sensors via Run-Time Calibration
    • Anik Md Toufiq Hasan
    • Ebrahimabadi Mohammad
    • Danger Jean-Luc
    • Guilley Sylvain
    • Karimi Naghmeh
    Journal of Electronic Testing: Theory and Applications, Springer Verlag, 2021, 37 (5-6), pp.653-673. Hazards or intentional perturbations must be identified in safety- and security-critical applications. Digital sensors have been shown to be an appealing approach to detect such abnormalities. However, as with any sensor technology, digital sensors are prone to mis-calibration. In particular, even if the initial calibration of a digital sensor is correct, the rate of false and missed alarms may increase as the sensor ages. In this paper, we thoroughly study the impact of aging-induced false and missed alarms. Indeed, aging relates to usage time, and an a priori model (historical data for environmental variation) for predicting aging is unrealistic for digital sensors, as tracking usage time along with the related temperature and voltage variations imposes high overhead. Accordingly, we propose an alternative approach in which not one but two sensors are deployed. In practice, one sensor is used to detect environmental deviations, while the second is used as a reference. In this respect, the second sensor is only seldom operated, mostly to re-calibrate the active sensor when it has aged. From this dual input (unaged and aged sensor), corrective models are derived. We account for two methods, namely a simple but effective offset correction, and an adjustment based on machine learning. We conduct extensive characterizations (both pre-silicon simulations and post-silicon measurements on FPGA) which quantitatively confirm the applicability and high sensitivity of digital sensors. (10.1007/s10836-021-05976-8)
    DOI : 10.1007/s10836-021-05976-8
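The "simple but effective offset correction" described above can be sketched as follows: compare paired readings from the aged (active) sensor and the rarely-used reference sensor under the same conditions, estimate a constant offset, and apply it to future aged readings. The per-sample pairing, noise levels, and aging shift below are our own assumptions for illustration.

```python
import numpy as np

def offset_correction(aged_readings, reference_readings):
    """Estimate a constant offset between an aged sensor and an unaged
    reference sensor from paired readings, and return a corrector function
    for future aged readings.  Sketch of a simple offset correction; the
    paper also considers a machine-learning-based adjustment."""
    offset = np.mean(np.asarray(reference_readings) - np.asarray(aged_readings))
    return lambda reading: reading + offset

rng = np.random.default_rng(7)
true_vals = rng.uniform(10, 20, size=100)            # ground-truth quantity
reference = true_vals + rng.normal(0, 0.05, 100)     # unaged reference sensor
aged = true_vals - 1.3 + rng.normal(0, 0.05, 100)    # aging shifts readings down

correct = offset_correction(aged, reference)
residual = np.mean(np.abs(correct(aged) - reference))
print(residual)  # small after calibration
```

Because the reference sensor is operated only during these calibration episodes, it ages far more slowly than the active one, which is the premise of the dual-sensor scheme.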
  • Uncovering recent progress in nanostructured light-emitters for information and communication technologies
    • Grillot Frédéric
    • Duan Jianan
    • Dong Bozhang
    • Huang Heming
    Light: Science and Applications, Nature Publishing Group, 2021, 10 (1). Semiconductor nanostructures with low dimensionality, such as quantum dots and quantum dashes, are among the most attractive solutions for achieving high-performance photonic devices. When one or more spatial dimensions of the nanocrystal approach the de Broglie wavelength, nanoscale size effects create a spatial quantization of carriers, leading to a complete discretization of energy levels along with additional quantum phenomena such as entangled-photon generation or squeezed states of light. This article reviews our recent findings and prospects on nanostructure-based light emitters whose active region is made of quantum-dot and quantum-dash nanostructures. Many applications, ranging from silicon-based integrated technologies to quantum information systems, rely on such laser sources. Here, we link the material and fundamental properties with the device physics. For this purpose, the spectral linewidth, polarization anisotropy, and optical nonlinearities, as well as microwave, dynamic, and nonlinear properties, are closely examined. The paper focuses on photonic devices grown on native substrates (InP and GaAs) as well as those heterogeneously and epitaxially grown on silicon substrates. This review gathers the most exciting recent innovations around light emitters using nanostructures as gain media and highlights the importance of nanotechnology for industry and society, especially in shaping the future information and communication society. (10.1038/s41377-021-00598-3)
    DOI : 10.1038/s41377-021-00598-3
  • Private communication with quantum cascade laser photonic chaos
    • Spitz Olivier
    • Herdt Andreas
    • Wu Jiagui
    • Maisons Grégory
    • Carras Mathieu
    • Wong Chee-Wei
    • Elsässer Wolfgang
    • Grillot Frédéric
    Nature Communications, Nature Publishing Group, 2021, 12 (1). Mid-infrared free-space optical communication has a large potential for high-speed communication due to its immunity to electromagnetic interference. However, data security against eavesdroppers is among the obstacles to private free-space communication. Here, we show that two uni-directionally coupled quantum cascade lasers operating in the chaotic regime, together with the synchronization between them, allow for the extraction of information that has been camouflaged in the chaotic emission. This building block represents a key tool to implement a high degree of privacy directly on the physical layer. We realize a proof-of-concept communication at a wavelength of 5.7 μm with message encryption at a bit rate of 0.5 Mbit/s. Our demonstration of private free-space communication between a transmitter and receiver opens strategies for physical encryption and decryption of a digital message. (10.1038/s41467-021-23527-9)
    DOI : 10.1038/s41467-021-23527-9
  • Compressed Σ-Protocols for Bilinear Group Arithmetic Circuits and Application to Logarithmic Transparent Threshold Signatures
    • Attema Thomas
    • Cramer Ronald
    • Rambaud Matthieu
    , 2021, 13093, pp.526-556. (10.1007/978-3-030-92068-5_18)
    DOI : 10.1007/978-3-030-92068-5_18
  • Moving Target Defense Strategy in Critical Embedded Systems: A Game-theoretic Approach
    • Ayrault Maxime
    • Borde Etienne
    • Kühne Ulrich
    • Leneutre Jean
    , 2021, pp.27-36. (10.1109/PRDC53464.2021.00014)
    DOI : 10.1109/PRDC53464.2021.00014
  • Apprentissage des systèmes de communications sans fil à base de plateformes SDR
    • Jabbour Chadi
    • Pham Dang-Kièn Germain
    , 2021.
  • Versatile Video Coding for 3.0 Next Generation Digital TV in Brazil
    • Biatek Thibaud
    • Abdoli Mohsen
    • Raulet Mickael
    • Wieckowski Adam
    • Lehman Christian
    • Bross Benjamin
    • de Lagrange Philippe
    • François Edouard
    • Schaefer Ralf
    • Le Feuvre Jean
    SET INTERNATIONAL JOURNAL OF BROADCAST ENGINEERING, 2021, 2021 (1), pp.9-17. Over the past few decades, the video broadcast ecosystem has gone through major changes. Originally transmitted using analog signals, it has increasingly transitioned toward digital, leveraging compression technologies and transport protocols principally developed by MPEG. Along the way, the introduction of new video formats was accompanied by the standardization of new compression technologies for better bandwidth preservation: notably SD with MPEG-2, HD with H.264, and 4K/UHD with HEVC. In Brazil, the successive generations of digital broadcasting systems were developed by the SBTVD Forum, from TV-1.0 to today's TV-3.0. The ambition of TV-3.0 is significantly higher than that of previous generations, as it targets the delivery of IP-based signals for applications such as 8K, HDR, and virtual and augmented reality. To deliver such services, compressed video signals must fit into a limited bandwidth, requiring even more advanced compression technologies. The Versatile Video Coding standard (H.266/VVC) was finalized by the JVET committee in 2021 and is a relevant candidate to address the TV-3.0 requirements. VVC is versatile by nature thanks to its dedicated tools for efficient compression of various formats, from 8K to 360°, and provides around 50% bitrate savings compared to its predecessor HEVC. This paper presents the VVC-based compression system that was proposed in response to the SBTVD call for proposals for TV-3.0. A technical description of VVC and an evaluation of its coding performance are provided. In addition, an end-to-end live transmission chain is demonstrated, supporting 4K real-time encoding and decoding with low glass-to-glass latency. (10.18580/setijbe.2021.1)
    DOI : 10.18580/setijbe.2021.1
  • An Anonymous Trace-and-Revoke Broadcast Encryption Scheme
    • Blazy Olivier
    • Mukherjee Sayantan
    • Nguyen Huyen
    • Hieu Phan Duong
    • Stehlé Damien
    , 2021, 13083, pp.214-233. Broadcast Encryption is a fundamental cryptographic primitive that gives the ability to send a secure message to any chosen target set among registered users. In this work, we investigate broadcast encryption with anonymous revocation, in which ciphertexts do not reveal any information about which users have been revoked. We provide a scheme whose ciphertext size grows linearly with the number of revoked users. Moreover, our system also achieves traceability in the black-box confirmation model. Technically, our contribution is threefold. First, we develop a generic transformation from linear functional encryption to trace-and-revoke systems. It is inspired by the transformation of Agrawal et al. (CCS'17), with the novelty of achieving anonymity. Our second contribution is to instantiate the underlying linear functional encryption from standard assumptions. We propose a DDH-based construction which no longer requires discrete logarithm evaluation during decryption and thus significantly improves performance compared to the DDH-based construction of Agrawal et al. In the LWE-based setting, we tried to instantiate our construction with the scheme of Wang et al. (PKC'19) but found an attack on that scheme. Our third contribution is to extend the 1-bit encryption from the generic transformation to n-bit encryption. By introducing matrix multiplication functional encryption, which essentially performs a fixed number of parallel calls to functional encryptions with the same randomness, we can prove the security of the final scheme with a tight reduction that does not depend on n, in contrast to employing the hybrid argument. (10.1007/978-3-030-90567-5_11)
    DOI : 10.1007/978-3-030-90567-5_11
  • Le bon virage
    • Zayana Karim
    • Lusseau Cédric
    • Pantaloni Vincent
    • Rabiet Victor
    CultureMath, ENS, 2021.