• [digest] 2024 Week 3 (2/3)

    From IACR ePrint Archive@21:1/5 to All on Mon Jan 22 03:18:58 2024
    [continued from previous message]

    Money laundering is a serious financial crime in which criminals aim to conceal the illegal source of their money via a series of transactions. Although banks have an obligation to monitor transactions, it is difficult to track these illicit money flows
    since they typically span multiple banks, which cannot share this information due to privacy constraints. We present secure risk propagation, a novel and efficient algorithm for money laundering detection across banks that does not violate these privacy
    constraints. In this algorithm, each account is assigned a risk score, which is then propagated through the transaction network. In this article we present two results. First, using data from a large Dutch bank, we show that it is possible to detect
    unusual activity using this model, with the cash ratio as the risk score. At a recall of 20%, propagating the risk scores improves the precision from 15% to 40%, significantly reducing the number of false positives. Second, we present a
    privacy-preserving solution for securely performing risk propagation over a joint, inter-bank transaction network. To achieve this, we use Secure Multi-Party Computation (MPC) techniques, which are particularly well-suited for the risk propagation
    algorithm due to its structural simplicity. We also show that the running time of this secure variant scales linearly in the number of accounts and transactions. For 200,000 transactions, two iterations of the secure algorithm between three virtual
    parties run within three hours on a consumer-grade server.
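
    For intuition, a minimal plaintext sketch of the propagation step is given below. The update rule (a transaction-amount-weighted average of incoming risk, mixed with the account's own initial score via a parameter `alpha`), the function name
    `propagate_risk`, and the toy data are illustrative assumptions; the abstract does not specify the exact formula, and the paper's second contribution runs this kind of computation under MPC rather than in the clear.

    ```python
    # Minimal sketch of plaintext risk propagation over a transaction graph.
    # The update rule (a transaction-amount-weighted average of incoming risk,
    # mixed with the account's own initial score) is an illustrative assumption.
    from collections import defaultdict

    def propagate_risk(initial_risk, transactions, iterations=2, alpha=0.5):
        """initial_risk: {account: score in [0, 1]} (e.g., cash ratio).
        transactions: list of (sender, receiver, amount); senders must have scores."""
        incoming = defaultdict(list)
        for sender, receiver, amount in transactions:
            incoming[receiver].append((sender, amount))

        risk = dict(initial_risk)
        for _ in range(iterations):
            updated = {}
            for account, base in initial_risk.items():
                inflow = incoming.get(account, [])
                total = sum(amount for _, amount in inflow)
                if total == 0:
                    updated[account] = base
                    continue
                neighbour_risk = sum(risk[s] * amount for s, amount in inflow) / total
                # keep part of the account's own score, mix in propagated risk
                updated[account] = alpha * base + (1 - alpha) * neighbour_risk
            risk = updated
        return risk

    # toy example: account C only receives money from high-risk accounts A and B
    scores = {"A": 0.9, "B": 0.8, "C": 0.1}
    txs = [("A", "C", 1000.0), ("B", "C", 500.0)]
    print(propagate_risk(scores, txs))   # C's risk rises above its initial 0.1
    ```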



    ## 2024/66

    * Title: Exploiting the Central Reduction in Lattice-Based Cryptography
    * Authors: Tolun Tosun, Amir Moradi, Erkay Savas
    * [Permalink](https://eprint.iacr.org/2024/066)
    * [Download](https://eprint.iacr.org/2024/066.pdf)

    ### Abstract

    This paper presents a novel and efficient way of exploiting side-channel leakage of masked implementations of lattice-based cryptography (LBC). The presented attack specifically targets the central reduction technique, which is widely adopted in
    efficient implementations of LBC. We show that the central reduction leads to a vulnerability by creating a strong dependency between the power consumption and the sign of sensitive intermediate variables. We exploit this dependency by introducing a
    novel hypothetical power model, the range power model, which can be employed in higher-order multi-query side-channel analysis attacks. We particularly show that our approach is valid for the prime moduli employed by Kyber and Dilithium, the
    lattice-based post-quantum algorithms selected by NIST, while it generalizes to other primes used in LBC as well. We practically evaluate our introduced approach by performing second-order non-profiled attacks against a masked implementation of Kyber
    on an Arm Cortex-M4 microprocessor. In our experiments we recovered the full secret key of the aforementioned implementation with only 2100 electromagnetic (EM) traces without profiling, achieving a more than 14-fold reduction in the number of traces
    compared to classical attacks.
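
    To make the sign dependency concrete, the sketch below centrally reduces a few values modulo Kyber's prime and prints the Hamming weight of their two's-complement encoding, a common proxy for power consumption: small negative residues have most word
    bits set, small positive ones very few. The 16-bit word width, helper names, and example values are assumptions for illustration; the range power model itself is developed formally in the paper.

    ```python
    # Sketch of why centrally reduced values leak their sign: in two's-complement
    # machine words, small negative residues have most bits set, so their Hamming
    # weight (a common proxy for power consumption) is far higher than that of
    # small positive residues. Word width and sample values are illustrative.
    Q = 3329          # Kyber's prime modulus (Dilithium uses q = 8380417)
    WORD_BITS = 16    # assumed width of the word holding the reduced coefficient

    def central_reduce(x):
        """Reduce x mod Q into the centered range [-(Q-1)/2, (Q-1)/2]."""
        r = x % Q
        return r - Q if r > Q // 2 else r

    def hamming_weight(value, bits=WORD_BITS):
        return bin(value & ((1 << bits) - 1)).count("1")   # two's-complement view

    for x in (5, 1700, 3000):
        c = central_reduce(x)
        print(f"x={x:5d}  central={c:6d}  HW={hamming_weight(c):2d}")
    # Small positive residues (e.g. 5) keep a low Hamming weight, while negative
    # ones (e.g. -329 -> 0xFEB7) have most bits set: the sign shows up in the
    # power model.
    ```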



    ## 2024/67

    * Title: A Refined Hardness Estimation of LWE in Two-step Mode
    * Authors: Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, Baocang Wang
    * [Permalink](https://eprint.iacr.org/2024/067)
    * [Download](https://eprint.iacr.org/2024/067.pdf)

    ### Abstract

    Recently, researchers have proposed many LWE estimators, such as the lattice-estimator (Albrecht et al., Asiacrypt 2017) and the leaky-LWE-Estimator (Dachman-Soled et al., Crypto 2020); the latter has already been used to estimate the security level of
    Kyber and Dilithium using only BKZ. However, we prove in this paper that solving LWE by combining a lattice reduction step (by LLL or BKZ) and a
    target vector searching step (by enumeration or sieving), which we call a Two-step mode, is more efficient than using only BKZ.
    Moreover, we give a refined LWE estimator in Two-step mode by analyzing the relationship between the probability distribution of the target vector and the solving success rate in a Two-step mode LWE solving algorithm. The latest Two-step
    estimator for LWE, the “primal-bdd” mode in the lattice-estimator, does not take some up-to-date results into account and lacks a thorough theoretical analysis. Under the same gate-count model, our estimation for the NIST PQC standards drops by
    2.1∼3.4 bits (2.2∼4.6 bits when considering a more flexible blocksize and jump strategy) compared with the leaky-LWE-Estimator.
    Furthermore, we also give a conservative estimation for LWE from the Two-step solving algorithm. Compared with the Core-SVP model, which is used in previous conservative estimations, our estimation relies on weaker assumptions and outputs higher
    evaluation results than the Core-SVP model. For the NIST PQC standards, our conservative estimation is 4.17∼8.11 bits higher than the Core-SVP estimation. Hence our estimator gives closer estimates of both the upper bound and the lower bound of LWE
    hardness.



    ## 2024/68

    * Title: Laconic Function Evaluation, Functional Encryption and Obfuscation for RAMs with Sublinear Computation
    * Authors: Fangqi Dong, Zihan Hao, Ethan Mook, Daniel Wichs
    * [Permalink](https://eprint.iacr.org/2024/068)
    * [Download](https://eprint.iacr.org/2024/068.pdf)

    ### Abstract

    Laconic function evaluation (LFE) is a "flipped" version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently
    encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for circuits under LWE, and for Turing Machines
    (TMs) from indistinguishability obfuscation (iO). In this work we introduce LFE for Random-Access Machines (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with
    respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM
    run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a weakly efficient variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a fully efficient variant, where
    the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from RingLWE, and the latter from an additional use of iO.
    We then show how to leverage fully efficient RAM-LFE to also get (many-key) functional encryption for RAMs (RAM-FE) where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as iO for RAMs where the
    obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.



    ## 2024/69

    * Title: SDitH in Hardware
    * Authors: Sanjay Deshpande, James Howe, Jakub Szefer, Dongze Yue
    * [Permalink](https://eprint.iacr.org/2024/069)
    * [Download](https://eprint.iacr.org/2024/069.pdf)

    ### Abstract

    This work presents the first hardware realisation of the Syndrome-Decoding-in-the-Head (SDitH) signature scheme, which is a candidate in the NIST PQC process for standardising post-quantum secure digital signature schemes. SDitH's hardness is based on
    conservative code-based assumptions, and it uses the Multi-Party-Computation-in-the-Head (MPCitH) construction.
    This is the first hardware design of a code-based signature scheme based on traditional decoding problems and only the second for MPCitH constructions, after Picnic. This work presents optimised designs to achieve the best area efficiency, which we
    evaluate using the Time-Area Product (TAP) metric. This work also proposes a novel hardware architecture by dividing the signature generation algorithm into two phases, namely offline and online phases for optimising the overall clock cycle count. The
    hardware designs for key generation, signature generation, and signature verification are parameterised for all SDitH parameters, including the NIST security levels and both syndrome decoding base fields (GF256 and GF251), and thus conform to the SDitH
    specifications. The hardware design further supports secret share splitting and the hypercube optimisation, which can be applied to this and multiple other NIST PQC candidates.
    This work results in a hardware design with a drastic reduction in clock cycles compared to the optimised AVX2 software implementation, in the range of 2-4x for most operations. Our key generation outperforms software drastically, giving an
    11-17x reduction in runtime despite the software's significantly faster clock speed. On Artix 7 FPGAs we can perform key generation in 55.1 Kcycles, signature generation in 6.7 Mcycles, and signature verification in 8.6 Mcycles for NIST L1 parameters;
    these figures increase for GF251 and for the L3 and L5 parameters.



    ## 2024/70

    * Title: Hints from Hertz: Dynamic Frequency Scaling Side-Channel Analysis of Number Theoretic Transform in Lattice-Based KEMs
    * Authors: Tianrun Yu, Chi Cheng, Zilong Yang, Yingchen Wang, Yanbin Pan, Jian Weng
    * [Permalink](https://eprint.iacr.org/2024/070)
    * [Download](https://eprint.iacr.org/2024/070.pdf)

    ### Abstract

    Number Theoretic Transform (NTT) has been widely used in accelerating computations in lattice-based cryptography. However, attackers can potentially launch power analysis targeting NTT because it is usually the most time-consuming part of the
    implementation. This extended time frame provides a natural window of opportunity for attackers. In this paper, we investigate the first CPU frequency leakage (Hertzbleed-like) attacks against NTT in lattice-based KEMs. Our key observation is that
    different inputs to the NTT incur different Hamming weights in its output and intermediate layers. By measuring the CPU frequency during the execution of the NTT, we propose a simple yet effective attack idea for finding inputs that cause the NTT to
    process data with significantly low Hamming weight. We further apply our attack idea to real-world applications that are built upon the NTT: CPA-secure Kyber without the Compression and Decompression functions, and CCA-secure NTTRU. This allows us to
    extract information, or frequency hints, about the secret key. Integrating these hints into the LWE-estimator framework, we estimate a minimum of $35\%$ security loss caused by the leakage. The frequency and timing measurements on the Reference and
    AVX2 implementations of the NTT in both Kyber and NTTRU align well with our theoretical analysis, confirming the existence of frequency side-channel leakage in the NTT. It is important to emphasize that our observation is not limited to a specific
    implementation but rather stems from the NTT algorithm itself. Therefore, our results call for more attention to the analysis of power leakage against the NTT in lattice-based cryptography.
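
    The following toy sketch illustrates the key observation: the Hamming weight of the NTT output depends strongly on the input, so an attacker can look for inputs whose transforms have unusually low Hamming weight. The parameters (q = 17, n = 8) and the
    naive transform are illustrative assumptions chosen for readability, not the Kyber or NTTRU parameters attacked in the paper.

    ```python
    # Toy illustration: the Hamming weight of NTT outputs varies with the input,
    # so inputs whose transforms have unusually low Hamming weight can shift CPU
    # power/frequency behaviour. Parameters are toy values, not Kyber's/NTTRU's.
    Q, N, ROOT = 17, 8, 2        # 2 is a primitive 8-th root of unity mod 17

    def ntt(poly):
        """Naive O(n^2) number theoretic transform of poly over Z_Q."""
        return [sum(c * pow(ROOT, i * k, Q) for i, c in enumerate(poly)) % Q
                for k in range(N)]

    def hamming_weight(values, bits=16):
        return sum(bin(v & ((1 << bits) - 1)).count("1") for v in values)

    dense_input  = [3, 14, 7, 9, 11, 2, 16, 5]
    sparse_input = [1, 0, 0, 0, 0, 0, 0, 0]      # its transform is all-ones: low HW
    for name, poly in (("dense", dense_input), ("sparse", sparse_input)):
        out = ntt(poly)
        print(f"{name:6s} input -> NTT {out}  total HW={hamming_weight(out)}")
    ```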



    ## 2024/71

    * Title: Too Hot To Be True: Temperature Calibration for Higher Confidence in NN-assisted Side-channel Analysis
    * Authors: Seyedmohammad Nouraniboosjin, Fatemeh Ganji
    * [Permalink](https://eprint.iacr.org/2024/071)
    * [Download](https://eprint.iacr.org/2024/071.pdf)

    ### Abstract

    The past years have witnessed a considerable increase in research efforts put into neural network-assisted profiled side-channel analysis (SCA). Studies have also identified challenges, e.g., closing the gap between metrics for machine learning (ML)
    classification and side-channel attack evaluation. In fact, in the context of NN-assisted SCA, the NN’s output distribution forms the basis for successful key recovery. In this respect, related work has covered various aspects of integrating neural
    networks (NNs) into SCA, including applying a diverse set of NN models, model selection and training, hyperparameter tuning, etc. Nevertheless, one well-known fact has been overlooked in the SCA-related literature, namely NNs’ tendency to become
    “over-confident,” i.e., to assign an overly high probability to the predicted class (the secret key, in the sense of SCA). Temperature scaling is among the powerful and effective techniques that have been devised as a remedy
    for this. From the principles of deep learning, it is known that temperature scaling does not affect the NN’s accuracy; however, its impact on metrics for secret key recovery, mainly guessing entropy, is worth investigating. This paper
    reintroduces temperature scaling into SCA and demonstrates that key recovery can become more effective through it. Interestingly, temperature scaling can be easily integrated into SCA, and no re-tuning of the network is needed. In doing so, we show
    that temperature can be seen as a metric for assessing the NN’s performance before launching the attack. In this regard, the impact of hyperparameter tuning, network variance, and capacity has been studied. This leads to recommendations on how
    network miscalibration and overconfidence can be prevented.
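
    As a reminder of the mechanism, the sketch below applies temperature scaling to a toy logit vector: dividing the logits by a temperature T > 1 leaves the argmax, and hence classification accuracy, unchanged, but softens the output distribution that
    SCA metrics such as guessing entropy are computed from. The logit values and temperatures are arbitrary illustrative choices, not values from the paper.

    ```python
    # Minimal sketch of temperature scaling: dividing the logits by a temperature
    # T > 1 leaves the argmax (hence accuracy) unchanged but lowers the reported
    # confidence, which matters when the NN's output distribution feeds an SCA
    # key-ranking / guessing-entropy computation.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    logits = np.array([4.0, 1.0, 0.5, -2.0])      # toy NN output for 4 classes
    for T in (1.0, 2.5):
        probs = softmax(logits / T)
        print(f"T={T}: argmax={probs.argmax()}, confidence={probs.max():.3f}")
    # Same predicted class for both temperatures, but the T=2.5 probabilities are
    # softer, i.e. better calibrated if the network was over-confident.
    ```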



    ## 2024/72

    * Title: 1/0 Shades of UC: Photonic Side-Channel Analysis of Universal Circuits
    * Authors: Dev M. Mehta, Mohammad Hashemi, Domenic Forte, Shahin Tajik, Fatemeh Ganji
    * [Permalink](https://eprint.iacr.org/2024/072)
    * [Download](https://eprint.iacr.org/2024/072.pdf)

    ### Abstract

    A universal circuit (UC) can be thought of as a programmable circuit that
    can simulate any circuit up to a certain size by specifying its secret configuration bits. UCs have been incorporated into various applications, such as private function evaluation (PFE). Recently, studies have attempted to formalize the concept of
    semiconductor intellectual property (IP) protection in the context of UCs. This is despite the observations made in theory and practice that, in reality, the adversary may obtain additional information about the secret when executing cryptographic
    protocols. This paper aims to answer the question of whether UCs leak information unintentionally, which can be leveraged by the adversary to disclose the configuration bits. In this regard, we propose the first photon emission analysis against UCs
    relying on computer vision-based approaches. We demonstrate that the adversary can use a cost-effective setup to capture images that are then processed by off-the-shelf algorithms to extract the configuration bits. We examine the efficacy of our
    method in two scenarios: (1) the design is small enough to be captured in a single image during the attack phase, and (2) multiple images must be captured to launch the attack, using a divide-and-conquer strategy. To evaluate the effectiveness of our attack,
    we use metrics commonly applied in side-channel analysis, namely rank and success rate. By doing so, we show that our profiled photon emission analysis achieves a success rate of 1 by employing a few templates (concretely, only 18 images were used as
    templates).



    ## 2024/73

    * Title: A Comparative Examination of Network and Contract-Based Blockchain Storage Solutions for Decentralized Applications
    * Authors: Lipeng He
    * [Permalink](https://eprint.iacr.org/2024/073)
    * [Download](https://eprint.iacr.org/2024/073.pdf)

    ### Abstract

    Decentralized applications (DApps), which are innovative blockchain-powered software systems designed to serve as the fundamental building blocks for the next generation of Internet services, have witnessed exponential growth in recent years. This paper
    thoroughly compares and analyzes two blockchain-based decentralized storage networks (DSNs), which are crucial foundations for DApp and blockchain ecosystems. The study examines their respective mechanisms for data persistence, strategies for enforcing
    data retention, and token economics. In addition to delving into technical details, the suitability of each storage solution for decentralized application development is assessed, taking into consideration network performance, storage costs, and existing
    use cases. By evaluating these factors, the paper aims to provide insights into the effectiveness of these technologies in supporting the desirable properties of truly decentralized blockchain applications. In conclusion, the findings of this research
    are discussed and synthesized, offering valuable perspectives on the capabilities of these technologies. The comparison sheds light on their potential to facilitate the development of DApps and provides an understanding of ongoing trends in blockchain
    development.



    ## 2024/74

    * Title: PRIDA: PRIvacy-preserving Data Aggregation with multiple data customers
    * Authors: Beyza Bozdemir, Betül Aşkın Özdemir, Melek Önen
    * [Permalink](https://eprint.iacr.org/2024/074)
    * [Download](https://eprint.iacr.org/2024/074.pdf)

    ### Abstract

    We propose a solution for user-privacy-oriented, privacy-preserving data aggregation with multiple data customers. Most existing state-of-the-art approaches place too much importance on performance efficiency and seem to ignore privacy properties
    beyond input privacy. Most data aggregation solutions do not generally discuss users’ fundamental rights, namely control over their own data and anonymity when they search for something in a browser or volunteer to participate in a survey.
    Still, they are eager to secure data customers’ rights (which should come second) and focus on efficiency-oriented data aggregation enabling input privacy only. We aim to give priority to user privacy, and we have designed a
    solution for data aggregation that keeps efficiency in balance. We show that PRIDA provides a good level of computational and communication complexity and performs even better in timing evaluations than recently published studies (i.e.,
    Bonawitz et al. (CCS’17), Corrigan-Gibbs et al. (NSDI’17), Bell et al. (CCS’20), Addanki et al. (SCN’22)). We employ threshold homomorphic encryption and secure two-party computation to ensure the privacy properties. We balance the trade-off between
    a design that is proper for users and the desired privacy and efficiency.
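
    As background, the sketch below shows additively homomorphic aggregation with a toy single-key (non-threshold) Paillier scheme: ciphertexts are multiplied, plaintexts add up, and only the aggregate is decrypted. This is a generic illustration of
    homomorphic aggregation, not PRIDA's construction, which combines threshold homomorphic encryption with secure two-party computation; the tiny primes and values are for readability only.

    ```python
    # Toy additively homomorphic (Paillier) aggregation sketch. Illustrative only:
    # tiny primes, no threshold decryption, no 2PC -- PRIDA itself combines
    # *threshold* HE with secure two-party computation.
    import math, random

    p, q = 1789, 2003                      # toy primes; real keys use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    g = n + 1

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)     # precomputed decryption constant

    def enc(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # Each user encrypts a private value; the aggregator multiplies ciphertexts,
    # which adds the plaintexts, and only the final sum is ever decrypted.
    user_values = [12, 7, 30, 51]
    aggregate_ct = 1
    for v in user_values:
        aggregate_ct = (aggregate_ct * enc(v)) % n2

    assert dec(aggregate_ct) == sum(user_values) % n
    print("aggregated sum:", dec(aggregate_ct))
    ```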



    ## 2024/75

    * Title: Succinct Verification of Compressed Sigma Protocols in the Updatable SRS setting
    * Authors: Moumita Dutta, Chaya Ganesh, Neha Jawalkar
    * [Permalink](https://eprint.iacr.org/2024/075)
    * [Download](https://eprint.iacr.org/2024/075.pdf)

    ### Abstract

    We propose protocols in the Compressed Sigma Protocol framework that achieve a succinct verifier. Towards this, we construct a new inner product argument and cast it in the Compressed Sigma Protocol (CSP) framework as a protocol for opening a committed
    linear form, achieving logarithmic verification.

    We then use our succinct-verifier CSP to construct a zero-knowledge argument for circuit satisfiability (under the discrete logarithm assumption in bilinear groups) in the updatable Structured Reference String (SRS) setting that achieves $O(\log n)$
    proof size and $O(\log n)$ verification complexity. Our circuit zero-knowledge protocol has concretely better proof/prover/verifier complexity compared to the state-of-the-art protocol in the updatable setting under the same assumption. Our
    technique for achieving verifier succinctness in the compression framework is of independent interest.

    We then show a commitment scheme for committing to group elements using a structured commitment key. We construct protocols to open a committed homomorphism on a committed vector with verifier succinctness in the designated-verifier setting. This has
    applications in making the verifier succinct in compressed sigma protocols for bilinear group arithmetic circuits.



    ## 2024/76

    * Title: A provably masked implementation of BIKE Key Encapsulation Mechanism
    * Authors: Loïc Demange, Mélissa Rossi
    * [Permalink](https://eprint.iacr.org/2024/076)
    * [Download](https://eprint.iacr.org/2024/076.pdf)

    ### Abstract

    BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of the NIST’s standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public
    key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community and several
    works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based
    algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of
    performance.
    In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit
    of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003)
    model. More practically, we also provide an open-source masked C implementation of the key-generation, encapsulation, and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than that of existing masked
    lattice-based algorithms, the scaling in the masking order is still encouraging, and no Boolean-to-Arithmetic conversion has been used.
    We hope that this work can be a starting point for future analysis and optimization.
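
    For readers unfamiliar with masking, the sketch below shows the basic idea of high-order Boolean masking: a secret is split into d+1 random shares whose XOR equals the secret, and linear operations such as XOR act share-wise. This is a generic
    illustration, not one of the paper's BIKE-specific gadgets; the word size, helper names, and example values are assumptions.

    ```python
    # Minimal sketch of high-order Boolean masking, the generic countermeasure
    # discussed above: a secret is split into d+1 random shares whose XOR equals
    # the secret, and linear operations (like XOR) act share-wise. The secret is
    # only recombined here to check correctness, never in a protected implementation.
    import secrets

    def xor_all(shares):
        acc = 0
        for s in shares:
            acc ^= s
        return acc

    def share(secret, order, bits=64):
        """Split `secret` into order+1 Boolean shares."""
        shares = [secrets.randbits(bits) for _ in range(order)]
        shares.append(secret ^ xor_all(shares))
        return shares

    def masked_xor(shares_a, shares_b):
        """XOR is linear over GF(2), so it is applied share by share."""
        return [a ^ b for a, b in zip(shares_a, shares_b)]

    d = 3                                   # masking order
    x, y = 0xDEADBEEF, 0x12345678
    xs, ys = share(x, d), share(y, d)
    zs = masked_xor(xs, ys)
    assert xor_all(zs) == x ^ y             # recombined only to verify the gadget
    print([hex(s) for s in zs])
    ```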



    ## 2024/77

    * Title: OBSCURE: Versatile Software Obfuscation from a Lightweight Secure Element
    * Authors: Darius Mercadier, Viet Sang Nguyen, Matthieu Rivain, Aleksei Udovenko
    * [Permalink](https://eprint.iacr.org/2024/077)
    * [Download](https://eprint.iacr.org/2024/077.pdf)

    ### Abstract

    Software obfuscation is a powerful tool to protect the intellectual property or secret keys inside programs. Strong software obfuscation is crucial in the context of untrusted execution environments (e.g., subject to malware infection) or to face
    potentially malicious users trying to reverse-engineer a sensitive program. Unfortunately, the state-of-the-art of pure software-based obfuscation (including white-box cryptography) is either insecure or infeasible in practice.

    This work introduces OBSCURE, a versatile framework for practical and cryptographically strong software obfuscation relying on a simple stateless secure element (to be embedded, for example, in a protected hardware chip or a token). Based on the
    foundational result by Goyal et al. from TCC 2010, our scheme enjoys provable security guarantees, and further focuses on practical aspects, such as efficient execution of the obfuscated programs, while maintaining simplicity of the secure element. In
    particular, we propose a new rectangular universalization technique, which is also of independent interest. We provide an implementation of OBSCURE taking as input a program source code written in a subset of the C programming language. This ensures
    usability and a broad range of applications of our framework. We benchmark the obfuscation on simple software programs as well as on cryptographic primitives, hence highlighting the possible use cases of the framework as an alternative to pure software-
    based white-box implementations.



    ## 2024/78

    * Title: Formal Security Analysis of the OpenID FAPI 2.0: Accompanying a Standardization Process
    * Authors: Pedram Hosseyni, Ralf Kuesters, Tim Würtele
    * [Permalink](https://eprint.iacr.org/2024/078)
    * [Download](https://eprint.iacr.org/2024/078.pdf)

    ### Abstract

    In recent years, the number of third-party services that can access highly-sensitive data has increased steadily, e.g., in the financial sector, in eGovernment applications, or in high-assurance identity services. Protocols that enable this access must
    provide strong security guarantees.

    A prominent and widely employed protocol for this purpose is the OpenID Foundation's FAPI protocol. The FAPI protocol is already in widespread use, e.g., as part of the UK's Open Banking standards and Brazil's Open Banking Initiative as well as outside
    of the financial sector, for instance, as part of the Australian government's Consumer Data Rights standards.

    Based on lessons learned from FAPI 1.0, the OpenID Foundation has developed a completely new protocol, called FAPI 2.0. The specifications of FAPI 2.0 include a concrete set of security goals and attacker models under which the protocol aims to be secure.

    Following an invitation from the OpenID Foundation's FAPI Working Group (FAPI WG), we have accompanied the standardization process of the FAPI 2.0 protocol by an in-depth formal security analysis. In this paper, we report on our analysis and findings.

    Our analysis incorporates the first formal model of the FAPI 2.0 protocol and is based on a detailed model of the web infrastructure, the Web Infrastructure Model, originally proposed by Fett, Küsters, and Schmitz. Our analysis has uncovered several
    types of attacks on the protocol, violating the aforementioned security goals set by the FAPI WG. We subsequently have worked with the FAPI WG to fix the protocol, resulting in several changes to the specifications. After adapting our model to the
    changed specifications, we have proved the security properties to hold under the strong attacker model defined by the FAPI WG.



    ## 2024/79

    * Title: On Modular Algorithms and Butterfly Operations in Number Theoretic Transform
    * Authors: Yanze Yang, Yiran Jia, Guangwu Xu
    * [Permalink](https://eprint.iacr.org/2024/079)
    * [Download](https://eprint.iacr.org/2024/079.pdf)

    ### Abstract

    The number theoretic transform (NTT) has been a very useful tool in computations for number theory, algebra, and cryptography. Its performance affects that of some post-quantum cryptosystems. In this paper, we discuss the butterfly operation of the
    NTT. This basic module of the NTT requires heavy modular arithmetic, and Montgomery reduction is commonly used in this setting. Recently, several variants of Montgomery reduction have been proposed for the purpose of speeding up the NTT. We observe
    that the Chinese remainder theorem (CRT) can be involved in this type of algorithm in natural and transparent ways.
    In this paper, a framework that uses the CRT to model Montgomery-type algorithms is described. The derivation of these algorithms as well as their correctness are all treated in the CRT framework.
    Under our approach, some problems of a modular reduction algorithm (published in IACR Transactions on Cryptographic Hardware and Embedded Systems, doi:10.46586/tches.v2022.i4.614-636) are identified, and a counterexample is generated to show that the
    algorithm is incorrect.
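
    For concreteness, the sketch below shows the standard (unsigned) Montgomery reduction inside a Cooley-Tukey butterfly, using Kyber's modulus q = 3329 and R = 2^16. It illustrates the kind of modular arithmetic discussed above; it is not one of the
    specific variants, nor the incorrect algorithm, analysed in the paper.

    ```python
    # Sketch of Montgomery reduction inside an NTT butterfly. Parameters follow
    # Kyber (q = 3329, R = 2^16); the butterfly form is the standard Cooley-Tukey
    # one and is illustrative only.
    Q = 3329
    R_BITS = 16
    R = 1 << R_BITS
    Q_INV = pow(-Q, -1, R)          # -q^{-1} mod R, precomputed

    def montgomery_reduce(t):
        """Given 0 <= t < R*Q, return t * R^{-1} mod Q."""
        m = (t * Q_INV) % R
        u = (t + m * Q) >> R_BITS   # exact division by R
        return u - Q if u >= Q else u

    def mont_mul(a, b_mont):
        """Multiply a by a factor already in Montgomery form (b*R mod Q)."""
        return montgomery_reduce(a * b_mont)

    def butterfly(a, b, zeta_mont):
        """Cooley-Tukey butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod Q."""
        t = mont_mul(b, zeta_mont)
        return (a + t) % Q, (a - t) % Q

    zeta = 17                        # Kyber's primitive 256-th root of unity mod 3329
    zeta_mont = (zeta * R) % Q
    a, b = 1234, 2345
    print(butterfly(a, b, zeta_mont), ((a + zeta*b) % Q, (a - zeta*b) % Q))  # equal
    ```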



    ## 2024/80

    * Title: Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
    * Authors: Samuel Jaques
    * [Permalink](https://eprint.iacr.org/2024/080)
    * [Download](https://eprint.iacr.org/2024/080.pdf)

    ### Abstract

    The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but require exponential memory. Memory access costs much
    more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing
    algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach
    $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists.

    Under some assumptions about the constant terms of memory access, we estimate increases in bit security of between $3$ and $29$ bits for different Kyber parameter sets and between $4$ and $28$ bits for Dilithium.



    ## 2024/81

    * Title: SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
    * Authors: Yulin Zhao, Hualin Zhou, Zhiguo Wan
    * [Permalink](https://eprint.iacr.org/2024/081)
    * [Download](https://eprint.iacr.org/2024/081.pdf)

    ### Abstract

    Federated Learning (FL) accomplishes collaborative model training without the need to share local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and neglect of poisoning attacks, severely
    impacting the overall performance and reliability of model training. In order to address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy preserving and secure against poisoning attacks. The two semi-
    honest servers $\mathcal{S}_0$ and $\mathcal{S}_1$ collaborate with each other: the shuffle server $\mathcal{S}_0$ is in charge of privacy-preserving random clustering, while the analysis server $\mathcal{S}_1$ is responsible for robustness detection,
    identifying and filtering malicious model updates. Our scheme employs a novel combination of homomorphic encryption and proxy re-encryption to realize secure server-to-server collaboration. We also utilize a novel sparse matrix projection compression
    technique to enhance communication efficiency and significantly reduce communication overhead. To resist poisoning attacks, we introduce a dual-filter algorithm based on a trusted root, combining dimensionality reduction and norm calculation to
    identify malicious model updates.
    Extensive experiments validate the efficiency and robustness of our scheme. SuperFL achieves impressive compression ratios, ranging from $5\text{-}40$x, under different models while maintaining model accuracy comparable to the baseline. Notably, our
    solution demonstrates a maximal model accuracy decrease of no more than $2\%$ and $6\%$ on the MNIST and CIFAR-10 datasets, respectively, under specific compression ratios and the presence of malicious clients.



    ## 2024/82

    * Title: Quantum State Obfuscation from Classical Oracles
    * Authors: James Bartusek, Zvika Brakerski, Vinod Vaikuntanathan
    * [Permalink](https://eprint.iacr.org/2024/082)
    * [Download](https://eprint.iacr.org/2024/082.pdf)

    ### Abstract

    A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one
    is given for free the ability to obfuscate any classical circuit.


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)