[continued from previous message]
We propose to move away from static keys and instead use a group key progression (GKP) scheme, a novel primitive that enables a dynamic group of users to agree on a persistent sequence of keys while keeping a compact local state. GKP ensures that group
members can only derive keys within a certain interval of the sequence, a notion that we call interval access control (IAC), and also provides post-compromise security. Our GKP construction, called Grappa, combines continuous group key agreement (CGKA, by
Alwen et al., 2020) with a new abstraction called interval scheme. The latter is a symmetric-key primitive that can derive a sequence of keys from a compact state while preserving IAC. We explore different interval scheme constructions and simulate their
storage and communication costs when used in group settings. The most efficient of them is a generalization of dual key regression (Shafagh et al., 2020), which we formalize and prove secure. Overall, our protocols offer a practical and robust solution
to protect persistent data shared by a group.
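To make interval access control concrete, here is a minimal sketch of the dual-key-regression idea that the most efficient interval scheme generalizes (illustrative only; the epoch count, chain labels, and helper names are invented, not taken from Grappa or Shafagh et al.): two hash chains run in opposite directions, and a grant consisting of a forward token for epoch $a$ and a backward token for epoch $b$ lets a user derive exactly the keys for epochs $a$ through $b$.

```python
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def iterate(token, n, label):
    """Walk a hash chain n steps in one direction."""
    for _ in range(n):
        token = H(label + token)
    return token

def epoch_key(fwd_a, bwd_b, a, b, i):
    """Derive the key for epoch i from the interval grant (fwd_a, bwd_b, [a, b])."""
    assert a <= i <= b, "interval access control: epoch outside the grant"
    f_i = iterate(fwd_a, i - a, b"fwd")   # forward chain: only epochs >= a reachable
    b_i = iterate(bwd_b, b - i, b"bwd")   # backward chain: only epochs <= b reachable
    return H(b"key" + f_i + b_i)

# Owner setup for n epochs; a user granted epochs 3..9 receives the two chain tokens.
n = 16
F0, Bn = H(b"forward-seed"), H(b"backward-seed")
grant = (iterate(F0, 3, b"fwd"), iterate(Bn, n - 9, b"bwd"), 3, 9)

print(epoch_key(*grant, 5).hex())
# The owner, holding both seeds, derives the same key directly:
print(H(b"key" + iterate(F0, 5, b"fwd") + iterate(Bn, n - 5, b"bwd")).hex())
```

The one-wayness of the hash chains is what confines derivation to the granted interval; Grappa's contribution is combining such an interval scheme with CGKA so that a dynamic group can maintain it with compact local state.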
## 2025/1029
* Title: Improved Key Recovery Attacks of Ascon
* Authors: Shuo Peng, Kai Hu, Jiahui He, Meiqin Wang
* [Permalink](https://eprint.iacr.org/2025/1029)
* [Download](https://eprint.iacr.org/2025/1029.pdf)
### Abstract
Ascon, a family of algorithms that support hashing and Authenticated Encryption with Associated Data (AEAD), is the final winner of the NIST Lightweight Cryptography Project.
As a research hotspot, Ascon has received substantial third-party security evaluation. Among all the results on Ascon-128 (the primary AEAD recommendation), key recovery has only been achieved by reducing the initialization phase to 7 rounds or fewer, regardless of whether the designers' security claims are violated (i.e., the nonce is misused or the data limit of $2^{64}$ is exceeded).
In this paper, we improve the key recovery attacks on Ascon-128 using the cube attack method, considering both the misuse-free and the nonce-misuse settings.
In one part, we present a faster method to recover the superpolies for a 64-dimensional cube in the output bits of the 7-round initialization, enabling us to recover the secret key with a time complexity of $2^{95.96}$ and a data complexity of $2^{64}$.
Our 7-round key recovery attack, based on the full key space, greatly improves the time complexity, making it the best result to date.
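For context, the cube-attack relation underlying superpoly recovery (standard notation, not specific to this paper): for a cube over an index set $I$ of public input bits, summing the output bit over all assignments of the cube variables isolates the superpoly, a polynomial in the key bits,
$$\bigoplus_{x_I \in \{0,1\}^{|I|}} f(x, k) = p_I(k),$$
so recovering the superpolies of a 64-dimensional cube yields equations in the secret key that are used to filter the key space during the attack.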
Additionally, we utilize several techniques to extend state recovery to key recovery, answering the open problem of transitioning from full state recovery in the encryption phase to key recovery for Ascon-128 (ToSC Vol 4, 2022). By combining encryption
phase state recovery with initialization phase key recovery, we can achieve 8-round and 9-round initialization phase key recovery in the nonce misuse scenario, with time complexities of $2^{101}$ and $2^{123.92}$, respectively.
This represents an improvement of two rounds over previous results in the misused setting.
Our first key recovery attack is also applicable to Ascon-128a, achieving
the same result. In other Ascon AEAD modes where the full state prior to the encryption phase can be recovered, our second key recovery attack is also applicable. It is worth noting that this work does not threaten the security of the full 12-round Ascon, but we expect that our results provide new insights into the security of Ascon.
## 2025/1030
* Title: Everlasting Anonymous Rate-Limited Tokens
* Authors: Rutchathon Chairattana-Apirom, Nico Döttling, Anna Lysyanskaya, Stefano Tessaro
* [Permalink](https://eprint.iacr.org/2025/1030)
* [Download](https://eprint.iacr.org/2025/1030.pdf)
### Abstract
Anonymous rate-limited tokens are a special type of credential that can be used to improve the efficiency of privacy-preserving authentication systems like Privacy Pass. In such a scheme, a user obtains a "token dispenser" by interacting with an issuer,
and the dispenser allows the user to create up to a pre-determined number $k$ of unlinkable and publicly verifiable tokens. Unlinkability means that one should not be able to tell that two tokens originate from the same dispenser, and also that tokens cannot be linked to the interaction that generated the dispenser. Furthermore, we can limit the rate at which these tokens are created by linking each token to a context (e.g., the service we are authenticating to) and imposing a limit $N \leq k$ such that seeing
more than $N$ tokens for the same context will reveal the identity of the user. Constructions of such tokens were first given by Camenisch, Hohenberger and Lysyanskaya (EUROCRYPT '05) and Camenisch, Hohenberger, Kohlweiss, Lysyanskaya, and Meyerovich (CCS '06).
In this work, we present the first construction of \emph{everlasting} anonymous rate-limited tokens, for which unlinkability holds against computationally unbounded adversaries, whereas other security properties (e.g., unforgeability) remain
computational. Our construction relies on pairings. While several parameters in our construction unavoidably grow with $k$, the key challenge we resolve is ensuring that the complexity of dispensing a token is independent of the parameter $k$.
We are motivated here by the goal of providing solutions that are robust to potential future quantum attacks against the anonymity of previously stored tokens. A construction based on post-quantum secure assumptions (e.g., based on lattices) would be
rather inefficient---instead, we take a pragmatic approach dispensing with post-quantum security for properties not related to privacy.
## 2025/1031
* Title: Quasidifferential Saves Infeasible Differential: Improved Weak-Key Key-Recovery Attacks on Round-Reduced GIFT
* Authors: Chengcheng Chang, Meiqin Wang, Wei Wang, Kai Hu
* [Permalink](https://eprint.iacr.org/2025/1031)
* [Download](https://eprint.iacr.org/2025/1031.pdf)
### Abstract
GIFT, including GIFT-64 and GIFT-128, is a family of lightweight block ciphers with outstanding implementation performance and high security, and it is a popular underlying primitive chosen by many AEADs such as SUNDAE-GIFT. Currently, differential cryptanalysis is the best key-recovery attack on both ciphers, but the attacks have been stuck at 21 and 27 rounds for GIFT-64 and GIFT-128, respectively. Recently, at CRYPTO 2022, Beyne and Rijmen proposed the quasidifferential transition matrix for differential cryptanalysis and showed that the fixed-key probability of a differential (characteristic) can be expressed as the sum of the correlations of all quasidifferential trails corresponding to this differential (characteristic). As pointed out by Beyne and Rijmen in their paper, the quasidifferential methodology is useful in identifying weak-key differential attacks.
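For reference, the Beyne-Rijmen result can be summarized as follows (paraphrased in generic notation, not copied from either paper): for a fixed key $k$, the probability of a characteristic equals the sum of the correlations of all quasidifferential trails $\theta$ sharing its differences,
$$\Pr_k[\Delta_0 \to \Delta_1 \to \cdots \to \Delta_r] \;=\; \sum_{\theta} \mathrm{corr}_k(\theta),$$
where the all-zero-mask trail contributes the average probability and the remaining trails contribute key-dependent signed terms; weak-key conditions select the keys for which the large terms add constructively.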
In this paper, we apply Beyne and Rijmen's method to GIFT. Some differential characteristics with small (average) probabilities can have much larger probabilities when weak-key conditions hold. Improved weak-key differential attacks on GIFT-64 and GIFT-128 are thus obtained. For GIFT-64, the probability of a 13-round differential is improved from $2^{-62.06}$ to $2^{-57.82}$ with 4 bits of weak-key conditions, yielding an improved differential key-recovery attack on 21-round GIFT-64 with $2^{117.42}/2^{64}$ time/data complexities; the probability of a 13-round multiple differential (containing 33 characteristics) is improved from $2^{-58.96}$ to $2^{-55.67}$ with 4 bits of weak-key conditions, yielding an improved multiple differential key-recovery attack on 21-round GIFT-64 with $2^{123.27}/2^{64}$ time/data complexities. For GIFT-128, the probability of a 20-round differential is improved from $2^{-121.83}$ to $2^{-114.77}$ with 6 bits of weak-key conditions; the probability of a 21-round multiple differential (containing 2 differentials) is improved from $2^{-128.38}$ to $2^{-122.77}$ with 4 bits of weak-key conditions. Improved (multiple) differential weak-key key-recovery attacks are obtained on 27 and 28 rounds of GIFT-128 with $2^{115.77}$/$2^{115.77}$ and $2^{123.77}$/$2^{123.77}$ time/data complexities, respectively. As far as we know, this is the first time a (weak-key) key-recovery attack reaches 28 rounds of GIFT-128.
Additionally, as an independent interest, we perform the first differential attack on SUNDAE-GIFT. The differential used in this attack is checked with quasidifferential trails, so its probability is reliable. Our attack is nonce-respecting and has significantly better complexities than the current best attack.
## 2025/1032
* Title: Constant-Round Asynchronous MPC with Optimal Resilience and Linear Communication
* Authors: Junru Li, Yifan Song
* [Permalink](https://eprint.iacr.org/2025/1032)
* [Download](https://eprint.iacr.org/2025/1032.pdf)
### Abstract
In this work, we consider secure multiparty computation (MPC) in the asynchronous network setting. MPC allows $n$ parties to compute a public function on their private inputs against an adversary corrupting at most $t$ of them. We consider both
communication complexity and round complexity of asynchronous MPC (AMPC) with the optimal resilience $n=3t+1$.
Without fully homomorphic encryption, the best-known result in this setting is due to Coretti, Garay, Hirt, and Zikas (ASIACRYPT 2016), which requires $O(|C|n^3\kappa)$ bits of communication assuming one-way functions, where $\kappa$ is the
security parameter. On the other hand, the best-known non-constant-round AMPC by Goyal, Liu, and Song (CRYPTO 2024) can achieve $O(|C|n)$ communication even in the information-theoretic setting. In this work, we give the first construction of a constant-
round AMPC with $O(|C|n\kappa)$ bits of communication that achieves malicious security with abort, assuming random oracles. We provide new techniques for adapting the MPC-in-the-head framework to the asynchronous network setting in order to compute a constant-size garbled circuit.
## 2025/1033
* Title: Trusted Hardware-Assisted Leaderless Byzantine Fault Tolerance Consensus
* Authors: Liangrong Zhao, Jérémie Decouchant, Joseph K. Liu, Qinghua Lu, Jiangshan Yu
* [Permalink](https://eprint.iacr.org/2025/1033)
* [Download](https://eprint.iacr.org/2025/1033.pdf)
### Abstract
Byzantine Fault Tolerance (BFT) consensus protocols with trusted hardware assistance have been extensively explored for their improved resilience, as they tolerate more faulty processes. Nonetheless, the potential of trusted hardware has been scarcely investigated in leaderless BFT protocols. RedBelly is considered to be the first blockchain network whose consensus is based on a truly leaderless BFT algorithm. This paper proposes a trusted hardware-assisted leaderless BFT consensus protocol by offering a hybrid solution for the set BFT problem defined in the RedBelly blockchain. Drawing on previous studies, we present two crucial trusted services: the counter and the collector. Based on these two services, we introduce two primitives to formulate our leaderless BFT protocol: a hybrid verified reliable broadcast (VRB) protocol and a hybrid binary agreement. The hybrid VRB protocol enhances the hybrid reliable broadcast protocol by integrating a verification function. This addition ensures that a broadcast
message is verified not only for authentication but also for the correctness of its content. Our hybrid BFT consensus is integrated with these broadcast protocols to deliver binary decisions on all proposals. We prove the correctness of the proposed
hybrid protocol and demonstrate its enhanced performance in comparison to the prior trusted BFT protocol.
## 2025/1034
* Title: JANUS: Enhancing Asynchronous Common Subset with Trusted Hardware
* Authors: Liangrong Zhao, Hans Schmiedel, Qin Wang, Jiangshan Yu
* [Permalink](https://eprint.iacr.org/2025/1034)
* [Download](https://eprint.iacr.org/2025/1034.pdf)
### Abstract
Asynchronous common subset (ACS) has been extensively studied since the asynchronous Byzantine fault tolerance (BFT) framework was introduced by Ben-Or, Kelmer, and Rabin (BKR).
A line of work (i.e., HoneyBadgerBFT, BEAT, EPIC) uses parallel reliable broadcast (RBC) and asynchronous binary agreement (ABA) instances to reach agreement on a subset of proposed transactions.
In this paper, we further advance the BKR paradigm by presenting Janus, the first hybrid ACS protocol leveraging trusted hardware components.
Janus is the first ACS protocol that tolerates a minority of Byzantine processes and has $O(n^2)$ message complexity.
Supported by trusted hardware components, we introduce a provable broadcast primitive to replace RBC, and develop a resilient binary agreement protocol. Messages for concurrent instances of agreement are aggregated into vectors. Our experimental results
demonstrate significant performance improvements over predominant ACS constructions with a 92%+ increase compared to HoneyBadgerBFT and a 47%+ increase compared to BEAT.
Additionally, we provide a comparison with open-source hybrid BFT protocols that operate under a partially synchronous network, highlighting a performance improvement of 49%+ over previous hybrid protocols that also tolerate a Byzantine minority (e.g., MinBFT and Damysus).
## 2025/1035
* Title: Continuous Group-Key Agreement: Concurrent Updates without Pruning
* Authors: Benedikt Auerbach, Miguel Cueto Noval, Boran Erol, Krzysztof Pietrzak
* [Permalink](https://eprint.iacr.org/2025/1035)
* [Download](https://eprint.iacr.org/2025/1035.pdf)
### Abstract
Continuous Group Key Agreement (CGKA) is the primitive underlying secure group messaging.
It allows a large group of $N$ users to maintain a shared secret key that is frequently rotated by the group members in order to achieve forward secrecy and post-compromise security.
The group messaging scheme Messaging Layer Security (MLS) standardized by the IETF makes use of a CGKA called TreeKEM which arranges the $N$ group members in a binary tree.
Here, each node is associated with a public key, each user is assigned one of the leaves, and a user knows the secret keys on the path from their leaf to the root.
To update the key material known to them, a user simply replaces the keys at $\log(N)$ nodes, which requires them to create and upload $\log(N)$ ciphertexts. Such updates must be processed sequentially by all users, which for large groups is impractical.
To allow for concurrent updates, TreeKEM uses the ``propose and commit'' paradigm, where multiple users can concurrently propose to update (by just sampling a fresh leaf key), and a single user can then commit to all proposals at once.
Unfortunately, this process destroys the binary tree structure: the tree gets pruned and some nodes must be ``blanked'' at the cost of increasing the in-degree of others, which makes the commit operation, as well as future commits, more costly.
In the worst case, the update cost (in terms of uploaded ciphertexts) per user can grow from $\log(N)$ to $\Omega(N)$.
In this work we provide two main contributions.
First, we show that MLS' communication complexity is bad not only in the worst case but also when the proposers and committers are chosen at random:
even if there is just one update proposal per commit, the expected cost is already over $\sqrt{N}$, and it approaches $N$ as this ratio shifts towards more proposals.
Our second contribution is a new variant of propose and commit for TreeKEM which for moderate amounts of update proposals per commit provably achieves an update cost of $\Theta(\log(N))$ assuming the proposers and committers are chosen at random.
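To illustrate why blanking inflates the cost (a minimal sketch in the spirit of TreeKEM's resolution rule; the node indexing and the example blank set are invented for illustration, and this is neither MLS code nor the paper's new variant): a committer uploads one ciphertext per node in the resolution of each sibling on its path, where a blank node is replaced by its non-blank descendants.

```python
# Commit cost in a complete binary tree with blanked internal nodes.
# Nodes are indexed 1..2N-1 with the root at 1 and the leaves at N..2N-1.

def resolution(node, blanks, num_leaves):
    """Smallest set of non-blank nodes covering all leaves below `node`."""
    if node not in blanks:
        return [node]
    if node >= num_leaves:            # a blank leaf covers nothing
        return []
    return (resolution(2 * node, blanks, num_leaves)
            + resolution(2 * node + 1, blanks, num_leaves))

def commit_cost(leaf, blanks, num_leaves):
    """Ciphertexts uploaded by a committer at `leaf`: one per node in the
    resolution of each sibling along the path from the leaf to the root."""
    node = num_leaves + leaf          # array index of the committer's leaf
    cost = 0
    while node > 1:
        cost += len(resolution(node ^ 1, blanks, num_leaves))
        node //= 2
    return cost

N = 8
print(commit_cost(0, set(), N))            # 3 = log2(N): the ideal cost
print(commit_cost(0, {2, 3, 5, 6, 7}, N))  # 7 = N - 1: a heavily blanked tree
```

With no blanks the cost is $\log_2(N)$; as blanks accumulate it degrades towards $N-1$, which is the behaviour quantified above.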
## 2025/1036
* Title: A Critique on Average-Case Noise Analysis in RLWE-Based Homomorphic Encryption
* Authors: Mingyu Gao, Hongren Zheng
* [Permalink](https://eprint.iacr.org/2025/1036)
* [Download](https://eprint.iacr.org/2025/1036.pdf)
### Abstract
Homomorphic encryption schemes based on the Ring-Learning-with-Errors problem require accurate ciphertext noise analysis to ensure correctness and security. However, ring multiplications during homomorphic computations make the noise in the result
ciphertexts difficult to characterize. Existing average-case noise analyses derive a bound on the noise either by assuming it follows a Gaussian distribution or by giving empirical formulae, with strong independence assumptions and the Central Limit Theorem extensively applied. In this work, we question the validity of these methods by showing that the noise exhibits a heavy-tailed distribution, via exact calculation of its variance and kurtosis for both independent and dependent noises. The heavy-tailedness suggests that the failure probability of bounds derived from these methods may not be negligible, and we experimentally demonstrate several cases where the noise growth is underestimated.
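One way to see why products of noise terms depart from Gaussianity (a standard moment calculation, offered as intuition rather than the paper's exact analysis): for independent zero-mean random variables $X$ and $Y$,
$$\mathrm{Kurt}(XY) \;=\; \frac{\mathbb{E}[X^4]\,\mathbb{E}[Y^4]}{\mathbb{E}[X^2]^2\,\mathbb{E}[Y^2]^2} \;=\; \mathrm{Kurt}(X)\cdot\mathrm{Kurt}(Y),$$
so a single product of two Gaussian-like noise terms already has kurtosis around $9$ rather than $3$. Ring multiplication sums many such (and, in practice, dependent) products, and the excess need not vanish the way a pure Gaussian heuristic assumes.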
## 2025/1037
* Title: Committed Vector Oblivious Linear Evaluation and Its Applications
* Authors: Yunqing Sun, Hanlin Liu, Kang Yang, Yu Yu, Xiao Wang, Chenkai Weng
* [Permalink](https://eprint.iacr.org/2025/1037)
* [Download](https://eprint.iacr.org/2025/1037.pdf)
### Abstract
We introduce the notion of committed vector oblivious linear evaluation (C-VOLE), which allows a party holding a pre-committed vector to generate VOLE correlations with multiple parties on the committed value. It is a unifying tool useful in zero-knowledge proofs (ZKPs) of committed values, actively secure multi-party computation, private set intersection (PSI), and more.
To achieve the best efficiency, we design a tailored commitment scheme and matching C-VOLE protocols, both based on the learning parity with noise (LPN) assumption. In particular, exploiting the structure of the carefully designed LPN-based commitment minimizes the cost of ensuring consistency between the committed vector and the VOLE correlation. As a result, we achieve a 28$\times$ improvement over the protocol proposed in prior work (USENIX 2021) that uses ZKP to prove the correct opening of the
commitment. We also apply C-VOLE to design a PSI protocol that allows one server to run PSI repeatedly with multiple clients while ensuring that the same set is used across all executions. Compared with the state-of-the-art PSI (CCS 2024) with similar
security requirements, our protocol reduces the communication overhead by a factor of 35$\times$.
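For reference, a VOLE correlation in the standard notation (a generic definition, not specific to this paper) gives the sender $\mathbf{u}, \mathbf{w} \in \mathbb{F}^n$ and the receiver a global scalar $\Delta \in \mathbb{F}$ together with $\mathbf{v} \in \mathbb{F}^n$ such that
$$\mathbf{w} \;=\; \Delta \cdot \mathbf{u} + \mathbf{v}.$$
C-VOLE additionally guarantees that $\mathbf{u}$ coincides with the vector fixed by the pre-existing commitment, even across executions with different receivers.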
## 2025/1038
* Title: Security of Operations on Random Numbers: A Review
* Authors: Tejas Sharma, Ashish Kundu
* [Permalink](https://eprint.iacr.org/2025/1038)
* [Download](https://eprint.iacr.org/2025/1038.pdf)
### Abstract
Random numbers are often used in cryptographic algorithms, protocols, and in several security and non-security applications. Such usages often apply arithmetic and Boolean operations to pseudorandom numbers, such as addition, XOR, NOT, bit shifts, and other operations, in order to achieve the desired amount of entropy and the desired level of security. In this paper, we review, study, and analyze the security properties of these operations on random numbers: do arithmetic, Boolean, and other related operations on cryptographically secure pseudorandom numbers yield cryptographically secure pseudorandom numbers, and do they lead to a loss of entropy?
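As a toy illustration of the question being asked (an invented experiment, not taken from the paper): XOR of two independent uniform bytes is again uniform, whereas AND biases every bit towards 0, so the choice of operation matters for entropy preservation.

```python
import secrets

def bit_bias(samples):
    """Average fraction of 1-bits per 8-bit sample (0.5 for uniform bytes)."""
    return sum(bin(s).count("1") for s in samples) / (8 * len(samples))

n = 100_000
a = [secrets.randbits(8) for _ in range(n)]
b = [secrets.randbits(8) for _ in range(n)]

print(bit_bias([x ^ y for x, y in zip(a, b)]))  # ~0.50: XOR preserves uniformity
print(bit_bias([x & y for x, y in zip(a, b)]))  # ~0.25: AND loses entropy
```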
## 2025/1039
* Title: Unbounded Distributed Broadcast Encryption and Registered ABE from Succinct LWE
* Authors: Hoeteck Wee, David J. Wu
* [Permalink](https://eprint.iacr.org/2025/1039)
* [Download](https://eprint.iacr.org/2025/1039.pdf)
### Abstract
We construct distributed broadcast encryption and registered attribute-based encryption (ABE) that support an arbitrary polynomial number of users from the succinct LWE assumption. Specifically, if we take $\lambda$ to be the security parameter and $N$ to be the number of users, we obtain the following:
* We obtain a distributed broadcast encryption scheme where the public parameters, user public/secret keys, and ciphertexts have optimal size (i.e., $\mathsf{poly}(\lambda, \log N)$). Security relies on the $\mathsf{poly}(\lambda, \log N)$-succinct LWE assumption. Previously, this was only known from indistinguishability obfuscation or witness encryption. All constructions that did not rely on these general tools could only support an a priori bounded number of users.
* We obtain a key-policy registered ABE scheme that supports arbitrary bounded-depth Boolean circuit policies from the $\mathsf{poly}(\lambda, d, \log N)$-succinct LWE assumption in the random oracle model, where $d$ is the depth of the circuit computing
the policy. The public parameters, user public/secret keys, and ciphertexts have size $\mathsf{poly}(\lambda, d, \log N)$, which are optimal up to the $\mathsf{poly}(d)$ factor. This is the first registered ABE scheme with nearly-optimal parameters. All
previous schemes (including constructions based on indistinguishability obfuscation, witness encryption, or evasive LWE) either have ciphertexts that scale with the policy size and attribute length, or can only support a bounded number of users (with
long public parameters and public keys that scale with the number of users).
## 2025/1040
* Title: Weave: Efficient and Expressive Oblivious Analytics at Scale
* Authors: Mahdi Soleimani, Grace Jia, Anurag Khandelwal
* [Permalink](https://eprint.iacr.org/2025/1040)
* [Download](https://eprint.iacr.org/2025/1040.pdf)
### Abstract
Many distributed analytics applications that are offloaded to the cloud operate on sensitive data. Even when the computations for such analytics workloads are confined to trusted hardware enclaves and all stored data and network communications are
encrypted, several studies have shown that they are still vulnerable to access pattern attacks. Prior efforts towards preventing access pattern leakage often incur network and compute overheads that are logarithmic in dataset size, while also limiting
the functionality of supported analytics jobs.
We present Weave, an efficient, expressive, and secure analytics platform that scales to large datasets. Weave employs a combination of noise injection and hardware memory isolation via enclave page caches to reduce the network and compute overheads for oblivious analytics to a constant factor. Weave also employs several optimizations and extensions that exploit dataset- and workload-specific properties to ensure performance at scale without compromising functionality. Our evaluations show that Weave
reduces the end-to-end execution time for a wide range of analytics jobs on large real-world datasets by $4$--$10\times$ compared to prior state-of-the-art while providing strong obliviousness guarantees.
## 2025/1041
* Title: Rubato: Provably Post-Quantum Secure and Batched Asynchronous Randomness Beacon
* Authors: Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Yude Bai, Wei Wang
* [Permalink](https://eprint.iacr.org/2025/1041)
* [Download](https://eprint.iacr.org/2025/1041.pdf)
### Abstract
Distributed Randomness Beacons (DRBs) provide secure, unbiased random numbers for decentralized systems. However, existing protocols face critical limitations. Most rely on cryptographic assumptions which are vulnerable to quantum attacks, risking long-
term security in asynchronous networks where unbounded delays may allow attackers time to exploit these weaknesses. Many achieve low beacon generation rates, often below 100 beacons per minute in moderate-scale networks (e.g., Spurt IEEE S&P’22),
hindering their use in applications requiring high-throughput randomness. Additionally, traditional Verifiable Secret Sharing (VSS)-based DRBs, using a share-consensus-reconstruct paradigm, are unsuitable for asynchronous networks due to circular
dependencies between beacon generation and consensus. Given these limitations, we propose Rubato, the first provably post-quantum secure DRB for asynchronous environments, incorporating a lattice-based batched Asynchronous Verifiable Secret Sharing
scheme (bAVSS-PQ). Rubato supports batching of $\mathcal{O}(\lambda^2)$ secrets with communication complexity $\mathcal{O}(\lambda n^3 \log n)$ and tolerates Byzantine faults in up to one-third of the nodes. Integrated with DAG-based consensus protocols
like Bullshark or Tusk, its epoch-staggered architecture resolves circular dependencies, enabling efficient and secure randomness generation. Evaluations across 10 to 50 nodes show Rubato generates 5200 to 350 beacons per minute with per-beacon latencies
of 11.60 to 96.37 milliseconds, achieving a consensus throughput of 186,088 transactions per second with a latency of 16.78 seconds at 30 nodes. Rubato offers robust post-quantum security and high performance for small-to-medium-scale decentralized
systems.
## 2025/1042
* Title: Crowhammer: Full Key Recovery Attack on Falcon with a Single Rowhammer Bit Flip
* Authors: Calvin Abou Haidar, Quentin Payet, Mehdi Tibouchi
* [Permalink](https://eprint.iacr.org/2025/1042)
* [Download](https://eprint.iacr.org/2025/1042.pdf)
### Abstract
The Rowhammer attack is a fault-injection technique leveraging the density of RAM modules to trigger persistent hardware bit flips that can be used for probing or modifying protected data. In this paper, we show that Falcon, the hash-and-sign signature
scheme over NTRU lattices selected by NIST for standardization, is vulnerable to an attack using Rowhammer.
Falcon's Gaussian sampler is the core component of its security, as it provably decorrelates the short basis used for signing from the generated signatures. Other schemes lacking this guarantee (such as NTRUSign, GGH, or more recently Peregrine) were proven insecure. However, performing efficient and secure lattice Gaussian sampling has proved to be a difficult task, fraught with numerous potential vulnerabilities. To avoid timing attacks, a common technique is to use
distribution tables that are traversed to output a sample. The official Falcon implementation uses this technique, employing a hardcoded reverse cumulative distribution table (RCDT). Using Rowhammer, we target Falcon's RCDT to trigger a very small number
of targeted bit flips, and prove that the resulting distribution is sufficiently skewed to perform a key recovery attack.
Namely, we show that a single targeted bit flip suffices to fully recover the signing key, given a few hundred million signatures, with more bit flips enabling key recovery with fewer signatures. Interestingly, the Nguyen–Regev parallelepiped learning
attack that broke NTRUSign, GGH and Peregrine does not readily adapt to this setting unless the number of bit flips is very large. However, we show that combining it with principal component analysis (PCA) yields a practical attack.
This vulnerability can also be triggered with other types of persistent fault attacks on memory like optical faults. We suggest cheap countermeasures that largely mitigate it, including rejecting signatures that are unusually short.
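To illustrate the kind of bias a single flipped bit in a sampling table can cause (a toy reverse-CDT sampler with invented threshold values; this is not Falcon's actual RCDT, its precision, or its sampling code):

```python
import random

# Toy half-Gaussian sampler: RCDT[i] roughly tracks Pr[z > i], scaled to a 10-bit range.
RCDT = [810, 492, 228, 80, 21, 4, 0]

def sample(table):
    """Return the number of table entries exceeding a uniform draw."""
    u = random.randrange(1024)
    return sum(1 for t in table if u < t)

def histogram(table, trials=200_000):
    h = [0] * (len(table) + 1)
    for _ in range(trials):
        h[sample(table)] += 1
    return [round(c / trials, 3) for c in h]

flipped = list(RCDT)
flipped[2] ^= 1 << 9        # one high-order bit flip: 228 -> 740

print(histogram(RCDT))      # close to the intended distribution
print(histogram(flipped))   # noticeably more mass on larger outputs
```

The real attack works with a far more precise table, but the same effect (one flipped high-order bit reweighting part of the distribution) is what skews the signatures enough for key recovery.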
## 2025/1043
* Title: Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
* Authors: Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi, Simone Perriello
* [Permalink](https://eprint.iacr.org/2025/1043)
* [Download](https://eprint.iacr.org/2025/1043.pdf)
### Abstract
Post-quantum public key encryption (PKE) schemes employing quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reduction to believed-hard problems from coding theory.
However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining in closed form their decoding failure rate (DFR), as decoding failures are known to leak information on the private key.
Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to the decoding hardness of random codes.
In this work, we address and solve these challenges, providing a novel closed-form estimation of the decoding failure rate for three-iteration bit flipping decoders, and proving computational equivalences among the aforementioned problems.
This allows us to systematically design a Niederreiter-style QC-MDPC PKE, enjoying the flexibility granted by freely choosing the code rate and the significant improvement in tightness of our DFR bound.
We report a $2\times$ improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with closed-form DFR bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield a $30$% smaller public key and $2.6\times$ smaller ciphertexts w.r.t. HQC, the key encapsulation method employing a code-based PKE that was recently selected by the US NIST for standardization.
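For background, a generic syndrome bit-flipping iteration over GF(2) looks roughly as follows (an illustrative sketch with an ad hoc flipping rule, not the specific bounded-DFR three-iteration decoder analyzed in the paper):

```python
import numpy as np

def bit_flip_decode(H, s, iterations=3):
    """Generic bit-flipping syndrome decoder.
    H: r x n binary parity-check matrix; s: syndrome of an unknown sparse error e
    with s = H e (mod 2). Returns (estimate of e, residual syndrome); an all-zero
    residual means decoding succeeded."""
    H, s = H.astype(np.int64), s.astype(np.int64)
    e = np.zeros(H.shape[1], dtype=np.int64)
    for _ in range(iterations):
        syndrome = (s + H @ e) % 2
        if not syndrome.any():
            break
        upc = H.T @ syndrome               # unsatisfied parity checks per bit
        e = (e + (upc == upc.max())) % 2   # flip the most suspicious bits
    return e, (s + H @ e) % 2

# Tiny example: a single error in position 2 is recovered from its syndrome.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
e_true = np.array([0, 0, 1, 0, 0, 0])
e_hat, residual = bit_flip_decode(H, (H @ e_true) % 2)
print(e_hat, residual)
```

The paper's contribution is a closed-form bound on how often such a decoder still fails after its three iterations, which is what allows the DFR of the resulting Niederreiter-style PKE to be bounded in closed form.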
## 2025/1044
* Title: When Threshold Meets Anamorphic Signatures: What is Possible and What is Not!
* Authors: Hien Chu, Khue Do, Lucjan Hanzlik, Sri AravindaKrishnan Thyagarajan
* [Permalink](https://eprint.iacr.org/2025/1044)
* [Download](https://eprint.iacr.org/2025/1044.pdf)
### Abstract
Anamorphic signatures allow covert communication through signatures in environments where encryption is restricted. They enable trusted recipients with a double key to extract hidden messages while the signature remains indistinguishable from a fresh and
regular one. However, the traditional notion of anamorphic signatures suffers from vulnerabilities, particularly when a single recipient or sender is compromised, exposing all hidden messages and providing undeniable proof that citizens are part of the
anamorphic exchange.
To address these limitations, we explore a threshold-based approach to distribute trust among multiple recipients, preventing adversaries from decrypting anamorphic messages even if some recipients are compromised. Our first contribution is the
formalization of the notion of \emph{threshold-recipient anamorphic signatures}, where decryption is possible only through collaboration among a subset of recipients.
We then explore a \emph{stronger model} where the dictator controls the key generation process through which it learns all secret keys and how citizens store cryptographic keys. A particular example of this model in the real world is a dictator providing
citizens with electronic identity documents (eIDs) and blocking all other usage of cryptography. We demonstrate that anamorphic communication is still possible even in such a scenario. Our construction is secure against post-quantum adversaries and does
not rely on any computational assumptions except the random oracle model.
Finally, we show an \emph{impossibility result} for encoding anamorphic messages in a threshold-sender model when many existing threshold signature schemes are used and the adversary is part of the signing group. Our work outlines both the possibilities and limitations of extending anamorphic signatures with threshold cryptography, offering new insights into improving the security and privacy of individuals under authoritarian regimes.
## 2025/1045
* Title: Constrained Verifiable Random Functions Without Obfuscation and Friends
* Authors: Nicholas Brandt, Miguel Cueto Noval, Christoph U. Günther, Akin Ünal, Stella Wohnig
* [Permalink](https://eprint.iacr.org/2025/1045)
* [Download](https://eprint.iacr.org/2025/1045.pdf)
### Abstract
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and by Chandran, Raghuraman, and Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without using heavy machinery such as multilinear maps, obfuscation, or functional encryption.
[continued in next message]