[continued from previous message]
Private set intersection (PSI) allows two parties to jointly compute the intersection of their private sets without revealing any additional information. Structure-aware PSI (sa-PSI), introduced by Garimella et al. (Crypto'22), is a variant where Alice's
input set has a publicly known structure and Bob's input set remains unstructured, enabling new applications like fuzzy PSI. Their construction relies solely on lightweight cryptographic primitives such as symmetric-key primitives and oblivious transfer (
OT) extension. Since then, there has been active research on sa-PSI based on lightweight cryptography. Notably, recent work by Garimella et al. (Crypto'24) achieves sa-PSI with both communication and computation costs only scaling with the description
size of Alice's set, rather than its potentially large cardinality. However, this line of work remains largely theoretical, lacking efficient concrete implementations.
In this work, we close this gap by presenting a new framework for sa-PSI that achieves practical efficiency. We identify and eliminate a hidden multiplicative overhead proportional to the security parameter (e.g., 128) in prior symmetric-key-based sa-PSI
constructions. A key building block of our new framework is a distributed Function Secret Sharing (dFSS) key generation protocol tailored to the structure of Alice's set, which may be of independent interest. To demonstrate the practicality of our
framework, we extend our dFSS protocol to support incremental evaluation, introduce new techniques for spatial hashing, and develop several new optimization techniques, including reducing the exponential dependence on dimension and enabling load
balancing between the two parties.
We instantiate our framework for structured sets defined by unions of $d$-dimensional $\ell_\infty$ balls, and implement our protocols using only lightweight symmetric-key primitives and OT extension. Our experiments show concrete performance
improvements of up to $27\times$ speedup in computation and $7.7\times$ reduction in communication in low-dimensional, large-radius settings compared to existing public-key-based fuzzy PSI protocols by van Baarsen & Pu (Eurocrypt'24) and Gao et al. (
Asiacrypt'24).
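To give a concrete picture of the spatial-hashing idea mentioned above, here is a minimal Python sketch (not the paper's construction; the cell size and bucketing are illustrative assumptions) of how Bob's unstructured points and Alice's $\ell_\infty$ balls can be mapped to grid cells of side $2\delta$, so that each ball touches at most $2^d$ cells and fuzzy comparisons can be restricted to matching cells:

```python
from itertools import product

def cell_of(point, delta):
    """Map a point in Z^d to the grid cell of side 2*delta containing it."""
    return tuple(coord // (2 * delta) for coord in point)

def cells_of_ball(center, delta):
    """All grid cells an l_inf ball of radius delta around `center` can touch
    (at most 2 per dimension, hence at most 2^d cells)."""
    lo = cell_of(tuple(c - delta for c in center), delta)
    hi = cell_of(tuple(c + delta for c in center), delta)
    return set(product(*[range(l, h + 1) for l, h in zip(lo, hi)]))

def fuzzy_match(q, w, delta):
    """True if w lies in the l_inf ball of radius delta centered at q."""
    return all(abs(a - b) <= delta for a, b in zip(q, w))

# Bob buckets his unstructured points by cell; Alice only needs to look at
# the cells touched by each of her balls.
bob_points = [(3, 4), (10, 12), (40, 41)]
delta = 5
buckets = {}
for w in bob_points:
    buckets.setdefault(cell_of(w, delta), []).append(w)

alice_ball_center = (7, 8)
candidates = [w for cell in cells_of_ball(alice_ball_center, delta)
              for w in buckets.get(cell, [])]
matches = [w for w in candidates if fuzzy_match(alice_ball_center, w, delta)]
print(sorted(matches))  # [(3, 4), (10, 12)]
```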
## 2025/908
* Title: SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
* Authors: Nitin Singh, Sikhar Patranabis
* [Permalink](https://eprint.iacr.org/2025/908)
* [Download](https://eprint.iacr.org/2025/908.pdf)
### Abstract
We leverage recently proposed multilinear polynomial commitment schemes with linear-time prover and constant proof size to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, for a degree-$d$ multivariate polynomial in $\mu$ variables which can be decomposed into $\ell$ multilinear polynomials, we exhibit a sum-check protocol with $O((\ell + d)n)$ prover cost and $O(\ell + d\log \log n)$ communication for $n=2^\mu$. This enables us to break the $O(\log n)$ barrier for this ubiquitous primitive used in multilinear SNARKs, and implies the first construction of prover-efficient (with $O_\lambda(n)$ proving cost) SNARKs using multilinear polynomials with $O(\log \log n)$ proof size. Currently, lower communication is only obtained by SNARKs based on univariate polynomials, which incur $O(n\log n)$ prover complexity due to inherent polynomial multiplication.
Concretely, using compressed sum-check in the HyperPlonk protocol allows us to reduce the proof size from about $5.5$-$6$ KB to only about $1.5$ KB, making it competitive with univariate SNARKs like PLONK.
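For context, the $O(\log n)$ communication of the classical sum-check protocol (textbook background, not the paper's compressed variant) arises as follows: to verify a claim $\sum_{x \in \{0,1\}^{\mu}} P(x) = H$ with $n = 2^{\mu}$, the prover sends in round $i$ the univariate polynomial
$$g_i(X) = \sum_{x_{i+1},\dots,x_{\mu} \in \{0,1\}} P(r_1,\dots,r_{i-1},X,x_{i+1},\dots,x_{\mu}),$$
of degree at most $d$, and the verifier checks $g_i(0)+g_i(1)=g_{i-1}(r_{i-1})$ (or $=H$ in the first round) before sampling the next challenge $r_i$. Over $\mu = \log n$ rounds this costs $O(d\log n)$ field elements, which is the barrier the compressed sum-check above is claimed to break.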
## 2025/909
* Title: Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
* Authors: J Cameron Patterson, William J Buchanan, Callum Turino
* [Permalink](https://eprint.iacr.org/2025/909)
* [Download](https://eprint.iacr.org/2025/909.pdf)
### Abstract
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms,
with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring the energy consumption of key generation on a Raspberry Pi. The framework covers both the traditional methods currently available and the newly standardised ML-KEM algorithm, accessed via the commonly utilised OpenSSL library.
## 2025/910
* Title: Robust Threshold ECDSA with Online-Friendly Design in Three Rounds
* Authors: Guofeng Tang, Haiyang Xue
* [Permalink](https://eprint.iacr.org/2025/910)
* [Download](https://eprint.iacr.org/2025/910.pdf)
### Abstract
Threshold signatures, especially ECDSA, enhance key protection by addressing the single-point-of-failure issue. Threshold signing can be divided into offline and online phases, based on whether the message is required. Schemes with low-cost online phases
are referred to as ``online-friendly''. Another critical aspect of threshold ECDSA for real-world applications is robustness, which guarantees that each signing execution completes successfully whenever a threshold number $t$ of semi-honest participants take part, even in the presence of misbehaving signatories.
The state-of-the-art online-friendly threshold ECDSA without robustness was developed by Doerner et al. in S\&P'24, requiring only three rounds. Recent work by Wong et al. in NDSS'23 (WMY$^+$23) and NDSS'24 (WMC24) achieves robustness but demands
additional communication rounds (7 and 4, respectively) or incurs costly operations in the online phase, such as computations over a homomorphic encryption scheme.
This paper presents the first three-round threshold ECDSA scheme with both robustness and an online-friendly design. The online phase of our scheme relies solely on several elliptic-curve group operations, which are 2 to 3 orders of magnitude less
computationally intensive than those based on linearly homomorphic encryption schemes. We implement our protocol and conduct a comprehensive comparison with WMY$^+$23 and WMC24. Benchmark results show that the online phase of our scheme is $2.5\times$
faster than that of WMY$^+$23 and hundreds of times faster than that of WMC24. Lastly, we demonstrate that our techniques can be extended to construct an online-friendly and robust three-round threshold BBS+ scheme.
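For orientation, recall the single-party ECDSA signing equation and where the offline/online split comes from; this is a generic sketch, not the authors' specific protocol:
$$R = k\cdot G,\qquad r = R_x \bmod q,\qquad s = k^{-1}\big(H(m) + r\cdot \mathrm{sk}\big) \bmod q.$$
The nonce $k$, the point $R$, and hence $r$ are independent of the message $m$, so threshold schemes can produce presignature material for them in the offline phase; an online-friendly design leaves only the cheap, message-dependent step of folding in $H(m)$ (a handful of group and field operations) for the online phase.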
## 2025/911
* Title: Fuzzy Private Set Intersection from VOLE
* Authors: Aron van Baarsen, Sihang Pu
* [Permalink](https://eprint.iacr.org/2025/911)
* [Download](https://eprint.iacr.org/2025/911.pdf)
### Abstract
Private set intersection (PSI) is a well-researched cryptographic primitive that allows two parties to compute the intersection of their input sets without revealing any information about items outside of the intersection. Fuzzy private set intersection
is a relatively new variant of PSI, where items are not matched exactly but ``fuzzily''. Most commonly, items are points $\mathbf{q},\mathbf{w}$ in $d$-dimensional integer space $\mathbb{Z}^d$ and a point is a fuzzy match to another if it lies within a
ball of radius $\delta$ centered at this point, with respect to some distance metric.
Previous works either only support the infinity ($L_{\infty}$) distance metric and standard PSI functionality, or support general Minkowski ($L_{\mathsf{p}}$, $\mathsf{p}\in[1,\infty]$) distance metrics and realize richer functionalities but rely on
expensive homomorphic encryptions. Our work aims to bridge this gap by giving the first construction of a fuzzy PSI protocol for general Minkowski distance metrics relying on significantly cheaper operations during the online phase.
Our main building block is a novel fuzzy matching protocol based on an oblivious pseudorandom function (OPRF), which can be realized very efficiently from vector oblivious linear evaluation (VOLE). Our protocol is able to preserve the asymptotic
complexity as well as the simplicity of the fuzzy matching protocol from van Baarsen and Pu (Eurocrypt '24), while being much more concretely efficient. Additionally, we achieve several asymptotic improvements by representing intervals succinctly.
Finally, we present the first fuzzy PSI protocol for infinity distance that requires no assumptions on the sets of points, while maintaining asymptotic complexities comparable to the state-of-the-art fuzzy PSI protocol.
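As a plaintext reference for the matching predicate defined above (a minimal sketch of the functionality only, not the protocol), the $L_\infty$ and general Minkowski $L_{\mathsf{p}}$ fuzzy-match conditions are:

```python
def linf_match(q, w, delta):
    """Fuzzy match under L_infinity: w lies in the radius-delta ball around q."""
    return max(abs(a - b) for a, b in zip(q, w)) <= delta

def minkowski_match(q, w, delta, p):
    """Fuzzy match under the Minkowski L_p metric (p >= 1)."""
    return sum(abs(a - b) ** p for a, b in zip(q, w)) <= delta ** p

q, w = (1, 2, 3), (2, 4, 1)
print(linf_match(q, w, 2))          # True: largest coordinate difference is 2
print(minkowski_match(q, w, 3, 2))  # True: squared L_2 distance is 1 + 4 + 4 = 9 <= 9
```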
## 2025/912
* Title: Enforcing arbitrary constraints on Bitcoin transactions
* Authors: Federico Barbacovi, Enrique Larraia
* [Permalink](https://eprint.iacr.org/2025/912)
* [Download](https://eprint.iacr.org/2025/912.pdf)
### Abstract
The challenge of enforcing constraints on Bitcoin transactions has recently gained a lot of attention. The current approach to solving this problem falls short in certain aspects, such as privacy and programmability. We design a new solution that leverages zkSNARKs and allows enforcing arbitrary constraints on Bitcoin transactions while keeping some information private. Our approach also bypasses the non-Turing completeness of Bitcoin Script, allowing the enforcement of unbounded constraints, namely constraints that repeat a certain operation an unbounded number of times.
## 2025/913
* Title: Hidden Number Problems in Fiat-Shamir based Post-Quantum Signatures
* Authors: Yi-Fu Lai, Jonas Meers, Julian Nowakowski
* [Permalink](https://eprint.iacr.org/2025/913)
* [Download](https://eprint.iacr.org/2025/913.pdf)
### Abstract
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we investigate whether post-quantum signature schemes allow for similar attacks. Specifically, we consider
three Fiat-Shamir based signature schemes: Dilithium (lattices), LESS (codes) and CSI-FiSh (isogenies). Analogous to the attacks on Schnorr and (EC)DSA, we assume knowledge of some bits of the commitment randomness used in the underlying ID protocol.
Such attacks belong to the broader class of Hidden Number Problems.
In the case of LESS, we give an efficient attack that requires knowledge of a single bit of the randomness in roughly \(20000\) signatures to completely recover the secret key. Our attack is based on the observation that knowledge of a single bit in the
randomness is enough to distinguish secret key entries from random ones. Since for proposed parameters there are at most \(548\) candidates for each entry, we can enumerate all candidates and use the distinguisher to recover the secret key one entry at a
time.
For Dilithium we are able to recover the whole secret key using at most 1792 signatures when either 3 or 4 least significant bits of the randomness are known (depending on the parameter set). Here the attack is based on the simple observation that
knowledge of the least significant bits of the randomness immediately yields a linear relation in the secret key.
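To see roughly where the linear relation comes from (a simplified sketch based on the standard Dilithium response equation, not the paper's full attack): each signature reveals
$$z = y + c\,s_1,$$
where the response $z$ and the challenge $c$ are public and $y$ is the commitment randomness. If the $k$ least significant bits of every coefficient of $y$ are known, then the corresponding bits of $z - y$, i.e. of $c\,s_1$, are known modulo $2^k$, yielding linear equations in the small secret $s_1$ that can be accumulated across signatures.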
For CSI-FiSh, on the other hand, we obtain only an inefficient attack that requires exponentially many signatures. However, we prove that this attack is essentially optimal. Thus, our results show that CSI-FiSh comes equipped with an inherent leakage
resilience. The argument easily extends to a wider class of signature schemes, but notably does not apply to the predecessor SeaSign.
## 2025/914
* Title: Tweakable Permutation-based Luby-Rackoff Constructions
* Authors: Bishwajit Chakraborty, Abishanka Saha
* [Permalink](https://eprint.iacr.org/2025/914)
* [Download](https://eprint.iacr.org/2025/914.pdf)
### Abstract
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak to the input and output of a blockcipher.
The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven to be secure up to $O(2^{3n/4})$ queries, but optimal $n$-bit security still remains elusive for these designs. In their paper, Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher, and to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for tweaks of $t$ blocks, the construction needs $t + 6$ rounds to be optimally $n$-bit CPA secure and $2t + 8$ rounds to be optimally $n$-bit CCA secure, where $t$ and $2t$ rounds, respectively, are required to process the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated with blockciphers, which however leads to a birthday-bound security loss, undesirable for, say, lightweight cryptography.
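For reference, the two LRW paradigms mentioned above are usually written as follows, with $E_K$ a blockcipher and $h$ an XOR-universal hash of the tweak $T$ (standard definitions, independent of this paper's constructions):
$$\mathrm{LRW1}:\ \widetilde{E}_K(T,M) = E_K\big(T \oplus E_K(M)\big), \qquad \mathrm{LRW2}:\ \widetilde{E}_{K,h}(T,M) = E_K\big(M \oplus h(T)\big) \oplus h(T).$$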
This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal $n$-bit security? (2) Can the number
of rounds required for handling long tweaks be reduced?
We formally define TLR-compatible functions for processing the tweak, which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, give us $n$-bit CPA and CCA secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with a one-round LR where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus proposing the $n$-bit CPA (resp., CCA) secure tweakable permutation candidates $\mathsf{TLRP5}$ and $\mathsf{TLRP5+}$ (resp., $\mathsf{TLRP7}$ and $\mathsf{TLRP7+}$), using $5$ (resp., $7$) LR rounds, which is a significant reduction from the tweak-length-dependent results of Goldenberg et al. As a corollary, we also show $n$-bit CPA (resp., CCA) security of the $5$-round (resp., $7$-round) permutation-based LR construction, which is quite an improvement
over the existing $2n/3$-bit security proved by Guo et al.
## 2025/915
* Title: Improved differential cryptanalysis of SPEEDY
* Authors: Tim Beyne, Addie Neyt
* [Permalink](https://eprint.iacr.org/2025/915)
* [Download](https://eprint.iacr.org/2025/915.pdf)
### Abstract
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that
differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported
differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly
altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of $2^{183}$ and a time
complexity of $2^{185}$. The memory complexity varies: in the chosen-plaintext setting, it is $2^{156}$, whereas in the chosen-ciphertext setting, it is $2^{36}$.
## 2025/916
* Title: Automated Verification of Consistency in Zero-Knowledge Proof Circuits
* Authors: Jon Stephens, Shankara Pailoor, Isil Dillig
* [Permalink](https://eprint.iacr.org/2025/916)
* [Download](https://eprint.iacr.org/2025/916.pdf)
### Abstract
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be
verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be
mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the
method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
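To illustrate the consistency requirement in plain Python rather than a circuit DSL (a toy analogue of the classic is-zero gadget; the names and field size are illustrative, and this is not the paper's formalism): the witness generator and the constraint system must accept exactly the same witnesses.

```python
P = 101  # illustrative prime field

def witness_gen(x):
    """Witness generator: out = 1 iff x == 0, with helper value inv."""
    x %= P
    inv = pow(x, P - 2, P) if x != 0 else 0
    return {"x": x, "inv": inv, "out": 1 if x == 0 else 0}

def constraints_full(w):
    """Consistent constraint set: accepts exactly the generator's witnesses."""
    return ((w["out"] - (1 - w["x"] * w["inv"])) % P == 0 and
            (w["x"] * w["out"]) % P == 0)

def constraints_underconstrained(w):
    """Inconsistent: the second constraint is missing, so for x != 0 a cheating
    prover can set inv = 0 and claim out = 1, which the generator never outputs."""
    return (w["out"] - (1 - w["x"] * w["inv"])) % P == 0

honest = witness_gen(7)
cheat = {"x": 7, "inv": 0, "out": 1}
print(constraints_full(honest), constraints_full(cheat))                            # True False
print(constraints_underconstrained(honest), constraints_underconstrained(cheat))    # True True
```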
## 2025/917
* Title: Jagged Polynomial Commitments (or: How to Stack Multilinears)
* Authors: Tamir Hemo, Kevin Jue, Eugene Rabinovich, Gyumin Roh, Ron D. Rothblum
* [Permalink](https://eprint.iacr.org/2025/917)
* [Download](https://eprint.iacr.org/2025/917.pdf)
### Abstract
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial $P$ and later provide evaluation proofs of the form "P(x)=y" to the verifier.
In the context of zkVMs (i.e., proof systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and commit to these tables, or even their individual columns,
as separate polynomials. Committing separately to these polynomials has a large overhead in verification costs, especially in hash-based systems.
In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows for the verifier to emulate access to the
individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity - namely, a "jagged" matrix in which each column has a different
height.
Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles and the prover cost is
dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area - thereby significantly reducing the problem of "combinatorial explosion"
often encountered in zkVM recursion.
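As a rough intuition for the jagged layout (a plain-Python sketch of stacking columns of different heights into one committed vector with an index map; the table names and values are illustrative, and this ignores how the PCS itself verifies the mapping):

```python
# Concatenate columns of different heights into one vector; per-column offsets
# let (column, row) be translated to a position in the single committed vector.
columns = {
    "add": [3, 1, 4, 1, 5],   # 5 rows
    "mul": [9, 2],            # 2 rows
    "load": [6, 5, 3, 5],     # 4 rows
}

offsets, flat, start = {}, [], 0
for name, col in columns.items():
    offsets[name] = (start, len(col))
    flat.extend(col)
    start += len(col)

def lookup(name, row):
    """Emulate access to an individual column inside the single committed vector."""
    off, height = offsets[name]
    assert 0 <= row < height, "row outside this column's height"
    return flat[off + row]

print(len(flat))           # 11 = total trace area; only this much is committed
print(lookup("load", 2))   # 3, the same value as columns["load"][2]
```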
## 2025/918
* Title: The Accidental Computer: Polynomial Commitments from Data Availability
* Authors: Alex Evans, Guillermo Angeris
* [Permalink](https://eprint.iacr.org/2025/918)
* [Download](https://eprint.iacr.org/2025/918.pdf)
### Abstract
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero
prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding.
This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs,
our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
## 2025/919
* Title: Rep3 Reloaded: On the Cost of Function-Dependent Preprocessing in Semi-Honest 3PC with Honest Majority
* Authors: Marcel Keller
* [Permalink](https://eprint.iacr.org/2025/919)
* [Download](https://eprint.iacr.org/2025/919.pdf)
### Abstract
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party as proposed by Araki et al. (CCS'16). This approach is
rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. The latter is more involved than, e.g., Beaver triples which can be used as a commodity.
In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in
another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our
implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as
it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
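For readers unfamiliar with the Araki et al. protocol referenced above, here is a minimal Python sketch of replicated secret sharing and its multiplication, in which each party sends a single ring element per multiplication; the zero-sharing below uses fresh randomness for simplicity, whereas implementations derive it from pre-shared PRF keys, and the sketch omits the function-dependent preprocessing the paper studies.

```python
import random
MOD = 2 ** 64  # arithmetic in the ring Z_{2^64}

def share(x):
    """2-out-of-3 replicated sharing: x = x0 + x1 + x2; party i holds (x_i, x_{i+1})."""
    x0, x1 = random.randrange(MOD), random.randrange(MOD)
    x2 = (x - x0 - x1) % MOD
    parts = [x0, x1, x2]
    return [(parts[i], parts[(i + 1) % 3]) for i in range(3)]

def zero_share():
    """Additive sharing of zero (derived from correlated PRF keys in practice)."""
    a0, a1 = random.randrange(MOD), random.randrange(MOD)
    return [a0, a1, (-a0 - a1) % MOD]

def multiply(xs, ys):
    """Each party computes a local cross term, re-randomizes it, and sends one
    ring element to its neighbor to restore the replicated format."""
    alpha = zero_share()
    z = []
    for i in range(3):
        xi, xi1 = xs[i]
        yi, yi1 = ys[i]
        z.append((xi * yi + xi * yi1 + xi1 * yi + alpha[i]) % MOD)
    # party i receives z_{i+1} from party i+1: one ring element sent per party
    return [(z[i], z[(i + 1) % 3]) for i in range(3)]

def reveal(shares):
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

xs, ys = share(12345), share(678)
print(reveal(multiply(xs, ys)) == (12345 * 678) % MOD)  # True
```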
## 2025/920
* Title: SQIsign2D$^2$: New SQIsign2D Variant by Leveraging Power Smooth Isogenies in Dimension One
* Authors: Zheng Xu, Kaizhan Lin, Chang-An Zhao, Yi Ouyang
* [Permalink](https://eprint.iacr.org/2025/920)
* [Download](https://eprint.iacr.org/2025/920.pdf)
### Abstract
In this paper, we propose SQIsign2D$^2$, a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D$^2$ employs the prime $p=CD-1$ as the field characteristic, where $D=2^{e_2}$, $C=3^{e_3}$ and $C\approx D\approx \sqrt{p}$. By leveraging accessible $C$-isogenies, SQIsign2D$^2$ significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants.
We also provide a proof-of-concept implementation of SQIsign2D$^2$ and give an efficiency comparison between SQIsign2D$^2$ and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D$^2$ are each more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, verification in SQIsign2D$^2$ is marginally more efficient.
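To make the stated prime shape concrete, here is a small search sketch; the exponent range is illustrative and unrelated to the scheme's actual parameters, and `sympy.isprime` is used for brevity.

```python
from math import log2
from sympy import isprime

# Enumerate primes p = C*D - 1 with D = 2^{e2}, C = 3^{e3}, and C ~ D ~ sqrt(p).
for e2 in range(20, 80):
    e3 = round(e2 / log2(3))          # choose e3 so that 3^{e3} is closest to 2^{e2}
    p = 3 ** e3 * 2 ** e2 - 1
    if isprime(p):
        print(f"p = 3^{e3} * 2^{e2} - 1  ({p.bit_length()} bits)")
```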
## 2025/921
* Title: Zero-knowledge Authenticator for Blockchain: Policy-private and Obliviously Updateable
* Authors: Kostas Kryptos Chalkias, Deepak Maram, Arnab Roy, Joy Wang, Aayush Yadav
* [Permalink](https://eprint.iacr.org/2025/921)
* [Download](https://eprint.iacr.org/2025/921.pdf)
### Abstract
Transaction details and participant identities on the blockchain are often publicly exposed. In this work, we posit that blockchain's transparency should not come at the cost of privacy. To that end, we introduce zero-knowledge authenticators (zkAt), a
new cryptographic primitive for privacy-preserving authentication on public blockchains. zkAt utilizes zero-knowledge proofs to enable users to authenticate transactions, while keeping the underlying authentication policies private.
Prior solutions for such policy-private authentication required the use of threshold signatures, which can only hide the threshold access structure itself. In comparison, zkAt provides privacy for arbitrarily complex authentication policies, and offers
a richer interface even within the threshold access structure by, for instance, allowing for the combination of signatures under distinct signature schemes.
In order to construct zkAt, we design a compiler that transforms the popular Groth16 non-interactive zero knowledge (NIZK) proof system into a NIZK with equivocable verification keys, a property that we define in this work. Then, for any zkAt constructed
using proof systems with this new property, we show that all public information must be independent of the policy, thereby achieving policy-privacy.
Next, we give an extension of zkAt, called zkAt+, wherein, assuming a trusted authority, policies can be updated obliviously in the sense that a third party learns no new information when a policy is updated by the policy issuer. We also give a
theoretical construction for zkAt+ using recursive NIZKs, and explore the integration of zkAt into modern blockchains. Finally, to evaluate their feasibility, we implement both our schemes for a specific threshold access structure. Our findings show that
zkAt achieves comparable performance to traditional threshold signatures, while also attaining privacy for significantly more complex policies with very little overhead.
## 2025/922
* Title: $\mathsf{HyperWolf}$: Efficient Polynomial Commitment Schemes from Lattices
* Authors: Lizhen Zhang, Shang Gao, Bin Xiao
* [Permalink](https://eprint.iacr.org/2025/922)
* [Download](https://eprint.iacr.org/2025/922.pdf)
### Abstract
This work introduces $\mathsf{HyperWolf}$, a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields.
The scheme achieves succinctness with $O(\log N)$ proof size and verifier time, along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework.
Inspired by the two-dimensional tensor structure employed in Brakedown (Golovnev et al.) to achieve sublinear efficiency, we generalize the idea to a $k$-dimensional tensor (hypercube) structure and design a $k$-round recursive proof protocol, where
each round performs a dimensionality reduction, which results in an overall efficiency of $O(kN^{1/k})$. By setting $k = \log N$, our scheme achieves near-optimal asymptotic performance.
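The arithmetic behind that choice of $k$: setting $k = \log_2 N$ gives $N^{1/k} = 2$, so the recursion cost $O(kN^{1/k})$ becomes $O(2\log_2 N) = O(\log N)$, matching the proof size and verifier time stated above.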
$\mathsf{HyperWolf}$ is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for polynomials with $N = 2^{25}$ coefficients, our scheme yields proof sizes that are $8\times$ smaller than the
lattice-based scheme of Cini et al. (Crypto'24), and over $200\times$ smaller than $\mathsf{SLAP}$ (Albrecht et al., Eurocrypt'24). Compared to $\mathsf{Greyhound}$ (Nguyen and Seiler, Crypto'24), our proof size is of similar
magnitude, while achieving significantly lower verifier time.
## 2025/923
* Title: SPECK: Signatures from Permutation Equivalence of Codes and Kernels
* Authors: Marco Baldi, Michele Battagliola, Rahmi El Mechri, Paolo Santini, Riccardo Schiavoni, Davide De Zuane
* [Permalink](https://eprint.iacr.org/2025/923)
* [Download](https://eprint.iacr.org/2025/923.pdf)
### Abstract
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce $\mathsf{SPECK}$, a new signature protocol based on the similarities between the
Permuted Kernel Problem ($\mathsf{PKP}$) and the Permutation Code Equivalence Problem ($\mathsf{PEP}$). At its core, $\mathsf{SPECK}$ is built on the permutation version of LESS, but introduces a key modification to the commitment step. Indeed, instead
of committing to an entire permuted code, the prover commits to a random relaxed $\mathsf{PKP}$ (that we call $\mathsf{PECK}$, Permutation Equivalence of Codes and Kernel) instance by randomly choosing a codeword from a random permutation of the initial
code. In this sense, the secret key is used as a trapdoor to solve the committed $\mathsf{PECK}$ instance. The new approach allows for a faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the $\mathsf{SPECK}$ protocol in detail and provide a thorough analysis of the security of the newly introduced assumptions.
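For reference, the two problems whose similarity $\mathsf{SPECK}$ exploits are usually stated as follows (standard formulations, not the relaxed $\mathsf{PECK}$ variant introduced above):
$$\mathsf{PKP}:\ \text{given } A \in \mathbb{F}_q^{m\times n} \text{ and } v \in \mathbb{F}_q^{n},\ \text{find a permutation } \pi \text{ such that } A\,\pi(v) = 0,$$
$$\mathsf{PEP}:\ \text{given generator matrices } G, G' \in \mathbb{F}_q^{k\times n},\ \text{find } S \in \mathrm{GL}_k(\mathbb{F}_q) \text{ and a permutation matrix } P \text{ such that } G' = S\,G\,P,$$
where $\pi(v)$ permutes the entries of $v$; that is, $\mathsf{PKP}$ asks for a permutation of a known vector lying in the kernel of $A$, and $\mathsf{PEP}$ asks for a permutation (up to change of basis) mapping one code onto another.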
## 2025/924
* Title: Card-Based Protocol Counting Connected Components of Graphs
* Authors: Koji Nuida
* [Permalink](https://eprint.iacr.org/2025/924)
* [Download](https://eprint.iacr.org/2025/924.pdf)
### Abstract
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge
proofs for solutions of pencil puzzles, a recent line of studies has developed card-based protocols to verify the connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting the connected components of a subset of any graph, and propose a card-based protocol for the problem.
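The plaintext functionality in question, counting the connected components of the subgraph induced by a chosen subset (shown here for vertex subsets; edge subsets are analogous), is sketched below in Python; the card-based protocol of course computes this on face-down cards rather than in the clear, and the example graph is illustrative.

```python
from collections import defaultdict

def count_components(edges, subset):
    """Connected components of the subgraph induced by `subset` on a public graph."""
    adj = defaultdict(list)
    for u, v in edges:
        if u in subset and v in subset:
            adj[u].append(v)
            adj[v].append(u)
    seen, components = set(), 0
    for start in subset:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:                      # iterative depth-first search
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node])
    return components

# 5-cycle; selecting vertices {0, 1, 3} leaves the components {0, 1} and {3}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(count_components(edges, {0, 1, 3}))  # 2
```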
## 2025/925
* Title: SCMAC and LOL2.0: An AEAD Design Framework and A New Version of LOL Stream Cipher Design Framework
* Authors: Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, Tian Tian
* [Permalink](https://eprint.iacr.org/2025/925)
* [Download](https://eprint.iacr.org/2025/925.pdf)
### Abstract
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption
and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced
version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we
present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering 256-bit key length and resistance to known cryptanalysis methods, including
differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software
performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessor's Encrypt-then-MAC versions.
## 2025/926
* Title: Polocolo: A ZK-Friendly Hash Function Based on S-boxes Using Power Residues (Full Version)
* Authors: Jincheol Ha, Seongha Hwang, Jooyoung Lee, Seungmin Park, Mincheol Son
* [Permalink](https://eprint.iacr.org/2025/926)
* [Download](https://eprint.iacr.org/2025/926.pdf)
### Abstract
Conventional hash functions are often inefficient in zero-knowledge proof settings, leading to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing for more efficient handling of ``ZK-unfriendly'' operations, and hence enabling ZK-friendly hash functions based on lookup tables.
[continued in next message]