Authors are listed in alphabetical order by their last name.
Preprints
-
A Tight Lower Bound on the TdScrypt Trapdoor Memory-Hard Function.
Jeremiah Blocki and Seunghoon Lee
Cryptology ePrint Archive. [Full Version][abstract]
A trapdoor Memory-Hard Function is a function that is memory-hard to evaluate for any party who does not have a trapdoor, but is substantially less expensive to evaluate with the trapdoor. Biryukov and Perrin (ASIACRYPT 2017) introduced the first candidate trapdoor Memory-Hard Function, called Diodon, which modifies a Memory-Hard Function called Scrypt by replacing a hash chain with repeated squaring modulo a composite number $N=pq$. The trapdoor, which consists of the prime factors $p$ and $q$, allows one to compute the function with a significantly reduced cumulative memory cost (CMC) of $O( n \log n \log^2 N )$, where $n$ denotes the running time parameter, e.g., the length of the hash chain or repeated squaring chain. By contrast, the best-known algorithm to compute Diodon without the trapdoor has CMC $O(n^2\log N)$. Auerbach et al. (EUROCRYPT 2024) provided the first provable lower bound on the CMC of TdScrypt — a specific instantiation of Diodon. In particular, in idealized models, they proved that the CMC of TdScrypt is $\Omega(\frac{n^2}{\log n}\log N)$, which almost matches the upper bound $O(n^2\log N)$ but is off by a multiplicative $\log n$ factor. In this work, we show how to tighten the analysis of Auerbach et al. (EUROCRYPT 2024) and eliminate the gap. In particular, our results imply that the CMC of TdScrypt is at least $\Omega(n^2\log N)$.
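To give a flavor of the trapdoor shortcut, the sketch below contrasts the two evaluation paths for the repeated-squaring chain: without the factorization of $N$, one seemingly must perform $n$ sequential squarings, whereas the trapdoor $(p,q)$ lets one reduce the exponent $2^n$ modulo $\varphi(N)$ and finish with a single modular exponentiation. This is only a toy illustration with made-up parameters; the actual Diodon/TdScrypt construction interleaves hashing with the squaring chain and uses cryptographically large parameters.

```python
# Toy sketch of the trapdoor shortcut behind Diodon/TdScrypt-style functions.
# All parameters below are illustrative; the real construction interleaves hashing
# with the squaring chain and uses a cryptographically large modulus and chain length.

p, q = 1009, 1013            # trapdoor: the prime factors of N (toy sizes)
N = p * q
phi = (p - 1) * (q - 1)      # Euler's totient of N, computable only with the trapdoor
n = 10_000                   # length of the repeated-squaring chain
g = 5                        # starting value (derived from the input in the real scheme)

def eval_without_trapdoor(g, n, N):
    """n sequential modular squarings: g^(2^n) mod N."""
    x = g % N
    for _ in range(n):
        x = x * x % N
    return x

def eval_with_trapdoor(g, n, N, phi):
    """Reduce the exponent 2^n modulo phi(N), then do a single exponentiation."""
    e = pow(2, n, phi)       # cheap: fast modular exponentiation
    return pow(g, e, N)      # valid since gcd(g, N) = 1 (Euler's theorem)

assert eval_without_trapdoor(g, n, N) == eval_with_trapdoor(g, n, N, phi)
```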
-
The Impact of Reversibility on Parallel Pebbling.
Jeremiah Blocki, Blake Holman, and Seunghoon Lee
Cryptology ePrint Archive. [Full Version][abstract]
The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel) computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). However, the classical black pebbling game is not suitable to analyze the cost of a quantum preimage attack. Thus, Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements for a quantum computer. While there is an extensive line of work analyzing pebbling complexity in the (parallel) black pebbling game, comparatively little is known about the parallel reversible pebbling game. Our first result is a lower bound of $\Omega\left(N^{1+\sqrt{\frac{ 2-o(1)}{\log N}}} \right)$ on the reversible cumulative pebbling cost for a line graph on $N$ nodes. This yields a separation between classical and reversible pebbling costs, demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $N^{(\sqrt 2 + o(1))/\sqrt{\log N}}$ — the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$. On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{\sqrt{\frac{8}{\log N}}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs, exploiting reversibility to improve the constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs, we show that the state-of-the-art recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increase in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
Publications
-
Differentially Private $L_2$-Heavy Hitters in the Sliding Window Model.
Jeremiah Blocki, Seunghoon Lee, Tamalika Mukherjee, and Samson Zhou
In ICLR 2023. [Full Version][abstract]
The data management of large companies often prioritizes more recent data, as a source of higher-accuracy predictions than outdated data. For example, the Facebook data policy retains user search histories for $6$ months, while the Google data retention policy states that browser information may be stored for up to $9$ months. These policies are captured by the sliding window model, in which only the most recent $W$ statistics form the underlying dataset. In this paper, we consider the problem of privately releasing the $L_2$-heavy hitters in the sliding window model, which include $L_p$-heavy hitters for $p\le 2$ and in some sense are the strongest possible guarantees that can be achieved using polylogarithmic space, but cannot be handled by existing techniques due to the sub-additivity of the $L_2$ norm. Moreover, existing non-private sliding window algorithms use the smooth histogram framework, which has high sensitivity. To overcome these barriers, we introduce the first differentially private algorithm for $L_2$-heavy hitters in the sliding window model by initiating a number of $L_2$-heavy hitter algorithms across the stream with a significantly lower threshold. Similarly, we augment the algorithms with an approximate frequency tracking algorithm with significantly higher accuracy. We then use smooth sensitivity and statistical distance arguments to show that we can add noise proportional to an estimation of the $L_2$ norm. To the best of our knowledge, our techniques are the first to privately release statistics that are related to a sub-additive function in the sliding window model, and may be of independent interest to future differentially private algorithmic design in the sliding window model.
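For readers unfamiliar with the flavor of noise addition referred to above, the following is a generic sketch of the standard Laplace mechanism (noise scaled to sensitivity divided by the privacy budget). It is only a baseline illustration; the algorithm in the paper instead calibrates noise via smooth sensitivity to an estimate of the $L_2$ norm and maintains sliding-window state.

```python
# Generic sketch of differentially private noise addition (the standard Laplace
# mechanism). Background illustration only: the paper's mechanism uses smooth
# sensitivity and noise scaled to an estimate of the L2 norm instead.

import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential samples is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    which gives epsilon-differential privacy for a sensitivity-bounded query."""
    return true_value + laplace_noise(sensitivity / epsilon)

# e.g., privately release a count (sensitivity 1) with privacy budget epsilon = 0.5
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```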
-
The Parallel Reversible Pebbling Game: Analyzing the Post-Quantum Security of iMHFs.
Jeremiah Blocki, Blake Holman, and Seunghoon Lee
In TCC 2022. [Full Version][abstract]
The classical (parallel) black pebbling game is a useful abstraction which allows us to analyze the resources (space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$. Of particular interest in the field of cryptography are data-independent memory-hard functions $f_{G,H}$ which are defined by a directed acyclic graph (DAG) $G$ and a cryptographic hash function $H$. The pebbling complexity of the graph $G$ characterizes the amortized cost of evaluating $f_{G,H}$ multiple times as well as the total cost to run a brute-force preimage attack over a fixed domain $\mathcal{X}$, i.e., given $y \in \lbrace 0,1\rbrace^*$ find $x \in \mathcal{X}$ such that $f_{G,H}(x)=y$. While a classical attacker will need to evaluate the function $f_{G,H}$ at least $m=|\mathcal{X}|$ times, a quantum attacker running Grover’s algorithm only requires $\mathcal{O}(\sqrt{m})$ black-box calls to a quantum circuit $C_{G,H}$ evaluating the function $f_{G,H}$. Thus, to analyze the cost of a quantum attack, it is crucial to understand the space-time cost (equivalently width times depth) of the quantum circuit $C_{G,H}$. We first observe that a legal black pebbling strategy for the graph $G$ does not necessarily imply the existence of a quantum circuit with comparable complexity — in contrast to the classical setting where any efficient pebbling strategy for $G$ corresponds to an algorithm with comparable complexity evaluating $f_{G,H}$. Motivated by this observation we introduce a new (parallel) reversible pebbling game which captures additional restrictions imposed by the No-Deletion Theorem in Quantum Computing. We apply our new reversible pebbling game to analyze the reversible space-time complexity of several important graphs: Line Graphs, Argon2i-A, Argon2i-B, and DRSample. Specifically, (1) We show that a line graph of size $N$ has reversible space-time complexity at most $\mathcal{O}\left(N^{1+\frac{2}{\sqrt{\log N}}}\right)$. (2) We show that any $(e,d)$-reducible DAG has reversible space-time complexity at most $\mathcal{O}(Ne+dN2^d)$. In particular, this implies that the reversible space-time complexities of Argon2i-A and Argon2i-B are at most $\mathcal{O}(N^2 \log \log N/\sqrt{\log N})$ and $\mathcal{O}(N^2/\sqrt[3]{\log N})$, respectively. (3) We show that the reversible space-time complexity of DRSample is at most $\mathcal{O}(N^2 \log \log N/\log N)$. We also study the cumulative pebbling cost of reversible pebblings, extending a (non-reversible) pebbling attack of Alwen and Blocki on depth-reducible graphs.
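As a rough illustration of the reversibility constraint, the sketch below checks the legality of a candidate (parallel) pebbling under the classical reversible rule: a pebble may be placed on, or removed from, a node only while all of that node's parents carry pebbles. This is a simplified reading for illustration; the formal parallel game in the paper may differ in details.

```python
# Sketch of a legality check for a (parallel) reversible pebbling. Rule used here:
# a pebble may be placed on OR removed from node v only if every parent of v held a
# pebble in the previous configuration (sources can always be pebbled/unpebbled).
# Simplified for illustration; the paper's formal parallel rules may add details.

def is_legal_reversible_pebbling(parents, pebbling, target):
    """parents: dict mapping each node to the set of its parents (empty for sources).
    pebbling: list of pebble configurations P_0, ..., P_t (sets of nodes).
    target:   sink node that must carry a pebble at some point."""
    if not pebbling or pebbling[0]:
        return False                                  # must start with no pebbles
    for prev, curr in zip(pebbling, pebbling[1:]):
        for v in prev.symmetric_difference(curr):     # nodes pebbled/unpebbled this round
            if not parents[v] <= prev:                # all parents pebbled beforehand
                return False
    return any(target in config for config in pebbling)

# Toy example on the line graph 1 -> 2 -> 3: pebble forward, then reversibly unwind.
parents = {1: set(), 2: {1}, 3: {2}}
walk = [set(), {1}, {1, 2}, {1, 2, 3}, {1, 2}, {1}, set()]
print(is_legal_reversible_pebbling(parents, walk, target=3))   # True
```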
-
On the Multi-User Security of Short Schnorr Signatures with Preprocessing.
Jeremiah Blocki and Seunghoon Lee
In EUROCRYPT 2022. [Full Version][abstract]
The Schnorr signature scheme is an efficient digital signature scheme with short signature lengths, i.e., $4k$-bit signatures for $k$ bits of security. A Schnorr signature $\sigma$ over a group of size $p\approx 2^{2k}$ consists of a tuple $(s,e)$, where $e \in \lbrace 0,1 \rbrace^{2k}$ is a hash output and $s\in \mathbb{Z}_p$ must be computed using the secret key. While the hash output $e$ requires $2k$ bits to encode, Schnorr proposed that it might be possible to truncate the hash value without adversely impacting security.
In this paper, we prove that short Schnorr signatures of length $3k$ bits provide $k$ bits of multi-user security in the (Shoup’s) generic group model and the programmable random oracle model. We further analyze the multi-user security of key-prefixed short Schnorr signatures against preprocessing attacks, showing that it is possible to obtain secure signatures of length $3k + \log S$ bits. Here, $S$ denotes the size of the hint generated by our preprocessing attacker, e.g., if $S=2^{k/2}$, then we would obtain $3.5k$-bit signatures.
Our techniques easily generalize to several other Fiat-Shamir-based signature schemes, allowing us to establish analogous results for Chaum-Pedersen signatures and Katz-Wang signatures. As a building block, we also analyze the $1$-out-of-$N$ discrete-log problem in the generic group model, with and without preprocessing.
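To show roughly where the truncation happens, here is a toy sketch of Schnorr signing and verification in which the challenge hash $e$ is cut down to $k$ bits, so a signature $(s,e)$ costs roughly $2k+k=3k$ bits instead of $4k$. The group, hash, and sizes below are illustrative stand-ins chosen for readability, not the generic group/random oracle setting analyzed in the paper.

```python
# Toy sketch of "short" Schnorr signatures: the challenge hash e is truncated to k
# bits. All parameters (tiny group, SHA-256, sizes) are illustrative only and are
# far too small to be secure.

import hashlib
import secrets

q, P, g = 83, 167, 2          # toy group: g generates the order-q subgroup of Z_P^*
k_bits = 16                   # truncated challenge length (2k bits in plain Schnorr)

def H(commitment, msg):       # hash the nonce commitment and message, then truncate
    digest = hashlib.sha256(str(commitment).encode() + msg).digest()
    return int.from_bytes(digest, "big") % (1 << k_bits)

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, P)                  # secret key, public key

def sign(sk, msg):
    r = secrets.randbelow(q)                  # fresh random nonce
    e = H(pow(g, r, P), msg)                  # truncated challenge
    s = (r + e * sk) % q
    return s, e                               # the (short) signature

def verify(pk, msg, sig):
    s, e = sig
    commitment = pow(g, s, P) * pow(pk, (-e) % q, P) % P   # recovers g^r
    return H(commitment, msg) == e

sk, pk = keygen()
print(verify(pk, b"hello", sign(sk, b"hello")))            # True
```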
-
On Explicit Constructions of Extremely Depth Robust Graphs.
Jeremiah Blocki, Mike Cinkoske, Seunghoon Lee, and Jin Young Son
In STACS 2022. [Full Version][abstract]
A directed acyclic graph $G=(V,E)$ is said to be $(e,d)$-depth-robust if for every subset $S \subseteq V$ of $|S| \leq e$ nodes the graph $G-S$ still contains a directed path of length $d$. If the graph is $(e,d)$-depth-robust for all $e,d$ such that $e+d \leq (1-\varepsilon)|V|$ then the graph is said to be $\varepsilon$-extreme depth-robust. In the field of cryptography, (extremely) depth-robust graphs with low indegree have found numerous applications, including the design of side-channel resistant Memory-Hard Functions, Proofs of Space and Replication, and Computationally Relaxed Locally Correctable Codes. In these applications, it is desirable to ensure the graphs are locally navigable, i.e., there is an efficient algorithm $\mathsf{GetParents}$ running in time $\mathsf{polylog} |V|$ which takes as input a node $v \in V$ and returns the set of $v$’s parents. We give the first explicit construction of locally navigable $\varepsilon$-extreme depth-robust graphs with indegree $\mathcal{O}(\log |V|)$. Previous constructions of $\varepsilon$-extreme depth-robust graphs either had indegree $\widetilde{\omega}(\log^2 |V|)$ or were not explicit.
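The definition can be checked directly on small examples, as in the brute-force sketch below (exhaustive over all subsets of at most $e$ nodes, so it is only feasible for tiny toy DAGs; path length is counted in edges here, and conventions vary between counting nodes and edges).

```python
# Brute-force check of (e, d)-depth-robustness straight from the definition: for every
# set S of at most e nodes, G - S must still contain a directed path of length d.
# Exponential in e, so this is a sanity check for tiny toy DAGs only. Path length is
# counted in edges here; some papers count the number of nodes on the path instead.

from itertools import combinations

def longest_path(nodes, edges, removed):
    """Longest directed path (in edges) avoiding `removed`; `nodes` must be
    listed in topological order."""
    children = {v: [] for v in nodes}
    for u, v in edges:
        children[u].append(v)
    best = {v: 0 for v in nodes if v not in removed}
    for u in nodes:
        if u in removed:
            continue
        for v in children[u]:
            if v not in removed:
                best[v] = max(best[v], best[u] + 1)
    return max(best.values(), default=0)

def is_depth_robust(nodes, edges, e, d):
    return all(longest_path(nodes, edges, set(S)) >= d
               for r in range(e + 1)
               for S in combinations(nodes, r))

# The line graph on 6 nodes is (1, 2)-depth-robust but not (2, 2)-depth-robust
# (e.g., deleting nodes 1 and 4 leaves no remaining path with 2 edges).
nodes, edges = list(range(6)), [(i, i + 1) for i in range(5)]
print(is_depth_robust(nodes, edges, e=1, d=2), is_depth_robust(nodes, edges, e=2, d=2))
```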
-
On the Security of Proofs of Sequential Work in a Post-Quantum World.
Jeremiah Blocki, Seunghoon Lee, and Samson Zhou
In ITC 2021. [Full Version][abstract]
A Proof of Sequential Work (PoSW) allows a prover to convince a resource-bounded verifier that the prover invested a substantial amount of sequential time to perform some underlying computation. PoSWs have many applications including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Mahmoody, Moran, and Vadhan (ITCS 2013) gave the first construction of a PoSW in the random oracle model though the construction relied on expensive depth-robust graphs. In a recent breakthrough, Cohen and Pietrzak (EUROCRYPT 2018) gave an efficient PoSW construction that does not require expensive depth-robust graphs.
In the classical parallel random oracle model, it is straightforward to argue that any successful PoSW attacker must produce a long $\mathcal{H}$-sequence and that any malicious party running in sequential time $T-1$ will fail to produce an $\mathcal{H}$-sequence of length $T$ except with negligible probability. In this paper, we prove that any quantum attacker running in sequential time $T-1$ will fail to produce an $\mathcal{H}$-sequence of length $T$ except with negligible probability – even if the attacker submits a large batch of quantum queries in each round. The proof is substantially more challenging and highlights the power of Zhandry’s recent compressed oracle technique (CRYPTO 2019). We further extend this result to establish post-quantum security of a non-interactive PoSW obtained by applying the Fiat-Shamir transform to Cohen and Pietrzak’s efficient construction (EUROCRYPT 2018).
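For intuition, the honest way to produce an $\mathcal{H}$-sequence is simply a chain of $T$ sequential hash calls, as in the sketch below. (The formal notion is more permissive, roughly requiring only that $\mathcal{H}(x_i)$ appear inside $x_{i+1}$, but a plain hash chain is the canonical example and takes $T$ sequential queries to build honestly.)

```python
# Minimal sketch of the honest way to produce an H-sequence: a chain of T sequential
# hash calls, x_{i+1} = H(x_i). The paper's H-sequence notion is more permissive,
# roughly requiring only that H(x_i) appear inside x_{i+1}.

import hashlib

def hash_chain(seed: bytes, T: int):
    xs = [seed]
    for _ in range(T):
        xs.append(hashlib.sha256(xs[-1]).digest())   # one sequential hash call per step
    return xs                                        # an H-sequence of length T

chain = hash_chain(b"statement-to-timestamp", T=1000)
print(len(chain) - 1, chain[-1].hex()[:16])
```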
-
Approximating Cumulative Pebbling Cost is Unique Games Hard.
Jeremiah Blocki, Seunghoon Lee, and Samson Zhou
In ITCS 2020. [Full Version][abstract]
The cumulative pebbling complexity of a directed acyclic graph $G$ is defined as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all legal (parallel) black pebblings of $G$ and $|P_i|$ denotes the number of pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel. The cumulative pebbling complexity of a graph $G$ is of particular interest in the field of cryptography as $\mathsf{cc}(G)$ is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) $f_{G,H}$ [AS15] defined using a constant indegree directed acyclic graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to estimate the value $\mathsf{cc}(G)$, but currently upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is $\mathsf{NP}$-Hard to compute $\mathsf{cc}(G)$, but their techniques do not even rule out an efficient $(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon>0$. We show that for any constant $c > 1$, it is Unique Games hard to approximate $\mathsf{cc}(G)$ to within a factor of $c$.
Along the way, we show the hardness of approximation of the DAG Vertex Deletion problem on DAGs of constant indegree. Namely, we show that for any $k,\varepsilon >0$ and given a DAG $G$ with $N$ nodes and constant indegree, it is Unique Games hard to distinguish between the case that $G$ is $(e_1, d_1)$-reducible with $e_1=N^{1/(1+2\varepsilon)}/k$ and $d_1=k N^{2\varepsilon/(1+2\varepsilon)}$, and the case that $G$ is $(e_2, d_2)$-depth-robust with $e_2 = (1-\varepsilon)k e_1$ and $d_2= 0.9 N^{(1+\varepsilon)/(1+2\varepsilon)}$, which may be of independent interest. Our result generalizes a result of Svensson, who proved an analogous result for DAGs with indegree $\mathcal{O}(N)$.
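To make the quantity being approximated concrete, the sketch below computes the cumulative cost $\sum_i |P_i|$ (and the space-time cost) of one given pebbling; $\mathsf{cc}(G)$ is the minimum of the former over all legal pebblings, and it is this minimum that the result above shows is Unique Games hard to approximate.

```python
# Sketch of the quantity being approximated: given one legal (parallel) black pebbling
# P_1, ..., P_t of a DAG (each P_i a set of pebbled nodes), its cumulative cost is
# sum_i |P_i|, and cc(G) is the minimum of this value over all legal pebblings.

def cumulative_cost(pebbling):
    return sum(len(config) for config in pebbling)

def space_time_cost(pebbling):
    return len(pebbling) * max((len(config) for config in pebbling), default=0)

# A simple (non-optimal) pebbling of the line graph 1 -> 2 -> 3 that never removes pebbles.
naive = [{1}, {1, 2}, {1, 2, 3}]
print(cumulative_cost(naive), space_time_cost(naive))   # 6 9
```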
-
Data-Independent Memory Hard Functions: New Attacks and Stronger Constructions.
Jeremiah Blocki, Ben Harsha, Siteng Kang, Seunghoon Lee, Lu Xing, and Samson Zhou
In CRYPTO 2019. [Full Version][abstract]
Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years several increasingly stringent goals for an MHF have been proposed including the requirement that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity and sustained space complexity. Data-Independent Memory Hard Functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. iMHFs can be specified using a directed acyclic graph (DAG) $G$ with $N=2^n$ nodes and low indegree and the complexity of the iMHF can be analyzed using a pebbling game. Recently, Alwen et al. [ABH17] constructed a DAG called DRSample that has aAT complexity at least $\Omega\left( N^2/\log N\right)$. Asymptotically DRSample outperformed all prior iMHF constructions including Argon2i, winner of the Password Hashing Competition (aAT cost $\mathcal{O}\left(N^{1.767}\right)$), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [BCS16] is particularly effective against DRSample, e.g., the aAT cost is $\mathcal{O}\left( N^2/\log N\right)$. In fact, our empirical analysis reverses the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $N \leq 2^{24}$. We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to known pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $\omega(N^2)$ or requires at least $\Omega(N)$ steps with $\Omega(N/\log N)$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $\Omega(N^2/\log N)$. We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $\Omega\left( N^2\right)$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $\Omega(N^2 \log \log N/\log N)$ — the best possible bound for an iMHF.
We implement our new iMHF and demonstrate that it is just as fast as Argon2. Along the way, we propose a simple modification to the Argon2 round function that increases an attacker’s aAT cost by nearly an order of magnitude without increasing running time on a CPU. Finally, we give a pebbling reduction that proves that in the parallel random oracle model (PROM) the cost of evaluating an iMHF like Argon2i or DRSample+BRG is given by the pebbling cost of the underlying DAG. Prior pebbling reductions assumed that the iMHF round function concatenates input labels before hashing and did not apply to practical iMHFs such as Argon2i, DRSample, or DRSample+BRG, where input labels are instead XORed together.
Manuscript
- A Short Note on Improved Logic Circuits in a Hexagonal Minesweeper. [pdf]
[abstract]
This paper presents an improved version of the PP-hardness proof of Minesweeper by Bondt [Bondt12]. The improvement consists of better Minesweeper configurations for ‘logic circuits’ in hexagonal Minesweeper. To do so, we demonstrate logical uncertainty in Minesweeper, which, ironically, makes it possible to construct certain Boolean operators.
Because existing hexagonal logic circuits did not clearly distinguish the true signal from the false signal, an improved form of the hexagonal wire is needed. We introduce new forms of logic circuits, including NOT, AND, and OR gates, as well as a curve and a splitter for wires. Moreover, these new logic circuits complement Bondt’s PP-hardness proof of Minesweeper by providing a new figure.