
What Is Plonk...And Why?

Research
Parasol Team

Zero-knowledge proving systems are no longer theoretical cryptography exercises. On Solana, they are becoming infrastructure.

PLONK, introduced as a universal SNARK with constant proof size, hits a very specific sweet spot for Solana’s execution model: a proof small enough to fit within Solana’s 1KB transaction size limit and lightweight enough to verify within its compute constraints.

This combination turns PLONK into a compression layer for complexity. Instead of forcing every validator to re-execute heavy logic inside a constrained transaction, developers can execute rich off-chain computation - matching engines, risk checks, batch settlements - and submit a succinct proof that the result is correct. Let’s dive deeper and understand what PLONK actually is.


Abstract

We discuss why the original univariate Plonk ([GWC19]) is still very much relevant and try to give insight into how it works, focusing on the core ideas behind the proof system. We assume that the reader is familiar with the notion of an interactive oracle protocol (IOP) or a polynomial IOP.

1 - Introduction

Plonk was introduced by Gabizon, Williamson and Ciobotaru in 2019 ([GWC19]). It is a non-interactive proof system with a succinct verifier, known as a SNARK$^1$. Compared to Groth16 ([Gro16]), which is a SNARK known for its small proof size, Plonk has a larger proof. Over a 256-bit field, a Groth16 proof is 96 bytes, whereas a Plonk proof is 480 bytes. Both proofs are of constant size regardless of the size of the computation being proved. However, unlike Groth16, Plonk is a universal SNARK, i.e. once prover and verifier keys are generated, the same keys can be used to prove the honest execution of any computation, as long as the size of the computation does not exceed the bound imposed by the key length. Compared to Sonic$^2$ ([MBKM19]), which was another universal SNARK known at the time, Plonk has a faster prover and a smaller proof. (For more about the comparison of these three, see [GWC19, Tables 1 and 2].)

Since the publication of Plonk, various techniques, such as custom gates and lookup protocols, have been introduced on top of Plonk to improve prover performance. Many other proof systems have also been invented since then. Nevertheless, Plonk as it was first introduced is still very much relevant. For example, Solana achieves its speed by limiting the size and computational complexity of a transaction. In particular, a transaction is limited to 1KiB in size. A Plonk proof is within this limit, and the complexity of its verification algorithm is also within Solana's compute limit. Thus, Plonk can be used as a compression technique to enable a complex transaction that would otherwise exceed 1KiB in size or Solana's computational limit. (Groth16 may also be used, but with the downside of being non-universal. Note also that, with the exception of Plonk custom gates, the proof systems and techniques developed since the original Plonk usually come with increased proof sizes and do not fit within Solana's transaction size limit.)

$^1$SNARK stands for a succinct non-interactive argument of knowledge.

$^2$Sonic comes in two versions. We mean the version with a succinct verifier.

2 - Overview of Plonk

In the context of Plonk, a computer program is represented as an arithmetic circuit of fan-in two (often simply referred to as a circuit). Concretely, a program is expressed in terms of additions and multiplications of two elements in a cryptographically large finite field $\mathbb{F}$ of prime order. The field $\mathbb{F}$ is chosen and fixed once and for all in a one-time setup phase of Plonk.

While a program is represented by a circuit, the execution of a program is captured as the knowledge of a solution of the circuit satisfiability problem that is associated with the circuit and the input and output of the program. In more detail, the wire values of a circuit comprise the input to the program, the output of the program and the intermediate execution trace of a machine, which we call a witness. The input and output are together called a public input$^3$. Given a circuit and a public input, Plonk proves the knowledge of a witness.  (The reader not familiar with the definition of arithmetic circuits is referred to the definition in [GWC19, Section 6].)

An important metric here is the size $n$ of a circuit, defined as the number of gates, as it quasi-linearly affects the prover's computational cost. Thus, a circuit designer should write as small a circuit as possible for efficiency. Note also that the speed of a program execution on a machine is not proportional to the size of the corresponding circuit. This is because a circuit is written over a fixed field $\mathbb{F}$: if a computation is over another field or is non-arithmetic, there is an overhead due to the conversion of such a computation into arithmetic in $\mathbb{F}$. Although we will not discuss them in this article, there are techniques and protocols built on top of Plonk that allow smaller circuit designs and address this overhead (e.g. custom gates and lookup protocols).

As for the construction of Plonk, it is obtained by first constructing a univariate polynomial interactive oracle protocol (PIOP). A univariate PIOP is an IOP with the following changes. Instead of an oracle to a string, a prover is allowed to send an oracle $[p]$ to a univariate polynomial $p(X)$ of bounded degree $\le d$ over a fixed field $\mathbb{F}$, and a verifier can query the oracle for an evaluation $p(x)$ at any point $x \in \mathbb{F}$ it chooses. A prover may send multiple oracles and a verifier may make multiple queries. Since we will only consider univariate polynomials in this note, we shall omit the adjective "univariate" from now on.

After constructing a PIOP, we instantiate polynomial evaluation oracles with the KZG polynomial commitment scheme$^4$, and then apply the Fiat-Shamir transformation to remove the interaction. The resulting non-interactive proof system is Plonk.

In the rest of the article, we focus on the idea behind the PIOP underlying Plonk. The transformation from a PIOP to a non-interactive proof system as described above is standard and not Plonk-specific. Thus, we shall not discuss it in this article. However, that being said, we would like to mention that a so-called linearization trick, an optimization leveraging the additivity of the KZG commitment, is applied in the actual Plonk construction after replacing oracles with commitments and before applying the Fiat-Shamir transformation. An interested reader is referred to the paragraph "Reducing the number of field elements" right before Section 5 of the original paper [GWC19]. Another thing we do not touch upon in this note is how to achieve the zero-knowledge property.

$^3$The input and output of a program are input to both prover and verifier algorithms.

$^4$It is possible to use other polynomial commitment schemes. In that case, proof size and prover and verifier cost change.

3 - The PIOP underlying Plonk (Plonk PIOP)

Let $\mathbb{F}$ be a finite field of prime order and $H$ a multiplicative subgroup of $\mathbb{F}$ of order $n$. We choose a generator $\omega$ of $H$, i.e., $H = \{\omega, \omega^2, ..., \omega^n = 1\}$. Let $C$ be an arithmetic circuit over $\mathbb{F}$ with $n$ gates, including public input gates. The gates are numbered from 1 to $n$. By convention, public inputs are assigned to the left wires of gates 1 to $l$. We write $d$ for the maximum degree of a polynomial that can underlie an oracle. We assume $n, d \ll |\mathbb{F}|$.

Given a public input $x = (x_1, x_2, ..., x_l) \in \mathbb{F}^l$ (i.e., wire values for the input and output wires of $C$), the Plonk PIOP proves the knowledge of the wire values $w$ for the rest of the wires that satisfy the circuit $C$ with the public input $x$. We write the last condition as $C(x,w) = 1$.  In other words, it is a PIOP for a relation

$$R_C := \{(x; w) \mid C(x,w) = 1\}$$

where $x$ is an instance and $w$ is a witness.

To construct the Plonk PIOP, a prover organizes the wire values in a table:

$$\begin{array}{ccc}
v_{L,1} & v_{R,1} & v_{O,1} \\
v_{L,2} & v_{R,2} & v_{O,2} \\
\vdots & \vdots & \vdots \\
v_{L,n} & v_{R,n} & v_{O,n}
\end{array}$$

Here, $L$, $R$ and $O$ stand for the left, right and output wires and rows are indexed by gate numbers. Thus, $v_{L,i}$ is the value of the left wire of the $i$-th gate, and so on. By the convention mentioned above, we have $v_{L,i} = x_i$ for $i = 1, 2, ..., l$. Notice that one wire is typically connected to two different gates, so about half of the values in the table are duplicates.  For example, if the output wire of the $i$-th gate is connected to the right input wire of the $j$-th gate, we have

$$v_{O,i} = v_{R,j}$$

At the start of the Plonk PIOP, a prover $\mathcal{P}$ encodes each column as a polynomial. It computes a polynomial $l(X) \in \mathbb{F}[X]$ such that $l(\omega^i) = v_{L,i}$ for all $i$ and $\deg l < n$ by Lagrange interpolation. It similarly computes $r(X)$ and $o(X)$ for the right and output wire columns. $\mathcal{P}$ then forms oracles $[l]$, $[r]$ and $[o]$ of these polynomials and sends them to a verifier $\mathcal{V}$.  (The oracle $[p]$ replies with a polynomial evaluation $p(x)$ upon the verifier's query, where $\mathcal{V}$ can choose any $x \in \mathbb{F}$.) With these oracles, $\mathcal{V}$ needs to check that the values encoded in the polynomials underlying the oracles satisfy the circuit with the claimed public input, i.e.,

i. The consistency at gates, i.e., if the $i$-th gate is additive (resp., multiplicative), then $v_{L,i} + v_{R,i} - v_{O,i} = 0$ (resp., $v_{L,i} v_{R,i} - v_{O,i} = 0$) holds.

ii. $\mathcal{P}$ used the honest public input when it computed the polynomials $l(X)$, $r(X)$ and $o(X)$ underlying the oracles.

iii. The consistency of wiring, i.e. duplicated values in the matrix indeed agree. For example, if the output wire of the $i$-th gate is connected to the right input wire of the $j$-th gate, the equality $v_{O,i} = v_{R,j}$ holds.

The conditions (i) and (ii) combined are called gate constraints and the condition (iii) is called a copy constraint. In the rest of the article, we explain how a verifier efficiently checks these constraints.
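Before turning to the constraint checks, the column encoding above can be sketched concretely. The following is a minimal toy example, not the real Plonk setup: it works over a small prime field $\mathbb{F}_{97}$ with $n = 8$, and all variable names and wire values are illustrative.

```python
# Toy sketch: encode a wire column as a polynomial over F_p, where
# H = {w, w^2, ..., w^n = 1} is a multiplicative subgroup of order n.
p = 97                                  # toy prime; real Plonk uses a ~256-bit field
n = 8                                   # number of gates; n divides p - 1 = 96

def order(a):
    x, k = a, 1
    while x != 1:
        x, k = x * a % p, k + 1
    return k

g = next(a for a in range(2, p) if order(a) == p - 1)   # a generator of F_p^*
w = pow(g, (p - 1) // n, p)                             # w has order exactly n

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange(points):
    """Coefficients (low -> high) of the unique polynomial of degree
    < len(points) passing through the given (x, y) pairs, over F_p."""
    m = len(points)
    coeffs = [0] * m
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul(basis, [(-xj) % p, 1])  # multiply by (X - xj)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p            # inverse via Fermat
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):                           # Horner's rule
        acc = (acc * x + c) % p
    return acc

# Encode a toy left-wire column v_L as l(X) with l(w^i) = v_L[i-1].
v_L = [3, 1, 4, 1, 5, 9, 2, 6]
l_poly = lagrange([(pow(w, i + 1, p), v) for i, v in enumerate(v_L)])
assert all(evaluate(l_poly, pow(w, i + 1, p)) == v_L[i] for i in range(n))
```

The same interpolation yields $r(X)$ and $o(X)$ from the other two columns; in practice this is done with an inverse FFT over $H$ rather than the quadratic method shown here.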

In the Plonk PIOP, $\mathcal{P}$ and $\mathcal{V}$ first interact to reduce either type of constraint check to a claim that certain polynomials evaluate to 0 over $H$. Then, $\mathcal{P}$ and $\mathcal{V}$ further interact to verify this claim. Since the last part is common in both types of constraints, let us start with this part.

PIOP for zero-check over $H = \{\omega, \omega^2, ..., \omega^n = 1\}$.  The setup is as follows. A prover $\mathcal{P}$ has a polynomial $F(X) \in \mathbb{F}[X]$. A verifier $\mathcal{V}$ has an oracle $[F]$. $\mathcal{P}$ claims that the polynomial underlying the oracle vanishes over $H$, i.e. $F(h) = 0$ for all $h \in H$.

With a little help from the prover, $\mathcal{V}$ can check this claim without making oracle queries at every point $h \in H$.  The main observation is that $F$ vanishes over $H$ if and only if it is divisible by $X^n - 1$, because $X^n - 1 = \prod_{h \in H}(X - h)$, i.e. if and only if there exists a polynomial $Q(X)$ such that

$$F(X) = (X^n - 1)Q(X). \quad (1)$$

Hence, we have $\mathcal{P}$ compute $Q(X)$ and send the oracle $[Q]$ to $\mathcal{V}$.  Upon receiving it, $\mathcal{V}$ chooses a random point $\mathfrak{z} \in \mathbb{F}$, queries $[F]$ and $[Q]$ for the values $F(\mathfrak{z})$ and $Q(\mathfrak{z})$, and checks the equality

$$F(\mathfrak{z}) = (\mathfrak{z}^n - 1)Q(\mathfrak{z}).$$

Since $n, d \ll |\mathbb{F}|$ (recall that $d$ is the maximum degree of a polynomial underlying an oracle), if the check passes, the equality (1) holds with overwhelming probability.
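The mechanics of the zero-check can be sketched in a few lines. This is a toy sketch over $\mathbb{F}_{97}$ with illustrative values: we build an $F$ known to vanish on $H$, let the "prover" divide it by $X^n - 1$, and let the "verifier" spot-check the identity at one random point.

```python
# Toy zero-check: F vanishes on H  iff  F(X) = (X^n - 1) * Q(X).
import random

p, n = 97, 8
random.seed(0)

# Build an F that vanishes on H by construction: F = (X^n - 1) * R(X).
R = [random.randrange(1, p) for _ in range(n)]
F = [0] * n + R                      # coefficients, low degree -> high
for i, c in enumerate(R):
    F[i] = (F[i] - c) % p

# Prover: divide F by X^n - 1. Subtracting c*X^{i-n}*(X^n - 1) clears
# the coefficient at degree i and adds c back at degree i - n.
Q, rem = [0] * n, F[:]
for i in range(2 * n - 1, n - 1, -1):
    c = rem[i]
    Q[i - n], rem[i] = c, 0
    rem[i - n] = (rem[i - n] + c) % p
assert rem == [0] * (2 * n)          # exact division: F really vanishes on H

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# Verifier: a single random evaluation suffices (Schwartz-Zippel).
z = random.randrange(p)
assert evaluate(F, z) == (pow(z, n, p) - 1) * evaluate(Q, z) % p
```

In the actual protocol the verifier never sees $F$ or $Q$ as coefficient vectors; it only obtains the two evaluations through oracle queries.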

Checking the gate constraints. Let $q_L, q_R, q_M, q_O$ and $q_C \in \mathbb{F}[X]$ be the polynomials of degree less than $n$ such that

  • If the $i$-th gate is additive, $q_L(\omega^i) = q_R(\omega^i) = 1$, $q_O(\omega^i) = -1$ and $q_M(\omega^i) = q_C(\omega^i) = 0$.
  • If the $i$-th gate is multiplicative, $q_M(\omega^i) = 1$, $q_O(\omega^i) = -1$ and $q_L(\omega^i) = q_R(\omega^i) = q_C(\omega^i) = 0$.
  • If the $i$-th gate is a public input gate (i.e. if $i = 1, 2, ..., l$), $q_L(\omega^i) = 1$, $q_R(\omega^i) = q_M(\omega^i) = q_O(\omega^i) = 0$ and $q_C(\omega^i) = -x_i$.

Note that such polynomials exist and are unique.  With these polynomials, the gate constraint check is equivalent to checking that

$$F(X) := q_L(X)l(X) + q_R(X)r(X) + q_M(X)l(X)r(X) + q_O(X)o(X) + q_C(X)$$

vanishes over $H$.

This can be checked by the PIOP for zero-check above. One difference is that $\mathcal{V}$ does not have an oracle $[F]$ itself this time. However, it can still "simulate" $[F]$. Let us explain what we mean by this. Since $q_L, q_R, q_M$ and $q_O$ are determined by the circuit, their evaluation oracles are given to the verifier as input. $\mathcal{P}$ and $\mathcal{V}$ engage in the PIOP for zero-check for $F$. At the end, $\mathcal{V}$ needs an evaluation $F(\mathfrak{z})$. To obtain this, $\mathcal{V}$ queries $[l], [r], [o], [q_L], [q_R], [q_M]$ and $[q_O]$ for the values $l(\mathfrak{z}), r(\mathfrak{z}), o(\mathfrak{z}), q_L(\mathfrak{z}), q_R(\mathfrak{z}), q_M(\mathfrak{z})$ and $q_O(\mathfrak{z})$. $\mathcal{V}$ also computes $q_C(\mathfrak{z})$ by itself, since $q_C$ is determined by the public input.  With these values, it can "simulate" the $[F]$-oracle by setting

$$F(\mathfrak{z}) := q_L(\mathfrak{z})l(\mathfrak{z}) + q_R(\mathfrak{z})r(\mathfrak{z}) + q_M(\mathfrak{z})l(\mathfrak{z})r(\mathfrak{z}) + q_O(\mathfrak{z})o(\mathfrak{z}) + q_C(\mathfrak{z}).$$
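The selector mechanism can be sanity-checked pointwise, before any polynomials enter the picture. Below is a toy three-gate trace with hypothetical wire values; each row lists the selector values at one gate followed by its wire values, and the gate constraint evaluates to zero at every row.

```python
# Pointwise check of the gate constraint at each gate:
#   q_L*l + q_R*r + q_M*l*r + q_O*o + q_C = 0.
p = 97
#        (q_L, q_R, q_M, q_O, q_C,  l,  r,  o)
gates = [(  1,   0,   0,   0,  -5,  5,  0,  0),   # public input gate, x_1 = 5
         (  1,   1,   0,  -1,   0,  5,  2,  7),   # addition: 5 + 2 = 7
         (  0,   0,   1,  -1,   0,  7,  3, 21)]   # multiplication: 7 * 3 = 21
for qL, qR, qM, qO, qC, l, r, o in gates:
    assert (qL * l + qR * r + qM * l * r + qO * o + qC) % p == 0
```

The polynomials $q_L, ..., q_C$ simply interpolate these selector columns over $H$, so $F(X)$ vanishing on $H$ is exactly this pointwise statement at every gate.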

Checking the copy constraint. For simplicity, let us assume for now that the copy constraint is limited within one column, say the left wire column, i.e. the constraint is of the form $v_{L,i} = v_{L,j}$ for a set of pairs $\{(i,j)\}_{i=1,2,...,n}$, where the values $j$ are never repeated. In other words, the map $i \mapsto j$ is a permutation $\sigma : \{1, 2, ..., n\} \rightarrow \{1, 2, ..., n\}$. The verifier's job is to check $v_{L,i} = v_{L,\sigma(i)}$ for all $i = 1, 2, ..., n$ with an oracle $[l]$.  Notice that it is equivalent to checking the equality of sets

$$\{(i, v_{L,i})\}_i = \{(\sigma(i), v_{L,i})\}_i \subset \mathbb{Z} \times \mathbb{F},$$

which is further equivalent to$^5$ checking

$$\{(\omega^i, v_{L,i})\}_i = \{(\omega^{\sigma(i)}, v_{L,i})\}_i \subset \mathbb{F} \times \mathbb{F} \quad (2)$$

$^5$We can alternatively formulate the check as $\{(i, v_{L,i})\} = \{(\sigma(i), v_{L,i})\} \subset \mathbb{F} \times \mathbb{F}$ and proceed. Here, $i$ and $\sigma(i) \in \mathbb{Z}$ are identified with field elements via $\mathbb{Z} \rightarrow \mathbb{Z}/p \cong \mathbb{F}$. However, although it is not clear at this point of discussion, we will end up with a less efficient protocol because a verifier would either need an oracle access to a polynomial $n(X)$ or evaluate $n(X)$ by itself, where $n(X)$ is such that $\deg n(X) < n$ and $n(\omega^i) = i$ for all $i = 1, 2, ..., n$. A similar comment also applies to the general case discussed later where a copy constraint spreads across all wires.

Recall that two sets $S$ and $T \subset \mathbb{F}$ are equal if and only if the two polynomials $\prod_{s \in S}(X - s)$ and $\prod_{t \in T}(X - t)$ are identical. Just as a single element of $\mathbb{F}$ was regarded as a "point" on the $X$-axis (i.e. encoded by a univariate polynomial of degree 1), we may regard an element of $\mathbb{F} \times \mathbb{F}$ as a line in the $XY$-plane, i.e. a bivariate polynomial of degree 1.  The equality (2) is now equivalent to the polynomial identity

$$\prod_{i=1}^n(v_{L,i} + \omega^i X + Y) = \prod_{i=1}^n(v_{L,i} + \omega^{\sigma(i)} X + Y) \in \mathbb{F}[X,Y].$$

(One can formally see this from the fact that $\mathbb{F}[X,Y]$ is a unique factorization domain and all coefficients of $Y$ are 1.)
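The single-column identity above can be spot-checked numerically at a random point $(X, Y) = (\beta, \gamma)$, which is exactly what the protocol will later do. This is a toy sketch with illustrative values: since the copy constraint holds, the two products have the same multiset of factors and agree exactly.

```python
# Toy check of the single-column permutation identity at a random (beta, gamma):
#   prod_i (v_i + w^i*beta + gamma) == prod_i (v_i + w^{sigma(i)}*beta + gamma).
import random

p, n = 97, 4
w = 22                          # 22^2 = -1 (mod 97), so w has order 4
v = [3, 5, 3, 5]                # copies: v[0] = v[2] and v[1] = v[3]
sigma = [2, 3, 0, 1]            # 0-indexed permutation pairing the copies
assert all(v[i] == v[sigma[i]] for i in range(n))

random.seed(1)
beta, gamma = random.randrange(p), random.randrange(p)
lhs = rhs = 1
for i in range(n):
    lhs = lhs * (v[i] + pow(w, i + 1, p) * beta + gamma) % p
    rhs = rhs * (v[i] + pow(w, sigma[i] + 1, p) * beta + gamma) % p
assert lhs == rhs               # same multiset of factors, so products agree
```

If some $v_i \ne v_{\sigma(i)}$, the two sides are distinct polynomials in $(X, Y)$ and the evaluations at a random point differ except with negligible probability.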

Before explaining how to efficiently check this identity, let us remove the assumption we made earlier that the copy constraint involves only the left wire values.  To do this, let us write

$$V = (V_1, V_2, ..., V_{3n})$$

$$= (v_{L,1}, ..., v_{L,n}, v_{R,1}, ..., v_{R,n}, v_{O,1}, ..., v_{O,n}) \in \mathbb{F}^{3n}$$

As before, a copy constraint (across all wires) is now expressed by a permutation $\sigma : \{1, 2, ..., 3n\} \rightarrow \{1, 2, ..., 3n\}$ defined by a circuit. A verifier needs to check $V_i = V_{\sigma(i)}$ for all $i = 1, 2, ..., 3n$ with oracles $[l], [r]$ and $[o]$.  As before, it is equivalent to checking the equality of sets

$$\{(i, V_i)\}_{i=1}^{3n} = \{(\sigma(i), V_i)\}_{i=1}^{3n} \subset \mathbb{Z} \times \mathbb{F}.$$

However, it is no longer equivalent to

$$\{(\omega^i, V_i)\}_{i=1}^{3n} = \{(\omega^{\sigma(i)}, V_i)\}_{i=1}^{3n} \subset \mathbb{F} \times \mathbb{F} \quad (3)$$

because $\omega$ has order $n$ and hence the first entries in the pairs repeat. A fix is to re-define the way integers $i \in \{1, 2, ..., 3n\}$ are identified with field elements. Instead of $i \mapsto \omega^i$, we agree to use the following injective map. Choose $k_1 \in \mathbb{F}^* \backslash H$ and $k_2 \in \mathbb{F}^* \backslash (H \cup k_1 H)$, i.e. $k_1 H$ and $k_2 H$ are distinct cosets of $H$.  (Such $k_1$ and $k_2$ exist if $3|H| \le |\mathbb{F}^*|$, which is always the case in practice.)

  • If $1 \le i \le n, i \mapsto \omega^i$.
  • If $n + 1 \le i \le 2n, i \mapsto k_1 \omega^i$.
  • If $2n + 1 \le i \le 3n, i \mapsto k_2 \omega^i$.

Let $\sigma_*$ denote the composition of $\sigma$ with this map, i.e. $\sigma_*(i)$ is the field element assigned to $\sigma(i)$. The corrected version of the check (3) is now equivalent to

$$\{(\omega^i, V_i)\}_{i=1}^n \cup \{(k_1\omega^i, V_i)\}_{i=1}^n \cup \{(k_2\omega^i, V_i)\}_{i=1}^n$$

$$= \{(\sigma_*(i), V_i)\}_{i=1}^n \cup \{(\sigma_*(i+n), V_i)\}_{i=1}^n \cup \{(\sigma_*(i+2n), V_i)\}_{i=1}^n.$$

Further writing $V_i$ explicitly with left, right and output wire values and noting that $\omega$ has order $n$, we obtain

$$\{(\omega^i, v_{L,i})\}_{i=1}^n \cup \{(k_1\omega^i, v_{R,i})\}_{i=1}^n \cup \{(k_2\omega^i, v_{O,i})\}_{i=1}^n$$

$$= \{(\sigma_*(i), v_{L,i})\}_{i=1}^n \cup \{(\sigma_*(i+n), v_{R,i})\}_{i=1}^n \cup \{(\sigma_*(i+2n), v_{O,i})\}_{i=1}^n.$$

As before, checking this equality is equivalent to checking the following polynomial identity

$$\prod_{i=1}^n(v_{L,i} + \omega^i X + Y)(v_{R,i} + k_1\omega^i X + Y)(v_{O,i} + k_2\omega^i X + Y)$$

$$= \prod_{i=1}^n(v_{L,i} + \sigma_*(i) X + Y)(v_{R,i} + \sigma_*(i+n) X + Y)(v_{O,i} + \sigma_*(i+2n) X + Y). \quad (4)$$

Sampling random challenges $\beta, \gamma \in \mathbb{F}$, if the evaluations of both sides at $(X,Y) = (\beta, \gamma)$ agree, we may conclude by the usual Schwartz-Zippel argument that the above identity holds except with negligible probability. The problem is how to enable the verifier to perform this check efficiently with the oracles $[l], [r]$ and $[o]$.

The following protocol (called the grand product argument) reduces this check to a claim that certain polynomials vanish over $H$.  $\mathcal{V}$ sends random challenges $\beta, \gamma \in \mathbb{F}$. Receiving these, $\mathcal{P}$ computes a polynomial $Z(X) \in \mathbb{F}[X]$ of degree less than $n$ such that $Z(\omega) = 1$ and

$$Z(\omega^i) = \prod_{1 \le j < i} \frac{(v_{L,j} + \omega^j \beta + \gamma)(v_{R,j} + k_1 \omega^j \beta + \gamma)(v_{O,j} + k_2 \omega^j \beta + \gamma)}{(v_{L,j} + \sigma_*(j) \beta + \gamma)(v_{R,j} + \sigma_*(j+n) \beta + \gamma)(v_{O,j} + \sigma_*(j+2n) \beta + \gamma)}$$

for $i = 2, 3, ..., n$. $\mathcal{P}$ sends an oracle $[Z]$ to $\mathcal{V}$.  It is now enough for $\mathcal{V}$ to check that $Z(\omega) = 1$ and that the equalities

$$Z(\omega^i)((v_{L,i} + \omega^i \beta + \gamma)(v_{R,i} + k_1 \omega^i \beta + \gamma)(v_{O,i} + k_2 \omega^i \beta + \gamma))$$

$$= Z(\omega^{i+1})(v_{L,i} + \sigma_*(i) \beta + \gamma)(v_{R,i} + \sigma_*(i+n) \beta + \gamma)(v_{O,i} + \sigma_*(i+2n) \beta + \gamma)$$

for $i = 1, 2, ..., n$. Indeed, by taking the product of these equalities over $i = 1, 2, ..., n$, one can immediately see that these equalities imply that the equality (4) holds at $(X,Y) = (\beta, \gamma)$ as desired.  (We use $\omega^n = 1$ here.)
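The running product $Z$ and its telescoping can be traced concretely. This is a toy sketch using the single-column simplification from earlier (illustrative values; $\beta, \gamma$ fixed rather than sampled, standing in for the verifier's challenges): each step multiplies one numerator/denominator ratio into $Z$, and because the full products agree, $Z$ wraps around to 1.

```python
# Toy grand product argument (single-column simplification).
p, n = 97, 4
w = 22                              # order-4 element of F_97^*
v = [3, 5, 3, 5]                    # copy constraint: v[0]=v[2], v[1]=v[3]
sigma = [2, 3, 0, 1]
beta, gamma = 7, 13                 # stand-ins for the verifier's challenges

def num(i):                         # i-th numerator factor
    return (v[i] + pow(w, i + 1, p) * beta + gamma) % p

def den(i):                         # i-th denominator factor
    return (v[i] + pow(w, sigma[i] + 1, p) * beta + gamma) % p

Z = [1]                             # Z(w) = 1
for i in range(n):
    Z.append(Z[-1] * num(i) % p * pow(den(i), p - 2, p) % p)

# The per-gate recurrence the verifier checks over H:
assert all(Z[i + 1] * den(i) % p == Z[i] * num(i) % p for i in range(n))
# Telescoping: the two full products agree, so Z wraps around to 1.
assert Z[n] == 1
```

In full Plonk each step carries three factors (left, right, output wires) instead of one, but the telescoping argument is identical.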

Fortunately, these equalities are equivalent to the following polynomial vanishing claims, where $L_1(X)$ denotes the Lagrange basis polynomial determined by $L_1(\omega) = 1$ and $L_1(\omega^i) = 0$ for $i = 2, 3, ..., n$:

i. $L_1(X)(Z(X) - 1)$ vanishes on $H$.

ii. $Z(X)((l(X) + X \beta + \gamma)(r(X) + k_1 X \beta + \gamma)(o(X) + k_2 X \beta + \gamma))$

$- Z(\omega X)(l(X) + s_1(X) \beta + \gamma)(r(X) + s_2(X) \beta + \gamma)(o(X) + s_3(X) \beta + \gamma)$

vanishes on $H$, where $s_1(X), s_2(X), s_3(X)$ are the polynomials that respectively encode $\sigma_*(i), \sigma_*(i+n), \sigma_*(i+2n)$ by Lagrange interpolation.

Since $s_1(X), s_2(X), s_3(X)$ are determined by the circuit, the oracles $[s_1], [s_2], [s_3]$ are given to the verifier as part of the input. Since $\mathcal{P}$ provided $\mathcal{V}$ with $[Z], [l], [r]$ and $[o]$, $\mathcal{V}$ has enough oracles to engage in the PIOP for zero-check to verify these two claims.  (Note that $\mathcal{V}$ needs to query $Z(\omega \mathfrak{z})$ in addition to $Z(\mathfrak{z})$ because the second polynomial contains the term $Z(\omega X)$.)

In Plonk, these two polynomial vanishing checks and the one for the gate constraints are batched using a random challenge for efficiency. For further details and a comprehensive account, the reader is referred to the original paper [GWC19].
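The batching step rests on a simple fact, which the following toy sketch illustrates with hypothetical polynomials over $\mathbb{F}_{97}$: if $F_1$ and $F_2$ both vanish on $H$, then so does $F_1 + \alpha F_2$ for any challenge $\alpha$ (and, conversely, a random $\alpha$ catches a cheating prover with overwhelming probability), so one quotient and one zero-check cover both claims.

```python
# Toy sketch of batching two vanishing checks with one challenge alpha.
import random

p, n = 97, 8
random.seed(3)

def vanishing_multiple(r):          # F = (X^n - 1) * R(X), vanishes on H
    f = [0] * n + r
    for i, c in enumerate(r):
        f[i] = (f[i] - c) % p
    return f

F1 = vanishing_multiple([random.randrange(p) for _ in range(n)])
F2 = vanishing_multiple([random.randrange(p) for _ in range(n)])
alpha = random.randrange(p)
F = [(a + alpha * b) % p for a, b in zip(F1, F2)]   # batched polynomial

# Divide by X^n - 1; an exact division confirms F also vanishes on H.
Q, rem = [0] * n, F[:]
for i in range(2 * n - 1, n - 1, -1):
    c = rem[i]
    Q[i - n], rem[i] = c, 0
    rem[i - n] = (rem[i - n] + c) % p
assert rem == [0] * (2 * n)
```

The prover thus sends a single quotient oracle for the batched polynomial instead of one per constraint.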

Final thoughts

For Solana-native applications, and especially for perp DEX infrastructure where latency, determinism, and bounded state transitions matter, PLONK is not just a privacy primitive but a scalability primitive.

For perp DEXes on Solana, this matters at the engine level. Order matching, funding rate updates, liquidation logic, and cross-margin risk calculations are computationally dense and often state-heavy.

Solana enforces transaction size and compute ceilings by design, and complex multi-step flows can run up against those ceilings, especially when interacting with composable Solana programs or EVM environments like Neon.

PLONK-based proofs allow a perp engine to execute sophisticated off-chain state transitions, then commit the outcome on-chain as a succinct, verifiable claim. This enables higher-frequency matching, batched liquidations, cross-program interactions, and even intent-based execution models without bloating on-chain logic or breaking atomicity guarantees.

In practical terms: PLONK lets Solana perp DEXes scale complexity without sacrificing determinism, composability, or throughput. It becomes the cryptographic glue between high-performance off-chain engines and Solana’s parallel, resource-bounded execution model.

References

[FFR24] Faonio, A., Fiore, D., Russo, L., Real-world universal zk-SNARKs are non-malleable, Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 3138 – 3151, 2024, Association for Computing Machinery, New York, NY.

[FS26] Fuchsbauer, G., Sefranek, M., Plonk without random oracles, preprint. Available at https://eprint.iacr.org/2026/200.

[GWC19] Gabizon, A., Williamson, Z. J., Ciobotaru, O., Plonk: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge, preprint, 2019. Available at https://eprint.iacr.org/2019/953.

[Gro16] Groth, J., On the Size of Pairing-based Non-interactive Arguments, Advances in Cryptology – Eurocrypt 2016, Proceedings Part II, 305 – 326, 2016.

[LPS25] Lipmaa, H., Parisella, R., Siim, J., On Knowledge-Soundness of Plonk in ROM from Falsifiable Assumptions, Advances in Cryptology – CRYPTO 2025, Lecture Notes in Computer Science, vol 16006, 362 – 395, 2025, Springer, Cham.

[Lip26] Lipmaa, H., Plonk is simulation extractable in ROM under falsifiable assumptions, Theory of Cryptography – TCC 2025, Lecture Notes in Computer Science, vol 16271, 3 – 36, 2026, Springer, Cham.

[MBKM19] Maller, M., Bowe, S., Kohlweiss, M., Meiklejohn, S., Sonic: Zero-knowledge SNARKs from linear-size universal and updatable structured reference strings, Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2111 – 2128, 2019.

[Sef24] Sefranek, M., How (Not) to Simulate PLONK, Security and Cryptography for Networks, Lecture Notes in Computer Science, vol 14973, 96 – 117, 2024, Springer, Cham.

