The Machinery Behind Zero Knowledge: Pairings, Constraints, and Polynomials

Imagine you work at a bank. A customer walks up to your counter and says: “I know the PIN to this account. Give me access.”

Your job is not to learn the PIN. Your job is to confirm that they know it. The moment they whisper it to you, the secret travels — and now it lives in your head too, which is a problem. You could be coerced. You could make a mistake and repeat it. The whole point of a secret is that fewer people know it.

Now imagine the customer could prove they know the PIN without ever saying it out loud. No hints. No partial reveals. Just a mathematical handshake that ends with you being completely convinced.

That is the promise of zero knowledge proofs. And today we are going to look at the actual machinery that makes that promise work: Rank One Constraint Systems, Bilinear Pairings, Lagrange Interpolation, and how they connect into a functioning proof.

Part 1: The Problem — How Do You Check a Computation You Cannot See?

Before we talk about solutions, we need to be precise about the problem.

Say a prover claims they know values \(x\) and \(y\) such that:

\(\displaystyle z = x^2 \cdot y\)

and \(z = 18\). They want to convince a verifier of this without revealing \(x\) or \(y\).

The verifier cannot just ask “what is \(x\)?” — that defeats the purpose. The verifier also cannot blindly trust the prover. So what can they do?

The answer has three ingredients. First, translate the computation into a structured system of constraints. Second, encode those constraints into polynomials. Third, check the polynomial relationship using encrypted values — without decrypting them. Each of those steps maps to one of the four topics in this post.

Part 2: Rank One Constraint Systems — Flattening a Computation

A program is a mess. It has branches, loops, power operations, additions, multiplications — all tangled together. Before you can do anything cryptographic with a computation, you need to flatten it into a standard form that every step of the proof pipeline can work with.

That standard form is the Rank One Constraint System, or R1CS.

The core rule of an R1CS is simple: every constraint must have exactly one multiplication. Addition is free; multiplication costs exactly one constraint.

Take our earlier example:

\(\displaystyle z = x^2 \cdot y\)

This has two multiplications (first \(x \cdot x\), then multiplying the result by \(y\)), so we split it:

\(\displaystyle v_1 = x \cdot x\)

\(\displaystyle z = v_1 \cdot y\)

Two constraints, each with exactly one multiplication. That is the R1CS discipline.
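
To make the discipline concrete, here is the flattened computation as straight-line Python: a minimal sketch with one multiplication per assignment, using the sample values \(x = 3, y = 2\) that we will reuse below.

```python
# The flattened circuit as straight-line code: each assignment
# performs exactly one multiplication, mirroring one R1CS constraint.
x, y = 3, 2
v1 = x * x       # constraint 1: v1 = x * x
z = v1 * y       # constraint 2: z = v1 * y
assert z == x**2 * y == 18
```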

The Witness Vector

Now the prover needs to demonstrate they actually know values that satisfy these constraints — not just claim it. This is where the witness vector comes in.

The witness is a vector \(\mathbf{a}\) containing every value that appears anywhere in the computation: inputs, outputs, and all intermediate variables. By convention it always starts with \(1\):

\(\displaystyle \mathbf{a} = [1,\ z,\ x,\ y,\ v_1]\)

For \(x = 3,\ y = 2\), a valid witness is:

\(\displaystyle \mathbf{a} = [1,\ 18,\ 3,\ 2,\ 9]\)

You can verify: \(9 = 3 \cdot 3\) and \(18 = 9 \cdot 2\). The witness is the prover’s complete execution trace. Knowing it means knowing every intermediate step, not just the answer.

The Matrix Encoding

Each constraint gets encoded into three matrices — \(\mathbf{L}\), \(\mathbf{R}\), and \(\mathbf{O}\) — which represent the left side, right side, and output of each multiplication. The full system is:

\(\displaystyle \mathbf{L}\mathbf{a} \circ \mathbf{R}\mathbf{a} = \mathbf{O}\mathbf{a}\)

where \(\circ\) is the element-wise (Hadamard) product. Each row of these matrices corresponds to one constraint, and each column corresponds to one element of the witness vector.
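
Here is that check as a minimal NumPy sketch for our two-constraint system, over plain integers for readability (a real SNARK works in a finite field). The column order follows the witness \([1, z, x, y, v_1]\):

```python
import numpy as np

# Witness for x = 3, y = 2: [1, z, x, y, v1]
a = np.array([1, 18, 3, 2, 9])

# One row per constraint, one column per witness entry.
L = np.array([[0, 0, 1, 0, 0],    # left input of constraint 1:  x
              [0, 0, 0, 0, 1]])   # left input of constraint 2:  v1
R = np.array([[0, 0, 1, 0, 0],    # right input of constraint 1: x
              [0, 0, 0, 1, 0]])   # right input of constraint 2: y
O = np.array([[0, 0, 0, 0, 1],    # output of constraint 1:      v1
              [0, 1, 0, 0, 0]])   # output of constraint 2:      z

# La o Ra = Oa, where o is the element-wise (Hadamard) product.
assert np.array_equal((L @ a) * (R @ a), O @ a)
```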

Why does this matter? Because this matrix-vector form is what the rest of the ZKP pipeline consumes. R1CS is the universal translator between “a computation someone claims to have done” and “a structured object we can reason about cryptographically.”

Part 3: Lagrange Interpolation — Turning Vectors Into Polynomials

Here is the next question: we have these matrices \(\mathbf{L}\), \(\mathbf{R}\), \(\mathbf{O}\) and a witness vector \(\mathbf{a}\). These are just columns of numbers right now. How do we compress all of that into something a verifier can check with a single operation?

The answer is to convert each column of the matrices into a polynomial, using Lagrange interpolation.

Lagrange interpolation solves a simple geometric problem: given \(n\) points, find the unique polynomial of degree at most \(n - 1\) that passes through all of them.

If you give me two points, there is exactly one line through them. Three points, exactly one parabola (at most degree two). And so on. The remarkable fact — and it is genuinely remarkable — is that this unique polynomial always exists, as long as the \(x\)-coordinates of the points are distinct.

The Formal Construction

Given \(n\) points \((x_1, y_1), \dots, (x_n, y_n)\) with all \(x_i\) distinct, the Lagrange polynomial is:

\(\displaystyle p(x) = \sum_{i=1}^{n} y_i \cdot \ell_i(x)\)

where each basis polynomial \(\ell_i(x)\) is defined as:

\(\displaystyle \ell_i(x) = \prod_{j=1,\ j \neq i}^{n} \frac{x - x_j}{x_i - x_j}\)

The key property of \(\ell_i\) is that it equals \(1\) at \(x = x_i\) and \(0\) at every other \(x_j\). So \(p(x_i) = y_i\) for every \(i\) — the polynomial hits every point exactly.
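
As a sketch, the construction translates almost line for line into Python; exact rational arithmetic keeps the numbers clean, whereas a real SNARK would do all of this in a finite field:

```python
from fractions import Fraction

def lagrange_interpolate(points):
    """Return p(x) through all (x_i, y_i), built from Lagrange basis polynomials."""
    def p(x):
        total = Fraction(0)
        for i, (x_i, y_i) in enumerate(points):
            # ell_i(x) is 1 at x_i and 0 at every other x_j.
            ell_i = Fraction(1)
            for j, (x_j, _) in enumerate(points):
                if j != i:
                    ell_i *= Fraction(x - x_j, x_i - x_j)
            total += y_i * ell_i
        return total
    return p

p = lagrange_interpolate([(1, 1), (2, 4), (3, 9)])
assert p(4) == 16   # the unique degree-2 polynomial through these points is x^2
```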

Why This Matters for ZKP

In the R1CS context, we treat each column of \(\mathbf{L}\) as a list of \(y\)-values sitting at the points \(x = 1, 2, \dots, n\) (one per constraint). Lagrange interpolation turns that column into a polynomial. Do this for every column of \(\mathbf{L}\), \(\mathbf{R}\), and \(\mathbf{O}\), and then combine them using the witness \(\mathbf{a}\) as coefficients:

\(\displaystyle U(x) = \sum_{i=1}^{m} a_i \cdot L_i(x), \quad V(x) = \sum_{i=1}^{m} a_i \cdot R_i(x), \quad W(x) = \sum_{i=1}^{m} a_i \cdot O_i(x)\)

If the witness truly satisfies all constraints, then at every integer point \(x = 1, \dots, n\):

\(\displaystyle U(x) \cdot V(x) - W(x) = 0\)

Which means \(U(x) \cdot V(x) - W(x)\) is divisible by the vanishing polynomial:

\(\displaystyle Z(x) = (x - 1)(x - 2) \cdots (x - n)\)

So there exists a quotient polynomial \(H(x)\) such that:

\(\displaystyle U(x) \cdot V(x) - W(x) = H(x) \cdot Z(x)\)
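
Here is the whole pipeline run end to end for our running example, as a sketch using sympy for exact polynomial arithmetic (again, a real system works modulo a large prime):

```python
# QAP sketch for the two-constraint example. Columns are read as
# y-values at x = 1, 2, ..., n and interpolated into polynomials.
from sympy import symbols, interpolate, div, expand

x = symbols("x")
a = [1, 18, 3, 2, 9]   # witness [1, z, x, y, v1]

# The matrices from Part 2: one row per constraint.
L = [[0, 0, 1, 0, 0], [0, 0, 0, 0, 1]]
R = [[0, 0, 1, 0, 0], [0, 0, 0, 1, 0]]
O = [[0, 0, 0, 0, 1], [0, 1, 0, 0, 0]]

def column_poly(M, j):
    # sympy's interpolate([y1, y2, ...], x) uses x = 1, 2, ... as the points.
    return interpolate([row[j] for row in M], x)

m = len(a)
U = sum(a[j] * column_poly(L, j) for j in range(m))   # 6x - 3
V = sum(a[j] * column_poly(R, j) for j in range(m))   # 4 - x
W = sum(a[j] * column_poly(O, j) for j in range(m))   # 9x

Z = (x - 1) * (x - 2)   # vanishing polynomial for n = 2 constraints
H, remainder = div(expand(U * V - W), expand(Z), x)
assert remainder == 0   # H turns out to be the constant -6
```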

That single identity, \(U \cdot V - W = H \cdot Z\), encodes all \(n\) constraints at once. The verification problem has collapsed from checking \(n\) separate matrix equations into checking one polynomial identity. But the verifier still cannot see the prover’s polynomials directly — otherwise the witness is exposed. This is where the final piece comes in.

Part 4: Bilinear Pairings — Checking Encrypted Multiplications

We have a polynomial identity the prover needs to convince the verifier holds, without showing the polynomials themselves. The plan is for the prover to evaluate their polynomials at some secret point \(\tau\) — a point chosen by a trusted setup — and send the encrypted results to the verifier.

Encryption here means mapping scalars to elliptic curve points. Given a generator \(G\) of an elliptic curve group, the encryption of a scalar \(a\) is simply:

\(\displaystyle E(a) = a \cdot G\)

This is one-way: knowing \(E(a)\) and \(G\) does not let you recover \(a\) without solving the discrete logarithm problem. So the prover can hand over \(E(U(\tau))\), \(E(V(\tau))\), \(E(W(\tau))\) with no risk of leaking the witness.
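
As a sketch of this encoding, assuming the py_ecc library (installable with pip install py_ecc) and its bn128 pairing-friendly curve:

```python
# Encoding a scalar as an elliptic curve point with py_ecc's bn128 curve.
from py_ecc.bn128 import G1, multiply

a = 42
E_a = multiply(G1, a)   # E(a) = a * G1, just a point on the curve
# Recovering a from E_a and G1 is the discrete logarithm problem,
# believed intractable at this curve's size.
```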

But now the verifier has a problem. They want to check:

\(\displaystyle U(\tau) \cdot V(\tau) - W(\tau) = H(\tau) \cdot Z(\tau)\)

Multiplication of scalars is easy. But how do you multiply elliptic curve points? You cannot — ordinary elliptic curve arithmetic has no multiplication between points. \(E(a) \cdot E(b) = E(ab)\) simply does not exist in a standard group.

This is exactly the gap that bilinear pairings fill.

What a Bilinear Pairing Actually Is

A bilinear pairing is a map:

\(\displaystyle e: \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T\)

that takes one point from each of two source groups (\(\mathbb{G}_1\) and \(\mathbb{G}_2\)) and maps them into a target group \(\mathbb{G}_T\). The critical property — the one that makes everything work — is bilinearity:

\(\displaystyle e(a \cdot P,\ b \cdot Q) = e(P, Q)^{ab}\)

for any scalars \(a, b\) and any points \(P \in \mathbb{G}_1\), \(Q \in \mathbb{G}_2\).

In plain terms: the pairing of two encrypted values equals the encryption of their product. You can check whether \(ab = c\) without knowing \(a\), \(b\), or \(c\):

\(\displaystyle e(a \cdot G_1,\ b \cdot G_2) = e(G_1, G_2)^{ab} = e(c \cdot G_1,\ G_2)\)

This is the property that the bank customer needed all along. The verifier gets \(E(a)\) and \(E(b)\) — just elliptic curve points — and can confirm \(ab = c\) by computing one pairing on each side and comparing the results in \(\mathbb{G}_T\).
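
Here is that check as a py_ecc sketch; note that py_ecc's pairing function takes its \(\mathbb{G}_2\) argument first (an API detail of that library):

```python
# Verifying a*b == c from encrypted values only.
from py_ecc.bn128 import G1, G2, multiply, pairing

a, b = 4, 7
c = a * b   # the claimed product

lhs = pairing(multiply(G2, b), multiply(G1, a))   # e(a*G1, b*G2) = e(G1,G2)^(ab)
rhs = pairing(G2, multiply(G1, c))                # e(c*G1, G2)  = e(G1,G2)^c
assert lhs == rhs   # holds exactly when ab == c (mod the curve order)
```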

Why One Multiplication Per Constraint

Notice something: the output of a pairing lives in \(\mathbb{G}_T\), not in \(\mathbb{G}_1\) or \(\mathbb{G}_2\). You cannot feed a \(\mathbb{G}_T\) element back into another pairing. This means you get exactly one multiplication per pairing call — which is precisely why the R1CS format, with its exactly-one-multiplication-per-constraint discipline, was designed the way it was. The constraint format and the cryptographic tool were built to fit each other.

The Verification Equation

In the ZKP for an R1CS, the verifier needs to check:

\(\displaystyle e\!\left(\sum_i a_i L_i(\tau) \cdot G_1,\ \sum_i a_i R_i(\tau) \cdot G_2\right) = e\!\left(\sum_i a_i O_i(\tau) \cdot G_1 + H(\tau) Z(\tau) \cdot G_1,\ G_2\right)\)

The prover sends the elliptic curve point encodings of those polynomial evaluations. The verifier runs one pairing comparison. If it holds, the proof passes. The verifier never sees \(\tau\), never sees the witness, never sees the individual constraint values. They see four points and compute two pairings.
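
Gluing the earlier sketches together, the full check for our running example looks like this. One big caveat: here the prover evaluates the polynomials at \(\tau\) directly (reading \(U = 6x - 3\), \(V = 4 - x\), \(W = 9x\), \(H = -6\) off the sympy sketch), whereas in a real system the prover only ever sees encrypted powers of \(\tau\) and never \(\tau\) itself:

```python
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order

tau = 123456789   # the trusted setup's secret point (a toy value here)

# Polynomial evaluations at tau, reduced into the scalar field:
U_t  = (6 * tau - 3) % curve_order
V_t  = (4 - tau) % curve_order
W_t  = (9 * tau) % curve_order
HZ_t = (-6 * (tau - 1) * (tau - 2)) % curve_order   # H(tau) * Z(tau)

A = multiply(G1, U_t)   # E(U(tau)) in G1
B = multiply(G2, V_t)   # E(V(tau)) in G2
C = add(multiply(G1, W_t), multiply(G1, HZ_t))   # E(W(tau) + H(tau)Z(tau)) in G1

# e(A, B) = e(G1,G2)^(U*V) and e(C, G2) = e(G1,G2)^(W + HZ), so the two
# pairings agree exactly when the QAP identity holds at tau.
assert pairing(B, A) == pairing(G2, C)   # the proof verifies
```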

Part 5: The Full Picture

Let us retrace the journey with the bank customer in mind.

The customer knows a secret: values \(x\) and \(y\) that satisfy some computation. Here is how the proof is built:

Step 1 — R1CS: The computation is expressed as a system of one-multiplication-per-row constraints, encoded as matrices \(\mathbf{L}\), \(\mathbf{R}\), \(\mathbf{O}\). The customer’s knowledge is captured in a witness vector \(\mathbf{a}\).

Step 2 — Lagrange interpolation: Each column of \(\mathbf{L}\), \(\mathbf{R}\), \(\mathbf{O}\) is lifted into a polynomial. The witness values weight these polynomials into three combined polynomials \(U(x)\), \(V(x)\), \(W(x)\). The customer also computes \(H(x)\) such that \(U \cdot V - W = H \cdot Z\).

Step 3 — Bilinear pairings: The trusted setup provides encrypted powers of \(\tau\). The customer evaluates their polynomials at \(\tau\) inside the encryption, producing a handful of elliptic curve points. The bank teller (verifier) runs the pairing check on those points. The verification either passes or fails — no secret ever crosses the counter.

Closing Thought

What makes this whole construction beautiful is how tightly the pieces interlock. R1CS is not an arbitrary format — it is exactly what bilinear pairings can verify, one multiplication at a time. Lagrange interpolation is not an arbitrary choice — it is the unique, compression-friendly bridge between constraint systems and polynomial identities. Bilinear pairings are not an arbitrary tool — they are the only known efficient way to check encrypted multiplications on elliptic curves.

Every design decision feeds the next. That is the signature of a mature mathematical theory: not a pile of tools, but a machine where nothing is wasted.
