Tools for Reed-Solomon Codes (2/5)

At the heart of low-degree testing are a few basic tools and concepts which enable us to transform a code $f$ or list of codes $f_i$ into a low-degree polynomial $p$ and test proximity to the original code. Taken as a whole, these tools represent the fundamental mathematical building blocks for working with RS-codes.

Correlated Agreement

This theorem states that if, for many choices of $\lambda$, a random linear combination of two or more functions is $\delta$-close to a Reed-Solomon codeword, then the original functions must themselves be close to codewords. Specifically, we want to know whether the following holds:

$$\delta(f_0 + \lambda \cdot f_1, RS[\mathbb{F}, L, d]) \le \theta$$

The theorem is laid out in full in [BSCI20] and stated as follows.

Theorem 1. For $RS[\mathbb{F}, L, d]$ with rate $p = \frac{k}{|L|}$, given a proximity parameter $\theta \in (0, 1 - \sqrt{p})$ and words $f_0, \dots, f_{d-1} \in \mathbb{F}^{L}$ for which

$$\frac{|\{\lambda \in \mathbb{F} : \delta(f_0 + \lambda \cdot f_1 + \dots + \lambda^{d-1} \cdot f_{d-1}, RS[\mathbb{F}, L, d]) \le \theta\}|}{|\mathbb{F}|} > \epsilon$$

where $\epsilon$ is the error bound, there exist polynomials $p_0(X), \dots, p_{d-1}(X)$ belonging to $RS[\mathbb{F}, L, d]$ and a set $A \subseteq L$ of density $\frac{|A|}{|L|} \ge 1 - \theta$ on which $f_0, \dots, f_{d-1}$ coincide with $p_0, \dots, p_{d-1}$. Specifically:

$$\delta(f_0 + \lambda \cdot f_1 + \dots + \lambda^{d-1} \cdot f_{d-1}, RS[\mathbb{F}, L, d]) \le \theta$$

for every value of $\lambda \in \mathbb{F}$.

Correlated agreement is extremely useful for building reduction arguments. Say we have a list of words $f_0, \dots, f_i$ and a code $V := RS[\mathbb{F}, L, d]$, and we would like to verify that they are all close to $V$. Correlated agreement enables us to take a random linear combination $f$ of the $f_i$ and prove its proximity to $V$: if $f$ is close to $V$, then every $f_i$ must also be close to $V$.
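To make the batching step concrete, here is a toy sketch with hypothetical parameters not taken from the text ($p = 17$, $|L| = 8$, degree bound 3, and a brute-force nearest-codeword distance). It shows that a random linear combination of two $\delta$-close words stays close to the code, since errors in the combination can only occur where one of the originals erred:

```python
import itertools
import random

p = 17                              # toy prime field (illustrative assumption)
L = [1, 3, 5, 7, 9, 11, 13, 15]     # evaluation domain, |L| = 8
d = 3                               # codewords: evaluations of deg < 3 polys

def poly_eval(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs)) % p

def codewords():
    for coeffs in itertools.product(range(p), repeat=d):
        yield [poly_eval(coeffs, x) for x in L]

def delta(f):
    """Relative Hamming distance from f to the nearest codeword (brute force)."""
    return min(sum(a != b for a, b in zip(f, cw)) for cw in codewords()) / len(L)

# f0, f1 are codewords with one corrupted position each (delta-close to the code)
f0 = [poly_eval([2, 4, 1], x) for x in L]; f0[0] = (f0[0] + 1) % p
f1 = [poly_eval([5, 0, 3], x) for x in L]; f1[3] = (f1[3] + 2) % p

lam = random.randrange(p)
combo = [(a + lam * b) % p for a, b in zip(f0, f1)]
# combo differs from the codeword p0 + lam*p1 only at positions 0 and 3,
# so its distance to the code is at most 2/8.
```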

Correlated agreement will power our next technique, polynomial folding, and also enables us to efficiently batch polynomials.

:::info See the appendix for how we derive the soundness error bound $\epsilon$; this derivation is closely linked to the specific decoding regime used. :::

Polynomial Folding

At the heart of protocols like FRI and STIR is a reduction operation in which a polynomial of degree $d$ is step-wise reduced, via random folding, to a polynomial of degree $d/k$ using a folding parameter $k$. To achieve this, we “wrap” the polynomial around a random folding challenge $\alpha$ supplied by the verifier. This preserves the function’s proximity to the Reed-Solomon code while reducing the degree.

First we give two important facts for a polynomial $\hat{q} \in \mathbb{F}[X]$:

Fact 1. For every $\hat{f} \in \mathbb{F}[X]$ there exists a unique bivariate polynomial $\hat{Q} \in \mathbb{F}[X, Y]$ with $\deg_X(\hat{Q}) = \lfloor \deg(\hat{f}) / \deg(\hat{q}) \rfloor$ and $\deg_Y(\hat{Q}) < \deg(\hat{q})$ so that $\hat{f}(Z) = \hat{Q}(\hat{q}(Z), Z)$.

Note: $\hat{Q}$ can be computed efficiently given $\hat{f}$ and $\hat{q}$. Observe that if $\deg(\hat{f}) < t \cdot \deg(\hat{q})$ then $\deg_X(\hat{Q}) < t$.

Fact 2. For every $\hat{Q} \in \mathbb{F}[X, Y]$ with $\deg_X(\hat{Q}) < t$ and $\deg_Y(\hat{Q}) < \deg(\hat{q})$, the polynomial $\hat{f}(Z) := \hat{Q}(\hat{q}(Z), Z)$ has degree $\deg(\hat{f}) < t \cdot \deg(\hat{q})$.
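Fact 1 is easy to see concretely for $\hat{q}(X) = X^k$: the decomposition simply regroups the coefficients of $\hat{f}$ in base $X^k$. A minimal sketch (plain integer coefficients and hypothetical helper names, for illustration only):

```python
# Decompose f_hat into Q_hat with q_hat(X) = X^k, as in Fact 1.
# Coefficient j of f_hat becomes coefficient of X^(j // k) * Y^(j % k) in Q_hat.

def bivariate_decompose(f_coeffs, k):
    """Return Q_hat as {(i, l): coeff} such that f_hat(Z) = Q_hat(Z^k, Z)."""
    return {(j // k, j % k): c for j, c in enumerate(f_coeffs) if c != 0}

def eval_bivariate(Q, x, y):
    return sum(c * x**i * y**l for (i, l), c in Q.items())

f = [7, 0, 2, 5, 1, 3]      # f_hat(Z) = 7 + 2Z^2 + 5Z^3 + Z^4 + 3Z^5
k = 2
Q = bivariate_decompose(f, k)

# Check f_hat(z) = Q_hat(z^k, z) at a sample point
z = 4
assert eval_bivariate(Q, z**k, z) == sum(c * z**j for j, c in enumerate(f))
# deg_X(Q) = floor(deg(f)/k) = 2 and deg_Y(Q) < k, matching Fact 1.
```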

We define the folding of a polynomial as:

Definition 1.2. Given a polynomial $\hat{f} \in \mathbb{F}^{<d}[X]$, a folding factor $k \in \mathbb{N}$, and a folding challenge $\alpha \in \mathbb{F}$, we define the polynomial $PolyFold(\hat{f}, k, \alpha) \in \mathbb{F}^{<d/k}[X]$ as follows. Let $\hat{Q} \in \mathbb{F}[X, Y]$ be the bivariate polynomial derived from $\hat{f}$ with $\hat{q}(X) := X^k$. Then we define $PolyFold$ as:

$$PolyFold(\hat{f}, k, \alpha)(X) := \hat{Q}(X, \alpha)$$

And we define the folding of a function as:

Definition 1.3. Let $f : L \rightarrow \mathbb{F}$ be a function, $k \in \mathbb{N}$ a folding factor, and $\alpha \in \mathbb{F}$ a folding challenge. For every $x \in L^k$, let $\hat{p}_x \in \mathbb{F}^{<k}[X]$ be the polynomial with $\hat{p}_x(y) = f(y)$ for every $y \in L$ such that $y^k = x$. We then define $Fold$ as:

$$Fold(f, k, \alpha)(x) := \hat{p}_x(\alpha)$$

We compute $Fold(f, k, \alpha)(x)$ by interpolating the $k$ values $\{f(y) : y \in L \;\text{s.t.}\; y^k = x\}$ into the polynomial $\hat{p}_x$ and evaluating this polynomial at $\alpha$.

Now let’s look a little more closely at the mechanics of folding a polynomial.

First we take a polynomial $\hat{f}(X) \in \mathbb{F}^{<d}[X]$ and split it into even and odd terms, assuming a folding factor $k = 2$:

$$f(X) = f_E(X^2) + X \cdot f_O(X^2)$$

Thus for $f_O$ and $f_E$, we have:

  • $f_O(X^2) = \frac{f(X) - f(-X)}{2X}$
  • $f_E(X^2) = \frac{f(X) + f(-X)}{2}$

Our goal is then to derive a new polynomial

$$f^*(X) = f_E(X) + \alpha \cdot f_O(X)$$

from $f(X)$, where $\alpha$ is a random value provided by the verifier.

First we reduce our domain from $L_0$ to $L_1 \subseteq L_0$. We then evaluate both $f_O$ and $f_E$ on the reduced domain, i.e. $[f_E(y)|_{y \in L_1}]$ and $[f_O(y)|_{y \in L_1}]$, combine them using $\alpha$, and perform a Lagrange interpolation to derive $f^*$, which is then evaluated on the reduced domain and sent to the verifier as $[f^*(y)|_{y \in L_1}]$.
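The steps above can be sketched as follows, assuming a toy field $\mathbb{F}_{17}$ and a size-8 multiplicative subgroup as $L_0$ (all parameters are illustrative, not from the text). The check confirms that folding the evaluations of $f$ via the even/odd formulas agrees with folding its coefficients directly:

```python
p = 17
L0 = [1, 2, 4, 8, 9, 13, 15, 16]     # multiplicative subgroup of order 8 in F_17
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]    # f of degree < 8

def ev(cs, x):
    return sum(c * x**i for i, c in enumerate(cs)) % p

alpha = 5                            # folding challenge (would come from verifier)
inv2 = pow(2, -1, p)

def fold_eval(x):
    """Compute f*(x^2) = f_E(x^2) + alpha * f_O(x^2) from two queries to f."""
    fe = (ev(coeffs, x) + ev(coeffs, (-x) % p)) * inv2 % p
    fo = (ev(coeffs, x) - ev(coeffs, (-x) % p)) * inv2 * pow(x, -1, p) % p
    return (fe + alpha * fo) % p

# Coefficient-level fold: f*(Y) = f_E(Y) + alpha * f_O(Y)
f_even = coeffs[0::2]
f_odd = coeffs[1::2]
f_star = [(e + alpha * o) % p for e, o in zip(f_even, f_odd)]

L1 = sorted({x * x % p for x in L0})  # reduced domain, half the size of L0
for x in L0:
    assert fold_eval(x) == ev(f_star, x * x % p)
```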

Out-of-Domain Sampling

One of our primary goals when designing any IOP is to reduce query complexity: we want to make queries from the verifier easy and cheap. One way to do this is to work in the list-decoding regime, where instead of working with a single correct output we work with a list of outputs, one of which is correct. However, the word sent by the prover then no longer has a single unique closest codeword, which gives the prover a lot of power. To solve this, we allow the verifier to send a random out-of-domain (OOD) sample $x \in \mathbb{F} \backslash L$, which forces the prover to commit to one of the close polynomials, thus increasing soundness.

In practice, suppose the prover wishes to show that for a function $f_i$ there exists a point $(x_i, y_i)$ and at least one polynomial $\hat{f}_i$ which is close to $f_i$ and satisfies $\hat{f}_i(x_i) = y_i$. Instead of sending $\hat{f}_i$, the prover sends $f_i : L \rightarrow \mathbb{F}$, the evaluation of $\hat{f}_i$ over $L$. The verifier then samples a random field element $x_i \leftarrow \mathbb{F}$ and the prover responds with the evaluation $y_i := \hat{f}_i(x_i)$.
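A minimal sketch of this exchange, under toy assumptions ($\mathbb{F}_{97}$, a degree-4 bound, and hypothetical helper names): because the committed polynomial has degree below $|L|$, the in-domain values already determine the honest answer at any out-of-domain point, which is what makes the prover's response binding:

```python
import random

p = 97
L = list(range(1, 9))        # in-domain evaluation points (toy setup)
coeffs = [6, 2, 8, 3]        # prover's polynomial, degree < 4 < |L|

def ev(cs, x):
    return sum(c * x**i for i, c in enumerate(cs)) % p

committed = {x: ev(coeffs, x) for x in L}     # what the prover sends

def lagrange_at(points, x0):
    """Interpolate the committed values and evaluate at x0."""
    total = 0
    xs = list(points)
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (x0 - xj) % p
                den = den * (xi - xj) % p
        total = (total + points[xi] * num * pow(den, -1, p)) % p
    return total

# Verifier samples an out-of-domain point; the honest answer is forced.
x_ood = random.choice([x for x in range(p) if x not in L])
y = lagrange_at(committed, x_ood)
assert y == ev(coeffs, x_ood)
```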

:::info Generally FRI does not use out-of-domain sampling in the main protocol and instead “pushes” this sampling to the arithmetization level. See the DEEP-ALI paper for how this is done in practice. :::

Combining Functions of Varying Degree

In each round of the IOPP, the verifier queries a quotiented function and tests for low degree. Because this is the most computationally expensive part of the protocol, we would like to run this test only once. By correlated agreement, we know we can produce a random linear combination of the functions $f_1, \dots, f_i$ and test that instead. However, we run into a problem when the functions being tested have different degrees.

To overcome this problem, we can combine functions of varying degrees using geometric sums.

Fact 1.0. Let $\mathbb{F}$ be a field, $r \in \mathbb{F}$ a field element, and $a \in \mathbb{N}$ a natural number.

$$\sum_{i=0}^{a} r^i = \begin{cases} \frac{1 - r^{a+1}}{1 - r} & \text{for } r \neq 1 \\ a + 1 & \text{for } r = 1 \end{cases}$$

Assuming a proximity error bounded by $\min\{1 - B^*(p),\, 1 - p - 1/|L|\}$.

Definition 1.1. Given a target degree $d^* \in \mathbb{N}$, a shifting parameter $r \in \mathbb{F}$, functions $f_1, \dots, f_M : L \rightarrow \mathbb{F}$, and degrees $0 \le d_1, \dots, d_M \le d^*$, we define $Combine(d^*, r, (f_1, d_1), \dots, (f_M, d_M)) : L \rightarrow \mathbb{F}$ as:

$$Combine(d^*, r, (f_1, d_1), \dots, (f_M, d_M))(x) := \sum_{i=1}^{M} r_i \cdot f_i(x) \cdot \left( \sum_{\ell=0}^{d^* - d_i} (r \cdot x)^\ell \right)$$

Note: If we only want to correct the degree of a single function, we can use $DegCor(d^*, r, f, d) = Combine(d^*, r, (f, d))$.
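Here is a hedged sketch of $Combine$ with two functions of different degrees over a toy field. Note we take the per-function coefficients $r_i$ as plain powers of $r$ for simplicity (an assumption; the paper fixes specific powers), which does not affect the property being checked: each summand has degree $d_i + (d^* - d_i) = d^*$, so the combination is determined by $d^* + 1$ values:

```python
p = 97
L = list(range(1, 13))
d_star = 6
r = 7                                    # shifting parameter from the verifier

def ev(cs, x):
    return sum(c * x**i for i, c in enumerate(cs)) % p

# two functions of different degrees d_i <= d_star (illustrative coefficients)
fs = [([5, 1, 2], 2), ([4, 0, 0, 3, 1], 4)]

def combine(x):
    # r_i = r**i here (an assumed simplification of the paper's shifts)
    total = 0
    for i, (cs, di) in enumerate(fs, start=1):
        geo = sum(pow(r * x % p, l, p) for l in range(d_star - di + 1)) % p
        total = (total + pow(r, i, p) * ev(cs, x) * geo) % p
    return total

def lagrange_at(pts, x0):
    total = 0
    for xi, yi in pts:
        num, den = 1, 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (x0 - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Combine has degree <= d_star: interpolating d_star + 1 values determines it
pts = [(x, combine(x)) for x in L[:d_star + 1]]
assert all(lagrange_at(pts, x) == combine(x) for x in L[d_star + 1:])
```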

Quotienting

Quotienting is a method of enforcing constraints on a function. Roughly, if the polynomial corresponding to the RS codeword does not satisfy the constraints, then the resulting quotient will be far from low-degree, which can then be detected by a low-degree test.

The verifier needs to check claims about the polynomials $\hat{f}_i$ at points across the whole field $\mathbb{F}$, while only having access to evaluations of $f$ on $L$. Without quotienting, the prover would have to send the evaluation of $f$ on the entire field $\mathbb{F}$, which would result in an enormous proof size.

We formally define Quotienting as:

Definition 1.4. For a function $f : L \rightarrow \mathbb{F}$, a set $S \subseteq \mathbb{F}$, and functions $Ans, Fill : S \rightarrow \mathbb{F}$, let $\hat{Ans} \in \mathbb{F}^{<|S|}[X]$ be the polynomial with $\hat{Ans}(x) = Ans(x)$ for every $x \in S$, and let $\hat{V}_S \in \mathbb{F}^{<|S|+1}[X]$ be the unique non-zero polynomial with $\hat{V}_S(x) = 0$ for every $x \in S$.

The quotient function $Quotient(f, S, Ans, Fill) : L \rightarrow \mathbb{F}$ is defined as:

$$\forall x \in L, \quad Quotient(f, S, Ans, Fill)(x) := \begin{cases} Fill(x) & \text{for } x \in S \\ \frac{f(x) - \hat{Ans}(x)}{\hat{V}_S(x)} & \text{for } x \in L \backslash S \end{cases}$$

Definition 1.5. For a polynomial $\hat{f} \in \mathbb{F}^{<d}[X]$ and a set $S \subseteq \mathbb{F}$, let $\hat{V}_S \in \mathbb{F}^{<|S|+1}[X]$ be the unique non-zero polynomial with $\hat{V}_S(x) = 0$ for every $x \in S$.

The polynomial quotient $PolyQuotient(\hat{f}, S) \in \mathbb{F}^{<d-|S|}[X]$ is defined as:

$$PolyQuotient(\hat{f}, S)(X) := \frac{\hat{f}(X) - \hat{Ans}(X)}{\hat{V}_S(X)}$$

where $\hat{Ans} \in \mathbb{F}^{<|S|}[X]$ is the unique polynomial with $\hat{Ans}(x) = \hat{f}(x)$ for every $x \in S$.
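A small sketch of $PolyQuotient$ over a toy field (all parameters and helper names are illustrative): the division is exact because $\hat{f} - \hat{Ans}$ vanishes on $S$, and the degree drops by $|S|$:

```python
p = 97
S = [2, 5]                           # constraint points
f = [3, 8, 1, 0, 4, 2]               # f_hat, degree < 6

def ev(cs, x):
    return sum(c * x**i for i, c in enumerate(cs)) % p

# Ans_hat: the line through (s, f_hat(s)) for s in S  (here |S| = 2)
x0, x1 = S
y0, y1 = ev(f, x0), ev(f, x1)
slope = (y1 - y0) * pow(x1 - x0, -1, p) % p
ans = [(y0 - slope * x0) % p, slope]

# V_S(X) = (X - 2)(X - 5) = X^2 - 7X + 10
v_s = [10 % p, (-7) % p, 1]

def poly_sub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def poly_divmod(num, den):
    """Polynomial long division over F_p; returns (quotient, remainder)."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], -1, p)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv_lead % p
        for j, c in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * c) % p
    return q, num[:len(den) - 1]

quotient, rem = poly_divmod(poly_sub(f, ans), v_s)
assert all(c == 0 for c in rem)            # exact: f - Ans vanishes on S
assert len(quotient) == len(f) - len(S)    # degree drops from < 6 to < 4

# Sanity check against the pointwise definition at a point outside S
z = 11
assert ev(quotient, z) == (ev(f, z) - ev(ans, z)) * pow(ev(v_s, z), -1, p) % p
```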

REF

[ACFY24] Gal Arnon, Alessandro Chiesa, Giacomo Fenzi, and Eylon Yogev. “STIR: Reed–Solomon proximity testing with fewer queries”. Cryptology ePrint Archive, Report 2024/390. 2024. https://eprint.iacr.org/2024/390

[H22] Ulrich Haböck. “A summary on the FRI low degree test”. Cryptology ePrint Archive, Report 2022/1216. 2022. https://eprint.iacr.org/2022/1216.pdf