Conversation

@sidhujag (Member)
This PR makes statement binding verifier-enforced end-to-end for the LF+ → WE/DPP path by tying the SP1 statement digest (public inputs) to the committed witness via Ajtai prefix exposure, and ensuring the full PlusProof WE gate actually enforces the same binding constraints as native verification.

Key changes

  • Ajtai prefix exposure (identity block) in latticefold commitment scheme so selected cm_f coordinates expose fixed witness slots for cheap equality checks.
    • File: crates/latticefold/src/commitment/commitment_scheme.rs
  • Native verifier enforcement: Dcom/Cm verification supports an expected_prefix and checks cm_f[0..E) matches it (constant-coeff), preventing “same witness, different statement”.
    • Files: crates/latticefold-plus/src/rgchk.rs, crates/latticefold-plus/src/cm.rs
  • WE arithmetization enforcement (full PlusProof): the WE gate now passes real public_inputs into the Dcom gadget and glues them, so the in-circuit constraints include cm_f[0..E) == public_inputs[0..E) instead of silently skipping.
    • File: crates/latticefold-plus/src/we_gate_arith.rs
  • Fail-closed hardening:
    • Error if prefix binding is expected but public_inputs are missing/too short or L != 1 in the SP1 streamed/WE setting.
    • Removes “silent success” footguns.
    • Files: crates/latticefold-plus/src/we_gate_arith.rs, crates/latticefold-plus/src/rgchk.rs
  • Domain separation cleanup: LFP_WE_GATE_DIGEST_V1 is no longer [0;32] (now a stable nonzero label).
    • File: crates/latticefold-plus/src/we_statement.rs
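
The identity-block idea behind the Ajtai prefix exposure can be sketched as follows (a toy over plain `u64` arithmetic with assumed shapes; the crate's real ring types and API differ): the first `expose` rows of the commitment matrix are an identity block padded with zeros, so the first `expose` commitment coordinates reproduce the corresponding witness slots verbatim and can be compared against public inputs by plain equality.

```rust
// Toy Ajtai-style commitment: cm = A * f (mod q), row by row.
fn commit(a: &[Vec<u64>], f: &[u64], q: u64) -> Vec<u64> {
    a.iter()
        .map(|row| {
            row.iter()
                .zip(f)
                .fold(0u64, |acc, (&aij, &fj)| (acc + aij * fj % q) % q)
        })
        .collect()
}

// Build a kappa x n matrix whose first `expose` rows are an identity
// block (exposing f[0..expose) in cm[0..expose)); the remaining rows
// come from a seeded pseudorandom source, as in an ordinary Ajtai matrix.
fn exposed_matrix(
    kappa: usize,
    n: usize,
    expose: usize,
    seed_row: &dyn Fn(usize, usize) -> u64,
) -> Vec<Vec<u64>> {
    (0..kappa)
        .map(|i| {
            (0..n)
                .map(|j| {
                    if i < expose {
                        if i == j { 1 } else { 0 } // identity block exposes f[i]
                    } else {
                        seed_row(i, j) // remaining rows stay pseudorandom
                    }
                })
                .collect()
        })
        .collect()
}
```

With `expose = 2`, the first two commitment coordinates equal `f[0]` and `f[1]` regardless of the seeded rows below, which is what makes the equality check cheap for both the native verifier and the WE gate.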

Security impact

  • Closes the attack where a prover could keep the witness fixed and tweak statement public inputs while still satisfying the WE gate.
  • Makes statement binding verifier-enforced in both native and WE/dR1CS.
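
As a toy illustration of the closed attack (plain `u64` values standing in for constant coefficients; real types differ): once the prefix is exposed, the binding check is a direct slice equality, so flipping any bound public input while keeping the witness, and hence cm_f, fixed fails.

```rust
// Toy binding check: cm_f[0..e) must equal public_inputs[0..e).
// Fails closed when either side is shorter than the exposed prefix.
fn prefix_binding_holds(cm_f: &[u64], public_inputs: &[u64], e: usize) -> bool {
    cm_f.len() >= e && public_inputs.len() >= e && cm_f[..e] == public_inputs[..e]
}
```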

Test plan

  • Run existing latticefold-plus WE gate tests; specifically the “flip public input” negative test now becomes UNSAT in the full WE gate.
  • Run the SP1 one-proof harness with FLIP_PUBLIC_INPUT0=1 and (optionally) LFP_SKIP_PREFIX_BINDING_CHECK=1 to confirm native can be skipped for debugging but WE still fails.

One-proof harness output log:

=========================================================
LF+ SP1 One-Proof (R1LF -> full PlusProof -> WE gate check)
=========================================================
  CHUNK_SIZE=1048576 PAD_COLS=256
  cache open: 418.638µs
[mem] rss=0.00 GiB  tag=after cache open
  chunks=47 ncols=134217728
  stats: num_vars=96659814 num_constraints=48873872 num_public=8 p_bb=2013265921 total_nonzeros=219688827
  build full mats: 7.08169029s (nrows=49283072 ncols=134217728)
[mem] rss=8.85 GiB  tag=after build full mats (A,B,C)
  loaded witness: base=96659814 aux=0 full=96659814
[mem] rss=9.57 GiB  tag=after load witness u64
  map witness u64->F: 664.481122ms
[mem] rss=10.29 GiB  tag=after map witness u64->F
  build f0 (base scalars, padded): 364.09356ms
[mem] rss=11.01 GiB  tag=after build f0 padded
[mem] rss=11.01 GiB  tag=after build r1cs struct
[mem] rss=11.01 GiB  tag=after init Ajtai scheme
[mem] rss=11.02 GiB  tag=after ComR1CS::from_f0_seeded
[mem] rss=11.02 GiB  tag=after matrices_arc
[sp1_default_we_params] SP1/Frog64 hardcoded safe params: decomp_b=12, k=8, l=16, max_bound=1211766595
  bundle_r1lf_digest=0x8140ed4551ea30286f5b81ac642a5c82b337e12864274d62d422a4ad631d096d
  vk_hash=0x004cda927463a9cda648d01028f3de6b4d4ff3135683772508de859c42fe6a08
  committed_values_digest=0xd0abd303eefe48a35a09197c8840467029bde2832f61695991207d60fb6a2354
  public_inputs_len=8 (from witness[1..=l])
  stmt_digest=0xdab0b101744aadbddc1290993180839e63514cf6a18574557bbaa5040f60c9e9
  lock_coin_seed=0x599281fffa1cbb6ccaf518391171f167462bb4f27f790bdf16be5de191e0d840 (j=0)
  setup full LF+: 5.46209352s
[mem] rss=11.02 GiB  tag=after setup full LF+
[mem] rss=11.02 GiB  tag=PlusProverSparseBase::prove_sparse_base (start)
[LF+ streaming_sumcheck] init: 9.31µs (nvars=27, degree=3, mles=5)
[LF+ streaming_sumcheck] round 1/27 done
[mem] rss=11.03 GiB  tag=streaming_sumcheck: fix(start)
[mem] rss=12.88 GiB  tag=streaming_sumcheck: fix(done)
[LF+ streaming_sumcheck] round 27/27 done
[LF+ streaming_sumcheck] totals: rounds=6.307672735s absorb_msgs=17.532508ms get_chal=2.528141ms absorb_chal=10.5µs fix_last=21.98µs final_evals=1.82µs total=6.327820746s
[mem] rss=11.02 GiB  tag=PlusProverSparseBase::prove_sparse_base (after linearize)
[LF+ Mlin::mlin_seeded_base] instance[0] witness=ConstCoeffBase(len=96659814) -> RgInstance::from_f0_seeded
[LF+ RgInstance::from_f0_seeded] start: n(domain)=134217728 prefix_len=96659814 kappa=16 d=64 k=8
[LF+ RgInstance::from_f0_seeded] build digit tables (prefix only): 771.256324ms
[LF+ RgInstance::from_f0_seeded] commit monomial mats (Ajtai seeded): 22.184423701s (kappa×(k*d) = 16×512)
[LF+ RgInstance::from_f0_seeded] split tau: 537.884419ms
[LF+ RgInstance::from_f0_seeded] build m_tau digits: 59.404422ms
[LF+ RgInstance::from_f0_seeded] commit f/tau/m_tau: 12.041949083s
[LF+ RgInstance::from_f0_seeded] total: 36.220507802s
[LF+ Mlin::mlin_seeded_base] build instances: 36.220524272s (L=1, n=134217728, kappa=16, f0_instances=1, ring_instances=0)
[mem] rss=13.92 GiB  tag=cm: prove start
[LF+ Cm::prove] start: n=134217728 nvars=27 Mlen=3 rayon_threads=96
[mem] rss=13.92 GiB  tag=setchk: start
[mem] rss=13.92 GiB  tag=setchk: classified sets
[LF+ setchk] vector_set[0] build_table: 535.646097ms (len=134217728)
[mem] rss=14.92 GiB  tag=setchk: after build mles
[mem] rss=14.92 GiB  tag=setchk: before sumcheck
[mem] rss=14.92 GiB  tag=streaming_sumcheck(base): start
[LF+ streaming_sumcheck] init(base): 20.83µs (nvars=27, degree=3, mles=27)
[mem] rss=14.92 GiB  tag=streaming_sumcheck(base): fix(start)
[mem] rss=22.92 GiB  tag=streaming_sumcheck(base): fix(done)
[LF+ streaming_sumcheck] total(base): 9.764783905s
[mem] rss=22.92 GiB  tag=streaming_sumcheck(base): done
[mem] rss=13.92 GiB  tag=setchk: after sumcheck
[LF+ setchk] sumcheck: 10.403525483s (nvars=27, degree=3, ncols=64, Ms=8, ms=1)
[mem] rss=13.92 GiB  tag=setchk: step3 start
[LF+ setchk] step3(y_mats): 874.738348ms
[LF+ setchk] step3(e): 17.32117663s
[LF+ setchk] step3(b): 508.761727ms
[LF+ setchk] step3(absorb): 312.847848ms
[mem] rss=16.94 GiB  tag=setchk: step3 done
[LF+ setchk] step3(e,b)+absorb: 19.017684803s  total: 29.979221977s
[LF+ Rg::range_check] set_check: 30.223616847s (nvars=27)
[LF+ Rg::range_check] evals+absorb: 31.720357374s
[LF+ Cm::prove] range_check: 31.720381784s
[mem] rss=13.94 GiB  tag=cm: after range_check
[mem] rss=13.94 GiB  tag=cm: build_h start
[mem] rss=13.94 GiB  tag=cm: build_h one inst start
[mem] rss=13.94 GiB  tag=cm: build_h one inst done
[LF+ Cm::prove_base] stream_h active (no h materialization)
[LF+ Cm::prove] build h: 1.073214ms
[mem] rss=13.94 GiB  tag=cm: build_h done
[LF+ Cm::prove] build comh: 3.056784ms
[LF+ Cm::prove] build t(z) streaming: 61.459µs (tensor_len=8388608, padded_to_n=134217728)
[LF+ Cm::prove] build shared m_arcs: 390ns (Mlen=3)
[LF+ Cm::sumchecker_streaming] mtau witness: DigitsMonomial
[LF+ Cm::sumchecker_streaming] build mles: 9.97µs (mles=17)
[LF+ Cm::sumchecker_streaming] build rc powers: 330ns (len=18)
[LF+ streaming_sumcheck] init: 720ns (nvars=27, degree=2, mles=19)
[LF+ streaming_sumcheck] round 1/27 done
[mem] rss=14.27 GiB  tag=streaming_sumcheck: fix(start)
[mem] rss=22.12 GiB  tag=streaming_sumcheck: fix(done)
[LF+ streaming_sumcheck] round 27/27 done
[LF+ streaming_sumcheck] totals: rounds=83.481804939s absorb_msgs=24.733685ms get_chal=4.664908ms absorb_chal=7.141µs fix_last=251.209µs final_evals=7.44µs total=83.511523849s
[LF+ Cm::sumchecker_streaming] streaming sumcheck: 86.149789661s
[LF+ Cm::sumchecker_streaming] build evals structs: 2.13µs
[LF+ Cm::sumchecker_streaming] absorb evals: 2.391678ms
[LF+ Cm::sumchecker_streaming] sumcheck+evals: 86.152319519s (mles=19, L=1, Mlen=3)
[LF+ Cm::sumchecker_streaming] mtau witness: DigitsMonomial
[LF+ Cm::sumchecker_streaming] build mles: 13.9µs (mles=17)
[LF+ Cm::sumchecker_streaming] build rc powers: 489ns (len=18)
[LF+ streaming_sumcheck] init: 840ns (nvars=27, degree=2, mles=19)
[LF+ streaming_sumcheck] round 1/27 done
[mem] rss=14.29 GiB  tag=streaming_sumcheck: fix(start)
[mem] rss=22.15 GiB  tag=streaming_sumcheck: fix(done)
[LF+ streaming_sumcheck] round 27/27 done
[LF+ streaming_sumcheck] totals: rounds=83.817411142s absorb_msgs=24.072533ms get_chal=4.756416ms absorb_chal=7.193µs fix_last=237.379µs final_evals=6.19µs total=83.846544818s
[LF+ Cm::sumchecker_streaming] streaming sumcheck: 86.484520728s
[LF+ Cm::sumchecker_streaming] build evals structs: 2.22µs
[LF+ Cm::sumchecker_streaming] absorb evals: 2.391357ms
[LF+ Cm::sumchecker_streaming] sumcheck+evals: 86.487048014s (mles=19, L=1, Mlen=3)
[LF+ Cm::prove] total: 207.148522398s
[LF+ Mlin::mlin_seeded_base] Cm::prove_base: 207.148574587s
[LF+ Mlin::mlin_seeded_base] total: 243.36911167s
[mem] rss=75.60 GiB  tag=PlusProverSparseBase::prove_sparse_base (after mlin_seeded)
[mem] rss=75.60 GiB  tag=decomp_seeded(one_shot): start
[mem] rss=107.60 GiB  tag=decomp_seeded(one_shot): after decompose_to_packed
[LF+ Decomp::decompose_seeded_base_one_shot] setup+split: 3.379296363s (nvars=27, Mlen=3)
[mem] rss=107.60 GiB  tag=decomp_seeded(one_shot): after eq_weights
[LF+ Decomp::decompose_seeded_base_one_shot] eq_weights: 4.160109ms (nvars=27)
[LF+ Decomp::decompose_seeded_base_one_shot] fv both: 1.805631954s
[LF+ Decomp::decompose_seeded_base_one_shot] mats: 3.649843981s (Mlen=3)
[mem] rss=107.60 GiB  tag=decomp_seeded(one_shot): after v0/v1 mats
[LF+ Decomp::decompose_seeded_base_one_shot] compute v0/v1: 5.459777573s
[mem] rss=107.60 GiB  tag=decomp_seeded(one_shot): after compute v0/v1
[LF+ Decomp::decompose_seeded_base_one_shot] commitments C0/C1: 84.994673255s
[LF+ Decomp::decompose_seeded_base_one_shot] total: 93.833814421s
[mem] rss=107.60 GiB  tag=decomp_seeded(one_shot): done
[mem] rss=11.61 GiB  tag=PlusProverSparseBase::prove_sparse_base (after decompose_seeded)
  PlusProverSparseBase::prove_sparse_base: 350.008824581s
[mem] rss=11.61 GiB  tag=after prove_sparse_base
  PlusVerifier::verify(record trace): 432.679281ms
[mem] rss=11.61 GiB  tag=after verify(record)
  WE gate build_dr1cs: 17.883257657s
[mem] rss=28.16 GiB  tag=after WE build_dr1cs
  WE gate dr1cs sat check: 44.250489ms
[mem] rss=28.16 GiB  tag=after WE sat check
  armer/decap: idx=51315077 ell_rs=95412140 lambda_small=BigInt([15716972453630898308]) ok=true (arm_time=72.037139874s)
  Π_decomp local verify (non-trace): 21.77µs
  OK: WE gate DR1CS satisfied

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9a73d43d7e

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 231 to 233
for lp in &proof.lproof {
lp.verify(&mut self.transcript);
}

P2: Absorb statement public inputs before verifying

When public inputs are non-empty, the prover now absorbs them into the transcript before any Fiat–Shamir challenges are sampled, but PlusVerifier::verify starts verifying lproof immediately on the current transcript. This means a typical verifier created with a fresh transcript will derive different challenges and reject an otherwise valid proof unless the caller manually pre-absorbs the same public inputs. To avoid this regression, the verifier should either accept public_inputs and absorb them before lproof, or document/enforce that callers must do so.
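
A minimal sketch of the suggested fix, with a toy FNV-style transcript standing in for the real one (all names here are assumptions, not the crate's API): the verifier mirrors the prover by absorbing the statement public inputs before sampling any Fiat–Shamir challenge, so both sides derive the same challenges.

```rust
// Toy transcript: a single u64 state mixed FNV-style. The real
// transcript hashes into a sponge; only the ordering matters here.
struct Transcript {
    state: u64,
}

impl Transcript {
    fn new() -> Self {
        Transcript { state: 0xcbf29ce484222325 }
    }
    fn absorb(&mut self, x: u64) {
        self.state ^= x;
        self.state = self.state.wrapping_mul(0x100000001b3);
    }
    fn challenge(&mut self) -> u64 {
        self.absorb(0);
        self.state
    }
}

fn prover_challenge(public_inputs: &[u64]) -> u64 {
    let mut t = Transcript::new();
    // Prover absorbs public inputs before any challenge is sampled.
    for &pi in public_inputs {
        t.absorb(pi);
    }
    t.challenge()
}

fn verifier_challenge(public_inputs: &[u64]) -> u64 {
    let mut t = Transcript::new();
    // Fix: absorb the same public inputs before verifying lproof;
    // a fresh transcript without this step derives different challenges.
    for &pi in public_inputs {
        t.absorb(pi);
    }
    t.challenge()
}
```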

jagdeep sidhu added 2 commits January 20, 2026 20:10
@sidhujag (Member Author)

@codex review

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 84bbf1e30e


Comment on lines +1702 to +1704
const EXPOSE_MAX: usize = 8;
let expose_rows = EXPOSE_MAX.min(kappa);
if expose_rows > 0 {


P2: Use actual exposed-prefix length in WE gate binding

The Dcom-prefix binding in the WE-gate hard-codes EXPOSE_MAX = 8 and uses min(kappa, 8) to decide how many cm_f coordinates to equate with public_inputs. This ignores the actual expose_rows configured in AjtaiCommitmentScheme::seeded_with_exposed_prefix. If a proof is built with a different exposure size (e.g., expose_rows = 4 with kappa = 16, or expose_rows = 0 for a non-binding configuration), the WE-gate will still require 8 public inputs and enforce equality on cm_f[0..8), causing otherwise valid proofs to fail verification or to enforce the wrong binding. The binding length should come from the Ajtai exposure configuration (or be passed in via params) instead of being fixed at 8.
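
One way to address this, sketched with assumed names: derive the binding length from the configured exposure rather than a constant, and fail closed when the public inputs cannot cover the exposed prefix.

```rust
// Hypothetical helper: the binding length comes from the Ajtai exposure
// configuration (clamped to kappa), never from a hard-coded constant.
fn binding_len(
    configured_expose_rows: usize,
    kappa: usize,
    num_public_inputs: usize,
) -> Result<usize, String> {
    let e = configured_expose_rows.min(kappa);
    if e > num_public_inputs {
        // Fail closed: not enough public inputs to bind the exposed prefix.
        return Err(format!(
            "prefix binding needs {} public inputs, have {}",
            e, num_public_inputs
        ));
    }
    Ok(e)
}
```

An `expose_rows = 4, kappa = 16` configuration then binds exactly 4 coordinates, and `expose_rows = 0` degrades to a non-binding (but explicit) configuration instead of wrongly enforcing 8.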
