
Conversation

@xqft
Contributor

@xqft xqft commented Nov 20, 2025

Motivation

A ZisK backend was added for ethrex-prover in lambdaclass/ethrex#5392. This PR adds support for using it with ethrex-replay.

@xqft xqft changed the title Add support for ZisK backend feat: add support for ZisK backend Nov 20, 2025
@xqft xqft marked this pull request as ready for review November 20, 2025 16:51
Copilot AI review requested due to automatic review settings November 20, 2025 16:51
Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds support for the ZisK backend to ethrex-replay by updating dependencies and adding the necessary feature flag and backend support.

Key Changes

  • Added ZisK feature flag and backend variant support (a sketch of the general pattern follows this list)
  • Updated all ethrex dependencies from tag v7.0.0 to branch add_zisk_zkvm_backend
  • Added Default::default() parameter to execution_witness_from_rpc_chain_config calls
  • Updated error messages for backend features
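
A minimal sketch of the general pattern behind the feature-flag and error-message bullets above: a feature-gated backend variant plus a clear error when the binary was built without it. This is a hypothetical illustration only; the actual Backend enum, backend() function, and messages in ethrex-replay differ.

```rust
// Hypothetical sketch, not the actual ethrex-replay code.
#[derive(Debug, Clone, Copy)]
pub enum Backend {
    Exec,
    #[cfg(feature = "zisk")]
    Zisk,
}

pub fn backend(name: &str) -> Result<Backend, String> {
    match name {
        "exec" => Ok(Backend::Exec),
        // Only selectable when built with the `zisk` feature enabled.
        #[cfg(feature = "zisk")]
        "zisk" => Ok(Backend::Zisk),
        // Otherwise, point the user at the missing feature flag.
        #[cfg(not(feature = "zisk"))]
        "zisk" => Err("this binary was built without the `zisk` feature".to_string()),
        other => Err(format!("unknown backend: {other}")),
    }
}
```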

Reviewed Changes

Copilot reviewed 6 out of 10 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| Cargo.toml | Added zisk feature flag for ZisK backend support |
| src/cli.rs | Added ZisK backend handling in backend() function with error messages |
| src/run.rs | Added default parameter to witness conversion function calls |
| src/rpc/mod.rs | Updated trie node handling to decode nodes and use a mutable reference |
| src/rpc/db.rs | Removed duplicate node insertion in trie construction |
| .DS_Store | macOS system file that should not be committed |
| .github/.DS_Store | macOS system file that should not be committed |
| src/.DS_Store | macOS system file that should not be committed |
| diff | Temporary diff artifact that should not be committed |
| Cargo.lock | Auto-generated dependency lock file updates |


@ilitteri ilitteri changed the title feat: add support for ZisK backend feat(l1): add support for ZisK backend Nov 20, 2025
@github-actions github-actions bot added the L1 label Nov 20, 2025
Comment on lines 886 to 895
```rust
let first_block_number = cache.get_first_block_number()?;
let Some(parent_block_header) = cache.blocks.iter().find_map(|b| {
    if b.header.number == first_block_number - 1 {
        Some(b.header.clone())
    } else {
        None
    }
}) else {
    eyre::bail!("No parent block header");
};
```
Contributor


I think this doesn't make much sense.
The first_block_number is the first number that appears in cache.blocks, so the parent block header is never going to be in cache.blocks: you are looking for the parent of the first block, which is not going to be there. It is actually going to be in cache.witness.headers.
In these PRs I made changes that I think would work, though:
#48
lambdaclass/ethrex#5416
I was planning on merging those after this one is merged. The simplest thing to do for now, IMO, is to comment out replay_no_zkvm (or make it work, but it's going to change later).

Contributor


This approach should be taken instead, as it's done in run.rs

```rust
// As in run.rs: decode every witness header, then take the state root of the parent
// of the first block as the initial state root.
let initial_state_root = db
    .headers
    .iter()
    .map(|h| {
        BlockHeader::decode(h).map_err(|_| eyre::Error::msg("Failed to decode block header"))
    })
    .collect::<Result<Vec<_>, _>>()?
    .into_iter()
    .find(|h| h.number == first_block_number - 1)
    .map(|h| h.state_root)
    .ok_or_else(|| eyre::eyre!("Initial state root not found"))?;
```

Contributor Author


good catch! fixed, thank you!!

src/cli.rs Outdated
```rust
rpc_url: Some(rpc_url.clone()),
cached: false,
network: None,
cache_dir: PathBuf::default(),
```
Contributor


We should use the default cache dir that we always use, otherwise it's not going to look in the specific folder we designed for that. Maybe we should have a constant for DEFAULT_CACHE_DIR or something like that.

```rust
cache_dir: PathBuf::from("./replay_cache"),
```
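
A minimal sketch of the constant idea (the DEFAULT_CACHE_DIR name and the default_cache_dir helper are assumptions, not existing ethrex-replay code; the "./replay_cache" value comes from the suggestion above):

```rust
use std::path::PathBuf;

// Hypothetical constant so the cache directory is defined in a single place.
pub const DEFAULT_CACHE_DIR: &str = "./replay_cache";

// Hypothetical helper; call sites could then use `cache_dir: default_cache_dir()`
// instead of repeating the string literal.
pub fn default_cache_dir() -> PathBuf {
    PathBuf::from(DEFAULT_CACHE_DIR)
}
```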

Contributor Author


fixed!

```diff
-let hash = H256::from_slice(&Keccak256::digest(root));
-state_nodes.insert(hash, root.clone());
-hash
+H256::from_slice(&Keccak256::digest(root))
```
Contributor


Here we aren't inserting the root into state_nodes as before. I don't know if it's critical, but above we are doing proof.iter().skip(1), so the root should be inserted here, right?
I think the person who wrote this code was trying not to hash the root twice, but I believe that if we remove this insert we are not going to have the root in state_nodes.

Maybe we can change the proof.iter().skip(1) to proof.iter() and leave the rest of the code as is, hashing the root twice, but we couldn't care less, right?
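
A minimal sketch of that suggestion (the Vec<u8> node type, the HashMap shape, and the ethereum_types/sha3 imports are assumptions; the real types in src/rpc may differ):

```rust
use std::collections::HashMap;

use ethereum_types::H256; // assumed source of H256
use sha3::{Digest, Keccak256}; // assumed source of Keccak256

// Iterate over the whole proof (no skip(1)) so the root node also ends up in
// `state_nodes`, even though that means hashing the root twice.
fn insert_proof_nodes(proof: &[Vec<u8>], state_nodes: &mut HashMap<H256, Vec<u8>>) {
    for node in proof {
        let hash = H256::from_slice(&Keccak256::digest(node));
        state_nodes.insert(hash, node.clone());
    }
}
```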

Contributor

@JereSalo JereSalo Nov 26, 2025


I can't tell if that is the reason, but I recently tried to execute a block gathering the data with eth_getProof and I got an Inconsistent Trie error. So that could be a possible cause.

Edit: Oh, it seems it failed when trying to apply account updates because of the known bug of missing nodes with eth_getProof, so now I'm not sure if what I commented is a problem or not haha

Contributor Author


I'm not sure what led me to change this in the first place. Solved!

Contributor

@JereSalo JereSalo left a comment


Left comments; some things should be fixed. The ZisK integration works fine, though.

Additional Comment:

  • I know it's pretty basic, but it would be good if we had brief docs on how to generate the input for running a block and then feed that into a zkVM, saying that we should get the ELF from the replay releases, I guess, right?

@xqft
Contributor Author

xqft commented Nov 26, 2025

@JereSalo yes that's a good idea. added docs!

src/run.rs Outdated
```rust
execution_witness,
chain_config,
block.header.number,
Default::default(),
```
Contributor


This one is just for compiling, I believe, but we shouldn't use the default state root here.

Contributor Author


fixed!

Contributor

@JereSalo JereSalo left a comment


😃

@ilitteri ilitteri enabled auto-merge November 26, 2025 16:13
@ilitteri ilitteri added this pull request to the merge queue Nov 26, 2025
Merged via the queue into main with commit 6ea9480 Nov 26, 2025
17 checks passed
@ilitteri ilitteri deleted the add_support_for_zisk_zkvm branch November 26, 2025 16:50