feat(l1): add support for ZisK backend #45
Conversation
Pull Request Overview
This PR adds support for the ZisK backend to ethrex-replay by updating dependencies and adding the necessary feature flag and backend support.
Key Changes
- Added the ZisK feature flag and backend variant support (see the sketch after this list)
- Updated all ethrex dependencies from tag `v7.0.0` to branch `add_zisk_zkvm_backend`
- Added a `Default::default()` parameter to `execution_witness_from_rpc_chain_config` calls
- Updated error messages for backend features
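For illustration, a minimal sketch of what the feature-gated backend selection could look like; the `Backend` variants, the `backend()` signature, and the error wording are assumptions, not the PR's actual code:

```rust
// Hypothetical sketch of feature-gated backend selection; names and
// messages are assumptions, not the PR's actual code.
#[derive(Debug)]
enum Backend {
    Exec,
    #[cfg(feature = "zisk")]
    Zisk,
}

fn backend(name: &str) -> eyre::Result<Backend> {
    match name {
        "exec" => Ok(Backend::Exec),
        // Only available when compiled with `--features zisk`.
        #[cfg(feature = "zisk")]
        "zisk" => Ok(Backend::Zisk),
        #[cfg(not(feature = "zisk"))]
        "zisk" => eyre::bail!("recompile with `--features zisk` to use the ZisK backend"),
        other => eyre::bail!("unknown backend: {other}"),
    }
}
```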
Reviewed Changes
Copilot reviewed 6 out of 10 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| Cargo.toml | Added zisk feature flag for ZisK backend support |
| src/cli.rs | Added ZisK backend handling in backend() function with error messages |
| src/run.rs | Added default parameter to witness conversion function calls |
| src/rpc/mod.rs | Updated trie node handling to decode nodes and use mutable reference |
| src/rpc/db.rs | Removed duplicate node insertion in trie construction |
| .DS_Store | macOS system file that should not be committed |
| .github/.DS_Store | macOS system file that should not be committed |
| src/.DS_Store | macOS system file that should not be committed |
| diff | Temporary diff artifact that should not be committed |
| Cargo.lock | Auto-generated dependency lock file updates |
```rust
let first_block_number = cache.get_first_block_number()?;
let Some(parent_block_header) = cache.blocks.iter().find_map(|b| {
    if b.header.number == first_block_number - 1 {
        Some(b.header.clone())
    } else {
        None
    }
}) else {
    eyre::bail!("No parent block header");
};
```
I think this doesn't make much sense.
`first_block_number` is the first number that appears in `cache.blocks`, so the parent block header is never going to be in `cache.blocks`: you are looking for the parent of the first block, which by definition is not there. It is actually going to be in `cache.witness.headers`.
In these PRs I made changes that I think would work, though:
#48
lambdaclass/ethrex#5416
I was planning on merging those after this one is merged. The simplest thing to do for now, IMO, is to comment out `replay_no_zkvm` (or make it work, but it's going to change later).
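A minimal sketch of the lookup described above, assuming `cache.witness.headers` holds RLP-encoded headers like the `db.headers` in the run.rs snippet quoted below; the field names come from this thread, the rest is an assumption:

```rust
// Sketch only: look for the parent of the first block among the witness
// headers, where it actually lives, instead of cache.blocks.
let parent_block_header = cache
    .witness
    .headers
    .iter()
    .filter_map(|h| BlockHeader::decode(h).ok())
    .find(|h| h.number == first_block_number - 1)
    .ok_or_else(|| eyre::eyre!("No parent block header"))?;
```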
This approach should be used instead, as is done in run.rs:
```rust
// Decode every RLP-encoded header in the witness, then pick the parent
// of the first block and take its state root.
let initial_state_root = db
    .headers
    .iter()
    .map(|h| {
        BlockHeader::decode(h).map_err(|_| eyre::Error::msg("Failed to decode block header"))
    })
    .collect::<Result<Vec<_>, _>>()?
    .into_iter()
    .find(|h| h.number == first_block_number - 1)
    .map(|h| h.state_root)
    .ok_or_else(|| eyre::eyre!("Initial state root not found"))?;
```
good catch! fixed, thank you!!
src/cli.rs (outdated)
```rust
rpc_url: Some(rpc_url.clone()),
cached: false,
network: None,
cache_dir: PathBuf::default(),
```
We should use the default cache dir that we always use; otherwise it's not going to look into the specific folder we designed for that. Maybe we should have a constant for `DEFAULT_CACHE_DIR` or something like that.
```rust
cache_dir: PathBuf::from("./replay_cache"),
```
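A sketch of the proposed constant; the name `DEFAULT_CACHE_DIR` and the helper are the reviewer's suggestion, not existing API:

```rust
use std::path::PathBuf;

// Hypothetical constant so every call site resolves to the same folder.
pub const DEFAULT_CACHE_DIR: &str = "./replay_cache";

// Hypothetical helper for building the default cache path.
fn default_cache_dir() -> PathBuf {
    PathBuf::from(DEFAULT_CACHE_DIR)
}
```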
fixed!
```diff
- let hash = H256::from_slice(&Keccak256::digest(root));
- state_nodes.insert(hash, root.clone());
- hash
+ H256::from_slice(&Keccak256::digest(root))
```
Here we aren't inserting the root into `state_nodes` as before. I don't know if it's critical, but above we do `proof.iter().skip(1)`, so the root should be inserted here, right?
I think whoever wrote this code was trying to avoid hashing the root twice, but if we remove this insert, the root won't end up in `state_nodes`.
Maybe we can change the `proof.iter().skip(1)` to `proof.iter()` and leave the rest of the code as is: the root gets hashed twice, but we couldn't care less, right?
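A sketch of that suggestion, reusing the identifiers from the snippet above; iterating the full proof keeps the root in `state_nodes` at the cost of hashing it twice:

```rust
// Sketch: iterate the full proof (no .skip(1)) so the root node is also
// inserted into state_nodes; hashing the root a second time is cheap.
for node in proof.iter() {
    let hash = H256::from_slice(&Keccak256::digest(node));
    state_nodes.insert(hash, node.clone());
}
```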
I can't tell if that's the reason, but I recently tried to execute a block, gathering the data with eth_getProof, and got an Inconsistent Trie error. So that could be a possible cause.
Edit: Oh, it seems it failed when trying to apply account updates, because of the known bug of missing nodes with eth_getProof. Now I'm not sure if what I commented is a problem or not, haha.
I'm not sure what led me to change this in the first place. Solved!
JereSalo left a comment:
Left comments; some things should be fixed. The ZisK integration works fine, though.
Additional comment:
- I know it's pretty basic, but it would be good to have brief docs on how to generate the input for running a block and then feed it into a zkVM. Mentioning that the ELF should be taken from the replay releases, I guess, right?
@JereSalo yes, that's a good idea. Added docs!
src/run.rs (outdated)
```rust
execution_witness,
chain_config,
block.header.number,
Default::default(),
```
This one is just for compiling, I believe, but we shouldn't use the default state root here.
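A sketch of the presumable fix, passing the parent block's state root instead of the default; `parent_header` is an assumed binding for the header of block `block.header.number - 1`:

```rust
// Sketch: use the real pre-state root rather than the default value.
// `parent_header` is assumed to come from the witness headers.
execution_witness_from_rpc_chain_config(
    execution_witness,
    chain_config,
    block.header.number,
    parent_header.state_root,
)
```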
fixed!
JereSalo left a comment:
😃
Motivation
A ZisK backend was added to ethrex-prover in lambdaclass/ethrex#5392. This PR adds support for using it with ethrex-replay.