State-root and batch submission to L1 #232
Replies: 7 comments
-
### Overview

The current batch-submitter is responsible for making the L2 data (transactions) and state commitments (merkle roots) available. It does this by sending transactions to a particular L1 contract. Validating nodes can listen for events emitted from this L1 contract to locate the data. This data can then be executed, and the results of the execution can be compared against the state commitments. When the computed state does not equal the proposed state, a fault proof can be executed to punish the sequencer.

### Existing Implementations

There are currently two implementations: a TypeScript implementation and a Go rewrite. The TypeScript implementation is currently the production-ready implementation, but work is ongoing to switch to the Go rewrite.

### General Considerations
### How it works

The batch-submitter is split into two components: one is responsible for submitting transaction batches, while the other is responsible for submitting state root batches. It is recommended that these components use different private keys so as not to run into nonce issues when both components are sending transactions concurrently.
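A minimal sketch of that split, assuming two long-running workers that each hold their own key; the names, keys, and payload handling are illustrative placeholders, not the actual implementation:

```python
import itertools
import threading
import time

class Submitter:
    """One submitter per private key: the transaction-batch submitter and the
    state-root submitter each manage their own key, so their nonce sequences
    never collide even when both send L1 transactions concurrently."""

    def __init__(self, name: str, private_key: str):
        self.name = name
        self.private_key = private_key   # hypothetical key handling, illustration only
        self.nonce = itertools.count(0)  # each key has an independent nonce sequence

    def submit(self, payload: bytes) -> None:
        nonce = next(self.nonce)
        # A real submitter would sign and send an L1 transaction carrying `payload`
        # (batch calldata or a state-root commitment) to the L1 contract here.
        print(f"{self.name}: tx nonce={nonce}, payload={len(payload)} bytes")

def run(submitter: Submitter, make_payload) -> None:
    for _ in range(3):  # a real service would loop until shut down
        submitter.submit(make_payload())
        time.sleep(0.1)

# Hypothetical keys; each component would load its own key from configuration.
tx_batch_submitter = Submitter("tx-batch-submitter", private_key="0xAAA...")
state_root_submitter = Submitter("state-root-submitter", private_key="0xBBB...")
threading.Thread(target=run, args=(tx_batch_submitter, lambda: b"tx-batch-data")).start()
threading.Thread(target=run, args=(state_root_submitter, lambda: b"state-root-batch")).start()
```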
### Transaction Batch Submitter

#### Considerations for 1.0
### State Root Batch Submitter
#### Considerations for 1.0
### Questions
-
Comments on the OP:
I think this is one of the most important parts to decide on and I like both of the suggestions. I think a Note also that committing
nit: the sequencing window is a parameter that has to do with ordering rights; the frequency at which the dispute game finalizes new state roots to L1 can be different. See Karl's writeup here.
-
Comments on @tynes's post:

WRT the multisig/fund security, what is the concrete security improvement you're looking for? Is the idea to run multiple batch submitters, which have to agree, but are run in different environments so that it is harder to hack all of them than any one of them? This also seems like it would increase gas costs without the threshold crypto. If we just want to manually top up the batch submitter less often, I think there are ways we could do that separately from 1.0.
I think this is a strong yes; we should do it in whatever way is best. The point of the squash merge will be to avoid forward-compatibility issues.
See the nit in the previous comment for why "sequencer window" is not the right word here, but we can absolutely take this as a given. It's actually not even decided that it will be a state root (vs. a full blockhash or something else -- this is the
-
SSZ is both a serialization and a merkleization format, but they are separate things. We don't need any of the serialization logic, just the merkleization. I'm maintaining 2 implementations (Python and Go), and there are many others out there, used by eth2 clients in production: ethereum/consensus-specs#2138. And there are spec-tests for that.

I would propose some type structure like:

```python
class OptimismCommitment(Container):
    chain_head: ChainHead
    withdrawal_tree_root: Bytes32  # Binary merkle tree accumulator root, matching type List[OptimismWithdrawal, 2**20]

class ChainHead(Container):
    state_root: Bytes32  # MPT root
    latest_payload: ExecutionPayload  # SSZ type matching an execution-layer block

class OptimismWithdrawal(Container):
    amount: uint256
    other_field: foobar
    ...
```

This would look like:
After a certain number of withdrawals it makes sense to clear the tree in some way (e.g. move it to a deeper tree for more costly retrieval), to keep the tree depth low, and thus the calldata low (cheaper withdrawals). If we had KZG or other fancy crypto on L1 it could have been a 48-byte withdrawal proof (flat vector commitment over all withdrawals), but 640 bytes + 20 hash calls seems nice compared to an MPT proof to the L2 account and then the storage slot. And maybe there are other accumulators / parameters (a 10-deep binary tree and more frequent moves of old withdrawals to a deeper tree?) to reduce cost further.
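A minimal sketch of the branch verification implied by the "640 bytes + 20 hash calls" figure, assuming a plain binary merkle tree of depth 20 over the withdrawal leaves; the function name and the use of SHA-256 are illustrative, and an SSZ `List` would additionally mix in the list length, which is omitted here:

```python
import hashlib

TREE_DEPTH = 20  # matches List[OptimismWithdrawal, 2**20]

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_withdrawal_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    """Verify a depth-20 binary merkle branch: 20 sibling hashes
    (20 * 32 = 640 bytes of proof data) and 20 hash calls."""
    assert len(branch) == TREE_DEPTH
    node = leaf
    for depth, sibling in enumerate(branch):
        if (index >> depth) & 1:   # leaf index bit decides left/right ordering
            node = sha256(sibling + node)
        else:
            node = sha256(node + sibling)
    return node == root
```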
-
@tynes Thanks for the writeup 🙏
Is there any reason the batch submitter and the state root submitter need to be coupled? Can't they be fully independent components? (I thought they were before reading this!)
If submitted as calldata that can be used in proofs later, no (unless you have a proof handy, but then you're not relying solely on JSON-RPC). If they're stored in EVM storage, then yes.

We could add a JSON-RPC call to get the proofs on the L2 nodes, which can then be verified with a JSON-RPC call to L1. This will depend on how much we feel like we need to gas-golf state submissions. It might be okay to store everything in storage.
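A sketch of how a verifier might fetch such a proof over JSON-RPC, assuming the commitments live in storage of an L1 contract. The contract address and slot layout below are hypothetical, but `eth_getProof` (EIP-1186) is the standard call for retrieving account/storage Merkle proofs:

```python
import json
import urllib.request

def get_storage_proof(rpc_url: str, contract: str, slot: str, block: str = "latest") -> dict:
    """Fetch a Merkle-Patricia proof for one storage slot via eth_getProof (EIP-1186).
    The returned accountProof/storageProof can then be checked against an L1 state root."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getProof",
        "params": [contract, [slot], block],
    }
    req = urllib.request.Request(
        rpc_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Hypothetical contract address and storage slot, purely for illustration:
# proof = get_storage_proof("http://localhost:8545", "0x" + "11" * 20, "0x0")
```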
Could you speak more to the potential problems here?
How much of an issue is it to require a backwards scan to find the latest submitted batch? The calldata of the batch submission could include some kind of running total? I'm also wondering how this works in a multi-sequencer world.
If you're talking about the dispute game, we'd "bisect" the execution trace (i.e. a record of all execution instructions) and the intermediate hash would be a hash of the geth program state.
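A rough off-chain sketch of that bisection, assuming both parties can produce a hash of the program state at any step; in the actual dispute game this search would be driven interactively through L1 transactions, and the function names here are illustrative:

```python
def first_disagreeing_step(claimed_hash_at, our_hash_at, num_steps: int) -> int:
    """Binary-search ("bisect") the execution trace for the first step at which the
    asserter's claimed hash of the program state differs from the one we compute.
    Assumes the hashes agree at step 0 and disagree at step `num_steps`."""
    lo, hi = 0, num_steps  # invariant: hashes agree at `lo`, disagree at `hi`
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claimed_hash_at(mid) == our_hash_at(mid):
            lo = mid
        else:
            hi = mid
    # The single instruction taking step `lo` to step `hi` is the one that can be
    # re-executed (e.g. on L1) to resolve the dispute.
    return hi
```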
If the fault proof program hardcodes the sequencer's L1 addresses (or reads them on the L1 chain), then it can process an L1 block, filter for transactions sent to the EOA from the sequencer's authorized addresses, and process these as batch submissions. What's unsafe here? This does, however, preclude a stateless batch submitter.
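A small sketch of that filtering step, assuming the block comes from `eth_getBlockByNumber` with full transaction objects; the address constants are hypothetical placeholders:

```python
# Hypothetical placeholders: the sequencer's authorized L1 addresses and the
# address that batch transactions are sent to.
SEQUENCER_ADDRESSES = {"0x" + "aa" * 20}
BATCH_INBOX_ADDRESS = "0x" + "bb" * 20

def extract_batch_submissions(l1_block: dict) -> list:
    """Given an L1 block with full transaction objects, keep only transactions sent
    by an authorized sequencer address to the batch inbox; their calldata is treated
    as submitted batch data."""
    authorized = {a.lower() for a in SEQUENCER_ADDRESSES}
    batches = []
    for tx in l1_block["transactions"]:
        to = (tx.get("to") or "").lower()  # contract creations have no "to" field
        if to == BATCH_INBOX_ADDRESS.lower() and tx["from"].lower() in authorized:
            batches.append(tx["input"])    # raw calldata = the batch
    return batches
```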
@protolambda Love that design, it's very clean!

This could be an EIP we push for post-1.0.
-
WRT output (previously known as state root) submission vs. sequencer batch submission -- the more I think about these, the more I notice differences in requirements which should be considered. Notes on those, just to share:
-
One more note on the relationship between this and cannon: one input to the L2 minigeth will be the previous output commitment. Currently, cannon expects a vanilla blockhash, merklized as in L1, in

One last consideration is whether app devs will want to read other blockdata, such as ENS, and will want to verify it in Solidity. This could be an argument for including the L1 merklization, though they probably read just the MPT state root in the above case.
-
Sequencer interactions with L1 in the 1.0 spec need to integrate well with existing infra and with minimal risk. The systems team has already implemented similar Go functionality, so we should integrate their insights/work into the reference 1.0 spec and implementation.
This relates to the following milestones / features:
Before we can complete the above state/batch submission design, we need to address the dependencies (not part of deposit rollup):
- `ExecutionPayload` (eth2 SSZ type of the eth1 block structure with nice merkle properties), possibly mixing in other useful commitments to shorten proof length (e.g. `hash(hash_tree_root(exec_payload), withdrawal_accumulator_root)`)
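For illustration, a minimal sketch of that mixing step, assuming `hash` is SHA-256 and that both inputs are 32-byte roots; the actual hash function and commitment layout are still open questions in this thread, and `hash_tree_root(exec_payload)` would come from an SSZ library:

```python
import hashlib

def output_commitment(exec_payload_root: bytes, withdrawal_accumulator_root: bytes) -> bytes:
    """hash(hash_tree_root(exec_payload), withdrawal_accumulator_root):
    combine the SSZ root of the execution payload with the withdrawal
    accumulator root into one 32-byte output commitment."""
    assert len(exec_payload_root) == 32 and len(withdrawal_accumulator_root) == 32
    return hashlib.sha256(exec_payload_root + withdrawal_accumulator_root).digest()

# Example with placeholder roots (in practice these come from the SSZ merkleization above).
commitment = output_commitment(b"\x11" * 32, b"\x22" * 32)
```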
@tynes can you help outline what the systems team has already implemented, what you think is critical, and what we can improve in the 1.0 upgrade?