L2 Block Timestamps #251
Replies: 11 comments 9 replies
-
Ideas on handling short L1 block times:
The idea would be something like the following:
Note: we lose the property that L2 block timestamps are always the genesis timestamp plus an integer multiple of the block time.
-
I do think lazy block production is ideal, but we would likely have to implement a consensus engine for that. My main question here is what sort of protocol-level guarantees we can give to application devs. If the timestamps are based on config, then we can alter that config arbitrarily and the security guarantees go out the window. On L1, miners can alter the timestamp, but it has to be greater than the previous timestamp and within some threshold of the local wall-clock time.
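For reference, the L1 rule alluded to has roughly this shape. The exact drift threshold is an assumption on my part (clients such as geth reject blocks too far in the future; 15s is an illustrative value), and `l1_timestamp_valid` is a hypothetical helper, not client code:

```python
MAX_FUTURE_DRIFT = 15  # seconds; illustrative value, not a spec constant

def l1_timestamp_valid(parent_time: int, block_time: int, now: int) -> bool:
    """A block's timestamp must strictly exceed its parent's and must
    not be too far ahead of the validator's local clock."""
    return block_time > parent_time and block_time <= now + MAX_FUTURE_DRIFT
```

The point is that miners have latitude, but only within bounds enforced by every validating node; a config-derived L2 timestamp has no comparable protocol-level bound.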
-
Let's consider the proposal where the first L2 block of epoch N has the same timestamp as L1 block N (and let's assume for simplicity's sake that both of these timestamps are always multiples of 2). The fundamental problem, as I understand it, is that from the point of view of the sequencer, when it attempts to build an L2 block, there is uncertainty as to whether that block's timestamp (2s past the timestamp of the previous block) will land before or after the next L1 block's timestamp. @trianglesphere is that accurate?

I think this changes with the merge, where the block time becomes a fixed 12 seconds. So basically we can have 6 L2 blocks per L1 block at T+0, T+2, ..., T+10 (where T is the timestamp of the L1 block). @protolambda Are there any caveats here that we need to be aware of? Block production can most definitely be delayed, but is it also possible that in the canonical L1 chain a block gets skipped?

I think the issue I see with this design is a strong need for synchronicity between the L2 and L1 chains. If the last block we saw had number N and timestamp T, then we will need to see L1 block N+1 at time T+12, or we can't produce the first L2 block of epoch N+1. Even missing out on updates for a few seconds after T+12 becomes problematic for latency and throughput.

We can "fix" this problem by keeping a buffer of L1 blocks. So if the last L1 block has number N, we could add new sequenced transactions to a block in epoch N-X (e.g. X = 10). This lets us preserve latency/throughput for up to X missed L1 blocks. I don't like this too much though, as I expect it to enable some weird arbitrages, especially once we add the ability to read from the L1 chain on L2 (and this ability is sort of kinda present from the start, with proofs against the latest L1 block root). Also I'm not sure what we gain by setting such a relationship: it's neither intuitive nor particularly useful, but see the next section for more thoughts on this.
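The 6-blocks-per-epoch arithmetic above can be sketched directly. The constants come from the text (12s post-merge L1 slots, 2s L2 blocks); `epoch_l2_timestamps` is a hypothetical helper, not project code:

```python
L1_BLOCK_TIME = 12  # post-merge L1 slot time, in seconds
L2_BLOCK_TIME = 2   # target L2 block time, in seconds

def epoch_l2_timestamps(l1_timestamp: int) -> list[int]:
    """L2 block timestamps for the epoch anchored at an L1 block with
    timestamp T: T+0, T+2, ..., T+10 (six blocks per epoch)."""
    return [l1_timestamp + i * L2_BLOCK_TIME
            for i in range(L1_BLOCK_TIME // L2_BLOCK_TIME)]
```

This only works as a fixed 6-per-epoch schedule when L1 block N+1 really arrives at T+12, which is exactly the synchronicity concern raised above.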
On the other hand, the alternative design where the first block of epoch N has the timestamp of L1 block N-1 is just weird. If we're going to do that, we might as well disregard an explicit relationship between L1 and L2 timestamps. I think we could instead set boundaries on the relationships, such as:
This design lets us have variable-length epochs, which lets us survive temporary loss of connectivity to L1. You'd still get "weird arbitrages" sometimes when you have long epochs, but those would happen only when L1 connectivity is lost, which should be fairly rare. Am I missing something else that we absolutely need to consider?

Just a quick note, because I was very temporarily confused by this: this discussion is completely distinct from any notion of "sequencer window". The sequencer window is for deriving the L2 chain from L1. Here we're talking about the sequencer building the blocks before posting batches to L1.
-
Sequencer window yes, but we do need to consider the verifier. The timestamps created by the sequencer need to be sanity-checked and accepted by the verifiers.
-
Yes. This is a problem (but not insurmountable) with the sequencer forcing constant 2s block times. To me, here are the requirements:
The crux of the problem is that L1 block times have variability (and TBH I am incredibly distrustful of relying on shared timestamps in distributed systems). I like a solution similar to the one you proposed in point 3, with the addition that every L1 block is included in the L2 chain (which results in an uneven block time). The other solution is to keep the 2s block time and roll multiple deposits together (but now deposits can be included in a block that has the wrong height), though it is fully deterministic. The hard problem there is that the validity of the deposit tx is dependent on more state than it previously was.
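To illustrate the first option (every L1 block included in the L2 chain, uneven block time): with variable L1 spacing, the number of L2 blocks per epoch varies. This is a sketch under my own assumptions, notably the `max(1, ...)` rule guaranteeing at least one L2 block per L1 block; `epoch_sizes` is a hypothetical helper:

```python
L2_BLOCK_TIME = 2  # target L2 block time, in seconds

def epoch_sizes(l1_times: list[int]) -> list[int]:
    """Number of L2 blocks per epoch when every L1 block opens an epoch
    and L2 blocks otherwise tick every 2s. Variable L1 spacing yields
    uneven epoch sizes (and an uneven effective L2 block time)."""
    sizes = []
    for prev, nxt in zip(l1_times, l1_times[1:]):
        # at least one L2 block per epoch, even when two L1 blocks
        # land closer together than the L2 block time
        sizes.append(max(1, (nxt - prev) // L2_BLOCK_TIME))
    return sizes
```

With regular 12s spacing this gives the familiar 6 blocks per epoch; a short L1 gap produces a one-block epoch, which is exactly the uneven-block-time trade-off described above.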
-
Summary of today's discussions: Some concerns with the current design, where the L1 timestamp in the L1 attributes deposit is
A proposal that would assuage these issues: remove the tight coupling between L1 timestamps and L2 timestamps. This allows the number of L2 blocks to fluctuate (instead of being fixed to 6 per epoch when L1 does not skip slots). We would impose two constraints:
There are also things to be cautious about with this approach:
So now the question is whether we want to go ahead with this design, or whether there are other issues with it that we haven't properly considered.
-
New proposal, slightly modified from the above to allow for 1s L1 blocks. I renamed

Timestamps / Epoch Size Proposal
The goal of flexible epoch size is to deal with temporary loss of connection to L1 and natural latency issues. The flexibility is given by the two constraints. The sequencer is allowed to "run ahead" or "drift ahead" of L1 (i.e. create L2 blocks whose timestamps exceed the expected timestamp of the next epoch's L1 block), but within the limit given by

In practice, the concrete algorithm we will implement is that the sequencer will start a new epoch as soon as it sees a new L1 block, hence trying to keep the drift as close to zero as possible. The drift will only increase when we lose connection to L1. Upon resuming connection, we will have multiple L1 blocks to process, which will all (except the last one) be mapped to an epoch with a single L2 block, hence reducing the drift.
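The drift-draining behaviour described above can be sketched as follows. This is my own illustration of the stated rule (open a new epoch as soon as a newer L1 block is visible, advancing one epoch per L2 block), not the actual implementation; `assign_epochs` and its input shape are hypothetical:

```python
def assign_epochs(l1_blocks_available: list[list[int]]) -> list[int]:
    """For each L2 slot (one per L2 block time), pick the epoch number
    (L1 block number) it belongs to. l1_blocks_available[i] lists the
    L1 block numbers known to the sequencer when building L2 block i.
    A backlog of L1 blocks accumulated during an outage drains as a
    run of single-block epochs, reducing the drift."""
    epochs = []
    current = l1_blocks_available[0][-1]
    for known in l1_blocks_available:
        if known[-1] > current:
            current += 1  # advance at most one epoch per L2 block
        epochs.append(current)
    return epochs
```

For example, if the sequencer only sees L1 block 0 for three L2 slots and then L1 blocks 1–3 all at once, the epochs come out as 0, 0, 0, 1, 2, 3, 3: the stale epoch 0 grew to three blocks, and the backlog epochs 1 and 2 each get a single L2 block, as the text describes.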
Note that there is no point in waiting for "confirmations" from the L1 chain: since the L2 chain is derived from L1, any reorg on L1 would cause a reorg on L2, such that the mapping between L1 blocks and L2 epochs (and in particular, the deposits at the start of the epoch) is always preserved. To process a withdrawal emitted in a given L2 epoch, economic bridges should however wait for a number of L1 confirmations starting from the L1 block corresponding to the epoch.

Post-merge, we expect every L1 block to be spaced by 12s, though it is possible for blocks to be skipped and hence for the inter-block distance to be larger. The constraints outlined above do work with larger L1 block times, and also with shorter L1 block times, should we need to support them. (1)
(2) The
-
Feedback from conversation:
Decisions:
Open questions:
-
Here is my proposal from this morning to modify the maximum permissible L2 block time, to handle the case where L1 block times are less than L2 block times.

Timestamps / Epoch Size Proposal
Notes
-
Took me a while to figure out, but the key differences are:
I do think I'd like to see
I have a slight pushback on this: shouldn't we only derive things for which we have complete sequencing windows? But beyond that, this is a strictly better proposal than the previous one, and it doesn't add implementation complexity. Great job! I think we have a winner 🔥🔥🔥
-
I'd just be happy for us to state, in the spec, that the given conditions do imply
In the current process, the absence of batch data for a specific epoch just implies that all the blocks for that epoch are empty. Since we are letting epoch size float, the question is how we should determine the size of "empty epochs". Is this right?
Alright, so untangling this a bit. Explaining the terms:
Interpretation of the definition:
Since
Imho, it would be cleaner to make it an inclusive bound and to redefine it as:
-
The current design of the system is to have an L2 block every `Config.Blocktime` seconds. Note: the target block time is 2 seconds. This results in a steady stream of L2 blocks (even if they are empty). There is another way to do this where L2 blocks are only produced when there are non-empty blocks, but that has some issues with timestamps.

When looking to produce L2 blocks, we create a block every 2s in the following (half-open) range:
I would like to switch it to the following:
The downside to the second approach is that now an L2 block depends on a block after its L1 info block for validity. The upside to the second approach is that the timestamps are more natural: in the first approach, the timestamp of the L2 block that includes a deposit will be less than the timestamp of the L1 block in which the deposit was included.
Note: neither approach handles the case in which the difference in L1 block times is less than the L2 block time.
To handle this, the specs use the following formula:

This is similar to the first approach in that the L2 time trails the L1 time. Note that it can collapse to an empty range if there are multiple L1 blocks in a row with too-short block intervals.
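The elided formula aside, both approaches share the same shape: L2 timestamps are laid out at 2s intervals over a half-open interval between consecutive L1 timestamps. A minimal sketch (my own illustration, not the spec's formula) that also shows the collapse-to-empty case:

```python
L2_BLOCK_TIME = 2  # target L2 block time, in seconds

def l2_timestamps(range_start: int, range_end: int) -> list[int]:
    """L2 block timestamps at 2s intervals over the half-open range
    [range_start, range_end). If consecutive L1 blocks are closer
    together than the L2 block time, the range collapses and the
    epoch has no L2 blocks."""
    return list(range(range_start, range_end, L2_BLOCK_TIME))
```

A 12s gap yields the usual six timestamps, while a 0s gap (two L1 blocks at the same or adjacent timestamps) yields an empty epoch, which is the collapse noted above.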
@norswap's proposal