EIGENDA #237
hey,
thank you so much @abdelhamidbakhta for letting me introduce eigenda in madara community call #4!
note that i'm not a member of the eigenlabs team, but as Madara is exploring different DA solutions, i thought it could make sense to complete the introduction of eigenda! this post covers three questions:
1- what is eigenlayer?
2- what is eigenda?
3- why could it make sense for starknet to use it?
1-EigenLayer
it's a mechanism designed to let the ethereum trust network be used more flexibly, in order to build general distributed systems.
this mechanism is restaking: eigenlayer is the restaking collective. you stake your $eth to commit to producing ethereum blocks, but you can now restake, i.e. use that same stake to secure other services/middlewares.
in practice, restaking means opting into additional slashing conditions in order to earn additional yield and, at the same time, amortize the capital cost of staking.
eigenlayer is not a new chain; it's just a set of smart contracts deployed on ethereum.
services building on eigenlayer write their contracts (a registration contract, a slashing contract and a payment contract) in the evm, have them communicate with the eigenlayer smart contracts, and publish the off-chain software that restakers will have to run. a rough picture of those three roles is sketched below.
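to make that concrete, here's a rough python sketch of the three contract roles; all the names and signatures here are illustrative, not eigenlayer's actual interfaces:

```python
# Illustrative only: these class names, fields and methods are made up to
# mirror the three roles described above, not EigenLayer's real contracts.

class RegistrationContract:
    """Operators opt in by pointing their restaked ETH at this service."""
    def __init__(self) -> None:
        self.operators: dict[str, int] = {}  # operator address -> restaked wei

    def register(self, operator: str, restaked_wei: int) -> None:
        self.operators[operator] = restaked_wei


class SlashingContract:
    """Encodes the extra slashing conditions the operator opted into."""
    def slash(self, operator: str, proof_of_misbehavior: bytes) -> None:
        # verify the proof, then burn/redistribute the operator's restake
        ...


class PaymentContract:
    """Routes fees from the service's users to its operators."""
    def pay(self, operator: str, amount_wei: int) -> None:
        ...
```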
2-EigenDA
eigenda is the first service built on top of eigenlayer. it is a pure data availability service secured by $eth restakers who make a credible commitment that they are storing the data. how these da guarantees are ordered is not up to eigenda but to its mother chain (e.g. ethereum).
now let's look at the desirable characteristics of a da layer:
- hyperscale: with n nodes in the system, each with bandwidth C, the system data rate should be nC/2 (the more nodes participate, the more scalability you get).
- low cost: the cost incurred by the entire da layer should be only 2x the cost of a single node downloading and storing the data.
- low latency: the time to confirm data availability should be on the order of native network latency, which is much smaller than block latency.
- verifiable: light nodes with very little computational and networking ability should be able to verify whether the da layer is complying or not.
- customizability: applications should be able to permissionlessly customize safety/liveness tradeoffs, staking token modalities, the erasure code, the tokens in which fees are paid to the da layer, and so on.
and it turns out eigenda achieves all five properties.
to put this in context, the eigenda testnet (see pic 1) runs with 100 nodes, so the hyperscale system throughput you get is 100 x 0.3 MB/s / 2 = 15 MB/s, and it is completely possible to scale to GB/s if not TB/s in the future as more and more nodes opt in.
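as a quick sanity check on that arithmetic (numbers taken from the paragraph above):

```python
# Back-of-the-envelope check of the hyperscale formula: system rate = n * C / 2.
n_nodes = 100             # testnet size mentioned above
node_bandwidth_mb = 0.3   # MB/s per node (figure from the post)

system_rate = n_nodes * node_bandwidth_mb / 2
print(f"{system_rate} MB/s")  # -> 15.0 MB/s, matching the 15 MB/s claim
```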
eigenda uses validity proofs, specifically kzg polynomial commitments, so you don't have to wait (as you would in systems using fraud proofs) to be sure the erasure coding was done correctly. kzg makes it easier and cheaper to verify that the eigenda system is running correctly, while still letting home stakers run the eigenda software.
one can also imagine L3s that want to launch their own token; it turns out this token could be used on eigenda in a dual staking model. how does that look? on eigenlayer, nodes restake eth and are thereby able to participate as validators for the eigenda process.
in the same way, there'll be a special contract where people can stake their $L3 token. when they do, they are participating in the eigenda protocol and communicating their interest in doing data availability for that L3. they then download and run the eigenda software off-chain: either the restaker runs it themselves, or they ask an operator to run it for them. what the operator does is download their assigned portions of the data and submit certificates.
these certificates are then aggregated by the rollup sequencer (madara?), which puts an aggregate certificate onto the ethereum contracts. when eigenda's contracts verify that both the $L3 token nodes and the ethereum restakers have received their portions of the data, we get an attestation on ethereum that the activity happened.
the rollup can then move ahead with the new state update. a rough sketch of this dual-quorum check is below.
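a minimal sketch of that flow, assuming hypothetical names (Certificate, quorum_reached and aggregate_and_post are mine, not eigenda's API); the point is simply that the sequencer only posts once both quorums have signed:

```python
# Hypothetical sketch: the rollup only advances once BOTH quorums attest.

from dataclasses import dataclass


@dataclass
class Certificate:
    operator: str      # who signed
    quorum: str        # "eth_restakers" or "l3_stakers"
    chunk_id: int      # which portion of the blob they stored
    signature: bytes


def quorum_reached(certs: list[Certificate], quorum: str, threshold: int) -> bool:
    """Count distinct operators that attested for the given quorum."""
    return len({c.operator for c in certs if c.quorum == quorum}) >= threshold


def aggregate_and_post(certs: list[Certificate], threshold: int) -> bool:
    """What the sequencer would do before posting to the Ethereum contracts."""
    both = (quorum_reached(certs, "eth_restakers", threshold)
            and quorum_reached(certs, "l3_stakers", threshold))
    # if both quorums signed, post the aggregate certificate on-chain
    return both  # True -> rollup may move ahead with the state update
```

in the real system the receipts would be aggregated bls signatures verified on-chain rather than a python count, but the two-quorum gate is the idea.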
3-synergies with Starknet
gaming on starknet looks like a huge topic, and eigenda shares the vision that the next generation of games, social apps, ai, etc. will run on blockchains.
that said, we are limited by the data rate of ethereum, as you can see in this pic (from eigenlabs).
note that even if data compression improves so much that a 200-byte tx shrinks to, say, 4 bytes, you still only get 83 KB/s / 4 B ≈ 20k txs/sec. that compression is optimistic and could eventually work for financial txs, but for richer things such as social txs it's not clear data can be compressed that much, and obviously way more txs/sec will be needed.
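the same arithmetic, spelled out (assuming ethereum's da rate is the ~83 KB/s figure from the pic):

```python
# The arithmetic behind the 20k txs/sec figure, under the stated assumptions.
eth_data_rate_bytes = 83_000   # ~83 KB/s, ethereum's rough DA rate today
compressed_tx_bytes = 4        # very optimistic compression (200 B -> 4 B)

print(eth_data_rate_bytes / compressed_tx_bytes)  # -> 20750.0 ≈ 20k txs/sec
```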
concretely, the sequencer would take the tx data, split it, encode it using erasure codes and kzg polynomial commitments, and then publish it to eigenda.
each eigenda node would then download and store only a small portion of the data (keeping the job lightweight to attract more restakers to opt in) and send a signed receipt to the rollup nodes attesting that the data has been made available.
even if some eigenda nodes go offline, you can still reconstruct all the data and retain the security of the entire system; the toy example below shows why erasure coding makes that possible.
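here's a toy, self-contained illustration of that property; it uses exact fractions and plain Lagrange interpolation instead of the finite-field arithmetic and kzg commitments a real system would use, so treat it as a teaching sketch only:

```python
# Toy Reed-Solomon-style erasure coding: treat the k data symbols as points
# on a degree-(k-1) polynomial, hand each node one evaluation, and ANY k of
# the n evaluations reconstruct the original data exactly.

from fractions import Fraction


def lagrange_eval(points: list[tuple[int, Fraction]], x: int) -> Fraction:
    """Evaluate the unique polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total


def encode(data: list[int], n: int) -> list[tuple[int, Fraction]]:
    """Spread k data symbols across n > k nodes as polynomial evaluations."""
    base = [(i, Fraction(d)) for i, d in enumerate(data)]
    return [(x, lagrange_eval(base, x)) for x in range(n)]


def reconstruct(shares: list[tuple[int, Fraction]], k: int) -> list[int]:
    """Any k surviving shares recover the original k data symbols."""
    survivors = shares[:k]
    return [int(lagrange_eval(survivors, i)) for i in range(k)]


data = [42, 7, 13, 99]        # k = 4 data symbols
shares = encode(data, n=8)    # handed out to 8 nodes
survivors = shares[3:7]       # half the nodes went offline; 4 still answer
assert reconstruct(survivors, k=4) == data
```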
rollup nodes could then post a hash of the data and a single aggregate signature, attesting that the data was published, onto the ethereum contracts. and since rollup nodes receive the certificate quickly (~500 ms), this gives users a strong pre-confirmation.
note that what is posted onto the ethereum contracts could be about 200 bytes instead of potentially 1 GB.
we could already say that, by design, eigenda is the second most secure place to publish data, because it inherits and is run by the same decentralized node set and stake as ethereum (this still depends on how many restakers opt in).
however, this might not be enough, which is why slashing is useful: in eigenda, if an eigenlayer operator/restaker sends back attestations that they are storing data but actually is not storing it, there is a slashing contract (based on proof-of-custody) that will slash that operator. a toy version of the proof-of-custody idea is sketched below.
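to make the proof-of-custody idea concrete, here is a deliberately simplified sketch (not eigenda's actual scheme): the operator can only answer custody challenges if they really hold the chunk, and once their secret is revealed anyone can recheck old answers:

```python
# Toy proof-of-custody (NOT EigenDA's real scheme): the operator answers
# challenges with a digest mixing the stored chunk and a secret they later
# reveal. If they discarded the chunk, they can't answer; once the secret is
# public, a wrong past answer is the proof_of_misbehavior fed to slashing.

import hashlib


def custody_response(chunk: bytes, secret: bytes, epoch: int) -> bytes:
    return hashlib.sha256(chunk + secret + epoch.to_bytes(8, "big")).digest()


# operator side: must actually hold `chunk` to compute this
chunk, secret = b"assigned data chunk", b"operator epoch secret"
answer = custody_response(chunk, secret, epoch=7)

# verifier side, after the secret is revealed: recompute and compare
assert answer == custody_response(chunk, secret, epoch=7)
```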
what about the cost of publishing data to eigenda?
the problem with ethereum is that you don't know in advance the cost of publishing data, since it depends on network activity; it's kinda like running an airline without any control over the price of jet fuel.
to solve this, eigenda gives you the option of a native blockspace reservation: you come and say "hey, i need this crazy amount of bandwidth", and eigenda will reserve it for you for the next year, so you know exactly what your data publishing cost will look like!
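the value of the reservation is simply cost predictability; with made-up numbers:

```python
# With spot pricing your annual DA cost is a random variable; with a
# reservation it's a known constant. All numbers below are purely
# illustrative, not EigenDA pricing.

reserved_mb_per_s = 5       # bandwidth reserved for the year
price_per_mb = 0.001        # hypothetical flat reservation price ($/MB)

seconds_per_year = 365 * 24 * 3600
annual_cost = reserved_mb_per_s * seconds_per_year * price_per_mb
print(f"known up front: ${annual_cost:,.0f}/year")
```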
hope that makes sense, looking forward to talking about it!
some technical docs can be found here if needed:
https://docs.eigenlayer.xyz/developers/technical-docs