Smash Processor #613
Merged
Conversation
Resolves some bad assumptions made regarding whether keys are unique or not.
Abstract, to be done for the transactions, the batches, the cosigns, the slash reports, everything. It has a minimal API itself, intended to be as clear as possible.
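A minimal sketch of what such an abstract channel API could look like, using an in-memory BTreeMap as a stand-in for the real database (all names here are illustrative, not the actual API):

```rust
use std::collections::BTreeMap;

// A FIFO channel persisted in a key-value store, sketched over an in-memory
// BTreeMap. `send` appends under an incrementing index; `try_recv` pops the
// oldest unread entry.
pub struct DbChannel {
    entries: BTreeMap<u64, Vec<u8>>,
    next_send: u64,
    next_recv: u64,
}

impl DbChannel {
    pub fn new() -> Self {
        Self { entries: BTreeMap::new(), next_send: 0, next_recv: 0 }
    }

    // Append a message to the channel.
    pub fn send(&mut self, msg: Vec<u8>) {
        self.entries.insert(self.next_send, msg);
        self.next_send += 1;
    }

    // Pop the oldest unread message, if any.
    pub fn try_recv(&mut self) -> Option<Vec<u8>> {
        let msg = self.entries.remove(&self.next_recv)?;
        self.next_recv += 1;
        Some(msg)
    }
}
```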
Also defines primitives for the processor.
Adds the task to mark blocks safe to scan, and outlines the task to report blocks.
…g the block safe to scan/report
Also splits crate into modules.
Scan now only handles External outputs, with an associated essay going over why. Scan directly creates the InInstruction (previously planned to be done in Report), and Eventuality is declared to end up yielding the outputs. That will require making the Eventuality flow two-stage: one stage to evaluate existing Eventualities and yield outputs, and one stage to incorporate new Eventualities before advancing the scan window.
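The two-stage flow described above can be sketched as follows (the types and names here are hypothetical stand-ins for the processor's actual ones):

```rust
// Stage one evaluates existing Eventualities against a block, yielding any
// outputs they resolve to. Stage two only then incorporates Eventualities
// from new plans, so they exist before the scan window advances past the
// blocks they may resolve in.

#[derive(Clone, PartialEq, Debug)]
pub struct Output(pub [u8; 32]);

pub struct Eventuality {
    pub id: [u8; 32],
}

impl Eventuality {
    // Check whether this Eventuality resolved within this block's outputs.
    pub fn resolve(&self, block_outputs: &[Output]) -> Option<Output> {
        block_outputs.iter().find(|output| output.0 == self.id).cloned()
    }
}

pub fn eventuality_task(
    existing: &mut Vec<Eventuality>,
    new: Vec<Eventuality>,
    block_outputs: &[Output],
) -> Vec<Output> {
    // Stage 1: evaluate existing Eventualities, yielding resolved outputs and
    // dropping the Eventualities which resolved.
    let mut yielded = vec![];
    existing.retain(|eventuality| match eventuality.resolve(block_outputs) {
        Some(output) => {
            yielded.push(output);
            false
        }
        None => true,
    });
    // Stage 2: incorporate new Eventualities before the scan window advances.
    existing.extend(new);
    yielded
}
```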
I'm unsure where else it'll be used within the processor, yet it's generally useful and I don't want to make a dedicated crate yet.
Checks that fetched blocks are the indexed blocks. Sorts scanned outputs, so they aren't subject to an implicit ordering which may be non-deterministic (such as if handled by a threadpool).
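A minimal sketch of the determinism fix, assuming outputs carry a unique ID to sort by (the type and field names are illustrative):

```rust
// Sort scanned outputs by their ID so the result is deterministic regardless
// of the (possibly threadpool-driven) order they were produced in.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct ScannedOutput {
    pub id: [u8; 32],
}

pub fn sort_outputs(mut outputs: Vec<ScannedOutput>) -> Vec<ScannedOutput> {
    outputs.sort_by(|a, b| a.id.cmp(&b.id));
    outputs
}
```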
It's still unclear how we'll handle refunding failed InInstructions at this time. Presumably, extending the InInstruction channel with the associated output ID?
Effectively necessary for networks on which we utilize account abstraction in order to know what key to associate the received coins with.
Prevents the potential case of the substrate task and the scan task writing to the same storage slot at once.
Lets the Ethereum processor track the first key set as soon as it's set.
Offers a simpler API to the coordinator.
The caller is paid a fixed fee per unit of gas spent. That arguably incentivizes the publisher to raise the gas used by internal calls, yet this doesn't affect the user UX, as they'll have flatly paid the worst-case fee already. It does pose a risk where callers are arguably incentivized to cause transaction failures which consume all the gas, not just increased gas, yet:
1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees spent causing the failure
The benefit to only paying the callers for the gas used, not the gas allotted, is that it allows Serai to build up a buffer. While this should be minor, a few cents on every transaction at best, if we ever do have any costs slip through the cracks, it is ideally sufficient to handle those.
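The accounting described above can be sketched as (function and parameter names are hypothetical; the user flatly prepays the worst case, the caller is reimbursed only for gas actually used, and the difference accrues to Serai as the buffer):

```rust
// Split a transaction's prepaid fee between the caller's reimbursement and
// Serai's buffer. The user prepays `gas_limit * fee_per_gas` up front; the
// caller receives `gas_used * fee_per_gas`; the remainder is the buffer.
pub fn fee_split(gas_limit: u64, gas_used: u64, fee_per_gas: u64) -> (u64, u64) {
    assert!(gas_used <= gas_limit);
    let prepaid = gas_limit * fee_per_gas;
    let caller_payout = gas_used * fee_per_gas;
    let buffer = prepaid - caller_payout;
    (caller_payout, buffer)
}
```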
Also introduces the fee logic, despite it being stubbed.
Avoids the risk of the gas used by the contract exceeding the gas presumed to be used (causing an insolvency).
… out of synchrony
Saves a few thousand gas.
…ction ID
Uses the ID of the transfer event associated with the top-level transfer.
Prevents a consensus split where some nodes would drop transfers if their node didn't think the Router was deployed, and some would handle them.
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we did sync to that point), then check if the Router was deployed, yet at that very moment the node resets to genesis. By ensuring the Router is deployed, we avoid this (and don't need to track the deployment block in-contract). Also uses a JoinSet to sync the 32 blocks in parallel.
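A sketch of the parallel window sync (the commit itself uses a tokio JoinSet; this uses std threads to stay self-contained, and the fetcher is a stand-in for the actual RPC call):

```rust
use std::thread;

// Fetch a window of blocks in parallel, collecting results back in
// block-number order regardless of completion order.
pub fn sync_window<F>(start: u64, count: u64, fetch: F) -> Vec<Vec<u8>>
where
    F: Fn(u64) -> Vec<u8> + Sync,
{
    let fetch = &fetch;
    thread::scope(|s| {
        // Spawn one fetch per block in the window.
        let handles: Vec<_> =
            (start .. (start + count)).map(|number| s.spawn(move || fetch(number))).collect();
        // Join in spawn order, preserving the block order.
        handles.into_iter().map(|handle| handle.join().unwrap()).collect()
    })
}
```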
Generally I'd squash and merge, yet this is a massive PR which rewrites the entire processor(s), and several of the contained commits have alternative implementations I'd hate to see get lost. Accordingly, rebasing.
This smashes the processor into several crates. It resolves several issues, as documented in #569. It also finishes the Ethereum processor, properly supporting tokens and fees for relayers.
Almost completely untested. The existing tests have yet to be moved/adjusted, the coordinator isn't operable, and the e2e tests definitely won't work. I plan to step through the coordinator next before rebuilding the e2e stack.