Smash Processor #613

Merged
merged 179 commits into next from processor-smash on Sep 20, 2024

Conversation

kayabaNerve
Member

This smashes the processor into several crates. It resolves several issues, as documented in #569. It also finishes the Ethereum processor, properly supporting tokens and fees for relayers.

Almost completely untested. The existing tests have yet to be moved/adjusted, the coordinator isn't operable, and the e2e tests definitely won't work. I plan to step through the coordinator next before rebuilding the e2e stack.

Resolves some bad assumptions made regarding keys being unique or not.
An abstraction of the work to be done for the transactions, the batches, the cosigns, the slash reports, everything. It has a minimal API itself, intending to be as clear as possible.
Also defines primitives for the processor.
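For illustration, a hedged sketch of what such a minimal API over the signable items could look like. Every name here (Signable, Batch, ...) is an illustrative stand-in, not the crate's actual types.

```rust
use std::hash::Hash;

/// Something the processor may need to sign (a transaction, a batch, a cosign,
/// a slash report). Hypothetical trait, for illustration only.
pub trait Signable: Send {
  /// A stable identifier, used to track and deduplicate signing work.
  type Id: Copy + Eq + Hash + Send;

  fn id(&self) -> Self::Id;
  /// The canonical serialization which is actually signed.
  fn serialize(&self) -> Vec<u8>;
}

/// Example: a batch, as one possible Signable.
pub struct Batch {
  pub id: u32,
  pub encoded: Vec<u8>,
}

impl Signable for Batch {
  type Id = u32;
  fn id(&self) -> u32 {
    self.id
  }
  fn serialize(&self) -> Vec<u8> {
    self.encoded.clone()
  }
}

fn main() {
  let batch = Batch { id: 0, encoded: vec![] };
  assert_eq!(batch.serialize().len(), 0);
}
```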
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
Also splits the crate into modules.
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (previously planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
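A rough illustration of that two-stage ordering (all types and functions here are stand-ins, not the processor's real API):

```rust
struct Eventuality; // a planned on-chain effect we expect to eventually see
struct Output;      // an output yielded once an Eventuality resolves
struct Block;

// Stage one: check the block against Eventualities we already know of,
// yielding the outputs of any which resolved. Stubbed for the sketch.
fn evaluate(_existing: &[Eventuality], _block: &Block) -> Vec<Output> {
  Vec::new()
}

fn process_block(
  eventualities: &mut Vec<Eventuality>,
  new: Vec<Eventuality>,
  block: &Block,
) -> Vec<Output> {
  // Stage one: evaluate existing Eventualities and yield their outputs.
  let outputs = evaluate(eventualities.as_slice(), block);
  // Stage two: only now incorporate newly created Eventualities, before the
  // scan window advances past this block.
  eventualities.extend(new);
  outputs
}

fn main() {
  let mut eventualities = Vec::new();
  let outputs = process_block(&mut eventualities, vec![Eventuality], &Block);
  assert!(outputs.is_empty());
}
```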
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
Checks that fetched blocks are the indexed blocks. Sorts scanned outputs so
they aren't subject to an implicit, potentially non-deterministic order
(such as if handled by a threadpool).
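A minimal sketch of the determinism argument; the Output type and its id field are assumptions for illustration.

```rust
struct Output {
  id: [u8; 32],
}

fn sort_outputs(outputs: &mut Vec<Output>) {
  // Sorting by ID makes downstream handling independent of the order the
  // outputs happened to be produced in (e.g. by a threadpool).
  outputs.sort_by(|a, b| a.id.cmp(&b.id));
}

fn main() {
  let mut outputs = vec![Output { id: [2; 32] }, Output { id: [1; 32] }];
  sort_outputs(&mut outputs);
  assert!(outputs[0].id <= outputs[1].id);
}
```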
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably, extending the InInstruction channel with the associated
output ID?
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
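One conventional way to guarantee that, sketched below with hypothetical key prefixes, is to namespace every storage key by the task which owns it:

```rust
// Hypothetical key namespacing: prefixing every key with its owning task
// means the substrate task and the scan task never resolve to the same slot.
fn db_key(task_prefix: &'static [u8], item: &[u8]) -> Vec<u8> {
  let mut key = Vec::with_capacity(task_prefix.len() + item.len());
  key.extend_from_slice(task_prefix);
  key.extend_from_slice(item);
  key
}

fn main() {
  // Even for the same logical item, the two tasks use distinct slots.
  assert_ne!(db_key(b"substrate/", b"block/100"), db_key(b"scan/", b"block/100"));
}
```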
Lets the Ethereum processor track the first key set as soon as it's set.
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't affect the user's UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is that it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if any costs ever do slip through the
cracks, it ideally is sufficient to cover them.
Also introduces the fee logic, despite it being stubbed.
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
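A small worked example of the buffer argument; the constants are purely illustrative, not values from the contracts.

```rust
fn main() {
  let gas_allotted: u64 = 200_000; // worst case the user flatly prepaid for
  let gas_used: u64 = 160_000;     // gas the relayer's publication actually consumed
  let fee_per_gas: u64 = 30;       // fixed per-unit fee paid to the caller

  let user_paid = gas_allotted * fee_per_gas;
  let caller_reimbursed = gas_used * fee_per_gas;
  // The gap between worst-case and actual gas accrues to Serai as a buffer,
  // without worsening UX: the user already paid the worst case.
  let buffer = user_paid - caller_reimbursed;
  assert_eq!(buffer, (gas_allotted - gas_used) * fee_per_gas);
  println!("buffer accrued: {buffer}");
}
```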
…ction ID

Uses the ID of the transfer event associated with the top-level transfer.
Prevents a consensus split where some nodes would drop transfers (because their
node didn't think the Router was deployed) and others would handle them.
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
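A minimal sketch of that pattern with tokio's JoinSet; fetch_block is a stand-in for the actual RPC call and the error types are simplified.

```rust
use tokio::task::JoinSet;

async fn fetch_block(number: u64) -> Result<Vec<u8>, String> {
  // Placeholder for fetching the block at `number` over RPC.
  Ok(number.to_le_bytes().to_vec())
}

#[tokio::main]
async fn main() -> Result<(), String> {
  let start: u64 = 1_000_000;

  // Spawn the 32 fetches so they run concurrently.
  let mut set = JoinSet::new();
  for number in start .. (start + 32) {
    set.spawn(fetch_block(number));
  }

  let mut blocks = Vec::with_capacity(32);
  while let Some(joined) = set.join_next().await {
    // The outer Result is the JoinError, the inner one is fetch_block's.
    blocks.push(joined.map_err(|e| e.to_string())??);
  }
  assert_eq!(blocks.len(), 32);
  Ok(())
}
```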
@kayabaNerve
Member Author

Generally I'd squash and merge, yet this is a massive PR which rewrites the entire processor(s), and several of the contained commits have alternative implementations I'd hate to see get lost. Accordingly, rebasing.

@kayabaNerve kayabaNerve merged commit 2c8af04 into next Sep 20, 2024
9 of 22 checks passed
@kayabaNerve kayabaNerve deleted the processor-smash branch September 20, 2024 06:36