Transfer Batching #903
Replies: 3 comments 4 replies
-
Note: If we can get the costs for a single transfer down to what they are now (~500K for full flow, including getting the router reimbursed) I am willing to abandon this entire discussion. I just do not know if that is feasible without destroying readability.
-
Where's that spreadsheet you had that showed how much this reduced costs by? @LayneHaber
-
Update: By combining a number of per-transfer gas optimizations we can already bring the cost down substantially. If we were to batch (and keep these improvements), we would be able to drop the cost much further, but batching introduces significant complexity to the protocol:
- Charging fees
- Submitting the batch

While a decent tradeoff could be made by finding a combination of the periodic schedule and full batches, it makes charging fees much less predictable. Archiving this discussion.
-
Summary
Moving to Nomad comes with a large increase in gas costs on a per-transfer basis. We can reduce these costs by creating a batch of transfers and sending a merkle root representing this batch across chains. This will heavily alter the data passed across nomad, and merits separate `ConnextRouter`/`ConnextMessage` contracts rather than using the same `BridgeRouter`/`BridgeMessage` contracts.

Motivation
The current cost for a transfer is ~540K gas, which is already expensive, and using nomad for a single transfer increases the gas to ~810K (not including the nomad-specific transactions). This is too expensive a tax to add to each transfer, and it is important to bring this cost down.
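To make the amortization argument concrete, here is a back-of-the-envelope sketch. The ~540K and ~810K figures come from this discussion; treating their difference as a fixed per-message overhead shared evenly across the batch is an illustrative assumption, not a measured gas breakdown:

```python
# Rough gas math for batching. ~540K (current) and ~810K (single
# transfer via nomad) are from this discussion; the assumption that
# the ~270K difference is pure fixed overhead is for illustration.
CURRENT_GAS = 540_000       # per-transfer cost today
NOMAD_SINGLE_GAS = 810_000  # per-transfer cost via nomad with a batch of 1
OVERHEAD = NOMAD_SINGLE_GAS - CURRENT_GAS  # dispatch/handle/mint overhead

def per_transfer_gas(batch_size: int) -> int:
    """Amortized per-transfer cost if the overhead is shared by the batch."""
    return CURRENT_GAS + OVERHEAD // batch_size

# e.g. per_transfer_gas(10) -> 567_000, approaching CURRENT_GAS as
# the batch grows
```

Under these assumptions the nomad tax shrinks quickly: a batch of 10 already brings the per-transfer cost within ~5% of today's figure.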
Proposed Solution
Batching the transactions that are handled by nomad has the advantage of amortizing the costs to `dispatch` and `handle` a message, as well as the cost of minting / burning assets (as it is done on a per-asset basis within the batch instead of on a per-transfer basis). You can batch these transfers in a couple of ways:
1. Merkle Tree: Leaves are generated from the hash of the transfer data and inserted into a tree. At some point when the batch is "full" (as defined by reaching the maximum number of assets permitted in the batch), the root is ready to be `dispatch`ed through the nomad system.
2. Onion Hash: The onion accumulates one transfer at a time, with each layer hashing over the one before it: `onion_n = Hash(onion_n-1 + message)`. When you are peeling layers off the hash, there is a strong LIFO ordering requirement.

In both of these models, you use the nomad messaging system to send a 32 byte hash to the destination domain, as well as information about the assets included in that hash, once the batch is "full". The `TokenRegistry` would permit the `ConnextRouter` to mint the batch amounts for each of the transactions when the data is reconciled, and `process` can prove the inclusion of a given transfer in the data. This flow is illustrated using merkle trees below.

In a few points, this boils down to:
1. User calls `xcall` to enqueue a crosschain transfer.
2. At some point when the "batch is full" (heuristic can be flexible — time-based, asset-capped, etc.) the NXTP routers will call `dispatch`, which will kick off the sending via nomad. The message sent should include:
   - `to`
   - `destinationDomain`
   - `originDomain`
   - `callData` (for destination domain calls)
   - `sender` (origin domain `msg.sender`)
   - `amount, asset` to mint/account for with each batch
   - `recipient`
   - `type` to indicate if this is an NXTP transfer or not
3. When the message is sent across the nomad channel, the bridge should call `reconcile`, which pushes the batch merkle root to the `Connext` contract.
4. At any point after `xcall` is called, routers can call `execute`.
   - Before `reconcile` (i.e. before the root is onchain), they are providing "fast-liquidity", and to be reimbursed they will have to `process` the transaction.
   - After `reconcile`, routers are acting as relayers to swap into the adopted asset and execute any calldata on the destination domain.
5. Once `reconcile` is called (via the `handle` method on the `BridgeRouter`), the router can call `process` to reveal the leaf information and get reimbursed if they provided fast liquidity.

`Bridge{Router,Message}` vs. `Connext{Router,Message}`
In an ideal world, the `Connext.sol` contract would be responsible for only `call`s, while the `Bridge{Router,Message}` contracts would be responsible for the bridging itself. However, overriding the existing `Bridge{Router,Message}` contracts with the new message structure would fundamentally break the existing user flows. A good compromise is to turn the existing `Connext.sol` into a connext-specific router contract with minting permissions, which are managed via the `TokenRegistry`.

In this model the `ConnextRouter` is responsible for the connext-specific transfer logic, while nomad components retain responsibility for cross-chain message transport.
Eventually we could move towards deprecating the `BridgeRouter` in favor of a consolidated interface.

Merkle vs. Onion
While the onion hash does contain its size better (no tree stored onchain; it is always a single 32 byte hash), the strong ordering requirement means one transaction can block an entire batch. This is a likely pain point, so the flexibility justifies using the more expensive merkle tree.
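The tradeoff above can be seen in a minimal Python sketch of both accumulators. This uses sha256 for brevity (the contracts would presumably use keccak256), and the padding and proof format are illustrative, not the nomad merkle implementation:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Onion hash: onion_n = Hash(onion_n-1 + message). Always a single
# 32-byte accumulator, but proving transfer k requires replaying every
# layer added after it, in strict LIFO order.
def onion_add(onion: bytes, message: bytes) -> bytes:
    return h(onion + message)

# Merkle tree: any leaf can be proven independently of the others.
# (Duplicate-last-leaf padding, unsorted pairs; illustrative only.)
def merkle_root(leaves: list) -> bytes:
    level = [h(m) for m in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    level, proof = [h(m) for m in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def merkle_verify(root: bytes, message: bytes, proof: list) -> bool:
    node = h(message)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

batch = [b"transfer-0", b"transfer-1", b"transfer-2"]
root = merkle_root(batch)
# transfer-1 is provable on its own -- no LIFO peeling required
assert merkle_verify(root, b"transfer-1", merkle_proof(batch, 1))
```

The key difference shows up in `process`: with the merkle tree a router reveals one leaf plus a log-sized proof, whereas the onion forces every later transfer to be revealed first, so one stuck transfer stalls the batch.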
Test Cases
Note: this is not a complete list!
Outstanding Questions
- The batched asset information is sent as `TokenId[3], amount[3]`. Is there a better interface for this? Find relevant code here and here.
- Should we update the `TokenRegistry` to allow both the `BridgeRouter` and `ConnextRouter` to mint?

Tasks
Piggybacking off of the v0 implementation found here.
- Move `Connext.sol` to a nomad router
- Update the `TokenRegistry` for `Connext.sol`. Should be able to have both `Connext.sol` and `BridgeRouter.sol` as minters
- Fork `BridgeMessage.sol` and rename to `ConnextMessage.sol`
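Relating to the `TokenId[3], amount[3]` question above, here is a minimal Python sketch of one possible flat encoding for the batch message body. The layout, word sizes, and the `encode_batch_message`/`decode_batch_message` helpers are all hypothetical, not the actual `ConnextMessage` format:

```python
# Hypothetical flat layout: the 32-byte batch root followed by up to
# three (tokenId, amount) pairs, mirroring the `TokenId[3], amount[3]`
# interface questioned above. Word sizes are illustrative assumptions.
WORD = 32
MAX_ASSETS = 3

def encode_batch_message(root: bytes, assets: list) -> bytes:
    """assets: list of (token_id: 32 bytes, amount: int) pairs."""
    assert len(root) == WORD and len(assets) <= MAX_ASSETS
    body = root
    for token_id, amount in assets:
        assert len(token_id) == WORD
        body += token_id + amount.to_bytes(WORD, "big")
    return body

def decode_batch_message(body: bytes):
    root, rest, assets = body[:WORD], body[WORD:], []
    while rest:
        token_id = rest[:WORD]
        amount = int.from_bytes(rest[WORD:2 * WORD], "big")
        rest = rest[2 * WORD:]
        assets.append((token_id, amount))
    return root, assets
```

A fixed-width layout like this keeps `handle` parsing cheap, but it also caps the batch at `MAX_ASSETS` distinct assets, which is exactly the interface tension the outstanding question raises.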