-
We want to keep in mind that the current execution-unit limits on transactions are a bottleneck/pain point for smart contracts. From what I understand, they are that small not because the nodes have limited CPU capacity overall, but because they have a limited time window in which to complete script validation. Speculative execution allows that work to be spread out instead.
-
I describe next some ideas on how to ensure that the fees of (most of) the transactions included in IBs are collected.

1. Assume each tx has a unique designated fee-paying input. By hashing this UTxO we obtain the tx's type.
2. IB producers avoid re-including txs whose type they have already seen in valid and roughly on-time (but not necessarily serialized) IBs received earlier.
3. The tx issuer is responsible for the creation of conflicting txs, and thus he/she should pay for them anyway.

Given these three ideas, and for appropriately set parameters, we get that the fees of most transactions included in IBs are collected.
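The first two ideas above could be sketched roughly as follows. This is illustrative Python, not a spec; `tx_type`, `IBProducer`, and the pair-based tx representation are hypothetical names I am introducing for the sketch, and the choice of blake2b is likewise just an assumption.

```python
import hashlib

def tx_type(fee_paying_input: bytes) -> bytes:
    """Idea 1 (sketch): a tx's 'type' is the hash of its unique
    designated fee-paying input (here a serialized UTxO reference)."""
    return hashlib.blake2b(fee_paying_input, digest_size=32).digest()

class IBProducer:
    """Idea 2 (sketch): skip txs whose type was already seen in earlier
    valid, roughly on-time IBs. All names are illustrative."""

    def __init__(self):
        self.seen_types = set()

    def note_received_ib(self, ib_txs):
        # ib_txs: iterable of (tx_id, fee_paying_input) pairs
        for _, fee_input in ib_txs:
            self.seen_types.add(tx_type(fee_input))

    def select_for_new_ib(self, mempool):
        """Pick txs for a fresh IB, deduplicating by type: two txs that
        spend the same fee-paying input conflict, so only one is kept."""
        selected = []
        for tx_id, fee_input in mempool:
            t = tx_type(fee_input)
            if t not in self.seen_types:
                selected.append(tx_id)
                self.seen_types.add(t)
        return selected
```

Note that deduplicating by type rather than by tx id is what makes conflicting txs (same fee input, different bodies) collapse to a single inclusion.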
We have not covered the case of maliciously crafted IBs that contain invalid txs. This case can be dealt with by requiring collateral from the IB producer; I'll post more details in a subsequent post. Note that the design of concurrency-safe smart contracts is orthogonal to this solution.
-
@WhatisRT Any other issues you see with this design? Otherwise, I'll try to write it down and get comments from a wider set of parties.
-
I thought I'd make a thread here about what I proposed on our last call. First, an overview: the Leios paper mentions the issue of speculative validation on page 8, specifically pointing to the bottleneck in account-based ledgers and how it can be alleviated in UTxO-based ledgers. My main concern is that this optimization adds a non-trivial amount of complexity to the ledger and that it increases the potential for DoS and resource attacks.
With the current smart-contract design on Cardano, there is a collateral mechanism that results in the following guarantee: if a transaction passes all "basic" checks (e.g. all the crypto, accounting, etc.), it is guaranteed to be a valid transaction, and we can include it in a block and collect its fees. This is a DoS-prevention mechanism, since it means that you can only get the node to run the basic checks on a transaction for free. If you could run a script for free, you could easily make very small but expensive-to-verify transactions, which amplifies the attack potential. For the basic checks, however, given a fixed ledger state, the work necessary to do them scales linearly with the size of the transaction.
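A toy illustration of that guarantee (all names are hypothetical, not the actual ledger API; the real phase-1/phase-2 split in the Cardano ledger is far richer than this):

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    size_bytes: int
    scripts: list = field(default_factory=list)  # callables returning bool
    fee: float = 0.0
    collateral: float = 0.0

def phase1_checks(tx: Tx) -> bool:
    """Stand-in for the 'basic' checks (signatures, accounting, ...).
    Their cost scales linearly with tx size, so running them for free
    on every received tx is bounded work."""
    return tx.size_bytes > 0 and tx.fee >= 0

def apply_tx(tx: Tx, pot: float) -> float:
    """Once phase 1 passes, the tx pays either its fee (scripts succeed)
    or its collateral (scripts fail), so script execution is never free."""
    if not phase1_checks(tx):
        return pot                        # rejected; no work beyond phase 1
    if all(s() for s in tx.scripts):      # phase 2: run the scripts
        return pot + tx.fee
    return pot + tx.collateral            # scripts failed: collect collateral
```

The point is that in both phase-2 outcomes someone pays for the script execution; the attack surface is limited to the cheap, size-linear phase-1 checks.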
This guarantee goes away with speculative transaction validation, or more specifically speculative script validation. If we run scripts as part of input block validation, we run the risk of not collecting the fees in the case of conflicts.
It's a bit difficult to construct a proper attack out of this, since it depends on the details of IB validation, which we don't really have AFAIK. But in the simplest scenario, an attacker could make a single output, make a different transaction spending it for every stake pool, and send them out. If they all get included in input blocks, say `n` in total, the attacker gets a multiplier of `n` on work done for the paid fees. This is just a resource attack, but depending on the IB validation logic, there might be DoS attacks.

The way I see it, not doing that validation as part of IB processing introduces only a CPU bottleneck (if implemented correctly; otherwise there might also be a memory bottleneck), since it just comes down to delayed compute. It is well known that CPU usage of nodes is currently very low, so it might be very interesting to estimate how much slowdown this would actually cause and to compare this with other parameters. To do this, we could estimate our peak transaction throughput / peak transactions per pipeline and measure how long the current ledger needs to validate the mainnet transactions of one full pipeline. Additionally, we would need to know how much of that time is spent on script validation.
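The arithmetic of that conflict scenario can be made explicit. A minimal sketch, with illustrative numbers and a function name I am inventing for the purpose:

```python
def wasted_script_work(n_ibs: int, script_cost_ms: float):
    """Resource-attack arithmetic: an attacker makes one output and a
    distinct conflicting tx for each of n_ibs input blocks. Every IB
    producer speculatively runs the script, but at most one of the txs
    can be serialized, so only one fee is ever collected.
    Returns (total script work in ms, multiplier over paid-for work)."""
    total_work = n_ibs * script_cost_ms   # the script runs once per IB
    paid_for = script_cost_ms             # only the serialized tx pays
    return total_work, total_work / paid_for
```

So for `n` input blocks the multiplier is exactly `n`, independent of the per-script cost: the attacker pays one fee and extracts `n` script executions from the network.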
I don't necessarily think that we need to do this soon, but once it's time to come up with a proper implementation plan I think we should definitely know these things.
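A measurement along those lines could be structured as below. This is only a harness sketch: `validate_basic` and `validate_scripts` are stand-ins for the real ledger validation functions, which this code does not attempt to model.

```python
import time

def validation_profile(txs, validate_basic, validate_scripts):
    """Run one pipeline's worth of txs through both validation stages and
    report total wall-clock time plus the fraction spent on scripts.
    The two validators are hypothetical placeholders for ledger code."""
    t0 = time.perf_counter()
    for tx in txs:
        validate_basic(tx)
    t_basic = time.perf_counter() - t0

    t0 = time.perf_counter()
    for tx in txs:
        validate_scripts(tx)
    t_scripts = time.perf_counter() - t0

    total = t_basic + t_scripts
    script_fraction = t_scripts / total if total > 0 else 0.0
    return total, script_fraction
```

Fed with real mainnet transactions and the actual ledger functions, the `script_fraction` output is exactly the "how much of that time is spent on script validation" number asked for above.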