CIP-???? | Modules in UPLC #946
Conversation
Thanks @rjmh - I'll change the review status to Draft (as formerly reflected in the title) and please let us know when you think it's ready for review and we can mark it Triage for introduction at the following CIP meeting & start tagging more Plutus representatives to go over it ***@***.*** @MicroProofs @michele-nuzzi you may be interested in an advance look).

Hi Robert, actually it's pretty complete. We were hoping to get some feedback from the community. There are many possible variations, but choosing between them could benefit from community input. - John

Yes @rphair this is ready for review.
The motivation for these fees is to deter DDoS attacks based on
supplying very large Plutus scripts that are costly to deserialize,
but run fast and so incur low execution unit fees. While these fees
are likely to be reasonable for moderate use of the module system, in
the longer term they could become prohibitive for more complex
applications. It may be necessary to revisit this design decision in
the future. To be successful, the DDoS defence just needs fees to
become *sufficiently* expensive per byte as the total size of
reference scripts grows; they do not need to grow without bound. So
there is scope for rethinking here.
It may be necessary to revisit this design decision in the future.
I don't think this can be left for "future work". I really think it should be updated if necessary when this CIP gets implemented. The reason for this is I don't think DApps should be treated as standalone applications. I think the following example perfectly exemplifies why:
Right now, all stablecoins are not fungible despite them all effectively being the US dollar. You can't repay a loan in DJED using USDM. If DApps were composable, you could compose a DEX with the lending/borrowing DApp to convert the USDM to DJED in the same transaction where you make the loan payment. DApp composability makes stablecoins fungible!
This isn't possible on account style blockchains because each DApp is individually too expensive. On Cardano, you can compose 10 different DApps in the same transaction. I think this module approach would be huge, but only if it doesn't interfere with DApp composability. AFAIU that means lazy loading is 100% a requirement and users should be able to compose 4-5 DApps in a single transaction even with this module approach. Otherwise, this CIP could end up seriously handicapping the potential of Cardano's DeFi.
I was personally frustrated when I saw there was a hard cap on the reference script size; if people want to pay up to fit more DApps into the transaction, let them! I'm fine with the cost being exponential after a certain point (ideally after 4-5 DApps in the transaction), but the hard limit doesn't make sense to me as long as the user pays for it. The ADR linked to doesn't give any justification for the hard limit aside from "further increase the resilience". This CIP could easily exacerbate the issues with the reference script fee calculation.
I agree it's going to be necessary. I just don't think it's a prerequisite... so modules should not be held up waiting for this. They'll be useful even without a change to reference script fees--just not as useful. I realise there are other factors to consider in fee-setting, but adding modules should raise the priority of fixing those fees considerably.
The limit/exponential fees design was pretty rushed and I think it'd be a good idea to rethink it in the context of this CIP. The issue is that a linear fee is definitely not good enough: a factor that's big enough to prevent an attack is too expensive for the regular use case. This is why it makes sense to either have a hard cap on the size or something superlinear (to allow the common use case to be cheap while making the attack expensive enough). I don't know if there are any other options, but in the context of this use case it might be worth exploring the superlinear option without a cap. I wouldn't be surprised if there are some good polynomials around that make pricing much more reasonable. There's no reason it needs to be exponential.
IMHO this CIP should not be concerned with the fees, since that is an orthogonal issue to the technical implementation of Plutus modules.
Please, anyone who is frustrated with the current fee calculation and/or limits for reference scripts: I encourage you to create a separate CIP that analyzes the cost and performance of deserializing Plutus scripts, with a proposal for an adjusted or completely different fee-calculation model that still provides sufficient protection against the DDoS attack associated with reference scripts.
I agree that this should be discussed in a separate CIP, but it's not an orthogonal issue at all. Different choices for the fee structure have an influence on whether certain variants of this proposal will be economical. So having the ability to discuss the interplay of those two at the same time may lead to a better outcome overall.
As a rough estimate, if we assume a legitimate use case with 10 max-size (16 KiB) modules, the multiplier already goes up to 1.2^6 ≈ 2.98. That might make some severe optimizations necessary for this use case to be economical, and it's easy to imagine it just never becoming economical.
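To make the arithmetic concrete, here is a minimal sketch of the tiered reference-script fee model; the base price (15 lovelace/byte), growth factor (1.2) and tier size (25 KiB) are my assumptions about the current mainnet settings, not taken from this CIP:
```
-- Hedged sketch of the tiered reference-script fee model; parameters
-- are assumed current-mainnet settings, not part of this CIP.
refScriptFee :: Integer -> Rational
refScriptFee = go 15
  where
    tierSize = 25 * 1024 :: Integer
    go price n
      | n <= 0    = 0
      | otherwise =
          let chunk = min n tierSize
          in fromInteger chunk * price + go (price * 1.2) (n - chunk)

-- refScriptFee (10 * 16 * 1024): the final bytes of ten 16 KiB modules
-- are priced at 15 * 1.2^6 per byte, i.e. the ~2.98x multiplier above.
```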
(I'd be willing to help or provide input to anyone who wants to tackle this issue, but I'm pretty busy working on Leios so I can't really justify taking the lead on this)
CIP-plutus-modules/README.md (outdated)
for use as a reference script. This limits script code size, which in
turn limits the use of libraries in scripts, and ultimately limits the
sophistication of Cardano apps, compared to competing blockchains. It
is the aspect of Cardano that script developers complain about most.
It is the aspect of Cardano that script developers complain about most.
Seems a bit arbitrary as a statement 😅 ... I have seldom heard people complaining about that. Rather, people complain about the script size which they often max out in their on-chain scripts without even bringing in dependencies.
See also:
- https://cardano-foundation.github.io/state-of-the-developer-ecosystem/2024/#what-do-you-think-is-the-biggest-pain-point-of-cardanos-developer-ecosystem
- https://cardano-foundation.github.io/state-of-the-developer-ecosystem/2023/#what-do-you-think-is-the-most-painful-point-of-cardanos-developer-ecosystem
Thanks--I took this from a meeting, but the claim seems to be exaggerated. I will weaken the language. Sounds like you agree that complaints about the script size limit are common though.
I disagree here. Prior to the introduction of reference scripts, complaints about size were common; now, with the withdraw-zero trick / other forwarding-logic scripts, and reference scripts, script size is not really an issue. In fact, most dApps happily accept increased script size for reduced ex-units (more aggressive inlining / manual recursion unrolling / lookup tables).
I do agree that regardless of whether or not script size constraints are still a pain point, modules are still valuable.
CIP-plutus-modules/README.md (outdated)
the others provide supporting code of one sort or another. Thus the
software engineering benefits of a module system are already
available; other languages compiled to UPLC could provide a module
system in a similar way. The *disadvantage* of this approach is that
Thus the software engineering benefits of a module system are already
available; other languages compiled to UPLC could provide a module
system in a similar way.
I don't think there's a single Plutus language framework today that doesn't support modules.
- https://aiken-lang.org/language-tour/modules
- https://www.hyperion-bt.org/helios-book/lang/modules.html
- Opshin piggybacks on Python's module system, Plu-ts on TypeScript's, Scalus on Scala's and Plutarch on Haskell's.
Although for all those languages, the concept of modules exists at compile-time only, whereas I believe this CIP is about bringing this concept at runtime to have dynamic resolution. Perhaps a parallel/analogy with statically linked vs dynamically linked dependencies is worth highlighting to make that clearer? Today, every module is very much statically bundled with scripts unless work is explicitly done to split them in separate validators.
(edit: having now read the sections further down, I see that (1) this point is indeed made, and (2) the approach suggested in this CIP is still closer to static linking done by the ledger prior to execution -- so, semi-dynamic 😅?)
Yeah, the choice of terminology can be a bit confusing and could be made more precise. The term "static/dynamic linking" is being used to refer to two different things:
- You are saying: static linking = status quo where each script is a monolith, (semi-)dynamic linking = what this CIP proposes
- whereas there's a subsection "Static vs Dynamic Linking" in the CIP, where static linking = a module specifies its dependency hashes, and dynamic linking = it doesn't specify them.
I will make it clear that many languages already support modules, not just Plutus/Haskell. But with the limitation that all the code ends up in one script, and so is subject to the script size limit.
```
lookupArg (ScriptArg hash) = do
  script <- lookup hash preimages
  go script
```
Hmm. This suggests that either the module resolution happens at compile time (which would void the benefits of having modules to begin with), or it is actually done by the ledger itself when executing scripts. My understanding leans towards the latter, which leads to the follow-up question: are you suggesting that the ledger becomes aware of script dependencies? And if so, by what means shall a transaction communicate this intent to the ledger?
At the moment, scripts are fundamentally already parameterized by a single parameter (two or three in PlutusV1 & PlutusV2); a validator has a signature that's roughly `Data -> Validator`. So I don't find it completely unreasonable to ask the ledger to now also apply some dependencies to the scripts, in addition to the datum/redeemer & script context. Though it's unclear at this point how to signal that and how this is being costed (will keep reading 👀).
are you suggesting that the ledger becomes aware of scripts dependencies? And if so, by which means shall transaction communicate this intent to the ledger?
Yes: a serialised script is deserialised into either a complete script with no dependencies, or a script plus a list of dependencies; in the latter case the ledger will need to retrieve those dependencies and link them together to form a complete script.
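A hedged sketch of that deserialise-then-link step (type names like `CompiledCode` and `Apply` are illustrative stand-ins, not the CIP's exact definitions):
```
import           Data.ByteString (ByteString)
import           Data.Map (Map)
import qualified Data.Map as Map

newtype ScriptHash = ScriptHash ByteString deriving (Eq, Ord)
type CompiledCode  = ()   -- stand-in for deserialised UPLC code

data Script         = Script CompiledCode [ScriptHash]
data CompleteScript = Apply CompiledCode [CompleteScript]

-- Link a script against the transaction's supplied preimages, failing
-- if any dependency hash has no witness. Note: as written this would
-- loop on a (hypothetical) cyclic reference; see the cycle discussion
-- further down the thread.
link :: Map ScriptHash Script -> Script -> Maybe CompleteScript
link preimages (Script code deps) =
  Apply code <$> traverse (\h -> link preimages =<< Map.lookup h preimages) deps
```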
Exactly. I clarified that this happens during phase 2 verification, and that scripts on the chain are represented in this form, with dependencies just in the form of hashes.
CIP-plutus-modules/README.md (outdated)
The goal of this variation is to eliminate the cost of evaluating
scripts, by converting them directly to values. Since UPLC runs on the
CEK machine, this means converting them directly into the `CekValue` type,
*without* any CEK machine execution. To make this possible, the syntax
I'd argue that it doesn't eliminate the cost of evaluating scripts; rather, it becomes someone else's problem 😄! That someone here being the ledger/node, indirectly, which now has to do more (un-budgeted) work for free. I believe one of the fundamental design choices of Plutus was to have most of the decoding / conversion operations happen as part of the CEK evaluation so that they can be properly costed and paid for.
Otherwise, I'd argue that instead of providing `Data` arguments to scripts, we might as well provide pre-computed sums-of-products. But that means the cost of decoding the script context is no longer paid for by execution units, so it has to be acknowledged through different means.
(To be clear, I am not against the idea! It seems like a reasonable ask to me, but I recall past conversations with the Plutus core team about it and why it is generally not deemed a viable option.)
This is an inexpensive operation that takes, in the worst case, linear time (and in some variants it is probably always constant time), so I think it's reasonable to consider it covered by the reference script fee, which is already an over-estimation of the script deserialization cost.
Right, it's linear time in the size of the top-level of scripts--one traversal over the code which need not descend inside values at the top level of a module. So reasonable to cover it from the reference script fee.
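A sketch of that size measure, assuming an illustrative AST fragment rather than the actual plutus-core `Term` type -- the traversal stops at λs and delays, so it visits exactly the nodes that conversion to a machine value must inspect:
```
data Term
  = Var Int
  | Lam Term            -- body is inert until applied
  | Delay Term          -- body is inert until forced
  | App Term Term
  | Constr Int [Term]   -- e.g. a module's tuple of exports
  | BuiltinUnit

-- The chargeable "top-level" size: descends through applications and
-- constructors, but counts a lambda or delay as a single node.
topLevelSize :: Term -> Int
topLevelSize (Lam _)       = 1
topLevelSize (Delay _)     = 1
topLevelSize (App f x)     = 1 + topLevelSize f + topLevelSize x
topLevelSize (Constr _ ts) = 1 + sum (map topLevelSize ts)
topLevelSize _             = 1
```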
transitions. The conversion can be done *once* for a whole
transaction, sharing the cost between several scripts if they share
The conversion can be done once for a whole transaction
That's a good point, and it also strengthens the idea that more of these transformations would be better off happening in the ledger as pre-processing, instead of directly within the CEK evaluation.
Although in that particular case, it probably depends on the redeemer value too. If we assume a partial resolution like what you mention in Lazy Loading, then the traversal could likely yield different applications for the same script based on which redeemer is being used. Though, for the same inputs, this is certainly a reasonable expectation. It's unclear to me whether there would be many "cache hits" in practice.
Another important point that supports this thought is how developers often end up structuring their scripts by mutualizing similar chunks of logic under validator purposes that execute only once per transaction. So a typical structure we see on-chain is trivial spending validators that defer their validation to a single withdraw validator, forcing a 0-Ada withdrawal on a registered stake credential. Since validators have access to the entire transaction script context, it's always possible to have a validator guarding the 0-Ada withdrawal execute and validate each input in a single pass, rather than re-doing work for every single input.
See for details: https://github.com/Anastasia-Labs/design-patterns/blob/main/stake-validator/STAKE-VALIDATOR.md#stake-validator-design-pattern
Different redeemers may indeed result in different modules being required to be present - but I don't think this poses any problem, does it?
Your second point I think is the same as the "Merkelized Validators" discussed in the related work.
The design patterns repo has a separate readme specifically for the withdraw zero trick,
https://github.com/Anastasia-Labs/design-patterns/blob/main/stake-validator/STAKE-VALIDATOR-TRICK.md
There was a section on "Merkelized validators" that discusses this; I have added links to the stake-validator trick directly to that section. I also made the discussion there a little more explicit: it's a great trick for sharing work between validators, which is useful with or without the modules discussed in this CIP--so it's not replaced by this CIP; but as a way of implementing modules it is intricate and unsatisfactory.
Re "cache hits", they will occur when different modules in the dependency tree depend in turn on the same module. So a module containing basic definitions for an application, and used in many parts of it, would fall into that category. So would a commonly-used library that many modules (in the same application) might depend on. I'm expecting to see quite a lot of this.
Where 'lazy loading' is concerned, note that it is the particular transaction that decides which dependencies to supply. Yes indeed, the dependencies needed will vary depending on the redeemer value. That's what we want to take advantage of--that in a particular transaction, we know what the redeemer value is, and so we can decide to omit modules that are not going to be needed. Dangling pointers ftw! (As long as they're not going to be used).
@colll78 Is the "Merkelized Validators" like "memoization on chain"?
using the SoP extension (CIP-85) as `constr 0 x1...xn`, but the only
way to select the `i`th component is using
```
case t of (constr 0 x1...xi...xn) -> xi
```
which takes time linear in the size of the tuple to execute, because
all `n` components need to be extracted from the tuple and passed to
the case branch (represented by a function).
Such tuples could also be represented as pairs of pairs, bringing this cost down to log2(size) steps?
Yes, that would be log n `case` terms: cheaper in terms of execution units (at least for long tuples) but bigger in script size.
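For illustration, a sketch of the pairs-of-pairs encoding (assuming the tree is built balanced, with the left subtree holding the first ⌊n/2⌋ components): any of n components is reached in O(log n) selections, versus a single O(n) `case` over a flat constr.
```
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- select i n t: pick the i-th (0-based) of the n leaves of a balanced
-- tree whose left subtree holds the first (n `div` 2) leaves.
select :: Int -> Int -> Tree a -> a
select _ _ (Leaf x) = x
select i n (Node l r)
  | i < half  = select i half l
  | otherwise = select (i - half) (n - half) r
  where half = n `div` 2
```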
Logarithmic is better than linear, but it's also the cost of accessing variables in the environment (which is logarithmic in the size of the environment). So the advantage of putting the module exports into one tuple instead of bunging them all into the environment would disappear. Much better to bite the bullet and put in explicit projections, getting constant time access.
Currently, the definition of “script” used by the ledger is (approximately):
```
newtype Script = Script ShortByteString
```
I think it's worth mentioning that we cannot actually publish arbitrary CEK Terms as scripts, only UPLC Programs (which are wrapped Terms with versioning metadata).
The ledger enforces that all published scripts (in reference or witness) have this Program envelope. So it might be worth defining a new type of envelope for modules. This would also allow us to distinguish modules on-chain from actual validator scripts, which may be handy should we need to apply further restrictions from the ledger regarding those (since, as outlined below, it is incumbent upon the ledger to manage those dependencies and pre-process them on behalf of validators).
I don't think this CIP is proposing publishing CEK terms as scripts. As to distinguishing validators vs. modules, the `Script` data type defined in "Subvariation: Unboxed modules" allows for it.
Right, the CEK values exist only during phase 2 validation; they are never stored on the chain. And as Ziyang says, the 'unboxed modules' subvariation does distinguish module scripts from validators, primarily because (in that variation) they are subject to different syntactic restrictions. So if the deserializer is going to check those, then it needs to know what kind of script it is deserializing.
the `Script` type accordingly
```
data Script = ValidatorScript CompiledCode [ScriptArg]
            | ModuleScript CompiledCode [ScriptArg]
```
Ah! This seems to echo my previous comment about making a distinction (a distinction which must carry over into the serialisation to be of any use, IMO).
Currently each script on-chain is tagged with a specific ledger language version - V1, V2, V3 or native script - and this version tag is a component of the script hash.
A logical approach, therefore, is to continue doing so for module scripts, and require that a validator script and all modules it references must use the same ledger language version; failure to do so leads to a phase-1 error.

A different approach is to distinguish between validator scripts and module scripts by applying version tags only to validator scripts.
Module scripts are untagged and can be linked to any validator script.
This makes module scripts more reusable, which is advantageous because in most cases, a UPLC program has the same semantics regardless of the ledger language version.
I am not sure that the second approach is sound, because the version not only defines the interface to the validator, but also:
- Which Plutus builtins are actually available
- The semantics of some of those builtins
- The costing functions of those builtins
For example, in Plutus V1/V2, `cons_bytestring(256, bytes)` is equivalent to `cons_bytestring(0, bytes)` (the runtime performs a free modulo 256), but in Plutus V3 it results in an out-of-bounds error. That's the case for a few other builtins which have subtle semantic changes. (Technically, the semantics are bound to the Program version -- 1.0.0 vs 1.1.0 -- but this one is tightly coupled to the language version and I am taking a slight shortcut here.)
So I'd argue that to keep everyone's life easier, enforcing the same "language version" across modules and validators is a fairly reasonable ask.
Yes, this is the point I made in the next paragraph. I think we'll most likely go with the first approach, i.e., tagged modules.
I prefer that option too--allowing different language versions here would impose a constraint on all future language versions, which feels error-prone and uncomfortable.
Are the semantic changes of builtin functions all documented in the changelog or anywhere?
Are the semantic changes of builtin functions all documented in the changelog or anywhere?
See Table 4.6 in Section 4.3 (page 27) of the Plutus Core specification.
Yes, I'm strongly against this; I really think it's adding a lot of extra complexity and risk.
CIP-plutus-modules/README.md (outdated)
Note that, on Ethereum, a proxy contract can be updated without
changing its contract address---thanks to mutable state. On Cardano, a
script address *is* the hash of its code; of course, changing the code
will change the script address. It is very hard to see how that could
possibly be changed without a fundamental redesign of Cardano. So the
methods discussed below are different in nature from the Ethereum one:
The exact same thing is true on Cardano. You can easily create proxy contracts that can be updated without changing their contract address.
```
mkProxyContract :: ClosedTerm (PAsData PCurrencySymbol :--> PScriptContext :--> PUnit)
mkProxyContract = plam $ \protocolParamsCS ctx -> P.do
  ctxF <- pletFields @'["txInfo", "redeemer", "scriptInfo"] ctx
  infoF <- pletFields @'["inputs", "referenceInputs", "outputs", "signatories", "wdrl"] ctxF.txInfo
  referenceInputs <- plet $ pfromData infoF.referenceInputs
  -- Extract protocol parameter UTxO
  ptraceInfo "Extracting protocol parameter UTxO"
  let paramUTxO =
        pfield @"resolved" #$
          pmustFind @PBuiltinList
            # plam (\txIn ->
                let resolvedIn = pfield @"resolved" # txIn
                in phasDataCS # protocolParamsCS # (pfield @"value" # resolvedIn)
              )
            # referenceInputs
  POutputDatum ((pfield @"outputDatum" #) -> paramDat') <- pmatch $ pfield @"datum" # paramUTxO
  forwardToScriptHash <- plet $ punsafeCoerce @_ @_ @(PAsData PByteString) (pto paramDat')
  let invokedScripts =
        pmap @PBuiltinList
          # plam (\wdrlPair ->
              let cred = pfstBuiltin # wdrlPair
              in punsafeCoerce @_ @_ @(PAsData PByteString) $ phead #$ psndBuiltin #$ pasConstr # pforgetData cred
            )
          # pto (pfromData infoF.wdrl)
  pif (pelem # forwardToScriptHash # invokedScripts) (pconstant ()) perror
```
The above script is a proxy contract which is parameterized by a state token (an NFT) which authenticates a UTxO that contains the script hash that this proxy forwards validation to (via the withdraw-zero trick). If that UTxO lives at a user's wallet, they can update the proxy contract by spending it back to the same address and changing the datum to be a different script hash. If the UTxO lives at a script, then the script logic will validate any update.
That being said, I would caution that this section on upgradability should be removed altogether.
DApp upgradability is already a security nightmare, it’s very hard to support it without completely sacrificing decentralization. You need to use an onchain governance protocol, like Agora, except these protocols are very experimental on Cardano, so much so that even the creators of Agora do not use it for governance of their protocol.
I think the advice in the CIP regarding how upgradability can be achieved is quite dangerous, given how many exploits compromised "upgrade keys" have led to on Ethereum / Solana, and it is generally out of scope of this proposal.
Thanks--so I think you're saying two things:
1. Upgrade-in-place can be done more simply than suggested here, by using proxy scripts that delegate verification to a script hash kept in a state UTxO.
2. Upgrade is a minefield, and should be avoided altogether.
Where (1) is concerned, simple is good! I wonder though if it doesn't require you to buy in to the "one verifier to rule them all" approach, where one staking validator checks all the spending and minting in the transaction. I realise that's a popular approach, but not the only possible approach.
Where (2) is concerned, I have a lot of sympathy with that view, but at the same time I don't expect all libraries to be bug-free when they are first released, so it's a natural question to ask "what should I do if a library I am depending on receives a bug fix update?" It seems a little unrealistic just to ignore the problem altogether. Of course, "get it right first time" is good advice, but hard to follow consistently.
Maybe you're saying essentially: upgrading to new versions is a problem that already exists, and existing 'solutions' are equally applicable once modules are introduced--so this CIP need not address the problem specifically. Even if, by supporting larger code drawn from multiple sources, it's likely to make the problem worse.
Maybe you're saying essentially: upgrading to new versions is a problem that already exists, and existing 'solutions' are equally applicable once modules are introduced--so this CIP need not address the problem specifically. Even if, by supporting larger code drawn from multiple sources, it's likely to make the problem worse.
Yes. I think smart contract upgrades are an incredibly complex problem and must be handled with extreme care. I think that this CIP should not even cover it, because the suggestions here can lead people to underestimate the severity of the problem and the care with which they must handle it.
what should I do if a library I am depending on receives a bug fix update?
Say that you upgrade to the new "bug fix version": what if it introduces a hidden backdoor or other vulnerabilities? Importantly, the users who signed transactions and agreed to put their funds into your dApp agreed for their funds to be secured by the scripts to which they sent their funds; they did not consent to their funds being secured by the new "bug fix script", which could introduce a backdoor to steal all their funds. Any introduction of non-manual upgradability leads to the possibility of a backdoor to drain liquidity (i.e. "upgrade" the library to a malicious contract that always succeeds if the transaction is signed by the malicious actor's pub key hash). That's why you should prefer either:
- manual migration for upgrades - users must migrate their liquidity themselves to the new scripts.
- Onchain Token Governance DAO based upgrades - use an onchain governance protocol, like Agora, except these protocols are very experimental on Cardano, so much so that even the creators of Agora do not use it for governance actions of their protocol.
OK, thanks... I'm persuaded.
If I understand correctly, this approach is re-using lambda abstraction & application to "link" UPLC scripts (modules) together.
To me this resembles ML's functors (correct me if I am wrong). Unlike ML's functors, however, this re-uses existing syntax (lambdas), which makes it hard to distinguish at the script level which are the usual arguments to the function and which are the new "module" arguments. Maybe the Tag/VTag can help with this.
Currently, Plutus scripts expect a number of "usual" arguments (3 arguments for V1/V2 and 1 for V3); these arguments are constructed and passed to the script automatically by the node. Also, for better or worse, we don't impose a syntactic restriction on the script (like the value script variation does).
After reading all variations, I prefer the first approach without lazy loading. It is the simplest and most straightforward to implement (requiring the fewest modifications). Here are my remarks:
- On lazy loading: at first it seems beneficial, but using an arbitrary "builtin unit" can be surprising and difficult to debug for users (or maybe even hazardous). Wouldn't it be almost equally efficient for users that want to skip loading some modules to instead pass in the transaction, as a module argument, a tiny archetypal reference script that contains "builtin unit"? This way it is more explicit what happens, and less prone to errors if somebody forgot to supply a module argument (reference script).
- On value scripts: I understand the performance gains here, but I don't like the restriction on the script syntax. We haven't had such a restriction until now, and imposing one might confuse or frustrate the "Plutus language implementors", e.g. the Aiken, Scalus, and Plutarch folks. I don't actually know how they generate their Plutus code, so I am worried about that.
- On tuples of modules: I don't see any benefits over the "value scripts" approach, only drawbacks. It also adds a new syntactic restriction:

  places an additional syntactic restriction on script code: it must be of the form λMods.e, and all occurrences of Mods in e must be of the form proj i Mods for some i.

  How can you enforce that syntactic restriction cheaply (without traversing the whole script)? The other variations of "tuples of modules" also have traversal costs, as you pointed out.
This was quite a difficult CIP to review because it's so long (over 13,000 words!) and contains so many ideas. My brain was a bit numb towards the end so I may not have thought about the later parts of the document as carefully as I might; also I may have asked some questions near the start that are answered later.
Anyway, this is all very thoroughly thought through and I'm sure that the ideas discussed here will be very useful and we'll implement something along these lines. As a CIP, I think it's fine to merge it without deciding exactly which variation (or even subsubvariation) we should adopt: it'll probably need experimentation and a lot of thought about the tradeoffs between simplicity/efficiency/implementation difficulty before we decide on exactly what to do. There's certainly plenty to think about though. Thanks for the work you've put into this!
Cardano scripts are currently subject to a fairly tight size limit;
even when they are supplied as a reference input, that UTxO must be
created by a single transaction, which is subject to the overall
I'm nitpicking, but I found this sentence a little confusing because it conflates the input and the script (and it also mentions a UTxO, which is the input).
I reworded this a little.
contracts to be implemented; conversely, on Cardano, it is rather
impractical to implement higher-level abstractions as libraries,
because doing so will likely exceed the script size limit. This is not
just a theoretical problem: complaints about the script size limit are
They've already complained on lines 21 and 22! Maybe it's worth emphasising this point though.
In the Abstract! Everything in the Abstract is repeated later... just in this case, 25 lines later. Surely that's OK?
blockchain. Ideally it should be possible to define a useful library
in any of these languages, and then use it from all of them. A
secondary goal is thus to define a module system which permits this,
by supporting cross-language calls.
This is a good point. One can imagine that some commonly used library code might be provided in a highly optimised form, or perhaps has been formally verified in some way, and such code might be produced from some special source language different from the language used to develop the main contract code.
It's probably out of scope for the CIP, but I find myself wondering how one would cope with external libraries when developing a contract. This might require some extra tooling, for example, but I imagine that the community would find their own ways of dealing with the issue, and the gains from being able to use preexisting library code might well outweigh any extra inconvenience in the development process.
We took a look at WASM components, which address this issue for WASM... and includes an IDL to enable different languages to talk to each other. I think that will eventually be needed for Cardano too, but there's a lot to consider in that design, and it shouldn't hold up the basic mechanism for modules. So I think it is out of scope for this CIP.
Yeah, you would definitely need an IDL of some kind.
secondary goal is thus to define a module system which permits this,
by supporting cross-language calls.

Note that many languages targeting UPLC already support modules. In
I take the point, but I'm not sure that this is entirely relevant to the issue at hand. Do we imagine that the Plutus module system would interact in some way with a module system used by a higher-level language? I suppose that it might be possible to arrange this in some languages, and it might simplify the process of interacting with external libraries during contract development.
Maybe...
The point really is that we don't need to consider the software engineering purposes of modules, just focus on the low-level mechanism.
CIP-plutus-modules/README.md (outdated)
#### Variation: Lazy Loading

With this design, if any script hash is missing from the `preimages`,
"This design" sounds as if it's talking about lazy loading, but in fact it's referring to the design in the previous section.
Reworded as "With the design above..."
If a script execution *does* try to use a module which was not
provided, it will encounter a run-time type error and fail (unless the
module value was `builtin unit`, in which case the script will behave
as though the module had been provided).
Is this true? If modules are allowed to be arbitrary terms then you could supply `let _ = error in ()` and the script would fail, but if that's replaced with `()` then the script might succeed.
I'm not sure that the script would necessarily produce a run-time type error if it tries to use a supposedly unused module which has been replaced by `builtin unit`. Surely the script could use that module to perform some computation but discard the result of the computation without ever using it. Maybe that's OK as long as everything is pure, but the presence of the side-effecting `error` in UPLC complicates things.
If modules are allowed to be arbitrary terms then you could supply let _ = error in () and the script would fail, but if that's replaced with () then the script might suceed.
This is a good point! I hadn't thought that this might convert failures to successes, but indeed it can. This doesn't occur with the value scripts proposal, which is maybe a point in its favour.
I'm not sure that the script would necessarily produce a run-time type error if it tries to use a supposedly unused module which has been replaced by builtin unit.
Well, I think the claim is that either it produces the same result as it would have if we hadn't replaced it or it gives an error. And I think that's pretty convincing? All you can do with a unit value is:
- Do things which don't depend on what it is at all (semantics are the same)
- Do things that rely on it being unit (semantics are the same!)
- Do things that rely on it being non-unit (should fail when given unit)
This is actually OK. The balancer will always first check that the transaction verifies with ALL the modules present--and in your example, this step will fail. Only if the first verification succeeds do we start trying to drop modules. So we only need to worry about the case where dropping a module causes a failure.
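For concreteness, a sketch of that balancer strategy (the names and the greedy one-module-at-a-time order are my assumptions, not part of the CIP):
```
-- The full module set must verify first; then each module is
-- tentatively replaced by a unit stub, and the substitution is kept
-- only if the transaction still verifies.
dropUnused :: ([m] -> Bool) -> m -> [m] -> Maybe [m]
dropUnused verifies unitStub mods
  | not (verifies mods) = Nothing          -- must succeed with everything
  | otherwise           = Just (go 0 mods)
  where
    go i ms
      | i >= length ms = ms
      | verifies ms'   = go (i + 1) ms'    -- module was dynamically unused
      | otherwise      = go (i + 1) ms     -- needed: keep the real module
      where ms' = take i ms ++ [unitStub] ++ drop (i + 1) ms
```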
To take advantage of 'lazy loading', it's necessary to identify
reference scripts that are *dynamically* unused, when the scripts in a
transaction run. The best place to do that is in a transaction
balancer, which needs to run the scripts anyway, both to check that
I haven't come across the term "transaction balancer" before, and Google isn't being very helpful. To be clear, this is something that is run off the chain prior to transaction submission, no? If that's the case, I'm not sure if it's necessary to go into this level of detail in the CIP. Since on-chain scripts are deterministic it seems pretty clear that you can run them in some kind of instrumented evaluator to determine which parts of the AST are actually required.
It is basically the code for building transactions; e.g., in cardano-api there is `Cardano.Api.Fees.makeTransactionBodyAutoBalance`. Yes, it runs off-chain.
Yes, indeed, it's an off-chain step that is part of preparing a transaction for submission. Should transaction balancing be addressed in the CIP? I'd agree that it's not necessary to do so, in that it doesn't affect the chain itself, but I've included this discussion in an effort to be helpful--after all, somebody is going to have to implement script dropping if lazy loading is to be valuable, and--as I think the discussion shows--it's not totally straightforward to do that. So no harm in describing some options.
The term that people mostly use for this is coin selection. Though technically this just refers to picking UTxO entries, I don't know if people also use it to mean transaction balancing more generally.
#### `ScriptHash` allowed in terms?

An alternative design would allow UPLC terms to contain `ScriptHash`es
We looked into something a bit like this in connection with Merklising PLC ASTs. It was a non-starter because hashes are quite large (maybe 32 bytes) and they're incompressible, so once you've got a few hashes in your script you've used up quite a lot of the size allowance.
Ah... interesting. That strengthens the arguments against it. I will add a note to that effect.
CIP-plutus-modules/README.md (outdated)
```
fix (λx. fix (λy.e)) ---> fix (λx. e[x/y])
```
Both these rules require adjusting deBruin numbers in the UPLC
"de Bruijn"
Thank you!
CIP-plutus-modules/README.md (outdated)
Phil Wadler.

## Copyright
This CIP is licensed under [CC-BY-4.0]](https://creativecommons.org/licenses/by/4.0/legalcode).
I think there's an extra `]` here: it's not rendering as a proper link.
…does not require traversing code at run time
Exactly.
True, but given that UPLC is untyped, and we don't even distinguish between an integer and a boolean, is it really so important to distinguish between modules and non-modules? To the extent that we bind them to variables with a different syntax? We're only talking about the lowest-level language here; higher-level languages that compile to UPLC are free to make this distinction, for example by providing a more conventional "import" declaration. The advantage of reusing lambda in the implementation is that we keep the CEK machine simpler--and perhaps even unmodified!
See below.
That tiny reference script would have a different hash. That's the key thing the lazy loading variation does: it lets us substitute () even though the hash is wrong. Without lazy loading, you have to get the hash right--which means, short of a successful attack on the hash, you have to include the redundant module in the transaction. So I think lazy loading really is worth a lot for simple, cheap transactions. Notice that, provided we're running code compiled from a typed language, passing () will cause a run-time type error if the module is actually used--unless the type was already (), in which case we passed the correct value. Yes, untyped scripts can detect if a module is missing, and perhaps take action to handle that case, but is this really a problem?
True. In order to make use of the new feature, language implementors would need to generate modules in this specific form. But it is a new feature; there can be no existing code that is broken by this. How much should we worry about it?
It makes accessing a module slightly cheaper, probably. Because projecting a module out from the tuple can be constant time, while accessing a module from the environment is log time in the size of the environment. By keeping the environment smaller, it also speeds up all other variable accesses slightly. (Notice that accessing a variable in the environment really is log time in the entire environment size, even for very local variables. OK, very local variables might be faster than that, depending on the exact environment size, but in the worst case even the most local variable takes log time to access.) So there are benefits, but probably not huge ones. It's also a prerequisite for the later variations, such as global module environment and unboxed modules. Those variations have a larger performance impact.
I understand that some syntactic restrictions can be checked "for free" during deserialization, depending a little bit on the ingenuity of the person writing the deserializer. I believe this is one of them...
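For instance, here is a sketch of checking the "tuples of modules" restriction in a single pass, which a deserializer could fuse with the traversal it already performs (the de Bruijn AST is illustrative, with `Var 0` the innermost binder):
```
data Term = Var Int | Lam Term | App Term Term | Proj Int Term | Unit

-- The script must be λmods. e, and the bound variable may occur in e
-- only immediately under a projection, i.e. as `proj i mods`.
checkModuleForm :: Term -> Bool
checkModuleForm (Lam body) = ok 0 body
  where
    ok d (Proj _ (Var v)) | v == d = True   -- proj i mods: allowed
    ok d (Var v)           = v /= d         -- bare mods: forbidden
    ok d (Lam t)           = ok (d + 1) t
    ok d (App f x)         = ok d f && ok d x
    ok d (Proj _ t)        = ok d t
    ok _ Unit              = True
checkModuleForm _ = False
```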
Oh I see. I forgot that we are going to hash the script together with its linked dependencies. Then lazy loading makes sense.
all references to external terms in one place, where they can easily
be found and resolved. Thus we need only change the definition of a
`Script`; instead of simply some code, it becomes the application of
code to zero or more arguments, given by hashes.
great summary
must provide the pre-image in the usual way. Note that arguments are
mapped to a `Script`, not a `CompleteScript`, so the result of looking
up a hash may contain further dependencies, which need to be resolved
recursively. A transaction must provide witnesses for *all* the
Scripts referring to themselves is a good thing to worry about. I think it's fine, for a reason that's usually annoying: data dependencies. For a script to depend on itself, it must contain a reference to its own hash, which you can't get before creating the full script. So you essentially need to find a fixed point of a complicated function involving a hash function, which I think we generally assume is hard. But maybe someone should verify that, since here I think you're right that it is a risk. Alternatively, we could just ban cyclic references, which wouldn't be too hard.
The question of who pays is indeed important. I think the current thought is that just looking up witnesses and creating applications is cheap enough that the ledger can do it, but maybe not.
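A sketch of the "ban cyclic references" alternative, reusing the types from the linking sketch earlier in the thread: resolution carries the set of hashes on the current path and rejects a script that ever revisits one.
```
import           Data.Set (Set)
import qualified Data.Set as Set

linkAcyclic :: Set ScriptHash -> Map ScriptHash Script
            -> ScriptHash -> Maybe CompleteScript
linkAcyclic seen preimages h
  | h `Set.member` seen = Nothing   -- cycle detected: reject the script
  | otherwise = do
      Script code deps <- Map.lookup h preimages
      Apply code <$> traverse (linkAcyclic (Set.insert h seen) preimages) deps
```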
Converting a syntactic value to a CekValue does require traversing it,
but the traversal stops at λs and delays, so will normally traverse
only the top levels of a term.
We have to consider the malicious case. Which is that someone can force a traversal of the whole term. So any costing must be robust against such a case, i.e. must be linear in the size of the term.
However, I think we could potentially fuse this with deserialization, which should at least share the traversal work.
Interesting... so deserialize directly to a CekValue. Well, why not? That would mean deserialization would need to take the constructed environment (mapping de Bruijn variables to module values) as an argument. But circular programs are our friend!
Yep. It would be tricky, but in principle I think we shouldn't need more than one pass over the program.
Note that this recursive definition of `scriptValues` could potentially allow an
Aha, here's another example of Kenneth's worry. I think the hash function attack is probably impossible but it might be prudent to be robust against it nonetheless.
Yes--why take the risk? "Probably impossible", even with the resources of a nation state, but on the other hand, not too hard to defend against.
top-level of scripts. A simpler approach would be to charge a cost
proportional to the aggregated size of all scripts, including
reference scripts--although this risks penalizing complex scripts with
a simple API.
Reference scripts are no different from scripts included in the transaction so far as the conversion to values goes, so I don't see that we should charge for them differently.
What I meant here is not that we might treat reference scripts differently from other scripts, but rather that we might charge (all scripts) in proportion to the top-level size, which we can of course determine. Taking the total size of scripts is an overestimate, in many cases a large one. On the other hand, I suspect the difference in fees for this step is down in the noise.
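To make the contrast concrete, a toy sketch of the two charging bases discussed above: total size versus top-level size, where the top-level traversal stops at λs and delays (invented term type):

```haskell
data Term = Var Int | Lam Term | Delay Term | Apply Term Term

-- Charge for every node in the term.
totalSize :: Term -> Int
totalSize t = 1 + case t of
  Lam b     -> totalSize b
  Delay b   -> totalSize b
  Apply f x -> totalSize f + totalSize x
  Var _     -> 0

-- Charge only for the nodes visited when converting to a value.
topLevelSize :: Term -> Int
topLevelSize t = 1 + case t of
  Lam _     -> 0  -- stop at λ
  Delay _   -> 0  -- stop at delay
  Apply f x -> topLevelSize f + topLevelSize x
  Var _     -> 0
```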
`CompleteScript` and placed on the chain, with an empty list of
`ScriptArg`s, as a reference script in a UTxO, allowing it to be used
with any implementations of `B` and `C`--the calling script must pass
implementations of `B` and `C` to the lambda expression, and can
Where does it get those implementations from? What you describe here doesn't seem to me to make use of the ledger mechanisms we're adding for pulling in extra scripts, so how do you get them? Or are you suggesting that you supply a "statically-linked" wrapper which then calls the "dynamically-linked" interior function with its particular choice of dependencies? That doesn't seem much different to just static linking to me...
I think it is different. When you submit a transaction, you have to fix versions of B and C, of course. So you need a script in your transaction which takes A, B and C as script arguments, and passes B and C to A. The point is, you can put A on the chain once, and use it with different implementations of B and C. That's not static linking, is it?
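A hedged sketch of the wrapper idea described above, reusing toy types (all names assumed): `A` is published once, fully parameterised, and each transaction supplies a tiny wrapper whose hash arguments pick concrete `B` and `C`:

```haskell
newtype Hash = Hash String

data Script = Script { code :: String, args :: [Hash] }

-- A goes on chain once, with no arguments, so it is reusable as-is.
aOnChain :: Script
aOnChain = Script "\\b c -> <body using b and c>" []

-- Per transaction: a wrapper whose ScriptArgs are the hashes of A, B and C,
-- and whose code just applies A to the chosen B and C.
wrapper :: Hash -> Hash -> Hash -> Script
wrapper hA hB hC = Script "\\a b c -> a b c" [hA, hB, hC]
```

Different transactions can pair the same on-chain `A` with different `B`/`C` by supplying different wrappers, which is what distinguishes this from static linking.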
If a script execution *does* try to use a module which was not
provided, it will encounter a run-time type error and fail (unless the
module value was `builtin unit`, in which case the script will behave
as though the module had been provided).
If modules are allowed to be arbitrary terms then you could supply `let _ = error in ()` and the script would fail, but if that's replaced with `()` then the script might succeed.
This is a good point! I hadn't thought that this might convert failures to successes, but indeed it can. This doesn't occur with the value scripts proposal, which is maybe a point in its favour.
I'm not sure that the script would necessarily produce a run-time type error if it tries to use a supposedly unused module which has been replaced by builtin unit.
Well, I think the claim is that either it produces the same result as it would have if we hadn't replaced it or it gives an error. And I think that's pretty convincing? All you can do with a unit value is:
- Do things which don't depend on what it is at all (semantics are the same)
- Do things that rely on it being unit (semantics are the same!)
- Do things that rely on it being non-unit (should fail when given unit)
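Putting the lazy-loading argument above into code, a hedged sketch (names assumed): after a dry run records which module hashes were actually demanded, the balancer replaces the rest with unit, which by the reasoning just given can only turn the same result or a failure, never a new success:

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

data Arg = ByHash String | UnitStub  -- UnitStub stands for `builtin unit`

pruneArgs :: Set String -> [String] -> [Arg]
pruneArgs demanded = map pick
  where
    pick h
      | h `Set.member` demanded = ByHash h  -- module was forced during the dry run: keep it
      | otherwise               = UnitStub  -- unused: any later misuse can only fail
```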
transactions when they are verified on the chain. Thus a zero cost is
required for the balancer to return accurate costs for script
verification on the chain. On the other hand, if these operations *do*
reach the chain, then they should have a *high* cost, to deter attacks
they should just not be part of the specified syntax of UPLC and be invalid on chain, easy
If modules are allowed to be arbitrary terms then you could supply `let _ = error in ()` and the script would fail, but if that's replaced with `()` then the script might succeed.
This is a good point! I hadn't thought that this might convert failures to successes, but indeed it can. This doesn't occur with the value scripts proposal, which is maybe a point in its favour.
No. In this case the original attempt to verify the transaction, with all modules present, will fail. The point of lazy loading is only to preserve success--there's no need even to try dropping reference inputs from transactions which already fail.
I'm not sure that the script would necessarily produce a run-time type error if it tries to use a supposedly unused module which has been replaced by builtin unit.
Well, I think the claim is that either it produces the same result as it would have if we hadn't replaced it or it gives an error. And I think that's pretty convincing? All you can do with a unit value is:
Do things which don't depend on what it is at all (semantics are the same)
Do things that rely on it being unit (semantics are the same!)
Do things that rely on it being non-unit (should fail when given unit)
In untyped code, you can check whether you have unit (I assume), and do something else if you do. This would enable untyped scripts to detect that a module was missing, and handle that error somehow. Not sure if that's useful, but it seems legitimate at least. UPLC compiled from typed code will not be able to do this, of course.
they should just not be part of the specified syntax of UPLC and be invalid on chain, easy
You mean, include them in the type, but don't deserialize them? Seems a little brittle: a future developer might notice the missing case in the deserializer, and add it, not realising that its absence was essential to defend against an attack. Or am I just jaded?
No. In this case the original attempt to verify the transaction, with all modules present, will fail. The point of lazy loading is only to preserve success--there's no need even to try dropping reference inputs from transactions which already fail.
You're not thinking about attackers. If the version with the modules present fails but the version with some modules present doesn't, then that's potentially an attack. We really do not want to let the person running the script make it succeed when it would have failed!
You mean, include them in the type, but don't deserialize them? Seems a little brittle: a future developer might notice the missing case in the deserializer, and add it, not realising that its absence was essential to defend against an attack.
We have a specification for a reason :)
No. In this case the original attempt to verify the transaction, with all modules present, will fail. The point of lazy loading is only to preserve success--there's no need even to try dropping reference inputs from transactions which already fail.
You're not thinking about attackers. If the version with the modules present fails but the version with some modules present doesn't, then that's potentially an attack. We really do not want to let the person running the script make it succeed when it would have failed!
Hmm. We're considering a case in which a module fails when evaluated, causing any script which imports it to fail. Note that this can't happen with value scripts. But in that case the vulnerability would kick in if somebody put a script on the chain that DOES import the module in question, and so can never succeed, and then an attacker USED that script, but left out the offending module. Suppose the script is a spending verifier on a UTxO. Then the vulnerable situation is where someone creates a UTxO with a spending verifier that can never succeed, and does so in this slightly obscure manner, but an attacker can spend the UTxO anyway. OK, it's a vulnerability. You really have to work to fall victim to it though! And it is interesting that the value scripts idea fixes it...
You have to work to fall victim to it... if you use this feature in the way you are envisaging. Which people won't. IME it's just better not to have such semantic loopholes at all, if we can possibly avoid it.
Well, that's an argument in favour of the value scripts variation.
#### Variation: Explicit lambdas
This variation lifts some of the restrictions of the 'value scripts'
Is this a variation? I thought this was just the original design
The original design didn't require syntactic lambdas at the top-level, to bind the script arguments. This variation does. At least, that was my understanding of the original design.
Ah I see. And the advantage is that we can rapidly pass the arguments via a pre-constructed environment?
Yes, exactly.
But in the low-level AST we don't have any binders (names or indices). To solve this we could choose: […]
I would say (2) is the best, but actually I am not in favor of the "tuples of modules" in the first place.
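To illustrate the mechanism being discussed, a hedged de Bruijn-style sketch of the explicit-lambdas variation (toy types, all names assumed): peel the top-level λs and push each argument straight into the machine environment, with the innermost binding at the head of the list:

```haskell
data Term = Lam Term | Body
newtype Value = VModule String

-- Bind the supplied arguments without performing β-reduction steps,
-- failing if the script does not start with enough λs.
bindArgs :: Term -> [Value] -> [Value] -> Maybe (Term, [Value])
bindArgs t       []       env = Just (t, env)            -- all arguments bound
bindArgs (Lam b) (v : vs) env = bindArgs b vs (v : env)  -- innermost binding first
bindArgs Body    _        _   = Nothing                  -- too few binders
```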
First, this CIP is incredibly long for the concept being introduced; each paragraph could use an abstract of its own.

That being said, modules would be fantastic for compatibility between languages, but a nightmare for execution optimizations. As @KtorZ also mentioned, most languages support modules, but at compile time, not runtime. Run-time modules are already possible via "withdraw 0" scripts, which do allow a greater degree of control for the developer without introducing changes in the ledger and the CEK machine--I would say with much more security than how this CIP would be implemented. As it is proposed now, I don't think this level of complexity in the ledger is justifiable compared to the benefit.

I would love instead to see a proposal for modules at compile time, maybe handled off-chain, with extra care for compiler optimizations. One of the most pressing issues for UPLC modules is that there is no way to specify or easily detect shared dependencies. Take the example of the Z combinator for recursion (and even there, both plu-ts and aiken implement recursion differently for efficiency): there needs to be a way for compilers to know that two or more modules use the same dependencies.

TL;DR: runtime modules bad; compile-time modules good; insights on shared dependencies better.
I see. Good point. Maybe a traversal of the code is needed, then--or one might drop the restriction. It's really needed only in the "global module environment" case, which requires a traversal of the code anyway. With a local module environment, referring to the entire tuple of modules is a bit weird, but not actively harmful.
I'm trying to imagine how to provide the Z combinator, for example, as a 'withdraw 0' script. Is that really possible? The CIP is not supposed to introduce security problems, of course. What's worrying you in particular?
So, in this CIP, a shared dependency is detectable because both uses refer to the same hash.
Withdraw 0 would be used differently than strict modules: you can only use withdraw 0 to assert something. As an example, you can have a withdraw 0 script that tells you "is element x present in the inputs?", where the requesting script would check the redeemer passed to the withdraw 0 script and make sure the element to look up is the same; the inputs are obviously the same, being in the same tx. In this sense, there is no need to share a Z combinator.
Right, seems to me it's solving a different problem. If you want to provide a library of functions, say a fixpoint operator, or functions on a tree datatype, for example, then the withdraw 0 trick is not the way to do it. That's the kind of application this CIP has in mind. I don't see it as replacing the withdraw 0 trick at all--they are useful for different purposes.
@rjmh this could well be ready to merge, but obviously we can't do this without a candidate CIP number which I am sure will be provided at the next CIP meeting (https://hackmd.io/@cip-editors/103).
Since none of @michaelpj's or @kwxm's latest reviews suggest fundamental changes, I think it would then be merged at the meeting 2 weeks after that... especially after #946 (review). I'll recommend this to the other editors unless a critical flaw is posted by current reviewers or one of our usual Plutus auditors.
Since the last 2 commits clean up a few things, I assume that most of the not-yet-resolved dialogue around these & other reviews is about nuances of language and applications that don't affect the versatility or practicality of this CIP itself (e.g. @michele-nuzzi #946 (comment)).
Again, please correct me if I'm wrong so that we might postpone the merge a bit longer if so. As it is, the subject matter experts will have most of January to keep recommending fine adjustments as necessary.
This is not true. You can use the withdraw zero design pattern for much more than assertion. You can use it to run any arbitrary computation such that the result of that computation can be consumed by any other script in that transaction. See https://github.com/Anastasia-Labs/design-patterns/blob/main/merkelized-validators/merkelized-validators.md

This means you can, for example, use it for a function that consumes a list of transaction inputs, folds over them and returns the result. Effectively, each withdraw-zero script execution can be used to call a single function from a module. The difference between this design pattern and the CIP proposed here is that with the design pattern there is significantly more ex-unit overhead, due to all the unnecessary computation that has to be done from the fact that the function call must itself be a valid plutus program, and due to how the script which calls the module function must traverse the transaction redeemers map to obtain the result.

I think that these obscure design patterns, which were born out of desperation to perform a given action (i.e. the withdraw zero trick) via essentially abusing nuances of the ledger, should definitely be replaced by built-in support for such operations, thus why I proposed the Observers CIP to replace the withdraw zero design pattern by providing native support for the intended use. Likewise, I do believe some form of modules is necessary to replace the ugly module trick that is currently used.

That being said, I do agree that the approach proposed in this CIP seems very complex and seems like a lot of work, and I wonder if there may be a simpler approach that would require less development time to reach production. If this CIP is indeed the route we must go (i.e. to make the module system as robust and efficient as possible so that it doesn't need to be reworked later), then I hope we can see lower-hanging fruit like […]
I think it seems complex mainly because there are so many possible variations--once the choice is made it will be much simpler. The "main specification" is actually very simple indeed--it's hard to see how it could be any simpler. But it does suffer from some built-in inefficiencies, which are addressed in the variations. There's a spectrum of possible choices, ranging from very-simple-but-could-be-costly to more-complex-but-likely-a-lot-more-efficient. Implementation complexity is certainly one of the factors to take into account in choosing between them.
@rjmh this was reviewed at the CIP meeting yesterday with great interest as a big step forward for Cardano. We agreed that the writing & design seem practical and we look forward to promotion after a long period of expert review.
@lehins @WhatisRT we agreed to wait on confirming this as a CIP candidate (i.e. "assigning a number") until you can review the impact that this would have on the Ledger: to first establish that what is proposed doesn't have any Ledger related difficulties that can't be addressed.
I think we already have confirmation from @zliu41 @kwxm that this is admissible from the Plutus side, so if & when Ledger provides consent then we will plan to assign a number at the following CIP meeting (next one in less than 2 weeks).
In the meantime, and thereafter, we will look for Plutus expert input to see if some of the complexities about when & how this would be released — originally posted in the Implementation Plan (see below) — can be narrowed down with respect to the timing of Plutus releases, etc.
- [ ] end-to-end testing
- [ ] release at the hard fork introducing the Dijkstra era
### Implementation Plan |
This would be by far the longest Implementation Plan of any CIP, because this section was intended to be a "checklist" of one-line items. We've never had one that contained decision forks, because the more complicated CIPs have so far addressed possible design alternatives in earlier sections. It would also work to move most of this material:
- to the Specification (since it provides some definitions & relationships)
- to the Motivation (since it explains how some components will be used)
- to the Rationale (since the writing here discusses design alternatives & contingencies that they entail)
If it's clear enough, we might still have some "forks" in the Implementation Plan by the time this is merged as Proposed... but they should be concise enough to reduce to check-box items as currently formatted in the Acceptance Criteria.
I believe this will make this CIP more usable as a reference to the wider developer community. We would try to avoid large numbers of people needing to read & understand the overall specification in great detail just to find out whether (or when) this CIP is on the way to becoming `Active`. cc @KtorZ
This function is to be called by the code building transactions (e.g., `Cardano.Api.Fees.makeTransactionBodyAutoBalance`) to determine which modules are necessary to include in a transaction.
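A hedged sketch of what such a function could compute; the name, types, and dependency representation here are assumptions, not the function's actual API. It takes the transitive closure of a script's dependency hashes, telling the balancer which modules the transaction must reference:

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map
import           Data.Set (Set)
import qualified Data.Set as Set

-- Given each script's direct dependency hashes, collect everything
-- reachable from a starting script's hash.
neededModules :: Map String [String] -> String -> Set String
neededModules deps = go Set.empty
  where
    go seen h
      | h `Set.member` seen = seen  -- already collected
      | otherwise =
          foldl go (Set.insert h seen) (Map.findWithDefault [] h deps)
```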
## Categories |
This reasoning is helpful but it will be distracting as a major section (especially at the end, because it is so general) and `Categories` has never been in demand as an "optional" section. Interaction with Ledger, as I understand it, is based on your chosen implementation so I am guessing it would have a better home in Rationale. Also:
- The category of `Plutus` vs. `Ledger` could also have a brief mention in the Abstract just to help readers understand the CIP's scope immediately.
- The detail of the interaction with Ledger would also be welcome early in the Specification: e.g. so readers don't have to read through the whole spec to verify that the Ledger is not being essentially changed.
I definitely need to read this more thoroughly, but from a bird's-eye perspective I don't see any issues. The CIP mentions at the end that: […]

While that's definitely technically incorrect, does that mean the intent would be to have all non-boilerplate parts of this be part of Plutus? To me it would make more sense for […].

It is already possible to have something like modules with the current Plutus if you really want to, but it's a bit hacky and has all sorts of annoying restrictions, so I think this is a fantastic proposal. I'll probably have a more in-depth look in the next few days.
I did get a chance to discuss this CIP with @zliu41 in person a month ago, which in my opinion was a very productive conversation. I am not going to comment on any details that are needed from the Plutus perspective, since that is not my area of expertise, but from the ledger perspective this CIP looks sensible and desirable. That being said, I do need to point out a very important detail that this CIP has missed, in particular regarding the proposed definition:

```haskell
data Script =
    CompleteScript CompleteScript
  | ScriptWithArgs { head :: CompleteScript, args :: [Arg] }
```

First of all, here are a couple of points about this definition that aren't terribly important for the discussion: […]

The most important point is that the current version of the CIP would only allow modules of a specific plutus version to work with scripts of the same version. So, if I were to create a reference script module for PlutusV4 it would only ever work with PlutusV4 scripts, because today there is no way of adding a script to the chain without a plutus version. In my opinion it would be a significant limitation, because that would force developers to add the same binary version of a module for every plutus version to the UTXO. I'd suggest we change ledger in such a way that would not only allow us to support plutus-version-agnostic modules, but would promote type safe development. In order to do that we would have to define in ledger a new type of script:

```haskell
data PlutusModule = PlutusModule PlutusBinary [ScriptHash]

data PlutusScript DijkstraEra
  = DijkstraPlutusV1 !(Plutus 'PlutusV1)
  | DijkstraPlutusV2 !(Plutus 'PlutusV2)
  | DijkstraPlutusV3 !(Plutus 'PlutusV3)
  | DijkstraPlutusV4 !(Plutus 'PlutusV4) [PlutusModule]
```

Note that this would allow for the same `PlutusModule` to be used with scripts of different Plutus versions.

For reference, current definitions of the aforementioned types:

```haskell
newtype PlutusBinary = PlutusBinary {unPlutusBinary :: ShortByteString}

newtype Plutus (l :: Language) = Plutus
  { plutusBinary :: PlutusBinary
  }

data Language
  = PlutusV1
  | PlutusV2
  | PlutusV3
```

Furthermore, we would need to change the current definition of […]. In other words, I highly recommend making a distinction between module scripts and the top-level plutus scripts that lock pieces of a transaction and expect a single PlutusContext argument. I can't think of any case when a module would be used as a standalone script anyways.

To sum it up, I really like the proposal and I don't see anything at the moment that would prevent us from implementing it. We can work out the details when we get to implementing it.
@lehins The version topic is discussed in section "Plutus Ledger Language Versions". The majority opinion from those I discussed it with is to keep it simple - requiring that the versions match. The main argument is that there are not that many language versions, and doing so is safer - a builtin may have slightly different semantics in different language versions (though iirc there are only two such cases so far and the differences are very minor). Also, if you (and @WhatisRT) haven't reviewed the "Implementation Plan" section, please do. This is the section that discusses the changes to ledger and cardano-api.
@WhatisRT In some of the variants, I believe the implementation of this function will need to call some internal CEK machine functions and other internal functions, so I think it's better to leave it on the plutus side, unless it can be implemented via the plugin.
Ah, I see now that linking is discussed in the implementation plan, and I don't think I have a strong preference for either interface for linking, but there might be a minor performance tradeoff. Linking everything at once could benefit from sharing, making the common case more efficient, while linking one script at a time means we could link lazily, potentially saving linking costs in the event of a phase 2 failure. Linking everything at once also means that the Ledger may need to be careful to only provide scripts it actually needs to execute. When linking lazily this is automatically taken care of.
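To make that tradeoff concrete, hedged signatures for the two interfaces weighed above; all names are assumptions, and the bodies are stubs just to keep the sketch self-contained:

```haskell
import Data.Map (Map)

newtype ScriptHash = ScriptHash String deriving (Eq, Ord)
data Script = Script
data CompleteScript = CompleteScript
data LinkError = MissingWitness ScriptHash | CyclicReference ScriptHash

type WitnessMap = Map ScriptHash Script

-- Eager: link every script up front; resolution work can be shared.
linkAll :: WitnessMap -> [Script] -> Either LinkError [CompleteScript]
linkAll wits = traverse (linkOne wits)  -- a real version could share resolved modules

-- Lazy: link one script at a time, skipping work after a phase-2 failure.
linkOne :: WitnessMap -> Script -> Either LinkError CompleteScript
linkOne _ _ = Right CompleteScript      -- stub; see the resolution sketch earlier
```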
Cardano scripts are limited in complexity by the fact that each script must be supplied in one transaction, whether the script is supplied in the same transaction in which it is used, or pre-loaded onto the chain for use as a reference script. This limits script code size, which in turn limits the use of libraries in scripts, and ultimately limits the sophistication of Cardano apps, compared to competing blockchains. It is the aspect of Cardano that script developers complain about most.
This CIP addresses this problem directly, by allowing reference inputs to supply 'modules', which can be used from other scripts (including other modules), thus allowing the code of a script to be spread across many reference inputs. The 'main specification' requires no changes to UPLC, TPLC, PIR or Plinth; only a 'dependency resolution' step before scripts are run. Many variations are described for better performance, including some requiring changes to the CEK machine itself.
Higher performance variations will be more expensive to implement; the final choice of variations should take implementation cost into account, and (in some cases) may require extensive benchmarking.