Increment validator and corresponding mutations (#1715)
This PR bundles many changes, all related to incremental commits and the
adjustments needed to make them possible.

:snowflake:  On-chain parts of the code
:snowflake: Rewrite of the deposit and initial scripts to Aiken (this was
needed to cut down on tx sizes and to try to publish the scripts together)
:snowflake: Publish scripts separately (in the end all required scripts did
not fit into a single publishing transaction, so we publish them separately;
see the sketch after this list)
:snowflake: Changes to the TUI client so we can commit and recover
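
To illustrate the new publishing flow, here is a hedged sketch (paths, network magic and the output handling are illustrative and mirror what `demo/seed-devnet.sh` does in this PR):

```bash
# Hedged sketch: publish the Hydra scripts, which now span several
# transactions, and collect the printed tx ids into a comma-separated list.
TX_IDS=$(hydra-node publish-scripts \
  --testnet-magic 42 \
  --node-socket devnet/node.socket \
  --cardano-signing-key devnet/credentials/faucet.sk \
  | tr '\n' ',' | sed 's/,$//')

# The node then takes the whole list via a single option, for example:
# hydra-node --hydra-scripts-tx-id "$TX_IDS" ...
```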

---

<!-- Consider each and tick it off one way or the other -->
* [x] CHANGELOG updated or not needed
* [x] Documentation updated or not needed
* [x] Haddocks updated or not needed
* [ ] No new TODOs introduced or explained hereafter
     - some TODOs are left so we can revisit/improve parts of the code
v0d1ch authored Dec 25, 2024
2 parents 9b24239 + c29a27a commit 1b447f3
Showing 81 changed files with 18,771 additions and 21,621 deletions.
13 changes: 6 additions & 7 deletions CHANGELOG.md
@@ -10,14 +10,11 @@ changes.

## [0.20.0] - UNRELEASED

- Bump docusaurus version
- **BETA** hydra-node now supports incremental commits in beta mode. We would like to test this feature
with community members building on Hydra. It means you can commit funds to a Head while it is running.
TODO: Implement missing spec changes.

- **IMPORTANT - Do not release this version**
- Incremental commits - off-chain changes to make incremental commits possible.
Note that on-chain security is not implemented and the hydra-node in this
state is not releasable!
Missing off-chain items to implement in a series of follow-up PRs:
- Revisit types related to observations/posting transactions and make sure the fields are named appropriately
- **BREAKING** hydra-node accepts multiple `hydra-scripts-tx-id` as a comma-separated list, as the outcome of changes in the Hydra scripts publishing.

- Tested with `cardano-node 10.1.2` and `cardano-cli 10.1.1.0`.

@@ -41,6 +38,8 @@ changes.
- Overall this results in transactions still to be submitted once per client,
but requires significantly less book-keeping on the client side.

- Bump docusaurus version

- Add Blockfrost support to `hydra-chain-observer`, to follow the chain via the Blockfrost API.

- Fix `bench-e2e single` benchmarks and only use `--output-directory` to keep
4 changes: 3 additions & 1 deletion demo/seed-devnet.sh
@@ -98,7 +98,9 @@ function publishReferenceScripts() {
hnode publish-scripts \
--testnet-magic ${NETWORK_ID} \
--node-socket ${DEVNET_DIR}/node.socket \
--cardano-signing-key devnet/credentials/faucet.sk
--cardano-signing-key devnet/credentials/faucet.sk \
| tr '\n' ',' \
| head -c -1
}

function queryPParams() {
4 changes: 2 additions & 2 deletions docs/benchmarks/profiling.md
@@ -39,7 +39,7 @@ Here, isolate the transaction for `5` parties by altering the function to `maybe

## Compiling a script for profiling

The `collectCom` transaction utilizes the `vCommit` and `vHead` validator scripts. To enable profiling, add the following directive to the modules [`Hydra.Contract.Commit`](/haddock/hydra-plutus/Hydra-Contract-Commit.html) and [`Hydra.Contract.Head`](/haddock/hydra-plutus/Hydra-Contract-Head.html):
The `collectCom` transaction utilizes the `vCommit` and `vHead` validator scripts. To enable profiling, add the following directive to the modules [`Hydra.Contract.Commit`](pathname:///haddock/hydra-plutus/Hydra-Contract-Commit.html) and [`Hydra.Contract.Head`](pathname:///haddock/hydra-plutus/Hydra-Contract-Head.html):

```
{-# OPTIONS_GHC -fplugin-opt PlutusTx.Plugin:profile-all #-}
@@ -48,7 +48,7 @@ The `collectCom` transaction utilizes the `vCommit` and `vHead` validator script
## Acquiring an executable script

You can achieve this using
[`prepareTxScripts`](/haddock/hydra-tx/Hydra-Ledger-Cardano-Evaluate.html#v:prepareTxScripts).
[`prepareTxScripts`](pathname:///haddock/hydra-tx/Hydra-Ledger-Cardano-Evaluate.html#v:prepareTxScripts).
To acquire and save the fully applied scripts from the transaction onto disk, run:

```haskell
2 changes: 1 addition & 1 deletion docs/docs/dev/architecture/networking.md
@@ -74,7 +74,7 @@ See also [this ADR](/adr/27) for a past discussion on making the network compone

### Current network stack

See [haddocks](/haddock/hydra-node/Hydra-Node-Network.html)
See [haddocks](pathname:///haddock/hydra-node/Hydra-Node-Network.html)

- Hydra nodes form a network of pairwise connected *peers* using point-to-point (eg, TCP) connections that are expected to remain active at all times:
- Nodes use [Ouroboros](https://github.com/input-output-hk/ouroboros-network/) as the underlying network abstraction, which manages connections with peers via a reliable point-to-point stream-based communication framework known as a `Snocket`
2 changes: 1 addition & 1 deletion docs/docs/dev/commit_to_a_Head.md
@@ -84,7 +84,7 @@ users can request a recover by providing a `TxId` of the deposit transaction
which initially locked the funds.

::::info
Users can also request to see pending deposits. See our api [documentation](/api-reference/#operation-publish-/commits).
Users can also request to see pending deposits. See our api [documentation](/api-reference).
::::

Any Head participant can request to recover the deposit, not only the one which initially deposited the funds.
121 changes: 121 additions & 0 deletions docs/docs/dev/incremental-commits-and-decommits.md
@@ -0,0 +1,121 @@
# Incremental commits and decommits

These two additions to the original Hydra Head protocol deserve a closer
explanation, so that our users understand how they work _under the hood_.

For now, the two additions run sequentially, so we do one thing at a time;
we may consider batching certain actions in the future if the need arises.

It is only possible to either commit or decommit: for simplicity, we don't allow snapshots
with both fields specified. This restriction might be lifted later on, once we
are confident this simpler version works well.

## Incremental Commits

Incremental Commits allow us to take some `UTxO` from L1 and make it available
on L2 for transacting inside a running Hydra Head.

The process for incremental commits is essentially the same as _committing_
before the Head is in the `Open` state. In fact, we can open a Head
without committing any funds and then _top up_ our L2 funds by doing incremental
commits.

The process of incrementally committing a `UTxO` starts by sending an HTTP request to
the hydra-node API endpoint:

```bash
curl -X POST <IP>:<PORT>/commit --data @commit.json
```

:::info

Note that the commit transaction, which is sent to the hydra-node API, only needs
to specify the transaction inputs present on L1 that we want to make available
on L2. Any specified outputs are ignored; the owner of the incremented
`UTxO` on L2 is the same one that owned the funds on L1.

:::

The hydra-node accepts either a plain `UTxO` encoded as JSON in the `POST` request
body, or a _blueprint_ transaction together with the `UTxO` used to resolve its
inputs.
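
As a minimal sketch of the plain-`UTxO` variant (the transaction id, address and value below are placeholders, and the exact schema should be checked against the API reference):

```bash
# Hedged sketch: commit one L1 output we own by posting its UTxO as JSON.
# The tx id, index, address and value are placeholders.
cat > commit.json <<'EOF'
{
  "9fdc525c20bc00d9dfa9d14904b65e01910c0dfe3bb39865523c1e20eaeb0903#0": {
    "address": "addr_test1vp5cxztpc6hep9ds7fjgmle3l225tk8ske3rmwr9adu0m6qchmx5z",
    "value": { "lovelace": 10000000 }
  }
}
EOF

curl -X POST <IP>:<PORT>/commit --data @commit.json
```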

A _blueprint_ transaction is like a recipe that describes which transaction
inputs should be made available on the L2 network, ignoring any specified outputs.
It comes together with a `UTxO` used to resolve the transaction inputs. Its
purpose is to prove that one can spend the specified transaction inputs.
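
A blueprint-style request then pairs the transaction (as a _TextEnvelope_) with the `UTxO` resolving its inputs. The `blueprintTx`/`utxo` field names and the truncated `cborHex` below are illustrative assumptions; verify the exact request schema against the API reference:

```bash
# Hedged sketch of a blueprint commit request: the transaction only proves we
# can spend the listed inputs; its outputs are ignored by the hydra-node.
cat > blueprint-commit.json <<'EOF'
{
  "blueprintTx": {
    "type": "Tx ConwayEra",
    "description": "",
    "cborHex": "84a300..."
  },
  "utxo": {
    "9fdc525c20bc00d9dfa9d14904b65e01910c0dfe3bb39865523c1e20eaeb0903#0": {
      "address": "addr_test1vp5cxztpc6hep9ds7fjgmle3l225tk8ske3rmwr9adu0m6qchmx5z",
      "value": { "lovelace": 10000000 }
    }
  }
}
EOF

curl -X POST <IP>:<PORT>/commit --data @blueprint-commit.json
```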

A successful API response includes a _deposit_ transaction that needs to be
signed and submitted by the user in order to kick off the deposit process.
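
For example, assuming the deposit transaction from the response is saved as a _TextEnvelope_ in `deposit-tx.json` (a hypothetical filename), signing and submitting it could look roughly like this; the exact subcommand layout and flags depend on your `cardano-cli` version:

```bash
# Hedged sketch: sign the deposit transaction with the key owning the
# committed funds, then submit it to the Cardano node.
cardano-cli conway transaction sign \
  --tx-file deposit-tx.json \
  --signing-key-file payment.sk \
  --out-file deposit-tx.signed

cardano-cli conway transaction submit \
  --tx-file deposit-tx.signed \
  --testnet-magic 42 \
  --socket-path devnet/node.socket
```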

Submitting the deposit transaction locks the specified `UTxO` at a deposit script address;
later on, after a confirmed snapshot, it is unlocked by the _increment_
transaction, which actually makes this `UTxO` available on L2.

The deposit transaction contains a deadline: a time window in which we expect
the hydra-node to be able to observe the deposit and issue an _increment_
transaction that does the heavy lifting and brings the specified input onto L2.

Currently, the _contestation period_ value is used to specify the deposit deadline,
but this should become a separate hydra-node argument since it
heavily depends on the network we are running on.

Once a hydra-node observes a deposit transaction, it records the deposit as
pending in its local state. There can be many pending deposits, but new
snapshots will include them one by one.

When this new snapshot is acknowledged by all parties, the _increment_ transaction
is posted by the leader.

:::info
Note that the node posting the increment transaction also pays its fees, even if
the deposit will not be owned by that node on L2.
:::

Upon observing the increment transaction, we remove the deposit from the local pending deposits
and the process can start again.

:::note

Since we can potentially request many deposits, the leader will increment only
one of them at a time. While the others remain pending, any new transaction on
L2 will pick up the next pending deposit and try to include it in a snapshot.

:::

## Incremental Decommits

Incremental decommits allow us to take some L2 `UTxO` and bring it to L1
while the Head protocol is running.

A Head participant (or any other user that can send requests to the hydra-node
API endpoint) requests that some L2 `UTxO` be made available on L1 by sending an
HTTP `POST` request whose body contains a decommit transaction
encoded as a _TextEnvelope_ JSON value.

```bash
curl -X POST <IP>:<PORT>/decommit --data @decommit-tx.json
```

This transaction needs to be signed by the owner of the funds on L2; one way to
produce it is sketched below.
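
As a hedged sketch of how such a transaction could be produced (assuming a Head configured with zero fees, as in the demo; the input, address, amount and key file are placeholders, and the `cardano-cli` invocation depends on your CLI version):

```bash
# Hedged sketch: build an L2 transaction spending one of our Head UTxOs back
# to an L1 address we control, then sign it with the L2 key.
cardano-cli conway transaction build-raw \
  --tx-in 9fdc525c20bc00d9dfa9d14904b65e01910c0dfe3bb39865523c1e20eaeb0903#0 \
  --tx-out addr_test1vp5cxztpc6hep9ds7fjgmle3l225tk8ske3rmwr9adu0m6qchmx5z+10000000 \
  --fee 0 \
  --out-file decommit-tx-body.json

cardano-cli conway transaction sign \
  --tx-body-file decommit-tx-body.json \
  --signing-key-file l2-payment.sk \
  --out-file decommit-tx.json

# The resulting TextEnvelope is what gets POSTed to /decommit as shown above.
```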

:::info

What we call the decommit transaction is the one the user supplies to the API
endpoint. The decrement transaction is the one that hydra-node posts
after checking that the decommit transaction applies, and it is the one that
actually makes some `UTxO` available on L1.

:::

The hydra-node accepts this transaction and checks whether it can be cleanly applied to
the local `UTxO` set. After this check, the hydra-node issues a `ReqDec` message,
signalling to the other parties that we want to produce a new `Snapshot` that
contains the same `UTxO` to decommit. Once the snapshot is signed, the hydra-node
posts a _decrement_ transaction that takes the specified output and makes it
available on L1.


2 changes: 1 addition & 1 deletion docs/docusaurus.config.js
@@ -19,7 +19,7 @@ const config = {
baseUrl: "/head-protocol/",
// Note: This gives warnings about the haddocks; but actually they are
// present. If you are concerned, please check the links manually!
onBrokenLinks: "warn",
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "warn",
favicon: "img/hydra.png",
organizationName: "Input Output",
1 change: 1 addition & 0 deletions docs/sidebars.js
@@ -70,6 +70,7 @@ module.exports = {
label: "Specification",
},
"dev/protocol",
"dev/incremental-commits-and-decommits",
{
type: "doc",
id: "dev/commit_to_a_Head",
6 changes: 3 additions & 3 deletions flake.lock

Some generated files are not rendered by default.

12 changes: 2 additions & 10 deletions hydra-cluster/src/Hydra/Cluster/Faucet.hs
@@ -21,7 +21,6 @@ import Control.Exception (IOException)
import Control.Monad.Class.MonadThrow (Handler (Handler), catches)
import Control.Tracer (Tracer, traceWith)
import GHC.IO.Exception (IOErrorType (ResourceExhausted), IOException (ioe_type))
import Hydra.Chain.CardanoClient (queryProtocolParameters)
import Hydra.Chain.ScriptRegistry (
publishHydraScripts,
)
@@ -150,15 +149,8 @@ createOutputAtAddress ::
createOutputAtAddress node@RunningNode{networkId, nodeSocket} atAddress datum val = do
(faucetVk, faucetSk) <- keysFor Faucet
utxo <- findFaucetUTxO node 0
pparams <- queryProtocolParameters networkId nodeSocket QueryTip
let collateralTxIns = mempty
let output =
mkTxOutAutoBalance
pparams
atAddress
val
datum
ReferenceScriptNone
let output = TxOut atAddress val datum ReferenceScriptNone
buildTransaction
networkId
nodeSocket
@@ -205,7 +197,7 @@ retryOnExceptions tracer action =
--
-- The key of the given Actor is used to pay for fees in required transactions,
-- it is expected to have sufficient funds.
publishHydraScriptsAs :: RunningNode -> Actor -> IO TxId
publishHydraScriptsAs :: RunningNode -> Actor -> IO [TxId]
publishHydraScriptsAs RunningNode{networkId, nodeSocket} actor = do
(_, sk) <- keysFor actor
publishHydraScripts networkId nodeSocket sk
11 changes: 9 additions & 2 deletions hydra-cluster/src/Hydra/Cluster/Options.hs
@@ -1,6 +1,9 @@
{-# LANGUAGE OverloadedStrings #-}

module Hydra.Cluster.Options where

import Data.ByteString.Char8 qualified as BSC
import Data.List qualified as List
import Hydra.Cardano.Api (AsType (AsTxId), TxId, deserialiseFromRawBytesHex)
import Hydra.Cluster.Fixture (KnownNetwork (..))
import Hydra.Prelude
@@ -17,7 +20,7 @@ data Options = Options
deriving stock (Show, Eq, Generic)
deriving anyclass (ToJSON, FromJSON)

data PublishOrReuse = Publish | Reuse TxId
data PublishOrReuse = Publish | Reuse [TxId]
deriving stock (Show, Eq, Generic)
deriving anyclass (ToJSON, FromJSON)

@@ -73,13 +76,17 @@ parseOptions =
<> help "Publish hydra scripts before running the scenario."
)
<|> option
(eitherReader $ bimap show Reuse . deserialiseFromRawBytesHex AsTxId . BSC.pack)
(eitherReader $ bimap show Reuse . parseTxIds)
( long "hydra-scripts-tx-id"
<> metavar "TXID"
<> help
"Use the hydra scripts already published in given transaction id. \
\See --publish-hydra-scripts or hydra-node publish-scripts"
)
where
parseTxIds str =
let parsed = fmap (deserialiseFromRawBytesHex AsTxId . BSC.pack) (List.lines str)
in if null (lefts parsed) then Right (rights parsed) else Left ("Invalid TxId" :: String)

parseUseMithril =
flag