This repository contains the smart contracts for Swarm's storage incentives.
In order to upload content to the Swarm network, nodes purchase batches of postage stamps. These stamps are then attached to content that is divided into 4kb chunks and uploaded to the Swarm network. To distribute the proceeds from the sales of these batches, the smart contracts in this repository implement a Schelling co-ordination game that identifies the nodes storing the canonical subset of valid chunks falling within the radius of responsibility of each node in a neighbourhood at the time of their application. Correctly identifying this subset qualifies a node to apply to receive a reward comprising value arising from expired batches.
Each storage node seeking to benefit from storage incentive rewards should stake at least the minimum stake by sending BZZ to the staking contract. This stake permits each node to participate in the Schelling game. At this stage, to keep things simple, a stake is not withdrawable. It is expected over time that neighbourhood stakes will find a homeostasis at an amount proportional to a node's expected future returns minus their running costs.
For each round of the storage rewards redistribution process, a node is chosen at random from the participants proportional to their stake density to be that round's truth teller. Nodes that agree with this "truth" are qualified to receive the entire reward for that round if they are chosen by a second random selection procedure, wherein the probability of their selection is similarly weighted by the density of their stake. Over time, all else being equal, a node will hence receive reward relative to the proportional size of their stake if they are fully participant in the Swarm protocols.
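The stake-density-weighted selection described above can be pictured as a weighted lottery. The sketch below is illustrative only, not the Redistribution contract's actual logic; the `Participant` shape and `selectWinner` helper are hypothetical, and it assumes a non-empty participant list with positive stake:

```typescript
// Hypothetical shape for a round participant; in the real scheme stake
// density is derived from the staked amount and the reported storage depth.
interface Participant {
  overlay: string;
  stakeDensity: bigint;
}

// Pick one participant with probability proportional to stakeDensity,
// consuming a random value derived from the round's seed.
function selectWinner(participants: Participant[], randomValue: bigint): Participant {
  const total = participants.reduce((acc, p) => acc + p.stakeDensity, 0n);
  let ticket = randomValue % total;
  for (const p of participants) {
    if (ticket < p.stakeDensity) return p;
    ticket -= p.stakeDensity;
  }
  return participants[participants.length - 1]; // unreachable when total > 0
}

const winner = selectWinner(
  [
    { overlay: '0xaa', stakeDensity: 10n },
    { overlay: '0xbb', stakeDensity: 30n },
  ],
  25n, // seed-derived value: 25 % 40 = 25, which falls in 0xbb's range
);
console.log(winner.overlay); // prints "0xbb"
```

Over many rounds, a participant holding 3/4 of the total stake density wins roughly 3/4 of the time, which is the "reward proportional to stake" property described above.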
A node must have staked at least two rounds prior to their application. If a stake is updated, a node may not participate until the next two rounds have elapsed.
Every N blocks, at the end of the previous reveal phase, the redistribution contract selects a random round anchor which determines which neighbourhood may participate in the current round. Eligibility is determined by calculating the proximity of a node's overlay address to the round anchor. A node is eligible if their proximity order to the anchor is less than or equal to the canonical storage depth that they use to calculate their commit hash.
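Proximity order is the number of leading bits two addresses share. The sketch below is not the contract code; helper names are hypothetical, addresses are shortened for readability (real overlays are 32-byte values), and the neighbourhood check assumes the usual in-proximity formulation, i.e. that the first `depth` bits of overlay and anchor must match:

```typescript
// Count the leading bits shared by two hex-encoded addresses, capped at maxPo.
function proximityOrder(a: string, b: string, maxPo = 31): number {
  const bufA = Buffer.from(a.replace(/^0x/, ''), 'hex');
  const bufB = Buffer.from(b.replace(/^0x/, ''), 'hex');
  const len = Math.min(bufA.length, bufB.length);
  for (let i = 0; i < len; i++) {
    const xor = bufA[i] ^ bufB[i];
    if (xor !== 0) {
      // Math.clz32 counts leading zeros of a 32-bit value; subtract the
      // 24 bits of padding above our single byte.
      return Math.min(i * 8 + Math.clz32(xor) - 24, maxPo);
    }
  }
  return maxPo;
}

// Assumed neighbourhood check: the node shares at least `depth` leading
// bits with the round anchor.
function inNeighbourhood(overlay: string, anchor: string, depth: number): boolean {
  return proximityOrder(overlay, anchor) >= depth;
}

// First byte equal, second byte differs only in its last bit: 15 shared bits.
console.log(proximityOrder('0xf0aa', '0xf0ab')); // prints 15
```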
If eligible to participate, a node uses the chunks in its reserve to calculate a reserve commitment. This is the keccak256 hash of the first m chunk addresses, transformed using the standard HMAC keyed hash function with the round anchor as the key. The reserve commitment should be the same for each node in a neighbourhood and represents its ability to access a full canonical reserve of chunks at the time the anchor was selected. The node then combines this with a unique reveal nonce, its overlay address and its current storage depth, defined as the maximum proximity order between its address and that of the furthest chunk that still falls within the node's fixed-size reserve. The keccak256 hash of the concatenation of these values is known as the commit hash, and is submitted to the blockchain during that round's commit phase.
If the neighbourhood's pull-sync protocols are running as they should, each node in the neighbourhood will calculate the same reserve commitment hash and storage depth. However, since the commit hash calculation also includes a random reveal nonce, each node's reserve commitment hash and storage depth are kept private during the commit phase. Once the commit phase is over, the reveal phase begins, and each participating node is expected to send another transaction to the redistribution contract with the corresponding pre-image of the hash, comprising the reserve commitment, storage depth and reveal nonce.
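The commit-reveal exchange can be sketched as follows. This is a conceptual illustration only: the real scheme hashes keccak256 over ABI-encoded values, whereas this sketch substitutes sha256 and simple string concatenation to stay dependency-free, and all function names are hypothetical:

```typescript
import { createHash } from 'crypto';

// Stand-in for keccak256 over ABI-encoded fields (assumption for this sketch).
function hash(...parts: string[]): string {
  return createHash('sha256').update(parts.join('|')).digest('hex');
}

// Commit phase: the node submits only this digest, keeping the inputs private.
function commitHash(reserveCommitment: string, depth: number, overlay: string, revealNonce: string): string {
  return hash(reserveCommitment, String(depth), overlay, revealNonce);
}

// Reveal phase: the contract re-hashes the revealed pre-image and compares it
// against the stored commit.
function verifyReveal(commit: string, reserveCommitment: string, depth: number, overlay: string, revealNonce: string): boolean {
  return commitHash(reserveCommitment, depth, overlay, revealNonce) === commit;
}

const commit = commitHash('0x0abc', 8, '0x1111', '0x9999');
console.log(verifyReveal(commit, '0x0abc', 8, '0x1111', '0x9999')); // prints true
console.log(verifyReveal(commit, '0x0def', 8, '0x1111', '0x9999')); // prints false
```

The random nonce is what prevents other nodes from brute-forcing a neighbour's reserve commitment and depth from the published commit hash during the commit phase.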
If the revealed reserve commitment, storage depth and reveal nonce are found to correctly re-hash to the commit hash the node submitted, the node is included in the procedure to select the beneficiary of this round's rewards. If the revealed values do not hash to the submitted commit hash, the node's overlay is frozen for a number of rounds proportional to the reported storage depth, and that overlay is prevented from participating until this period has elapsed.
Once the reveal phase is over, the claim phase begins. A random seed is chosen using the block.difficulty (= block.prevrandao in post-merge chains) as a source of randomness. Based on this, a node is selected as the truth teller for this round, with a probability proportional to its stake density, then, from the nodes that agree with this "truth", a beneficiary of this round's rewards is selected, with a probability similarly proportional to its stake density.
The total proceeds of the postage batches that have expired during this round are withdrawn from the postage stamp contract and transferred to the winner. Nodes that have revealed reserve commitments or storage depths that do not agree with the truth teller are frozen for a longer period, similarly proportional to the truthy storage depth, and will have to wait until unfrozen to participate again.
When the claim is submitted, the cardinality of the truthy set of applicants is used to provide a signal to change the price of storage. If the redundancy per neighbourhood is at the desired amount (4), no action is taken and the price remains static. If it is lower, this indicates an overdemand for storage and an undersupply of storage nodes: the price per chunk per block is increased to cause batches to expire more quickly and to attract more storage nodes to the network. Conversely, if more than 4 nodes per neighbourhood apply with a truthy reserve commitment, this indicates an oversupply of storage nodes and the price is decreased to ensure the efficiency of service provision.
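The price signal can be sketched roughly as below. The target redundancy of 4 comes from the text above, but the adjustment step, rounding and function name are illustrative assumptions, not the actual Price Oracle contract's constants or arithmetic:

```typescript
// Desired number of truthy applicants per neighbourhood (from the text above).
const TARGET_REDUNDANCY = 4;

// Compute the next price per chunk per block from this round's truthy
// applicant count. The multiplicative step of 1.1 is an assumption.
function nextPrice(currentPrice: number, truthyApplicants: number, step = 1.1): number {
  if (truthyApplicants < TARGET_REDUNDANCY) {
    // Undersupply of storage nodes: raise the price to expire batches
    // faster and attract more nodes.
    return Math.round(currentPrice * step);
  }
  if (truthyApplicants > TARGET_REDUNDANCY) {
    // Oversupply: lower the price to keep provision efficient.
    return Math.round(currentPrice / step);
  }
  return currentPrice; // at target redundancy the price is unchanged
}

console.log(nextPrice(1000, 2)); // prints 1100
console.log(nextPrice(1000, 4)); // prints 1000
console.log(nextPrice(1000, 9)); // prints 909
```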
As the seed is chosen, the anchor for the next round is revealed. Once a node notices that it falls within the new anchor's neighbourhood, it may begin calculating its reserve commitment in preparation for the upcoming commit phase, and so the cycle repeats.
Nodes will be expected to submit inclusion proofs during the claim period, which prove inclusion of ...
Relinquish admin rights...
...
This project includes the following smart contracts and their metadata:

- Redistribution
- Staking Registry
- Price Oracle
- Postage Stamps
- HitchensOrderStatisticsTreeLib
- Test Token

The metadata for each contract includes:
- Chain ID: Chain ID of the blockchain.
- Network ID: Network ID.
- ABI: Interface to communicate with smart contracts.
- Bytecode: Compiled object code that is executed during communication with smart contract.
- Address: Address of the deployed contract on the blockchain.
- Block: Block height in which the transaction is mined.
- URL: URL for analyzing the transaction.
- Script for deploying all and individual contracts
- Script assigning roles/permissions for smart contracts
- Redistributor role
- Price Oracle role
- Price Updater role
To set up the project, you will need `yarn` and `node`. The project has been tested with the latest node LTS (Erbium). A `.nvmrc` file is also provided.
To get started with this project, follow these steps:

- Clone the repo.
- Run `yarn install` at the root of the repo to install all dependencies.
- Add a `.env` file in your root directory, where you'll store your sensitive information for deployment. An example file `.env.example` is provided for reference.
- Unit Tests
  - Run `yarn hardhat test` to run all the tests.
  - Run `yarn hardhat coverage` to see the coverage of smart contracts.
All deployments and tests fully depend on the Hardhat Deploy library (https://github.com/wighawag/hardhat-deploy) and follow the best practices used there.
Feel free to use public RPCs, but if you want extra security and speed, use Infura, Alchemy or any other private RPC provider and add the full path with your KEY to the `.env` file.
- Run `yarn hardhat compile` to get all the contracts compiled.
- Run `yarn hardhat test` to run all the tests.
- Configure the `.env` file:
  - Set your `WALLET_SECRET` in the `.env` file.
  - Set your `INFURA_TOKEN` in the `.env` file.
- To deploy all contracts and set roles:
  - Mainnet: `yarn hardhat deploy --network mainnet`
  - Testnet: `yarn hardhat deploy --network testnet`
Note: you can also use npx instead of yarn, so it would be `npx hardhat compile`. For fastest typing you can install https://hardhat.org/hardhat-runner/docs/guides/command-line-completion and then just run `hh compile` and `hh test`.
Note: After successfully deploying to mainnet or testnet, the mainnet_deployed.json and testnet_deployed.json files will be automatically updated, and those changes should be committed, as the Bee node picks them up as data used in nodes. This is done using the codegen/generate_src.sh script, which runs as a GitHub action; more on this at the bottom in the Releasing section.
Note: `WALLET_SECRET` can be a mnemonic or a private key.
- Run `yarn hardhat deploy` to deploy all contracts on the hardhat environment (network).
- To deploy on Ganache (or other networks):
  - Add the network configuration to your hardhat.config.ts, e.g. `ganache: { url: 'http://localhost:8545', chainId: 1337, },`
  - To run: `yarn hardhat deploy --network ganache`
- Make a new RC tag and commit to generate a new ABI for Bee node creation.
- Make a cluster with the minimal required number of nodes (10), all pointing to that tag.
- The latest tag should have all new contracts deployed for SI.
- We reuse and share the testnet token sBZZ with the proper testnet and S3, which is easier for setup and config.
- To do that, we need to copy TestToken.json from the deployments/testnet directory so that "hardhat deploy" reuses it and doesn't create a new contract; it will then insert this address in all other contract deployments.
- We use swarm network ID 333.
- This testnet will probably not be continuously running.
- Regular RC tagging for testnet.
- We have a continuously running testnet with deployed contracts on sepolia.
- We just deploy changes/new contracts that will go to mainnet.
- We use swarm net ID 10.
- As we already have running nodes there, we need to upgrade a few (about half) of them to a new node with the latest contracts tag, so we can simulate node upgrades and a network running different node versions.
- Make necessary changes to hardhat.config.ts.
- List of available configs can be found here.
- Run the script: `yarn hardhat run <script> --network <network>`
  - Network: configure the network name
  - Script: configure the script name and path
To run a hardhat task, enter it in the CLI, for example `npx hardhat (hh) contracts --target main` or `hh compare --source main --target test`. There are currently 4 tasks: copyBatch, signatures, contracts and compare.
To release a new RC version, tag the commit with the `-rcX` suffix, where `X` is the release candidate number. For example, to release v0.9.1-rc1, execute the following command: `git tag v0.9.1-rc1 && git push origin v0.9.1-rc1`.
This will generate Golang source code for the smart contracts and publish it to the ethersphere/go-storage-incentives-abi repository. It'll also generate a .env file with the bytecodes and publish it to the ethersphere/docker-setup-contracts repository. The values for the Golang source code and .env file are taken from the testnet_deployed.json file (see the Deployment section).
To release a new stable version, tag the commit without the `-rcX` suffix. For example, to release v0.9.1, execute the following command: `git tag v0.9.1 && git push origin v0.9.1`.
This will generate Golang source code for the smart contracts and publish it to the ethersphere/go-storage-incentives-abi repository. The values for the Golang source code are taken from the mainnet_deployed.json file (see the Deployment section).