diff --git a/docs/berkeley-upgrade/appendix.mdx b/docs/berkeley-upgrade/appendix.mdx new file mode 100644 index 000000000..5e26fd33b --- /dev/null +++ b/docs/berkeley-upgrade/appendix.mdx @@ -0,0 +1,82 @@ +--- +title: Appendix +sidebar_label: Appendix +hide_title: true +description: Berkeley Upgrade Appendix +keywords: + - Berkeley + - upgrade + - appendix +--- + +# Appendix + +## Migration from o1labs/client-sdk to mina-signer + +The signing library `o1labs/client-sdk` was deprecated some time ago and will stop working after the Mina mainnet upgrade. All users should upgrade to use the [mina-signer](https://www.npmjs.com/package/mina-signer) library. + +Below you will find an example of how to use the `mina-signer` library. Please keep in mind the following: + +1. Make sure to adjust the `nonce` to the correct nonce on the account you want to use as "sender" +1. Update the `url` variable with an existing Mina Node GraphQL endpoint + +```javascript +import { Client } from 'mina-signer'; + +// create the client and define the keypair + +const client = new Client({ network: 'testnet' }); // Mind the `network` client configuration option + +const senderPrivateKey = 'EKFd1Gx...'; // Sender's private key +const senderPublicKey = 'B62qrDM...'; // Sender's public key, perhaps derived from the private key using `client.derivePublicKey(senderPrivateKey)`; + +// define and sign payment + +let payment = { + from: senderPublicKey, + to: 'B62qkBw...', // Recipient public key + amount: 100, + nonce: 1, + fee: 1000000, +}; + +const signedPayment = client.signPayment(payment, senderPrivateKey); + +// send payment to graphql endpoint + +const url = 'https://qanet.minaprotocol.network/graphql'; + +const sendPaymentMutationQuery = ` +mutation SendPayment($input: SendPaymentInput!, $signature: SignatureInput!) { + sendPayment(input: $input, signature: $signature) { + payment { + hash + } + } +} +`; +const graphQlVariables = { + input: signedPayment.data, + signature: signedPayment.signature, +}; +const body = JSON.stringify({ + query: sendPaymentMutationQuery, + variables: graphQlVariables, + operationName: 'SendPayment', +}); + +const paymentResponse = await fetch(url, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body +}); + +const paymentResponseJson = await paymentResponse.json(); +if (paymentResponse.ok) { + console.log(`Transaction hash: ${paymentResponseJson.data.sendPayment.payment.hash}`); +} else { + console.error(JSON.stringify(paymentResponseJson)); +} + + +``` diff --git a/docs/berkeley-upgrade/archive-migration/appendix.mdx b/docs/berkeley-upgrade/archive-migration/appendix.mdx new file mode 100644 index 000000000..53525c8cf --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/appendix.mdx @@ -0,0 +1,137 @@ +--- +title: Appendix +sidebar_label: Appendix +hide_title: true +description: archive node schema changes between Mainnet and Berkeley +keywords: + - Berkeley + - upgrade + - archive migration + - appendix + - mina archive node + - archive node +--- + +# Appendix + +## Archive node schema changes + +If you are using the Archive Node database directly for your system integrations, then you should understand all the changes that might impact your applications. The most important change is that the `balances` table in the Berkeley schema will no longer exist. In the new schema, it is replaced with the table `accounts_accessed` - from an application semantics point of view, the data in `accounts_accessed` is still the same. 
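+
+For example, an integration that used to read per-block balances from the `balances` table would now read them from `accounts_accessed` instead. A minimal sketch of such a lookup follows; the database name and the column names used here are assumptions, so confirm them against the Berkeley `create_schema.sql` before relying on them:
+
+```sh
+# Hypothetical lookup of the account states touched in one block.
+# Database and column names are assumptions; verify them against create_schema.sql.
+psql -d berkeley_archive -c "SELECT account_identifier_id, balance, nonce FROM accounts_accessed WHERE block_id = 1000;"
+```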
+ +In the Berkeley protocol, accounts can now have the same public key but a different token_id. This means accounts are identified by both their public key and token_id, not just the public key. Consequently, the foreign key for the account in all tables is account_identifier_id instead of public_key_id. + +### Schema differences +- **Removed Types** + - The options `create_token`, `create_account`, and `mint_tokens` have been removed from the user_command_type enumeration. +- Indexes Dropped + - We've removed several indexes from tables, this may affect how you search and organize data: + - `idx_public_keys_id` + - `idx_public_keys_value` + - `idx_snarked_ledger_hashes_value` + - `idx_blocks_id` + - `idx_blocks_state_hash` +- **Table Removed** + - The `balances` table is no longer available. +- **New Tables Added** + - We've introduced the following new tables: + - `tokens` + - `token_symbols` + - `account_identifiers` + - `voting_for` + - `protocol_versions` + - `accounts_accessed` + - `accounts_created` + - `zkapp_commands` + - `blocks_zkapp_commands` + - `zkapp_field` + - `zkapp_field_array` + - `zkapp_states_nullable` + - `zkapp_states` + - `zkapp_action_states` + - `zkapp_events` + - `zkapp_verification_key_hashes` + - `zkapp_verification_keys` + - `zkapp_permissions` + - `zkapp_timing_info` + - `zkapp_uris` + - `zkapp_updates` + - `zkapp_balance_bounds` + - `zkapp_nonce_bounds` + - `zkapp_account_precondition` + - `zkapp_accounts` + - `zkapp_token_id_bounds` + - `zkapp_length_bounds` + - `zkapp_amount_bounds` + - `zkapp_global_slot_bounds` + - `zkapp_epoch_ledger` + - `zkapp_epoch_data` + - `zkapp_network_precondition` + - `zkapp_fee_payer_body` + - `zkapp_account_update_body` + - `zkapp_account_update` + - `zkapp_account_update_failures` +- **Updated Tables** + - The following tables have been updated + - `timing_info` + - `user_commands` + - `internal_commands` + - `epoch_data` + - `blocks` + - `blocks_user_commands` + - `blocks_internal_commands` + +### Differences per table +- **`timing_info`** + - Removed columns: + - `token` + - `initial_balance` +- **`user_commands`** + - Removed columns: + - `fee_token` + - `token` +- **`internal_commands`** + - Removed columns: + - `token` + - Renamed column + - `command_type` to `type` +- **`epoch_data`** + - Added columns: + - `total_currency` + - `start_checkpoint` + - `lock_checkpoint` + - `epoch_length` +- **`blocks`** + - Added columns: + - `last_vrf_output` + - `min_window_density` + - `sub_window_densities` + - `total_currency` + - `global_slot_since_hard_fork` + - `global_slot_since_genesis` + - `protocol_version_id` + - `proposed_protocol_version_id` + - Removed column: + - `global_slot` +- **`blocks_user_commands`** + - Removed columns: + - `fee_payer_account_creation_fee_paid` + - `receiver_account_creation_fee_paid` + - `created_token` + - `fee_payer_balance` + - `source_balance` + - `receiver_balance` + - Added index: + - `idx_blocks_user_commands_sequence_no` +- **`blocks_internal_commands`** + - Removed columns: + - `receiver_account_creation_fee_paid` + - `receiver_balance` + - Added indexes: + - `idx_blocks_internal_commands_sequence_no` + - `idx_blocks_internal_commands_secondary_sequence_no` + +### Rosetta API new operations + +The Berkeley upgrade introduces two new operation types: +- `zkapp_fee_payer_dec` +- `zkapp_balance_change` diff --git a/docs/berkeley-upgrade/archive-migration/archive-migration-installation.mdx b/docs/berkeley-upgrade/archive-migration/archive-migration-installation.mdx new file mode 100644 
index 000000000..0a9398a87 --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/archive-migration-installation.mdx @@ -0,0 +1,151 @@ +--- +title: Installing the archive migration package +sidebar_label: Installing archive migration package +hide_title: false +description: Satisfying the archive migration prerequisites. +keywords: + - Berkeley + - upgrade + - archive migration + - installing + - prerequisites + - mina archive node + - archive node +--- + +The archive node Berkeley migration package is sufficient for satisfying the migration from Devnet/Mainnet to Berkeley. +However, it has some limitations. For example, the migration package does not migrate a non-canonical chain and it skips orphaned blocks that are not part of a canonical chain. + +To mitigate these limitations, the archive node maintenance package is available for use by archive node operators who want to maintain a copy of their Devnet and Mainnet databases for historical reasons. + +## Install with Google Cloud SDK + +The Google Cloud SDK installer does not always register a `google-cloud-sdk` apt package. The best way to install gsutil is using the apt repostory: + +```sh +curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg +echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list +sudo apt-get update && sudo apt-get install google-cloud-sdk +``` + +## Download the o1labs Mainnet archive database + +We strongly encourage you to perform the migration on your own data to preserve the benefits of decentralization. However, if you want to use the archive data that o1labs runs (for example, to bootstrap a new archive from SQL without waiting all day for the chain to download and replay), you can use the following steps: + +1. Download the Devnet/Mainnet archive data using cURL or gsutil: + + - cURL: + + For Devnet: + ```sh + curl https://storage.googleapis.com/mina-archive-dumps/devnet-archive-dump-{date}_0000.sql.tar.gz + ``` + + For Mainnet: + ```sh + curl https://storage.googleapis.com/mina-archive-dumps/mainnet-archive-dump-{date}_0000.sql.tar.gz + ``` + + To filter the dumps by date, replace `{date}` using the required `yyyy-dd-mm` format. For example, for March 15, 2024, use `2024-03-15`. + + :warning: The majority of backups have the `0000` suffix. If a download with that name suffix is not available, try incrementing it. For example, `0001`, `0002`, and so on. + + - gsutil: + + ```sh + gsutil cp gs://mina-archive-dumps/mainnet-archive-dump-2024-01-15* . + ``` + +2. Extract the tar package. + + ```sh + tar -xvzf {network}-archive-dump-{date}_0000.sql.tar.gz {network}-archive-dump-{date}_0000.sql + ``` + +3. Import the Devnet/Mainnet archive dump into the Berkeley database. + + Run this command at the database server: + + ```sh + psql -U {user} -f {network}-archive-dump-{date}_0000.sql + ``` + + The database in the dump **archive_balances_migrated** is created with the Devnet/Mainnet archive schema. + + Note: This database does not have any Berkeley changes. + +## Ensure the location of Google Cloud bucket with the Devnet/Mainnet precomputed blocks + +The recommended method is to perform migration on your own data to preserve the benefits of decentralization. + +`gsutil cp gs://mina_network_block_data/{network}-*.json .` + +:warning: Precomputed blocks for the Mainnet network take ~800 GB of disk space. 
Plan for adequate time to download these blocks. The Berkeley migration app downloads them incrementally only when needed. + +## Validate the Devnet/Mainnet database + +The correct Devnet/Mainnet database state is crucial for a successful migration. + +[Missing blocks](/berkeley-upgrade/archive-migration/mainnet-database-maintenance#missing-blocks) is one the most frequent issues when dealing with the Devnet/Mainnet archive. Although this step is optional, it is strongly recommended that you verify the archive condition before you start the migration process. + +To learn how to maintain archive data, see [Devnet/Mainnet database maintenance](/berkeley-upgrade/archive-migration/mainnet-database-maintenance). + +## Download the migration applications + +Migration applications are distributed as part of the archive migration Docker and Debian packages. + +Choose the packages that are appropriate for your environment. + +### Debian packages + +To get the Debian packages: + +``` +CODENAME=bullseye +CHANNEL=stable +VERSION=3.0.1-e848ecb + +echo "deb [trusted=yes] http://packages.o1test.net $CODENAME $CHANNEL" | tee /etc/apt/sources.list.d/mina.list +apt-get update +apt-get install --allow-downgrades -y "mina-archive-migration=$VERSION" +``` + +### Docker image + +To get the Docker image: + +``` +docker pull gcr.io/o1labs-192920/mina-archive-migration:3.0.1-e848ecb-{codename} +``` + +Where supported codenames are: +- bullseye +- focal +- buster + + +## Devnet/Mainnet genesis ledger + +The Mina Devnet/Mainnet genesis ledger is stored in GitHub in the `mina` repository under the `genesis_ledgers` subfolder. However, if you are already running a daemon that is connected to the Mina Mainnet or the Devnet network, you already have the genesis ledger locally. + +## Berkeley database schema files + +You can get the Berkeley schema files from different locations: + +- GitHub repository from the `berkeley` branch. + + Note: The `berkeley` branch can contain new updates regarding schema files, so always get the latest schema files instead of using an already downloaded schema. + +- Archive/Rosetta Docker from `berkeley` version + +### Example: Downloading schema sources from GitHub + + ```sh + wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/zkapp_tables.sql + + wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/create_schema.sql + ``` + +## Next steps + +Congratulations on completing the essential preparation and verification steps. You are now ready to perform the migration steps in [Migrating Devnet/Mainnet Archive to Berkeley Archive](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley). diff --git a/docs/berkeley-upgrade/archive-migration/archive-migration-prerequisites.mdx b/docs/berkeley-upgrade/archive-migration/archive-migration-prerequisites.mdx new file mode 100644 index 000000000..82c4cd68b --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/archive-migration-prerequisites.mdx @@ -0,0 +1,66 @@ +--- +title: Archive migration prerequisites +sidebar_label: Archive migration prerequisites +hide_title: false +description: Overview of the migration tools and requirements to successfully migrate the Devnet/Mainnet archive database. 
+keywords: + - Berkeley + - upgrade + - archive migration + - planning + - prerequisites + - mina archive node + - archive node +--- + +To successfully migrate the archive database into the Berkeley version of the Mina network, you must ensure that your environment meets the foundational requirements. + +## Migration host + +- PostgreSQL database for database server +- If you use Docker, then any of the supported OS by Mina (bullseye, focal, or buster) with at least 32 GB of RAM +- gsutil application from Google Cloud Suite in version 5 or later +- (Optional) Docker in version 23.0 or later + +## (Optional) Devnet/Mainnet database + +One of the most obvious prerequisites is a Mainnet database. + +If you don't have an existing database with Devnet/Mainnet archive data, you can always download it from the Google Cloud bucket. However, we strongly encourage you to perform migration on your own data to preserve the benefits of decentralization. +You can use any gsutil-compatible alternative to Google Cloud or a gsutil wrapper program. + +## (Optional) Google Cloud bucket with Devnet/Mainnet precomputed blocks + +Precomputed blocks are the JSON files that a correctly configured node updloads to the Google Cloud bucket. +The Devnet/Mainnet to Berkeley archive data migration requires access to precomputed blocks that are uploaded by daemons that are connected to the Devnet or Mainnet networks. + +The **berkeley-migration** app uses the gsutil app to download blocks. If you didn't store precomputed blocks during the first phase of migration, you can use the precomputed blocks provided by Mina Foundation. +However, it is strongly recommended that you perform migration on your own data to preserve the benefits of decentralization. + +For Devnet blocks: + +```sh +gsutil cp gs://mina_network_block_data/devnet-*.json . +``` + +For Mainnet blocks: + +```sh +gsutil cp gs://mina_network_block_data/mainnet-*.json . +``` + +:warning: Precomputed blocks for the Mainnet network take ~800 GB of disk space. Plan for adequate time to download these blocks. The Berkeley migration app downloads them incrementally only when needed. You can instead download a 100 GB bundle of only the canonical Mainnet blocks that unpacks into ~220 GB: + +```sh +gsutil cp gs://mina_network_block_data/mainnet-bundle-2024-03-20.tar.zst . ; tar -xf mainnet-bundle-2024-03-20.tar.zst +``` + +:warning: Precomputed blocks for the Devnet network take several hundred GBs. Plan for adequate time to download these blocks. Instead, you can download a ~50 GB bundle of only the canonical Devnet blocks that unpacks into ~90 GB: + +```sh +gsutil cp gs://mina_network_block_data/devnet-bundle-3NKRsRWBzmPR8Z8ZmJb4u8FLpnSkjRitUpKZzVkHp11QuwP5i839.tar.gz . ; tar -xf devnet-bundle-3NKRsRWBzmPR8Z8ZmJb4u8FLpnSkjRitUpKZzVkHp11QuwP5i839.tar.gz +``` + +These bundles are partial. Updated documentation with the new links and final data will be provided _after_ the Berkeley major upgrade is completed. + +The best practice is to collect precomputed blocks by yourself or by other third parties to preserve the benefits of decentralization. 
diff --git a/docs/berkeley-upgrade/archive-migration/debian-example.mdx b/docs/berkeley-upgrade/archive-migration/debian-example.mdx new file mode 100644 index 000000000..7a76cc0a5 --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/debian-example.mdx @@ -0,0 +1,79 @@ +--- +title: Example of Devnet Archive Migration (Debian) +sidebar_label: Debian example (Devnet) +hide_title: true +description: A copy-paste example of how to do a Devnet migration. +keywords: + - Berkeley + - upgrade + - archive migration + - mina archive node + - archive node +--- + +# Debian example + +You can follow these steps that can be copy-pasted directly into a fresh Debian 11. + +This example uses an altered two-step version of the [full simplified workflow](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley#simplified-approach). + +```sh +apt update && apt install lsb-release sudo postgresql curl wget gpg # debian:11 is surprisingly light + +curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg +echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list +sudo apt-get update && sudo apt-get install google-cloud-sdk + +sudo rm /etc/apt/sources.list.d/mina*.list +sudo echo "deb [trusted=yes] http://packages.o1test.net $(lsb_release -cs) unstable" | sudo tee /etc/apt/sources.list.d/mina.list +sudo apt-get update && sudo apt-get install --allow-downgrades -y mina-archive-migration=3.0.0-rc1-4277e73 + +mkdir -p mina-migration-workdir +cd mina-migration-workdir + +gsutil cp gs://mina_network_block_data/devnet-bundle-3NKRsRWBzmPR8Z8ZmJb4u8FLpnSkjRitUpKZzVkHp11QuwP5i839.tar.gz . +tar -xf devnet-bundle-3NKRsRWBzmPR8Z8ZmJb4u8FLpnSkjRitUpKZzVkHp11QuwP5i839.tar.gz + +wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/create_schema.sql +wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/zkapp_tables.sql + +# this next step is required only if you don't have an archive yet +createdb devnet_balances_migrated +createdb devnet_really_migrated + +psql -d devnet_really_migrated -f create_schema.sql + +gsutil cp gs://mina-archive-dumps/devnet-archive-dump-2024-03-22_0000.sql.tar.gz . +tar -xf devnet-archive-dump-2024-03-22_0000.sql.tar.gz +# the next step ensures you don't accidentally merge mainnet and devnet data +sed -i -e s/archive_balances_migrated/devnet_balances_migrated/g devnet-archive-dump-2024-03-22_0000.sql +psql -d devnet_balances_migrated -f devnet-archive-dump-2024-03-22_0000.sql + +mina-berkeley-migration-script initial \ + --genesis-ledger /var/lib/coda/devnet.json \ + --source-db postgres:///devnet_balances_migrated \ + --target-db postgres:///devnet_really_migrated \ + --blocks-batch-size 100 --blocks-bucket mina_network_block_data \ + --network devnet + +# now, do a final migration + +gsutil cp gs://mina-archive-dumps/devnet-archive-dump-2024-03-22_2050.sql.tar.gz . 
+tar -xf devnet-archive-dump-2024-03-22_2050.sql.tar.gz +# the next step ensures you don't accidentally merge mainnet and devnet data +sed -i -e s/archive_balances_migrated/devnet_balances_migrated/g devnet-archive-dump-2024-03-22_2050.sql +psql -d devnet_balances_migrated -f devnet-archive-dump-2024-03-22_2050.sql + +curl -O https://gist.githubusercontent.com/ghost-not-in-the-shell/cfe629a15702e7bae7b0c1415fe0d85e/raw/8d8bff2814c1d0c15deb70b388dea8a28a485184/genesis.json + +mina-berkeley-migration-script final \ + --genesis-ledger /var/lib/coda/devnet.json \ + --source-db postgres:///devnet_balances_migrated \ + --target-db postgres:///devnet_really_migrated \ + --blocks-batch-size 100 --blocks-bucket mina_network_block_data \ + --network devnet \ + --replayer-checkpoint migration-checkpoint-437195.json \ + --fork-state-hash 3NKoUJX87VrfmNAoUdqoWUykVvt66ztm5rzruDQR7ihwYaWsdJKq \ + --fork-config genesis.json \ + --prefetch-blocks +``` diff --git a/docs/berkeley-upgrade/archive-migration/docker-example.mdx b/docs/berkeley-upgrade/archive-migration/docker-example.mdx new file mode 100644 index 000000000..372010248 --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/docker-example.mdx @@ -0,0 +1,75 @@ +--- +title: Example of Mainnet Archive Migration (Docker) +sidebar_label: Docker example (Mainnet) +hide_title: true +description: A copy-paste example of how to do a Mainnet migration. +keywords: + - Berkeley + - upgrade + - archive migration + - mina archive node + - archive node +--- + +# Docker example + +You can follow these steps that can be copy-pasted directly into a OS running Docker. + +This example performs a Mainnet initial migration following the [debian-example](/berkeley-upgrade/archive-migration/debian-example) + +```sh + +# Create a new directory for the migration data +mkdir $(pwd)/mainnet-migration && cd $(pwd)/mainnet-migration + +# Create Network +docker network create mainnet + +# Launch Local Postgres Database +docker run --name postgres -d -p 5432:5432 --network mainnet -v $(pwd)/mainnet-migration/postgresql/data:/var/lib/postgresql/data -e POSTGRES_USER=mina -e POSTGRES_PASSWORD=minamina -d postgres:13-bullseye + +export PGHOST="localhost" +export PGPORT=5432 +export PGUSER="mina" +export PGPASSWORD="minamina" + +# Drop DBs if they exist +psql -c "DROP DATABASE IF EXISTS mainnet_balances_migrated;" +psql -c "DROP DATABASE IF EXISTS mainnet_really_migrated;" + +# Create DBs +psql -c "CREATE DATABASE mainnet_balances_migrated;" +psql -c "CREATE DATABASE mainnet_really_migrated;" + +# Retrieve Archive Node Backup +wget https://673156464838-mina-archive-node-backups.s3.us-west-2.amazonaws.com/mainnet/mainnet-archive-dump-2024-04-29_0000.sql.tar.gz +tar -xf mainnet-archive-dump-2024-04-29_0000.sql.tar.gz + +# Replace the database name in the dump +sed -i -e s/archive_balances_migrated/mainnet_balances_migrated/g mainnet-archive-dump-2024-04-29_0000.sql +psql mainnet_balances_migrated -f mainnet-archive-dump-2024-04-29_0000.sql + +# Prepare target +wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/create_schema.sql +wget https://raw.githubusercontent.com/MinaProtocol/mina/berkeley/src/app/archive/zkapp_tables.sql +psql mainnet_really_migrated -f create_schema.sql + +# Start migration +docker create --name mainnet-db-migration \ + -v $(pwd)/mainnet-migration:/data \ + --network mainnet gcr.io/o1labs-192920/mina-archive-migration:3.0.1-e848ecb-bullseye -- bash -c ' + wget 
http://673156464838-mina-genesis-ledgers.s3-website-us-west-2.amazonaws.com/mainnet/genesis_ledger.json; mina-berkeley-migration-script initial \ + --genesis-ledger genesis_ledger.json \ + --source-db postgres://mina:minamina@postgres:5432/mainnet_balances_migrated \ + --target-db postgres://mina:minamina@postgres:5432/mainnet_really_migrated \ + --blocks-batch-size 5000 \ + --blocks-bucket mina_network_block_data \ + --checkpoint-output-path /data/checkpoints/. \ + --precomputed-blocks-local-path /data/precomputed_blocks/. \ + --network mainnet' + +docker start mainnet-db-migration + +docker logs -f mainnet-db-migration + +``` diff --git a/docs/berkeley-upgrade/archive-migration/index.mdx b/docs/berkeley-upgrade/archive-migration/index.mdx new file mode 100644 index 000000000..e43dcf017 --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/index.mdx @@ -0,0 +1,48 @@ +--- +title: Archive Migration +sidebar_label: Archive Migration +hide_title: true +description: Berkeley upgrade is a major upgrade that requires all nodes in a network to upgrade to a newer version. It is not backward compatible. +keywords: + - Berkeley + - upgrade + - archive migration + - mina archive node + - archive node +--- + +# Archive Migration + +The Berkeley upgrade is a major upgrade that requires all nodes in a network to upgrade to a newer version. It is not backward compatible. + +A major upgrade occurs when there are major changes to the core protocol that require all nodes on the network to update to the latest software. + +## How to prepare for the Berkeley upgrade + +The Berkeley upgrade requires upgrading all nodes, including archive nodes. One of the required steps is to migrate archive databases from the current Mainnet format to Berkeley. This migration requires actions and efforts from node operators and exchanges. + +Learn about the archive data migration: + +- [Understanding the migration process](/berkeley-upgrade/archive-migration/understanding-archive-migration) +- [Prerequisites before migration](/berkeley-upgrade/archive-migration/archive-migration-prerequisites) +- [Suggested installation procedure](/berkeley-upgrade/archive-migration/archive-migration-installation) +- [How to perform archive migration](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley) + +Finally, see the shell script example that is compatible with a stock Debian 11 container: + +- [Worked Devnet Debian example using March 22 data](/berkeley-upgrade/archive-migration/debian-example) +- [Worked Mainnet Docker example using April 29 data](/berkeley-upgrade/archive-migration/docker-example) + +## What will happen with original Devnet/Mainnet data + +After the migration, you will have two databases: + +- The original Devnet/Mainnet database with small data adjustments (all pending blocks from last canoncial block until the fork block are converted to canoncial blocks) +- A new Berkeley database based on Devnet/Mainnet data, but: + - Without Devnet/Mainnet orphaned blocks + - Without pending blocks that are not in the canonical chain + - With all pending blocks on the canonical chain converted to canonical blocks + +There is no requirement to preserve the original Devnet/Mainnet database after migration. However, if for some reason you want to keep the Mainnet orphaned or non-canonical pending blocks, you can download the archive maintenance package for the Devnet/Mainnet database. 
+ +To learn about maintaining archive data, see [Devnet/Mainnet database maintenance](/berkeley-upgrade/archive-migration/mainnet-database-maintenance). diff --git a/docs/berkeley-upgrade/archive-migration/mainnet-database-maintenance.mdx b/docs/berkeley-upgrade/archive-migration/mainnet-database-maintenance.mdx new file mode 100644 index 000000000..5d8a7abde --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/mainnet-database-maintenance.mdx @@ -0,0 +1,196 @@ +--- +title: Devnet/Mainnet database maintenance +sidebar_label: Devnet/Mainnet database maintenance +hide_title: true +description: Steps to properly maintain correctness of archive database. +keywords: + - Berkeley + - upgrade + - archive migration + - planning + - prerequisites + - mina archive node + - archive node + - mainnet + - devnet + - database +--- + +# Devnet/Mainnet database maintenance + +After the Berkeley migration, the original Devnet/Mainnet database is not required unless you are interested in +preserving some aspect of the database that is lost during the migration process. + +Two databases exist after the successful migration: + +- The original Devnet/Mainnet database with small data adjustments: + - All pending blocks from last canoncial block until the fork block are converted to canonical blocks + +- A new Berkeley database based on Devnet/Mainnet data with these differences: + - Without Devnet/Mainnet orphaned blocks + - Without pending blocks that are not in the canonical chain + - With all pending blocks on the canonical chain converted to canonical blocks + +The o1Labs and Mina Foundation teams have consistently prioritized rigorous testing and the delivery of high-quality software products. + +However, being human entails the possibility of making mistakes. + +## Known issues + +Recently, a few mistakes were identified while working on a version of Mina used on Mainnet. These issues were promptly addressed; however, within the decentralized environment, archive nodes can retain historical issues despite our best efforts. + +Fixes are available for the following known issues: + +- **Missing or invalid nonces** - a historical issue skewed nonces in the `balances` table. Although the issue was resolved, you might still have nonces that are missing or invalid. +- **Incorrect ledger hashes** - a historical issue with the same root cause as 'Missing or invalid nonces'. However, the outcome is that a 'replayer run' operation of validating archive node against daemon ledger shows ledger mismatches and cannot pass problematic blocks. +- **Missing blocks** - This recurring missing blocks issue consistently poses challenges and is a source of concern for all archive node operators. This persistent challenge from disruptions in daemon node operations can potentially lead to incomplete block reception by archive nodes. This situation can compromise chain continuity within the archive database. + +To address these issues, install and use the special archive node maintenance package that includes fixes. + +## Installing the archive node maintenance package + +The package provides support for codenames: + +- bullseye +- buster +- focal + +The following steps describe only the bullseye package installation. Modify the steps as appropriate for your environment. 
+ +### Debian packages + +To get the Debian package: + +```sh +CODENAME=bullseye +CHANNEL=stable +VERSION=1.4.1 + +echo "deb [trusted=yes] http://packages.o1test.net $CODENAME $CHANNEL" | tee /etc/apt/sources.list.d/mina.list +apt-get update +apt-get install --allow-downgrades -y "mina-archive-maintenance=$VERSION" +``` + +### Docker image + +To get the Docker image: + +```sh +docker pull gcr.io/o1labs-192920/mina-archive-maintenance:1.4.1 +``` + +## Usage for missing or invalid nonces + +The replayer application was developed to verify the Devnet/Mainnet archive data. You must run the replayer application against your existing Devnet/Mainnet database to verify the blockchain state. + +To run the replayer application: + +```sh +mina-replayer \ + --archive-uri {db_connection_string} \ + --input-file reference_replayer_input.json \ + --output-file replayer_input_file.json \ + --checkpoint-interval 10000 \ + --fix-nonces \ + --set-nonces \ + --dump-repair-script +``` + +where: + +- `archive-uri` - connection string to the archive database +- `input-file` - JSON file that holds the archive database +- `output-file` - JSON file that will hold the ledger with auxiliary information, like global slot and blockchain height, which will be dumped on the last block +- `checkpoint-interval` - frequency of checkpoints expressed in blocks count +- `replayer_input_file.json` - JSON file constructed from the Devnet/Mainnet genesis ledger: + + ```sh + jq '.ledger.accounts' genesis_ledger.json | jq '{genesis_ledger: {accounts: .}}' > replayer_input_config.json + ``` + +- `--fix-nonces` - adjust nonces values while replaying transactions +- `--set-nonces` - set missing nonces while replaying transactions +- `--dump-repair-script` - path to the output SQL script that will contain all updates to nonces made during the replayer run that can be directly applied to other database instances that contain the same data with invalid nonces + +Running a replayer from scratch on a Devnet/Mainnet database can take up to a couple of days. The recommended best practice is to break the replayer into smaller parts by using the checkpoint capabilities of the replayer. +Additionally, running the replayer can exert significant demands on system resources that potentially affect the performance of the archive node. Because of the large resource requirements, we recommend that you execute the replayer in isolation from network connections, preferably within an isolated environment where the Devnet/Mainnet dumps can be imported. + +## Bad ledger hashes + +There is no ultimate fix for this issue because preserving historical ledger hashes is essential to the overall security of the Mina network. Even with this issue, you can validate archive data integrity. + +The replayer application has a built-in mechanism to skip errors when the `--continue-on-error` flag is enabled. +However, instead of skipping only blocks with bad ledger hashes, this mode skipped all of the problems with integrity. +With the new archive node maintenance package, you can run the replayer application without a special flag and to correctly handle the bad ledger hashes issue. 
+ +To run replayer: + +```sh +mina-replayer --archive-uri {db_connection_string} --input-file reference_replayer_input.json --output-file reference_replayer_output.json --checkpoint-interval 10000 +``` + +where: + +- `archive-uri` - connection string to the archive database +- `input-file` - JSON file that holds the archive database +- `output-file` - JSON file that will hold the ledger with auxiliary information, like global slot and blockchain height, which will be dumped on the last block +- `checkpoint-interval` - frequency of checkpoints expressed in blocks count +- `replayer_input_file.json` - JSON file constructed from the Devnet/Mainnet genesis ledger: + + ``` + jq '.ledger.accounts' genesis_ledger.json | jq '{genesis_ledger: {accounts: .}}' > replayer_input_config.json + ``` + +:warning: Running a replayer from scratch on a Devnet/Mainnet database can take up to a couple of days. The recommended best practice is to break the replayer into smaller parts by using the checkpoint capabilities of the replayer. + +:warning: You must run the replayer using the Mainnet version. You can run it from the Docker image at `gcr.io/o1labs-192920/mina-archive:2.0.0-039296a-bullseye`. + +## Missing blocks + +The daemon node unavailability can cause the archive node to miss some of the blocks. This recurring missing blocks issue consistently poses challenges. To address this issue, you can reapply missing blocks. + +If you uploaded the missing blocks to Google Cloud, the missing blocks can be reapplied from precomputed blocks to preserve chain continuity. + +1. To automatically verify and patch missing blocks, use the [download_missing_blocks.sh](https://raw.githubusercontent.com/MinaProtocol/mina/2.0.0berkeley_rc1/src/app/rosetta/download-missing-blocks.sh) script. + + The `download-missing-blocks` script uses `localhost` as the database host so the script assumes that psql is running on localhost on port 5432. Modify `PG_CONN` in `download_missing_block.sh` for your environment. + +1. Install the required `mina-archive-blocks` and `mina-missing-blocks-auditor` scripts that are packed in the `gcr.io/o1labs-192920/mina-archive:2.0.0-039296a-bullseye` Docker image. + +1. Export the `BLOCKS_BUCKET`: + + ```sh + export BLOCKS_BUCKET="https://storage.googleapis.com/my_bucket_with_precomputed_blocks" + ``` + +1. Run the `mina-missing-blocks-auditor` script from the database host: + + For Devnet: + + ```sh + download-missing-blocks.sh devnet {db_user} {db_password} + ``` + + For Mainnet: + + ```sh + download-missing-blocks.sh mainnet {db_user} {db_password} + ``` +### Using precomputed blocks from O1labs bucket + +O1labs maintains a Google bucket containing precomputed blocks from Devnet and Mainnet, accessible at https://storage.googleapis.com/mina_network_block_data/. + +Note: It's important to highlight that precomputed blocks for **Devnet** between heights `2` and `1582` have missing fields or incorrect transaction data. Utilizing these blocks to patch your Devnet archive database will result in failure. For those who rely on precomputed blocks from this bucket, please follow the outlined steps: + +1. Download additional blocks from `gs://mina_network_block_data/devnet-extensional-bundle.tar.gz`. +2. Install the necessary `mina-archive-blocks` script contained within the `gcr.io/o1labs-192920/mina-archive:2.0.0-039296a-bullseye` Docker image. +3. 
Execute mina-archive-blocks to import the extracted blocks from step 1 using the provided command: + + ```sh + mina-archive-blocks --archive-uri --extensional ./extensional/* + ``` +4. Proceed with patching your Devnet database with blocks having heights other than `2` to `1582` using the available precomputed blocks. + +## Next steps + +Now that you have completed the steps to properly maintain the correctness of the archive database, you are ready to perform the archive [migration process](/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley). diff --git a/docs/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley.mdx b/docs/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley.mdx new file mode 100644 index 000000000..304a0068b --- /dev/null +++ b/docs/berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley.mdx @@ -0,0 +1,632 @@ +--- +title: Migrating Devnet/Mainnet Archive to Berkeley Archive +sidebar_label: Performing archive migration +hide_title: true +description: Steps to properly migrate archives from Devnet/Mainnet to Berkeley. +keywords: + - Berkeley + - upgrade + - archive migration + - planning + - prerequisites + - mina archive node + - archive node +--- + +# Migrating Devnet/Mainnet Archive to Berkeley Archive + +Before you start the process to migrate your archive database from the current Mainnet or Devnet format to Berkeley, be sure that you: + +- [Understand the Archive Migration](/berkeley-upgrade/archive-migration/understanding-archive-migration) +- Meet the foundational requirements in [Archive migration prerequisites](/berkeley-upgrade/archive-migration/archive-migration-prerequisites) +- Have successfully installed the [archive migration package](/berkeley-upgrade/archive-migration/archive-migration-installation) + +## Migration process + +The Devnet/Mainnet migration can take up to a couple of days. +Therefore, you can achieve a successful migration by using three stages: + +- **Stage 1:** Initial migration + +- **Stage 2:** Incremental migration + +- **Stage 3:** Remainder migration + +Each stage has three migration phases: + +- **Phase 1:** Copying data and precomputed blocks from Devnet/Mainnet database using the **berkeley_migration** app. + +- **Phase 2:** Populating new Berkeley tables using the **replayer app in migration mode** + +- **Phase 3:** Additional validation for migrated database + +Review these phases and stages before you start the migration. + +## Simplified approach + +For convenience, use the `mina-berkeley-migration-script` app if you do not need to delve into the details of migration or if your environment does not require a special approach to migration. + +### Stage 1: Initial migration + +``` +mina-berkeley-migration-script \ + initial \ + --genesis-ledger ledger.json \ + --source-db postgres://postgres:postgres@localhost:5432/source \ + --target-db postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --blocks-batch-size 500 \ + --checkpoint-interval 10000 \ + --checkpoint-output-path . \ + --precomputed-blocks-local-path . \ + --network NETWORK +``` + +where: + +`-g | --genesis-ledger`: path to the genesis ledger file + +`-s | --source-db`: connection string to the database to be migrated + +`-t | --target-db`: connection string to the database that will hold the migrated data + +`-b | --blocks-bucket`: name of the precomputed blocks bucket. 
Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`-bs | --blocks-batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. A larger number, like 1000, can help speed up the migration process. + +`-n | --network`: network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json`. + +`-c | --checkpoint-output-path`: path to folder for replayer checkpoint files + +`-i | --checkpoint-interval`: frequency of dumping checkpoint expressed in blocks count + +`-l | --precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +The command output is the `migration-replayer-XXX.json` file required for the next run. + +### Stage 2: Incremental migration + +``` +mina-berkeley-migration-script \ + incremental \ + --genesis-ledger ledger.json \ + --source-db postgres://postgres:postgres@localhost:5432/source \ + --target-db postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --blocks-batch-size 500 \ + --network NETWORK \ + --checkpoint-output-path . \ + --checkpoint-interval 10000 \ + --precomputed-blocks-local-path . \ + --replayer-checkpoint migration-checkpoint-XXX.json +``` + +where: + +`-g | --genesis-ledger`: path to the genesis ledger file + +`-s | --source-db`: connection string to the database to be migrated + +`-t | --target-db`: connection string to the database that will hold the migrated data + +`-b | --blocks-bucket`: name of the precomputed blocks bucket. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`-bs | --blocks-batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. A larger number, like 1000, can help speed up migration process. + +`-n | --network`: network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json`. + +`-r | --replayer-checkpoint`: path to the latest checkpoint file `migration-checkpoint-XXX.json` + +`-c | --checkpoint-output-path`: path to folder for replayer checkpoint files + +`-i | --checkpoint-interval`: frequency of dumping checkpoint expressed in blocks count + +`-l | --precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +### Stage 3: Remainder migration + +``` +mina-berkeley-migration-script \ + final \ + --genesis-ledger ledger.json \ + --source-db postgres://postgres:postgres@localhost:5432/source \ + --target-db postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --blocks-batch-size 500 \ + --network NETWORK \ + --checkpoint-output-path . \ + --checkpoint-interval 10000 \ + --precomputed-blocks-local-path . \ + --replayer-checkpoint migration-checkpoint-XXX.json \ + -fc fork-genesis-config.json +``` + +where: + +`-g | --genesis-ledger`: path to the genesis ledger file + +`-s | --source-db`: connection string to the database to be migrated + +`-t | --target-db`: connection string to the database that will hold the migrated data + +`-b | --blocks-bucket`: name of the precomputed blocks bucket. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`-bs | --blocks-batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. 
A larger number, like 1000, can help speed up the migration process. + +`-n | --network`: network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json`. + +`-r | --replayer-checkpoint`: path to the latest checkpoint file `migration-checkpoint-XXX.json` + +`-c | --checkpoint-output-path`: path to folder for replayer checkpoint files + +`-i | --checkpoint-interval`: frequency of dumping checkpoint expressed in blocks count + +`-l | --precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +`-fc | --fork-config`: fork genesis config file is the new genesis config that is distributed with the new daemon and is published after the fork block is announced + +## Advanced approach + +If the simplified berkeley migration script is, for some reason, not suitable for you, it is possible to run the migration using the **berkeley_migration** and **replayer** apps without an interface the script provides. + +### Stage 1: Initial migration + +This first stage requires only the initial Berkeley schema, which is the foundation for the next migration stage. This schema populates the migrated database and creates an initial checkpoint for further incremental migration. + +- Inputs + - Unmigrated Devnet/Mainnet database + - Devnet/Mainnet genesis ledger + - Empty target Berkeley database with the schema created, but without any content + +- Outputs + - Migrated Devnet/Mainnet database to the Berkeley format from genesis up to the last canonical block in the original database + - Replayer checkpoint that can be used for incremental migration + +#### Phase 1: Berkeley migration app run + +``` +mina-berkeley-migration \ + --batch-size 1000 \ + --config-file ledger.json \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --precomputed-blocks-local-path . \ + --keep-precomputed-blocks \ + --network NETWORK +``` + +where: + +`--batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. A larger number, like 1000, can help speed up migration process. + +`--config-file`: path to the genesis ledger file + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +`--blocks-bucket`: name of the precomputed blocks bucket. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`--precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +`--keep-precomputed-blocks`: keep the precomputed blocks on-disk after the migration is complete + +`--network`: the network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +#### Phase 2: Replayer in migration mode run + +Replayer config must contain the Devnet/Mainnet ledger as the starting point. 
So first, you must prepare the replayer config file: + +``` + jq '.ledger.accounts' genesis_ledger.json | jq '{genesis_ledger: {accounts: .}}' > replayer_input_config.json +``` + +where: + + `genesis_ledger.json` is the genesis file from a daemon bootstrap on a particular network + +Then: +``` + mina-migration-replayer \ + --migration-mode \ + --archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --input-file replayer_input_config.json \ + --checkpoint-interval 10000 \ + --checkpoint-output-folder . +``` + +where: + +`--migration-mode`: flag for migration + +`--archive-uri`: connection string to the database that will hold the migrated data + +`--input-file`: path to the replayer input file, see below on how's created + +`replayer_input_config.json`: is a file constructed out of network genesis ledger: + ``` + jq '.ledger.accounts' genesis_ledger.json | jq '{genesis_ledger: {accounts: .}}' > replayer_input_config.json + ``` + +`--checkpoint-interval`: frequency of checkpoints file expressed in blocks count + +`--checkpoint-output-folder`: path to folder for replayer checkpoint files + +#### Phase 3: Validations + +Use the **berkeley_migration_verifier** app to perform checks for both the fully migrated and partially migrated databases. + +``` + mina-berkeley-migration-verifier \ + pre-fork \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated +``` + +where: + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +### Stage 2: Incremental migration + +After the initial migration, the data is migrated data up to the last canonical block. However, Devnet/Mainnet data is progressing with new blocks that must also be migrated again and again until the fork block is announced. + +:::info +Incremental migration can, and probably must, be repeated a couple of times until the fork block is announced by Mina Foundation. +Run the incremental migration multiple times with the latest Devnet/Mainnet database and the latest replayer checkpoint file. +::: + +- Inputs + - Latest Devnet/Mainnet database + - Devnet/Mainnet genesis ledger + - Replayer checkpoint from last run + - Migrated berkeley database from initial migration + +- Outputs + - Migrated Devnet/Mainnet database to the Berkeley format up to the last canonical block + - Replayer checkpoint which can be used for the next incremental migration + +### Phase 1: Berkeley migration app run + +``` +mina-berkeley-migration \ + --batch-size 1000 \ + --config-file ledger.json \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --precomputed-blocks-local-path . \ + --keep-precomputed-blocks \ + --network NETWORK +``` + +where: + +`--batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. A larger number, like 1000, can help speed up migration process. + +`--config-file`: path to the genesis ledger file + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +`--blocks-bucket`: name of the precomputed blocks bucket. 
Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`--precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +`--keep-precomputed-blocks`: keep the precomputed blocks on-disk after the migration is complete + +`--network`: the network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +#### Phase 2: Replayer in migration mode run + +``` + mina-migration-replayer \ + --migration-mode \ + --archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --input-file replayer-checkpoint-XXX.json \ + --checkpoint-interval 10000 \ + --checkpoint-output-folder . +``` + +where: + +`--migration-mode`: flag for migration + +`--archive-uri`: connection string to the database that will hold the migrated data + +`--input-file`: path to the latest checkpoint file `replayer-checkpoint-XXX.json` + +`replayer-checkpoint-XXX.json`: the latest checkpoint generated from the previous migration + +`--checkpoint-interval`: frequency of checkpoints file expressed in blocks count + +`--checkpoint-output-folder`: path to folder for replayer checkpoint files + +Incremental migration can be run continuously on top of the initial migration or last incremental until the fork block is announced. + +#### Phase 3: Validations + +Use the **berkeley_migration_verifier** app to perform checks for both the fully migrated and partially migrated database. + +``` + mina-berkeley-migration-verifier \ + pre-fork \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated +``` + +where: + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +Note that: you can run incremental migration continuously on top of the initial migration or the last incremental until the fork block is announced. + +### Stage 3: Remainder migration + +When the fork block is announced, you must tackle the remainder migration. This is the last migration run +you need to perform. In this stage, you close the migration cycle with the last migration of the remainder blocks between the current last canonical block and the fork block (which can be pending, so you don't need to wait 290 blocks until it would become canonical). +You must use `--fork-state-hash` as an additional parameter to the **berkeley-migration** app. + +- Inputs + - Latest Devnet/Mainnet database + - Devnet/Mainnet genesis ledger + - Replayer checkpoint from last run + - Migrated Berkeley database from last run + - Fork block state hash + +- Outputs + - Migrated devnet/mainnet database to berkeley up to fork point + - Replayer checkpoint which can be used for the next incremental migration + +:::info +The migrated database output from this stage of the final migration is required to initialize your archive nodes on the upgraded network. 
+::: + +#### Phase 1: Berkeley migration app run + +``` +mina-berkeley-migration \ + --batch-size 1000 \ + --config-file ledger.json \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --blocks-bucket mina_network_block_data \ + --precomputed-blocks-local-path \ + --keep-precomputed-blocks \ + --network NETWORK \ + --fork-state-hash {fork-state-hash} +``` + +where: + +`--batch-size`: number of precomputed blocks to be fetched at one time from Google Cloud. A larger number, like 1000, can help speed up migration process. + +`--config-file`: path to the genesis ledger file + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +`--blocks-bucket`: name of the precomputed blocks bucket. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`--precomputed-blocks-local-path`: path to folder for on-disk precomputed blocks location + +`--keep-precomputed-blocks`: keep the precomputed blocks on-disk after the migration is complete + +`--network`: the network name (`devnet` or `mainnet`) when determining precomputed blocks. Precomputed blocks are assumed to be named with format: `{network}-{height}-{state_hash}.json` + +`--fork-state-hash`: fork state hash + +:::info +When you run the **berkeley-migration** app with fork-state-hash, there is no requirement for the fork state block to be canonical. +The tool automatically converts all pending blocks in the subchain, including the fork block, to canonical blocks. +::: + +#### Phase 2: Replayer in migration mode run + +``` +mina-migration-replayer \ + --migration-mode \ + --archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --input-file replayer-checkpoint-XXX.json \ + --checkpoint-interval 10000 \ + --checkpoint-output-folder . +``` + +where: + +`--migration-mode`: flag for migration + +`--archive-uri`: connection string to the database that will hold the migrated data + +`--input-file`: path to the latest checkpoint file `replayer-checkpoint-XXX.json` from stage 1 + +`replayer-checkpoint-XXX.json`: the latest checkpoint generated from the previous migration + +`--checkpoint-interval`: frequency of checkpoints file expressed in blocks count + +`--checkpoint-output-folder`: path to folder for replayer checkpoint files + +#### Phase 3: Validations + +Use the **berkeley_migration_verifier** app to perform checks for both the fully migrated and partially migrated databases. 
+ +``` + mina-berkeley-migration-verifier \ + post-fork \ + --mainnet-archive-uri postgres://postgres:postgres@localhost:5432/source \ + --migrated-archive-uri postgres://postgres:postgres@localhost:5432/migrated \ + --fork-config-file fork_genesis_config.json \ + --migrated-replayer-output replayer-checkpoint-XXXX.json +``` + +where: + +`--mainnet-archive-uri`: connection string to the database to be migrated + +`--migrated-archive-uri`: connection string to the database that will hold the migrated data + +`--migrated-replayer-output`: path to the latest checkpoint file `replayer-checkpoint-XXX.json` + +`--fork-config`: fork genesis config file is the new genesis config that is distributed with the new daemon and is published after the fork block is announced + +### Example migration steps using Mina Foundation data for Devnet using Debian + +See: [Worked example using March 22 data](/berkeley-upgrade/archive-migration/debian-example) + +### Example migration steps using Mina Foundation data for Mainnet using Docker + +See: [Worked example using March 22 data](/berkeley-upgrade/archive-migration/docker-example) + +## How to verify a successful migration + +o1Labs and Mina Foundation make every effort to provide reliable tools of high quality. However, it is not possible to eliminate all errors and test all possible Mainnet archive variations. +All important checks are implemented in the `mina-berkeley-migration-verifier` application. +However, you can use the following checklist if you want to perform the checks manually: + +1. All transaction (user command and internal command) hashes are left intact. + + Verify that the `user_command` and `internal_command` tables have the Devnet/Mainnet format of hashes. For example, `CkpZirFuoLVV...`. + +2. Parent-child block relationship is preserved + + Verify that a given block in the migrated archive has the same parent in the Devnet/Mainnet archive (`state_hash` and `parent_hash` columns) that was used as input. + +3. Account balances remain the same + + Verify the same balance exists for a given block in Mainnet and the migrated databases. + +## Tips and tricks + +We are aware that the migration process can be very long (a couple of days). Therefore, we encourage you to use cron jobs that migrate data incrementally. +The cron job requires access to Google Cloud buckets (or other storage): + +- A bucket to store migrated-so-far database dumps +- A bucket to store checkpoint files + +We are tightly coupled with Google Cloud infrastructure due to the precomputed block upload mechanism. +This is why we are using also buckets for storing dumps and checkpoint. However, you do not have to use Google Cloud for other things than +precomputed blocks. With configuration, you can use any gsutil-compatible storage backend (for example, S3). + +Before running the cron job, upload an initial database dump and an initial checkpoint file. + +To create the files, run these steps locally: + +1. Download a Devnet/Mainnet archive dump and load it into PostgreSQL. +2. Create an empty database using the new archive schema. +3. Run the **berkeley-migration** app against the Devnet/Mainnet and new databases. +4. Run the **replayer app in migration mode** with the `--checkpoint-interval` set to a suitable value (perhaps 100) and start with the original Devnet/Mainnet ledger in the input file. +5. Use pg_dump to dump the migrated database and upload it. +6. Upload the most recent checkpoint file. + +The cron job performs the same steps in an automated fashion: + +1. 
+The cron job performs the same steps in an automated fashion:
+
+1. Pulls the latest Devnet/Mainnet archive dump and loads it into PostgreSQL.
+2. Pulls the latest migrated database and loads it into PostgreSQL.
+3. Pulls the latest checkpoint file.
+4. Runs the **berkeley-migration** app against the two databases.
+5. Runs the **replayer app in migration mode** using the downloaded checkpoint file; set the checkpoint interval to a smaller value (perhaps 50) because there are typically only 200 or so blocks in a day.
+6. Uploads the migrated database.
+7. Uploads the most recent checkpoint file.
+
+Be sure to monitor the cron job for errors.
+
+Just before the Berkeley upgrade, migrate the last few blocks by running locally:
+
+1. Download the Devnet/Mainnet archive data directly from the k8s PostgreSQL node (not from the archive dump), and load it into PostgreSQL.
+2. Download the most recent migrated database and load it into PostgreSQL.
+3. Download the most recent checkpoint file.
+4. Run the **berkeley-migration** app against the two databases.
+5. Run the **replayer app in migration mode** using the most recent checkpoint file.
+
+It is worthwhile to perform these last steps as a dry run to make sure all goes well. You can run these steps as many times as needed.
+
+## Known migration problems
+
+Remember that rerunning after a crash is always possible.
+After resolving any of the issues below, you can rerun the process and the migration
+will continue from its last position.
+
+#### Async was unable to add a file descriptor to its table of open file descriptors
+For example:
+
+```
+ ("Async was unable to add a file descriptor to its table of open file descriptors"
+ (file_descr 18)
+ (error
+  "Attempt to register a file descriptor with Async that Async believes it is already managing.")
+ (backtrace
+ ......
+```
+A remedy is to lower the `--block-batch-size` parameter to a value of 500 or less.
+
+#### Map.find_exn: not found
+For example:
+```
+(monitor.ml.Error
+ (Not_found_s
+  ("Map.find_exn: not found"
+....
+```
+
+Usually, this error means that there is a gap in the canonical chain. To fix it,
+ensure that the missing-block-auditor run is successful.
+
+#### Yojson.Json_error .. Unexpected end of input
+
+For example:
+
+```
+(monitor.ml.Error
+ ("Yojson.Json_error(\"Line 1, bytes 1003519-1003520:\\nUnexpected end of input\")")
+ ("Raised at Yojson.json_error in file \"common.ml\", line 5, characters 19-39"
+  "Called from Yojson.Safe.__ocaml_lex_read_json_rec in file \"lib/read.mll\", line 215, characters 28-52"
+...
+```
+
+This issue is caused by an invalid precomputed block. Deleting the downloaded precomputed blocks should resolve it.
+
+#### Error querying db, error: Request to ... failed: ERROR: column \"type\" does not exist
+
+This error means you provided the migrated schema as the source when invoking the script or the berkeley-migration app.
+
+#### Poor performance of migration when accessing remote database
+
+We conducted migration tests with both a local database and a remote database (RDS).
+The migration using the local database runs significantly faster. We strongly suggest using an offline database installed locally.
+
+#### ERROR: out of shared memory
+
+```
+(monitor.ml.Error (Failure "Error querying for user commands with id 1686617, error Request to postgresql://user:pwd@host:port/db failed: ERROR: out of shared memory
+\nHINT: You might need to increase max_pred_locks_per_transaction
+```
+
+The solution is either to increase the `max_pred_locks_per_transaction` setting in the PostgreSQL database (see the example below), or to isolate the database from Mainnet traffic (for example, by exporting a dump from the live database and importing it in an isolated environment).
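+
+For the first option, a minimal sketch of raising the limit is shown below; the value `256` is only an example, and this parameter takes effect only after PostgreSQL is restarted:
+
+```
+# Raise the predicate-lock limit (example value), then restart PostgreSQL to apply it.
+psql -U postgres -c "ALTER SYSTEM SET max_pred_locks_per_transaction = 256;"
+sudo systemctl restart postgresql
+```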
+#### Berkeley migration app is consuming all of my resources
+
+When running a full migration, you may run into memory leaks that prevent you from cleanly performing the migration in one pass. A machine with 64 GB of RAM can freeze after ~40k migrated blocks. Every 200 blocks inserted into the database increases memory usage by 4-10 MB.
+
+A potential workaround is to split the migration into smaller parts using cron jobs or automation scripts.
+
+## FAQ
+
+### Migrated database is missing orphaned blocks
+
+By design, the Berkeley migration omits orphaned blocks and, by default, migrates only canonical (and pending, if set up correctly) blocks.
+
+### Replayer in migration mode overwrites my old checkpoints
+
+By default, the replayer dumps the checkpoint to the current folder. All checkpoint files have a similar format:
+
+`replayer-checkpoint-{number}.json`.
+
+To prevent overwriting old checkpoints, use the `--checkpoint-output-folder` and `--checkpoint-file-prefix` parameters to change the output folder and prefix.
diff --git a/docs/berkeley-upgrade/archive-migration/understanding-archive-migration.mdx b/docs/berkeley-upgrade/archive-migration/understanding-archive-migration.mdx
new file mode 100644
index 000000000..d80a53eaa
--- /dev/null
+++ b/docs/berkeley-upgrade/archive-migration/understanding-archive-migration.mdx
@@ -0,0 +1,56 @@
+---
+title: Understanding archive migration
+sidebar_label: Understanding archive migration
+hide_title: true
+description: Overview of the migration tools and requirements to successfully migrate Devnet/Mainnet archive database.
+keywords:
+  - Berkeley
+  - upgrade
+  - archive migration
+  - planning
+  - prerequisites
+  - mina archive node
+  - archive node
+---
+
+# Understanding the Archive Migration
+
+You can reduce risk and effort by reading all of the archive migration documentation in its entirety.
+
+## Archive node migration overview
+
+Archive node migration is a crucial part of the Berkeley upgrade. The current Devnet and Mainnet database format must be converted to the Berkeley format to preserve historical data and ensure archive node chain continuity.
+
+For this purpose, the o1Labs and Mina Foundation teams prepared a migration package.
+
+### Archive node Berkeley migration package
+
+This package contains the applications required to migrate existing Devnet and Mainnet databases to the new Berkeley schema, plus a usability script:
+
+1. **berkeley-migration**
+
+   Use the **berkeley-migration** app to migrate as much data as possible from the Devnet/Mainnet database and download precomputed blocks to get the window density data.
+
+   This app runs against the Devnet/Mainnet database and the new Berkeley database.
+
+2. **replayer app in migration mode**
+
+   The existing replayer application is enhanced with a new migration mode. Use the **replayer app in migration mode** to analyze the transactions in the partially migrated database (resulting from running the **berkeley-migration** app) and populate the `accounts_accessed` and `accounts_created` tables. This app also performs the checks done by the standard replayer, but does not check ledger hashes because the Berkeley ledger has a greater depth, which results in different hashes.
+
+   This app runs only against the new archive database.
+3. **berkeley-migration-verifier**
+
+   Use the **berkeley-migration-verifier** verification app to determine whether the migration (even an incomplete one) was successful. The app runs SQL validations against the migrated database.
+
+4. **end-to-end migration script**
+
+   This shell script wraps all phases and stages of the migration into a single script. It is provided purely for node operators' convenience and is equivalent to running the **berkeley-migration** app, the **replayer app in migration mode**, and the **berkeley-migration-verifier** app in the correct order.
+
+### Incrementality
+
+Use the **berkeley-migration** and **replayer** apps incrementally so that you can migrate part of the Devnet/Mainnet database and, as new blocks are added to the Devnet/Mainnet databases, migrate the new data.
+
+To achieve this incrementality, the **berkeley-migration** app looks at the migrated database and determines the most recent migrated block. It continues the migration starting at the next block in the Devnet/Mainnet data. The **replayer app in migration mode** uses the checkpoint mechanism already in place for the replayer. A checkpoint file indicates the global slot since genesis at which to start the replay and the ledger to use for that replay. New checkpoint files are written as the replay proceeds.
+
+To take advantage of the incrementality, run a cron job that migrates a day's worth of data at a time (or some other interval). With the cron job in place, at the time of the actual Berkeley upgrade, you will need to migrate only a small amount of data.
diff --git a/docs/berkeley-upgrade/flags-configs.mdx b/docs/berkeley-upgrade/flags-configs.mdx
new file mode 100644
index 000000000..878a8e71a
--- /dev/null
+++ b/docs/berkeley-upgrade/flags-configs.mdx
@@ -0,0 +1,137 @@
+---
+title: Post-Upgrade Flags and Configurations for Mainnet
+sidebar_label: Post-Upgrade Flags and Configurations
+hide_title: true
+description: Post-Upgrade Flags and Configurations for Mainnet
+keywords:
+  - Berkeley
+  - upgrade
+  - flags
+  - configurations
+---
+
+# Post-Upgrade Flags and Configurations for Mainnet
+
+Please refer to the Berkeley node release notes [here](https://github.com/MinaProtocol/mina/releases/tag/2.0.0).
+
+### Network details
+
+```
+Chain ID
+5f704cc0c82e0ed70e873f0893d7e06f148524e3f0bdae2afb02e7819a0c24d1
+
+Git SHA-1
+039296a260080ed02d0d0750d185921f030b6c9c
+
+Seed List
+https://bootnodes.minaprotocol.com/networks/mainnet.txt
+
+Node build
+https://github.com/MinaProtocol/mina/releases/tag/2.0.0
+```
+
+### Block Producers
+
+Start your node post-upgrade in Mainnet with the flags and environment variables listed below.
+
+```
+mina daemon
+--block-producer-key
+--config-directory
+--file-log-rotations 500
+--generate-genesis-proof true
+--libp2p-keypair
+--log-json
+--peer-list-url https://bootnodes.minaprotocol.com/networks/mainnet.txt
+
+ENVIRONMENT VARIABLES
+RAYON_NUM_THREADS=6
+MINA_LIBP2P_PASS
+MINA_PRIVKEY_PASS
+```
+
+### SNARK Coordinator
+Configure your node post-upgrade in Mainnet with the flags and environment variables listed below.
+
+```
+mina daemon
+--config-directory
+--enable-peer-exchange true
+--file-log-rotations 500
+--libp2p-keypair
+--log-json
+--peer-list-url https://bootnodes.minaprotocol.com/networks/mainnet.txt
+--run-snark-coordinator
+--snark-worker-fee 0.001
+--work-selection [seq|rand]
+
+ENVIRONMENT VARIABLES
+MINA_LIBP2P_PASS
+```
+
+### SNARK Workers
+Connect to a SNARK Coordinator node if required and run with the following flags.
+```
+mina internal snark-worker
+--proof-level full
+--shutdown-on-disconnect false
+--daemon-address
+
+ENVIRONMENT VARIABLES
+RAYON_NUM_THREADS=8
+```
+
+### Archive Node
+Running an Archive Node involves setting up a non-block-producing node and a PostgreSQL database configured with specific flags and environment variables.
+
+For more information about running archive nodes, see [Archive Node](/node-operators/archive-node).
+
+The PostgreSQL database requires two schemas:
+1. The PostgreSQL schema used by the Mina archive database: available in the [release notes](https://github.com/MinaProtocol/mina/releases/tag/2.0.0)
+2. The PostgreSQL schema extensions to support zkApp commands: available in the [release notes](https://github.com/MinaProtocol/mina/releases/tag/2.0.0)
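+
+As an illustration, creating the archive database and loading both schemas could look like the sketch below; the database name and the SQL file names are placeholders for the files linked in the release notes:
+
+```
+# Create the archive database and load the two schemas (names are placeholders).
+createdb -U postgres archive
+psql -U postgres -d archive -f create_schema.sql   # base archive schema
+psql -U postgres -d archive -f zkapp_tables.sql    # zkApp command extensions
+```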
+The non-block-producing node must be configured with the following flags:
+```
+mina daemon
+--archive-address :
+--config-directory
+--enable-peer-exchange true
+--file-log-rotations 500
+--generate-genesis-proof true
+--libp2p-keypair
+--log-json
+--peer-list-url https://bootnodes.minaprotocol.com/networks/mainnet.txt
+
+ENVIRONMENT VARIABLES
+MINA_LIBP2P_PASS
+```
+
+This non-block-producing node connects to the archive node using the address and port specified in the `--archive-address` flag.
+
+The **archive node** command looks like this:
+
+```
+mina-archive run
+--metrics-port
+--postgres-uri postgres://:@:/
+--server-port 3086
+--log-json
+--log-level DEBUG
+```
+
+### Rosetta API
+Once you have the Archive Node stack up and running, start the Rosetta API Docker image with the following command:
+
+```
+docker run \
+--name rosetta --rm \
+-p 3088:3088 \
+--entrypoint '' \
+gcr.io/o1labs-192920/mina-rosetta: \
+/usr/local/bin/mina-rosetta \
+--archive-uri "${PG_CONNECTION_STRING}" \
+--graphql-uri "${GRAPHQL_URL}" \
+--log-json \
+--log-level ${LOG_LEVEL} \
+--port 3088
+```
diff --git a/docs/berkeley-upgrade/requirements.mdx b/docs/berkeley-upgrade/requirements.mdx
new file mode 100644
index 000000000..4489e6bb1
--- /dev/null
+++ b/docs/berkeley-upgrade/requirements.mdx
@@ -0,0 +1,67 @@
+---
+title: Requirements
+sidebar_label: Requirements
+hide_title: true
+description: Berkeley upgrade is a major upgrade that requires all nodes in a network to upgrade to a newer version. It is not backward compatible.
+keywords:
+  - Berkeley
+  - upgrade
+  - hardware requirements
+---
+
+# Requirements
+
+## Hardware Requirements
+
+Please note the following hardware requirements for each node type after the upgrade:
+
+| Node Type | Memory | CPU | Storage | Network |
+|--|--|--|--|--|
+| Mina Daemon Node | 32 GB RAM | 8-core processor with the BMI2 and AVX CPU instruction sets (required) | 64 GB | 1 Mbps Internet Connection |
+| SNARK Coordinator | 32 GB RAM | 8-core processor | 64 GB | 1 Mbps Internet Connection |
+| SNARK Worker | 32 GB RAM | 4 cores/8 threads per worker with the BMI2 and AVX CPU instruction sets (required) | 64 GB | 1 Mbps Internet Connection |
+| Archive Node | 32 GB RAM | 8-core processor | 64 GB | 1 Mbps Internet Connection |
+| Rosetta API standalone Docker image | 32 GB RAM | 8-core processor | 64 GB | 1 Mbps Internet Connection |
+| Mina Seed Node | 64 GB RAM | 8-core processor | 64 GB | 1 Mbps Internet Connection |
+
+## Mina Daemon Requirements
+
+### Installation
+
+:::caution
+
+If you have `mina-generate-keypair` installed, you will need to first `sudo apt remove mina-generate-keypair` before installing `mina-mainnet=3.0.0-93e0279`.
+The `mina-generate-keypair` binary is now installed as part of the mina-mainnet package.
+
+:::
+
+### IP and Port configuration
+
+**IP:**
+
+By default, the Mina Daemon will attempt to retrieve its public IP address from the system. If you are running the node behind a NAT or firewall, you can set the `--external-ip` flag to specify the public IP address.
+
+**Port:**
+
+Nodes must expose a port publicly to communicate with other peers.
+By default, Mina uses port `8302`, which is the default libp2p port.
+
+You can use a different port by setting the `--external-port` flag.
+
+### Node Auto-restart
+
+Ensure your nodes are set to restart automatically after a crash. For guidance, refer to the [auto-restart instructions](/node-operators/block-producer-node/connecting-to-the-network#start-a-mina-node-with-auto-restart-flows-using-systemd).
+
+## Seed Peer Requirements
+
+### Generation of libp2p keypair
+
+To ensure connectivity across the network, it is essential that all seed nodes start with the **same** `libp2p` keypair.
+This consistency allows other nodes in the network to connect reliably.
+Although the same libp2p keys can be reused from before the upgrade, if you need to manually generate new libp2p keys, use the following command:
+
+```
+mina libp2p generate-keypair --privkey-path
+```
+
+For more information, see [generating key pairs](/node-operators/generating-a-keypair) on Mina Protocol.
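+
+A minimal sketch of how the generated keypair is typically used is shown below; the key path and password are placeholders, and the `--libp2p-keypair` flag and `MINA_LIBP2P_PASS` variable are the ones listed in the post-upgrade configurations:
+
+```
+# Password used to encrypt the libp2p keypair (placeholder value).
+export MINA_LIBP2P_PASS='changeme'
+
+# Generate the keypair once, then reuse it across restarts (path is a placeholder).
+mina libp2p generate-keypair --privkey-path /home/mina/keys/libp2p-keys/key
+
+# Start the daemon with the same keypair so the seed keeps a stable peer identity.
+mina daemon \
+  --libp2p-keypair /home/mina/keys/libp2p-keys/key \
+  --peer-list-url https://bootnodes.minaprotocol.com/networks/mainnet.txt
+```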
diff --git a/docs/berkeley-upgrade/upgrade-steps.mdx b/docs/berkeley-upgrade/upgrade-steps.mdx
new file mode 100644
index 000000000..767c04f2e
--- /dev/null
+++ b/docs/berkeley-upgrade/upgrade-steps.mdx
@@ -0,0 +1,132 @@
+---
+title: Upgrade Steps
+sidebar_label: Upgrade Steps
+hide_title: true
+description: Detailed upgrade steps and operators' tasks
+keywords:
+  - Berkeley
+  - upgrade
+  - Detailed upgrade steps and operators' tasks
+---
+
+# Upgrade Steps
+
+Below is a detailed description of all the upgrade steps and what each node operator type should do in each step.
+
+## Pre-Upgrade
+
+- During the Pre-Upgrade phase, node operators should prepare for the upcoming upgrade. The most important steps are:
+  - Review the [upgrade readiness checklist](https://docs.google.com/document/d/1rTmJvyaK33dWjJXMOSiUIGgf8z7turxolGHUpVHNxEU/edit#heading=h.2hqz0ixwjk3f) to confirm they have covered the required steps.
+  - Upgrade their nodes to the 1.4.1 stable version.
+  - Ensure servers are provisioned to run Berkeley nodes, meeting the new hardware requirements.
+  - Upgrade their nodes to node version [2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0), with stop slots, when this version becomes available.
+  - Start the archive node initial migration if they run archive nodes and wish to perform the migration in a decentralized manner.
+
+**Please note:** a simplified Node Status service will be part of the upgrade tooling and enabled by default in the Pre-Upgrade release with the stop slots ([2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0)). This feature will allow for a safe upgrade by monitoring the amount of upgraded active stake. Only non-sensitive data will be reported. If operators are not comfortable sharing their node version, they have the option to disable the node version reports by using the appropriate node flag: `--node-stats-type none`.
+
+### Block Producers and SNARK Workers
+1. Review the [upgrade readiness checklist](https://docs.google.com/document/d/1rTmJvyaK33dWjJXMOSiUIGgf8z7turxolGHUpVHNxEU).
+1. Provision servers that meet the minimum hardware requirements, including the new 32GB RAM requirement and support for _AVX_ and _BMI2_ CPU instructions.
+1. Upgrade nodes to node version [2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0) ([2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0) has built-in stop slots).
+
+### Archive Node Operators and Rosetta Operators
+- Two migration processes will be available to archive node operators: _trustless_ and _trustful_. If the archive node operator wants to perform the _trustless_ migration, they should follow these steps; otherwise, proceed to the Upgrade phase. The _trustful_ migration will rely on o1Labs database exports and Docker images to migrate the archive node database and doesn't require any actions at this stage.
+
+1. Trustless migration:
+   - Perform the initial archive node migration. Since Mainnet is a long-lived network, the initial migration process can take up to 48 hours, depending on your server specification and infrastructure.
+   - If your Mina Daemon, archive node, or PostgreSQL database run on different machines, the migration performance will be greatly impacted.
+   - For more information on the archive node migration process, please refer to the [Archive Migration](/berkeley-upgrade/archive-migration) section.
+2. Upgrade all nodes to the latest stable version [2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0).
+3. Provision servers that meet the minimum hardware requirements, primarily the new 32GB RAM requirement.
+4. Upgrade their nodes to the version that includes built-in stop slots before the pre-defined _stop-transaction-slot_.
+
+### Exchanges
+1. Make sure to test your system integration with Berkeley's new features. Pay special attention to:
+   - If you use the **o1labs/client-sdk** library to sign transactions, you should switch to **mina-signer** https://www.npmjs.com/package/mina-signer. **o1labs/client-sdk was deprecated some time ago and will be unusable** once the network has been upgraded. Please review the migration instructions in the [Appendix](/berkeley-upgrade/appendix).
+   - If you rely on the archive node SQL database tables, please review the schema changes in the [Archive Migration Appendix](/berkeley-upgrade/archive-migration/appendix).
+2. Upgrade all nodes to the latest stable version [2.0.0](https://github.com/MinaProtocol/mina/releases/tag/2.0.0).
+3. Provision servers that meet the minimum hardware requirements, particularly the new 32GB RAM requirement.
+4. Upgrade your nodes to the version that includes built-in stop slots before the pre-defined _stop-transaction-slot_.
+
+***
+
+## State Finalization
+- Between the predefined _stop-transaction-slot_ and _stop-network-slot_, a stabilization period of 100 slots will occur. During this phase, the network consensus will not accept new blocks with transactions in them, including coinbase transactions. The state finalization period ensures all nodes reach a consensus on the latest network state before the upgrade.
+- During the state finalization slots, it is crucial to maintain a high block density. Therefore, block producers and SNARK workers shall continue running their nodes to support the network's stability and security.
+- Archive nodes should also continue to run to ensure finalized blocks are in the database and can be migrated, preserving the integrity and accessibility of the network's history.
+
+### Block Producers and SNARK Workers
+1. It is crucial for the network's successful upgrade that all block producers and SNARK workers keep their block-producing nodes up and running throughout the state finalization phase.
+2. If you are running multiple daemons, as is common among operators, you can run a single node at this stage.
+3. If you are a Delegation Program operator, remember that your uptime data will continue to be tracked during the state finalization phase and will be considered for the delegation grant in the following epoch.
+
+### Archive Node Operators and Rosetta Operators
+**If you plan to do the _trustful_ migration, you can skip this step.**
+If you are doing the _trustless_ migration, then:
+1. Continue to run the archive node to ensure finalized blocks are in the database and can be migrated.
+2. Continue to run incremental archive node migrations until after the network stops at the _stop-network-slot_.
+3. For more information on the archive node migration process, please refer to the [Archive Migration](/berkeley-upgrade/archive-migration) section.
+
+### Exchanges
+
+Exchanges shall disable MINA deposits and withdrawals during the state finalization period (the period between _stop-transaction-slot_ and _stop-network-slot_) because any transactions after the _stop-transaction-slot_ will not be part of the upgraded chain.
+
+Remember that although you might be able to submit transactions, the majority of the block producers will be running a node that discards any blocks with transactions.
+
+***
+
+## Upgrade
+
+- Starting at the _stop-network-slot_, the network will neither produce nor accept new blocks, halting the network. During the upgrade period, o1Labs will use automated tooling to export the network state based on the block at the slot just before the _stop-transaction-slot_. The exported state will then be baked into the new Berkeley build, which will be used to initiate the upgraded network. It is during the upgrade window that the Berkeley network infrastructure will be bootstrapped and seed nodes will become available. o1Labs will also finalize the archive node migration and publish the PostgreSQL database dumps for import by the archive node operators who wish to bootstrap their archives in a trustful manner.
+- A tool is available to validate that the Berkeley node was built from the pre-upgrade network state. To validate, follow the instructions provided in this [location](https://github.com/MinaProtocol/mina/blob/berkeley/docs/upgrading-to-berkeley.md).
+
+### Block Producers and SNARK Workers
+1. During the upgrade phase (between the _stop-network-slot_ and the publishing of the Berkeley release), block producers can shut down their nodes.
+2. After the publication of the Berkeley node release, block producers and SNARK workers should upgrade their nodes and be prepared for block production at the genesis timestamp, which is the slot when the first Berkeley block will be produced.
+3. It is possible to continue using the same libp2p key after the upgrade. Remember to pass the libp2p key to the node using the new `--libp2p-keypair` flag.
+
+### Archive Node Operators and Rosetta Operators
+1. Upon publishing of the archive node Berkeley release, archive node operators and Rosetta operators should upgrade their systems.
+There will be both Docker images and archive node releases available to choose from.
+2. Depending on the chosen migration method:
+   - _Trustless_
+     - Operators should direct their Berkeley archive process to the previously migrated database.
+   - _Trustful_
+     - Operators shall import the SQL dump file provided by o1Labs into a freshly created database.
+     - Operators should direct their Berkeley archive process to the newly created database.
+
+**Please note:** both the _trustless_ and _trustful_ migration processes will discard all Mainnet blocks that are not canonical. If you wish to preserve the entire block history, i.e. including non-canonical blocks, you should maintain the Mainnet archive node database for future querying needs.
+
+### Exchanges
+1. Exchanges shall disable MINA deposits and withdrawals during the entirety of the upgrade downtime, from the _stop-transaction-slot_ until the Mainnet Berkeley network is operational.
+2. After the Berkeley releases are published, exchanges should upgrade their nodes and prepare for the new network to start block production.
+
+***
+
+## Post-Upgrade
+- Approximately 1 hour after the publishing of the Berkeley node release, at a predefined slot (the Berkeley genesis timestamp), block production will start and the network will be successfully upgraded.
+- Node operators can monitor their nodes and provide feedback to the technical team in case of any issues. Builders can start deploying zkApps.
+- **Please note:** The Node Status service will not be enabled by default in the Berkeley release. If you wish to provide Node Status and Error metrics and reports to Mina Foundation, helping monitor the network in the initial phase, please use the following flags when running your nodes:
+  - `--node-stats-type [full|simple]`
+  - `--node-status-url https://nodestats.minaprotocol.com/submit/stats`
+  - `--node-error-url https://nodestats.minaprotocol.com/submit/stats`
+  - The error collection service tries to report any node crashes before the node process is terminated.
+
+### Block Producers and SNARK Workers
+1. Ensure that all systems have been upgraded and prepared for the start of block production.
+2. Monitor nodes and network health, and provide feedback to the engineering team in case of any issues.
+
+### Archive Node Operators and Rosetta Operators
+1. Ensure that all systems have been upgraded and prepared for the start of block production.
+2. Monitor nodes and network health, and provide feedback to the engineering team in case of any issues.
+
+### Exchanges and Builders
+1. After the predefined Berkeley genesis timestamp, block production will commence, and MINA deposits and withdrawals can be resumed.
+2. Ensure that all systems have been upgraded and prepared for the start of block production.
+3. Monitor nodes and network health, and provide feedback to the engineering team in case of any issues.
diff --git a/docs/welcome.mdx b/docs/welcome.mdx
index 850bd6e1a..84a613851 100644
--- a/docs/welcome.mdx
+++ b/docs/welcome.mdx
@@ -12,12 +12,33 @@ import HomepageFeatures from "@site/src/components/features/HomepageFeatures";
 :::caution Berkeley Mainnet release has landed
 Please make sure to upgrade your mina nodes to **3.0.0** ([Release notes](https://github.com/MinaProtocol/mina/releases/tag/3.0.0))
-[See instructions on how to upgrade your Mina node](/node-operators/requirements)
+[See instructions on how to upgrade your Mina node](/berkeley-upgrade/requirements)
 **Note**: Non-seed nodes will remain in `Bootstrap` status until such a point as block production begins at **`00:00UTC on June 5th`**. During this period of no block production, nodes will automatically **terminate after 25 minutes**, this is **expected behavior**.
-Please ensure you have configured your nodes to [auto-restart](/node-operators/requirements#node-auto-restart) on crash to have them automatically try and reconnect.
+Please ensure you have configured your nodes to [auto-restart](/berkeley-upgrade/requirements#node-auto-restart) on crash to have them automatically try and reconnect.
 :::
+## Mainnet Upgrade Timeline
+
+
+## Feedback and Questions
+Thank you for participating in the Berkeley upgrade.
+
+If you have any questions or feedback related to the Berkeley upgrade, please use the dedicated Discord [#mainnet-updates](https://discord.com/channels/484437221055922177/816099272859844638) channel.
+
+## Next
+
+[**How to upgrade your Mina node**](/berkeley-upgrade/requirements).
+
+
+
+ +--- + +
+
+ diff --git a/sidebars.js b/sidebars.js index de45ded99..a64529ccf 100644 --- a/sidebars.js +++ b/sidebars.js @@ -6,6 +6,37 @@ module.exports = { label: 'About Mina', href: 'https://minaprotocol.com/about', }, + { + type: 'category', + label: 'Berkeley Upgrade', + link: { + type: 'doc', + id: 'berkeley-upgrade/requirements', + }, + items: [ + { + type: 'category', + label: 'Archive Migration', + link: { + type: 'doc', + id: 'berkeley-upgrade/archive-migration/index', + }, + items: [ + 'berkeley-upgrade/archive-migration/understanding-archive-migration', + 'berkeley-upgrade/archive-migration/archive-migration-prerequisites', + 'berkeley-upgrade/archive-migration/archive-migration-installation', + 'berkeley-upgrade/archive-migration/migrating-archive-database-to-berkeley', + 'berkeley-upgrade/archive-migration/mainnet-database-maintenance', + 'berkeley-upgrade/archive-migration/debian-example', + 'berkeley-upgrade/archive-migration/docker-example', + 'berkeley-upgrade/archive-migration/appendix', + ], + }, + 'berkeley-upgrade/upgrade-steps', + 'berkeley-upgrade/flags-configs', + 'berkeley-upgrade/appendix', + ], + }, { type: 'category', label: 'Using Mina',