github: Organise ci.yaml #113

Merged: 8 commits, Dec 17, 2024
114 changes: 71 additions & 43 deletions .github/workflows/ci.yaml
@@ -1,4 +1,6 @@
name: CI

# TODO: If these environment variables only affect Nix, should they be moved under the `formal-spec-check` job?
env:
ALLOWED_URIS: "https://github.com https://api.github.com"
TRUSTED_PUBLIC_KEYS: "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= hydra.iohk.io:f/Ea+s+dFdN+3Y/G+FDgSq+a5NEWhJGzdjvKNGv0/EQ="
@@ -9,20 +11,27 @@ on:
push:
branches:
- main

jobs:
typecheck:
name: Typecheck specification
################################################################################
# Formal Specification in Agda - under /formal-spec/
################################################################################

formal-spec-typecheck:
name: "formal-spec: Typecheck"
runs-on: ubuntu-22.04
steps:
- name: 📥 Checkout repository
uses: actions/checkout@v4

- name: 💾 Cache Nix store
uses: actions/[email protected]
id: nix-cache
with:
path: /tmp/nixcache
key: ${{ runner.os }}-nix-typecheck-${{ hashFiles('flake.lock') }}
restore-keys: ${{ runner.os }}-nix-typecheck-

- name: 🛠️ Install Nix
uses: cachix/install-nix-action@v21
with:
@@ -33,18 +42,25 @@ jobs:
trusted-public-keys = ${{ env.TRUSTED_PUBLIC_KEYS }}
substituters = ${{ env.SUBSTITUTERS }}
experimental-features = nix-command flakes

- name: 💾➤ Import Nix store cache
if: "steps.nix-cache.outputs.cache-hit == 'true'"
run: "nix-store --import < /tmp/nixcache"

- name: 🏗️ Build specification
run: |
nix build --show-trace --accept-flake-config .#leiosSpec

- name: ➤💾 Export Nix store cache
if: "steps.nix-cache.outputs.cache-hit != 'true'"
run: "nix-store --export $(find /nix/store -maxdepth 1 -name '*-*') > /tmp/nixcache"

compile:
name: Build Haskell packages with GHC ${{ matrix.ghc-version }} on ${{ matrix.os }}
################################################################################
# Simulation and Prototype in Haskell - under /simulation/
################################################################################

simulation-test:
name: "simulation: Test with GHC ${{ matrix.ghc-version }} on ${{ matrix.os }}"
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
@@ -53,9 +69,10 @@
ghc-version: ["9.8"]

steps:
- uses: actions/checkout@v4
- name: 📥 Checkout repository
uses: actions/checkout@v4

- name: Set up GHC ${{ matrix.ghc-version }}
- name: 🛠️ Install GHC ${{ matrix.ghc-version }}
uses: haskell-actions/setup@v2
id: setup
with:
@@ -64,87 +81,97 @@
cabal-version: "latest"
cabal-update: true

- name: Install libraries
- name: 🛠️ Install system dependencies
run: sudo apt-get install -y graphviz libpango1.0-dev libgtk-3-dev

- name: Configure the build
- name: 🛠️ Configure
run: |
cabal configure --enable-tests --enable-benchmarks --disable-documentation
cabal build all --dry-run
# The last step generates dist-newstyle/cache/plan.json for the cache key.

- name: Restore cached dependencies
- name: 💾➤ Restore dependency cache
uses: actions/cache/restore@v4
id: cache
env:
key: ${{ runner.os }}-ghc-${{ steps.setup.outputs.ghc-version }}-cabal-${{ steps.setup.outputs.cabal-version }}
with:
path: ${{ steps.setup.outputs.cabal-store }}
key: ${{ env.key }}-plan-${{ hashFiles('**/plan.json') }}
key: ${{ env.key }}-plan-${{ hashFiles('dist-newstyle/cache/plan.json') }}
restore-keys: ${{ env.key }}-

- name: Install dependencies
- name: 🛠️ Install Cabal dependencies
# If we had an exact cache hit, the dependencies will be up to date.
if: steps.cache.outputs.cache-hit != 'true'
run: cabal build all --only-dependencies

# Cache dependencies already here, so that we do not have to rebuild them should the subsequent steps fail.
- name: Save cached dependencies
- name: ➤💾 Save dependency cache
uses: actions/cache/save@v4
# If we had an exact cache hit, trying to save the cache would error because of key clash.
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ${{ steps.setup.outputs.cabal-store }}
key: ${{ steps.cache.outputs.cache-primary-key }}

- name: Build
- name: 🏗️ Build
run: cabal build all

- name: Run tests
- name: 🏗️ Test
run: cabal test all

rs-compile:
name: Check Rust packages
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- name: Check Rust packages compilation
working-directory: sim-rs
run: |
cargo check
if [ $? -ne 0 ]; then
echo "Cargo check failed"
exit 1
fi

hlint-check:
name: Check Haskell sources with HLint
simulation-hlint:
name: "simulation: Check with HLint"
runs-on: ubuntu-22.04
steps:
- name: 📥 Checkout repository
uses: actions/checkout@v4

- name: "Set up HLint"
- name: 🛠️ Set up HLint
uses: haskell-actions/hlint-setup@v2

- name: "Run HLint"
- name: 🛠️ Run HLint
uses: haskell-actions/hlint-run@v2
with:
path: simulation/
fail-on: warning

fourmolu-check:
name: Check Haskell sources with fourmolu
simulation-fourmolu:
name: "simulation: Check with fourmolu"
runs-on: ubuntu-22.04
steps:
# Note that you must checkout your code before running haskell-actions/run-fourmolu
- uses: actions/checkout@v4
- uses: haskell-actions/run-fourmolu@v11
- name: 📥 Checkout repository
uses: actions/checkout@v4

- name: 🛠️ Run fourmolu
uses: haskell-actions/run-fourmolu@v11
with:
version: "0.15.0.0"

generate-diagrams:
name: Generate D2 Diagrams
################################################################################
# Simulation in Rust - under /sim-rs/
################################################################################

sim-rs-build:
name: "sim-rs: Check"
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- name: Check Rust packages compilation
working-directory: sim-rs
run: |
cargo check
if [ $? -ne 0 ]; then
echo "Cargo check failed"
exit 1
fi

################################################################################
# Documentation - under various directories
################################################################################

docs-generate-d2-diagrams:
name: "docs: Generate D2 Diagrams"
runs-on: ubuntu-22.04
permissions:
contents: write
@@ -189,7 +216,8 @@ jobs:
git commit -m "Auto-generate diagram PNGs [skip ci]"
git push origin HEAD:${{ github.head_ref || github.ref_name }}

build-docusaurus:
docs-build:
name: "docs: Build"
runs-on: ubuntu-22.04
steps:
- name: 📥 Checkout repository
@@ -219,11 +247,11 @@ jobs:
path: |
site/build/*

publish-docs:
docs-publish:
name: "docs: Publish"
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
runs-on: ubuntu-22.04
needs:
- build-docusaurus
needs: docs-build
steps:
- name: 📥 Download Docusaurus build
uses: actions/download-artifact@v4
41 changes: 32 additions & 9 deletions Logbook.md
@@ -1,14 +1,38 @@
# Leios logbook

## 2024-12-17

### GitHub Actions

- Reorganised the CI configuration to group jobs by their corresponding project and added a namespace prefix to each job: either the project's top-level directory name or `docs`.
For instance:

- `typecheck` changed to `formal-spec-typecheck`;
- `compile` changed to `simulation-test`, since it calls both `cabal build` and `cabal test`; and
- `rs-compile` changed to `sim-rs-build`, since it only calls `cargo check`.

The jobs that relate to publishing the documentation are prefixed by `docs`, e.g., `build-docusaurus` changed to `docs-build`.

### Haskell simulation

- Merged code to run Praos and Leios visualisations from a file such as `data/BenchTopology/topology-dense-52-simple.json`, e.g., run:

```sh
cabal run ols -- viz short-leios-p2p-1 --topology data/BenchTopology/topology-dense-52-simple.json
```

- Added HLint integration to check Haskell sources and ensure consistent use of module imports.
- Added CI job for HLint named `simulation-hlint`.

## 2024-12-13

### Haskell simulation

- Merged Leios visualizations into `main`.
- P2P visualization improvements:
- Block types are differentiated by shapes, and pipelines by color.
- Charting diffusion latency of each block type.
- TODO: chart CPU usage.
- Reworked generation of EBs and Votes to handle `>= 1` frequencies
like IBs (except max 1 EB per pipeline per node).
- Visualizations helped with discovering and fixing some modeling errors.
@@ -87,7 +111,6 @@ The general impact of such attacks varies:
- Will can reformat data we need for our simulations, so we don't end up with inconsistent input data sets.
- We will use the [beta-distribution fit](docs/technical-report-1.md#stake-distribution) for representing the unevenness of the stake distribution in our simulations.


### Rust simulation

Generated new test data set to match geographical distribution of mainnet nodes. In this dataset, nodes belong to a region (and have an explicit region tag) and are physically clustered near other nodes in that region.
@@ -290,7 +313,7 @@ We now have order-of-magnitude estimates for the size and computation required f

## 2024-11-26

### Curve fit to empirically observed distribution of stake pools

The cumulative distribution function for the beta distribution (the [regularized incomplete beta function](https://en.wikipedia.org/wiki/Regularized_incomplete_beta_function)) with parameters `α = 11` and `β = 1` nicely fits the empirical distribution of stake pools at epoch 500. To use this for 2000 stake pools, just divide the x axis into 2000 points and take the difference in consecutive y values as the amount of stake the corresponding pool has.
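The "divide the x axis and difference the CDF" recipe above can be sketched in a few lines of Python. This is an illustrative sketch, not project code: for `β = 1` the regularized incomplete beta function reduces to `I_x(α, 1) = x^α`, so the Beta(11, 1) CDF is simply `x**11` and no special-function library is needed.

```python
# Illustrative sketch (not part of the Leios codebase): per-pool stake
# fractions from the Beta(11, 1) fit. For beta = 1 the regularized
# incomplete beta function reduces to I_x(alpha, 1) = x**alpha.
ALPHA = 11
N_POOLS = 2000

def beta_cdf(x: float) -> float:
    """CDF of Beta(ALPHA, 1) on [0, 1]."""
    return x ** ALPHA

# Divide the x axis into N_POOLS intervals; the difference between
# consecutive CDF values is the corresponding pool's stake fraction.
xs = [i / N_POOLS for i in range(N_POOLS + 1)]
stakes = [beta_cdf(xs[i + 1]) - beta_cdf(xs[i]) for i in range(N_POOLS)]

print(f"total stake fraction: {sum(stakes):.6f}")  # telescopes to 1
print(f"largest pool share:   {max(stakes):.6f}")  # the last pool is richest
```

With `α = 11` the distribution is heavily right-skewed, matching the observation that a small number of pools hold a disproportionate share of the stake.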

@@ -340,7 +363,7 @@ Stopped sending as many redundant vote messages, traffic is far more reasonable.
thread, sufficient workaround atm but would be good to investigate
in future.
- Fixed inaccuracy in praos simulation where it was possible for a
block to validate without having validated the previous one. The
fix also allows for validation to happen via a dedicated queue.
- Defined "business logic" of (Uniform) Short Leios, referencing the
most recent draft.
@@ -431,8 +454,8 @@ See [Challenges for Leios, Part 1](analysis/challenges-1.md) for analysis of the
Findings:

1. Fees currently average 173.01 lovelace per byte of block.
1. Under best-case conditions, that fee will cover a cost of 115 ADA per GB of storage across 500 stakepools.
2. Under more realistic conditions, that fee will only cover a cost of 8 ADA per GB of storage across 2500 stakepools.
2. Stake pools receive on average 20.91% of rewards.
3. The cost of perpetual storage of blocks at VMs ranges $7/GB to $30/GB, strongly depending upon the assumption of how rapidly storage costs decrease in the future.
4. The Cardano Reserves currently supply 99% of the rewards that stake pools and delegators receive.
@@ -649,7 +672,7 @@ Work continues on visualization; we're still deciding which data to visualize fi
- Action times
- Pie chart of mainnet hosting types (@bwbush)
- Work on pricing model (@bwbush)

### Latency measurements of 52-node cluster

The folder [data/BenchTopology/](data/BenchTopology/README.md) contains latency measurements and topology for a 52-machine `cardano-node` cluster that is used for benchmarking. The machines are spread across three AWS regions.