- `QmdpRDxYVnCdvqghj7KfzcaLyqo2NdHcXFriXo3Q7B9SsC` (link) -- thetestdata directory as of 27/04/2023 -- Example Result
- `Qmd6fEn5Wgk8LQKouHeBu6vdjL6Ps5ve8zM9iLSopzLz9o` (link) -- US Constitution blob (text) -- Example Result (Amplify v0.5.12)
- `QmYMGVfoep9rM82KFWK2BGxznRB7qaPFGfvrqMCJe2juLH` (link) -- text excerpts (file with and w/o extension) taken from Wikipedia pages -- Example Result (Amplify v0.5.12)
- `QmUN1LF7ZyButvMwVPm1uvgrWiB1FRGbn4CRAEVp5JzZmj` (link) -- CSV blob -- Example Result (Amplify v0.5.12)
- `QmNZwDuAkWB8dasQdt3RnaJ434BBwWvFuTPNunWih446dJ` (link) -- Misc content dir -- Example Result (Amplify v0.5.12)
- `QmUX8EhBYCdGYVCqa7N6ip1eqRhWtdSB3xy76AGtFgRcas` (link) -- Image blob -- Example Result (Amplify v0.5.12)
- `QmbRr4kUXMxQfZPnLUSb1kSMDvFtBUcv1HVSDaAUKe4ePj` (link) -- Image dir -- Example Result (Amplify v0.5.12)
- `QmTjHPpQcDtZ3BpBPN8QuwYMkBgRaPmQknVcFKkc3ickbM` (link) -- Video blob -- Example Result (Amplify v0.5.12)
- `QmPJXMP2qfFMwadWZ1TAuhpXEiEEb7PibDMz5PgpmyVi7B` (link) -- Video dir -- Example Result (Amplify v0.5.12)
In the CircleCI config there is a job that filters on tags. It builds the binaries, releases the Amplify Docker container, and runs a `terraform apply` against the production infrastructure.
- Create a new release tag in the format `vX.X.X`, incrementing the version number according to semver.
- Generate the auto-generated release notes, then click the `Set this as latest` button.
- Publish the release.
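If you prefer the command line, the tag can also be created and pushed with git; the version number below is only an example:

```bash
# Example only: pick the next semver version for the real release.
git tag v0.4.3
git push origin v0.4.3   # the tag push is what the CircleCI tag filter matches
```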
After the pipeline finishes you can visit the website, or check the deployed version by running:

```bash
gcloud compute ssh amplify-vm-production-0 -- cat /terraform_node/variables | grep AMPLIFY_VERSION
```
You can develop the Terraform scripts by creating tags with a suffix of `-anything`, e.g. `v0.4.2-alpha1`. Such tags won't trigger the main branch's tag filter, but you can change the filter on your branch so that they do.
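For orientation, here is a minimal sketch of what such a tag filter can look like in a CircleCI config; the job and workflow names are illustrative assumptions, not necessarily those in the real `.circleci/config.yml`:

```yaml
# Hypothetical excerpt -- job and workflow names are placeholders.
workflows:
  release:
    jobs:
      - deploy-production:
          filters:
            branches:
              ignore: /.*/               # never run on plain branch pushes
            tags:
              only: /^v\d+\.\d+\.\d+$/   # matches v0.4.2, but not v0.4.2-alpha1
```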
Jobs are individual units of work that execute in a worker, which is just a simple goroutine. You can think of a job as a Bacalhau job, but they could be anything. The crucial element here is that Amplify needs to chain jobs together, and so we need to define a common interface that all jobs must implement. We have tried to keep this interface as generic as possible, but we must work within the constraints of the Bacalhau API.
Note that the definition of a job is quite generic in the code, but for now we expect jobs to be Bacalhau Docker jobs, i.e. containers.
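To make the chaining concrete, here is a minimal sketch of what such a common job interface could look like in Go; the names (`Job`, `Run`, `Result`) are illustrative assumptions, not the actual Amplify API:

```go
package job

import "context"

// Result is a hypothetical summary of a finished job: the CID of its
// /outputs directory plus whatever it wrote to stdout and stderr, which
// later steps can use to decide (predicate) whether to run.
type Result struct {
	OutputCID string
	Stdout    string
	Stderr    string
}

// Job is an illustrative common interface: every job takes the CID of
// an /inputs directory and produces a Result, so jobs can be chained
// into a DAG regardless of what runs inside the container.
type Job interface {
	Name() string
	Run(ctx context.Context, inputCID string) (Result, error)
}
```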
- The job must be a Bacalhau-like job.
- The job must conform to the input and output specifications (a toy example follows this list).
- The job must be named and configured to run in the `config.yml` file.
- Jobs must have a unique name.
- All inputs are passed via the `/inputs` directory, which is mounted as a volume in the container.
- Jobs must operate on every file and directory in the `/inputs` directory recursively.
- Previous nodes may be skipped due to a predicate, so don't assume specific inputs will be present.
- Derivative files must be written to the `/outputs` directory.
- Derivative files should have names that are both unique and consistent with the original file name.
- Metadata must be written to `stdout` (so subsequent jobs can predicate on it).
- Errors must be written to `stderr`.
- Jobs should refrain from making breaking changes to the output directory layout.
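As an illustration of this contract, here is a hedged sketch of a tiny job in Go that walks `/inputs` recursively, copies each file to `/outputs` under a derived name, and emits metadata on stdout; it is a toy, not one of the real Amplify jobs:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func main() {
	// Walk every file under /inputs recursively, per the job contract.
	err := filepath.Walk("/inputs", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		// Derive a unique but recognisable output name from the input path.
		rel, _ := filepath.Rel("/inputs", path)
		out := filepath.Join("/outputs", rel+".copy")
		if err := os.MkdirAll(filepath.Dir(out), 0o755); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.Create(out)
		if err != nil {
			return err
		}
		defer dst.Close()
		if _, err := io.Copy(dst, src); err != nil {
			return err
		}
		// Metadata goes to stdout so downstream jobs can predicate on it.
		fmt.Printf("{\"processed\": %q}\n", rel)
		return nil
	})
	if err != nil {
		// Errors go to stderr, per the contract.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```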
Workflows are a collection of jobs that are chained together into a DAG. Amplify workflows are defined in a YAML file, which is then parsed and executed by the Amplify engine.
The interesting thing about Amplify workflows is that a step runs only when its predicate matches the results of the previous job. This means that we can define a workflow that only runs on specific types of data (images, for example); see the configuration sketch after the list below.
- Workflows must be configured in the `config.yml` file.
- Workflows can be duplicated.
- Workflows must have a unique name.
- Given a single root CID, a composite CID will be generated with the results of all workflows.
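To illustrate, here is a hedged sketch of what a workflow entry in `config.yml` might look like; the field names and predicate syntax are assumptions for illustration and may not match the actual Amplify schema:

```yaml
# Hypothetical config.yml excerpt -- field names and predicate syntax
# are illustrative, not the real Amplify schema.
jobs:
  - name: metadata-job
    image: example/extract-metadata:latest   # placeholder image
  - name: image-resize-job
    image: example/resize-images:latest      # placeholder image
workflows:
  - name: image-workflow                     # workflow names must be unique
    jobs:
      - name: metadata-job
      - name: image-resize-job
        # Only run when the previous job's stdout metadata reports an image.
        predicate: 'content_type =~ "^image/"'
```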
The image below shows a simplified version of the Amplify architecture.