You will need the following properly installed on your backend virtual machine:
- GCP Account
- Slurm Cluster
- Singularity in /apps shared volume
- Git
- Node.js - Version v8.9.0 (with NPM)
- Docker
Allow ingress traffic to tcp:3030,3000,8080,9000,9092 in the GCP firewall on your Slurm network; an example rule is sketched below.
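
A rule like the following can open those ports. This is only a sketch: the rule name is hypothetical, and the network and source range are assumptions you should adapt to your Slurm VPC.

```sh
# Sketch only: adjust --network and --source-ranges to match your Slurm VPC,
# and consider restricting the source range instead of 0.0.0.0/0.
gcloud compute firewall-rules create openlane-backend-ports \
    --network=default \
    --allow=tcp:3030,tcp:3000,tcp:8080,tcp:9000,tcp:9092 \
    --source-ranges=0.0.0.0/0
```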
- Clone the repository:

  ```sh
  git clone https://github.com/KhaledSoliman/openlane-cloud-backend-typescript
  ```
- In the Cloud Console, activate Cloud Shell.
- In Cloud Shell, set environment variables:

  ```sh
  export PROJECT_ID="$(gcloud config get-value core/project)"
  export CLUSTER_ZONE="us-central1-a"
  export SINGULARITY_REPO="${PROJECT_ID}-singularity"
  export SINGULARITY_VERSION=3.7.3
  export JOBOUTPUT_BUCKET="${PROJECT_ID}-singularity-job-out"
  ```
- In Cloud Shell, log in to the login node of your Slurm cluster:

  ```sh
  export CLUSTER_LOGIN_NODE=$(gcloud compute instances list --zones $CLUSTER_ZONE --filter="name~.*login." --format="value(name)" | head -n1)
  gcloud compute ssh ${CLUSTER_LOGIN_NODE} --zone $CLUSTER_ZONE
  ```
- Update the installed packages and install the necessary development tools (this will take some time, so be patient):

  ```sh
  sudo yum update -y && \
  sudo yum groupinstall -y 'Development Tools' && \
  sudo yum install -y \
      openssl-devel \
      libuuid-devel \
      libseccomp-devel \
      wget \
      squashfs-tools \
      cryptsetup
  ```
- Install the Go programming language:

  ```sh
  export GOLANG_VERSION=1.16.5
  export OS=linux ARCH=amd64
  wget https://dl.google.com/go/go$GOLANG_VERSION.$OS-$ARCH.tar.gz
  sudo tar -C /usr/local -xzvf go$GOLANG_VERSION.$OS-$ARCH.tar.gz
  rm go$GOLANG_VERSION.$OS-$ARCH.tar.gz
  echo 'export GOPATH=${HOME}/go' >> ~/.bashrc
  echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc
  source ~/.bashrc
  ```
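
  Optionally, confirm the Go toolchain is on your PATH before building Singularity:

  ```sh
  go version
  # Expected output for this release: go version go1.16.5 linux/amd64
  ```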
- Download a Singularity release:

  ```sh
  export SINGULARITY_VERSION=3.7.3
  wget https://github.com/sylabs/singularity/releases/download/v${SINGULARITY_VERSION}/singularity-${SINGULARITY_VERSION}.tar.gz && \
  tar -xzf singularity-${SINGULARITY_VERSION}.tar.gz && \
  cd singularity
  ```
- Build and install Singularity in the /apps directory:

  ```sh
  ./mconfig --prefix=/apps/singularity/${SINGULARITY_VERSION} && \
  make -C ./builddir && \
  sudo make -C ./builddir install
  ```

  By default, Singularity assumes that its configuration files are in the /etc directory. The --prefix flag in the preceding command alters the build so that Singularity looks for those files in the /apps/singularity/RELEASE_NUMBER directory. The /apps directory is available on all of the Slurm compute nodes.
- Create a Singularity modulefile:

  ```sh
  sudo mkdir /apps/modulefiles/singularity
  sudo bash -c "cat > /apps/modulefiles/singularity/${SINGULARITY_VERSION}" <<SINGULARITY_MODULEFILE
  #%Module1.0#####################################################################
  ##
  ## modules singularity/${SINGULARITY_VERSION}.
  ##
  ## modulefiles/singularity/${SINGULARITY_VERSION}.
  ##
  proc ModulesHelp { } {
      global version modroot
      puts stderr "singularity/${SINGULARITY_VERSION} - sets the environment for Singularity ${SINGULARITY_VERSION}"
  }
  module-whatis "Sets the environment for using Singularity ${SINGULARITY_VERSION}"

  # for Tcl script use only
  set     topdir     /apps/singularity/${SINGULARITY_VERSION}
  set     version    ${SINGULARITY_VERSION}
  set     sys        linux86

  prepend-path    PATH    \$topdir/bin
  SINGULARITY_MODULEFILE
  ```
- Verify the Singularity installation:

  ```sh
  module load singularity/${SINGULARITY_VERSION}
  singularity
  ```

  The output is similar to the following:

  ```
  Usage:
    singularity [global options...] <command>

  Available Commands:
    build       Build a Singularity image
    cache       Manage the local cache
    capability  Manage Linux capabilities for users and groups
    config      Manage various singularity configuration (root user only)
    delete      Deletes requested image from the library
    exec        Run a command within a container
    inspect     Show metadata for an image
    instance    Manage containers running as services
    key         Manage OpenPGP keys
    oci         Manage OCI containers
    plugin      Manage Singularity plugins
    pull        Pull an image from a URI
    push        Upload image to the provided URI
    remote      Manage singularity remote endpoints
    run         Run the user-defined default command within a container
    run-help    Show the user-defined help for an image
    search      Search a Container Library for images
    shell       Run a shell within a container
    sif         siftool is a program for Singularity Image Format (SIF) file manipulation
    sign        Attach a cryptographic signature to an image
    test        Run the user-defined tests within a container
    verify      Verify cryptographic signatures attached to an image
    version     Show the version for Singularity

  Run 'singularity --help' for more detailed usage information.
  ```
- Exit the Slurm cluster login node by pressing Control+D.
- In Cloud Shell, change directory into openlane-singularity-build:

  ```sh
  cd ./openlane-singularity-build
  ```
- Use Cloud Build to create the Singularity build step. Make sure the Cloud Build service agent has access to Google Cloud Storage before running this:

  ```sh
  gcloud builds submit \
      --config=singularitybuilder.yaml \
      --substitutions=_SINGULARITY_VERSION=${SINGULARITY_VERSION}
  ```
- Create a Cloud Storage bucket for the container image:

  ```sh
  gsutil mb gs://${SINGULARITY_REPO}
  ```
- Build the container:

  ```sh
  gcloud builds submit \
      --config=containerbuilder.yaml \
      --substitutions=_SINGULARITY_VERSION=${SINGULARITY_VERSION} \
      --timeout 45m
  ```
- Verify the container build:

  ```sh
  gsutil ls gs://${SINGULARITY_REPO}
  ```

  The output is similar to the following:

  ```
  gs://SINGULARITY_REPO/openlane.sif
  ```
- Allow read access to the container so that the Slurm job can pull the container image:

  ```sh
  gsutil acl ch -g All:R gs://${SINGULARITY_REPO}/openlane.sif
  ```
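
  For context, a Slurm batch job can now fetch the public image and run OpenLane inside it. The script below is only a hypothetical sketch (the backend generates and submits its own job scripts); the job name, partition, and the final command are assumptions.

  ```sh
  #!/bin/bash
  #SBATCH --job-name=openlane-demo   # hypothetical job name
  #SBATCH --partition=debug          # assumption: use your cluster's partition
  #SBATCH --ntasks=1

  # Load the Singularity module installed under /apps earlier.
  module load singularity/3.7.3

  # Pull the publicly readable image; replace ${SINGULARITY_REPO} with your bucket name,
  # since the Cloud Shell variable is not defined on the compute node.
  gsutil cp gs://${SINGULARITY_REPO}/openlane.sif .

  # Run a trivial command inside the container; the real OpenLane invocation is issued by the backend.
  singularity exec openlane.sif echo "OpenLane container is runnable"
  ```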
After using openlane-singularity-build to build the OpenLane Singularity container, it is time to build and run the backend.
- In Cloud Shell, log in to the login node of your Slurm cluster:

  ```sh
  export CLUSTER_LOGIN_NODE=$(gcloud compute instances list --zones $CLUSTER_ZONE --filter="name~.*login." --format="value(name)" | head -n1)
  gcloud compute ssh ${CLUSTER_LOGIN_NODE} --zone $CLUSTER_ZONE
  ```
- Clone the repository:

  ```sh
  git clone https://github.com/KhaledSoliman/openlane-cloud-backend-typescript
  ```
- Change directory into the repository:

  ```sh
  cd ./openlane-cloud-backend-typescript
  ```
- Install Docker on CentOS; one possible approach is sketched below.
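
  A sketch of installing Docker CE on CentOS 7 from Docker's own repository (adapt as needed for your environment):

  ```sh
  # Add Docker's CentOS repository and install Docker CE.
  sudo yum install -y yum-utils
  sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  sudo yum install -y docker-ce docker-ce-cli containerd.io

  # Start the daemon now and enable it on boot.
  sudo systemctl enable --now docker
  ```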
- Verify Docker is installed:

  ```sh
  docker
  ```

  The output should be similar to the following:

  ```
  Usage:  docker [OPTIONS] COMMAND

  A self-sufficient runtime for containers

  Options:
        --config string      Location of client config files (default "/home/khaledsoli111_gmail_com/.docker")
    -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
    -D, --debug              Enable debug mode
    -H, --host list          Daemon socket(s) to connect to
    -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
        --tls                Use TLS; implied by --tlsverify
        --tlscacert string   Trust certs signed only by this CA (default "/home/khaledsoli111_gmail_com/.docker/ca.pem")
        --tlscert string     Path to TLS certificate file (default "/home/khaledsoli111_gmail_com/.docker/cert.pem")
        --tlskey string      Path to TLS key file (default "/home/khaledsoli111_gmail_com/.docker/key.pem")
        --tlsverify          Use TLS and verify the remote
    -v, --version            Print version information and quit

  Management Commands:
    app*        Docker App (Docker Inc., v0.9.1-beta3)
    builder     Manage builds
    buildx*     Build with BuildKit (Docker Inc., v0.5.1-docker)
    config      Manage Docker configs
    container   Manage containers
    context     Manage contexts
    image       Manage images
    manifest    Manage Docker image manifests and manifest lists
    network     Manage networks
    node        Manage Swarm nodes
    plugin      Manage plugins
    scan*       Docker Scan (Docker Inc., v0.8.0)
    secret      Manage Docker secrets
    service     Manage services
    stack       Manage Docker stacks
    swarm       Manage Swarm
    system      Manage Docker
    trust       Manage trust on Docker images
    volume      Manage volumes

  Commands:
    attach      Attach local standard input, output, and error streams to a running container
    build       Build an image from a Dockerfile
    commit      Create a new image from a container's changes
    cp          Copy files/folders between a container and the local filesystem
    create      Create a new container
    diff        Inspect changes to files or directories on a container's filesystem
    events      Get real time events from the server
    exec        Run a command in a running container
    export      Export a container's filesystem as a tar archive
    history     Show the history of an image
    images      List images
    import      Import the contents from a tarball to create a filesystem image
    info        Display system-wide information
    inspect     Return low-level information on Docker objects
    kill        Kill one or more running containers
    load        Load an image from a tar archive or STDIN
    login       Log in to a Docker registry
    logout      Log out from a Docker registry
    logs        Fetch the logs of a container
    pause       Pause all processes within one or more containers
    port        List port mappings or a specific mapping for the container
    ps          List containers
    pull        Pull an image or a repository from a registry
    push        Push an image or a repository to a registry
    rename      Rename a container
    restart     Restart one or more containers
    rm          Remove one or more containers
    rmi         Remove one or more images
    run         Run a command in a new container
    save        Save one or more images to a tar archive (streamed to STDOUT by default)
    search      Search the Docker Hub for images
    start       Start one or more stopped containers
    stats       Display a live stream of container(s) resource usage statistics
    stop        Stop one or more running containers
    tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
    top         Display the running processes of a container
    unpause     Unpause all processes within one or more containers
    update      Update configuration of one or more containers
    version     Show the Docker version information
    wait        Block until one or more containers stop, then print their exit codes

  Run 'docker COMMAND --help' for more information on a command.

  To get more help with docker, check out our guides at https://docs.docker.com/go/guides/
  ```
- Install Docker Compose; a possible install is sketched below.
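
  A sketch of installing Docker Compose 1.x as a standalone binary, following the official release instructions (the version is pinned to 1.29.2 to match the output shown in the next step):

  ```sh
  # Download the docker-compose binary and make it executable.
  sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-compose
  sudo chmod +x /usr/local/bin/docker-compose
  ```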
- Verify Docker Compose is installed:

  ```sh
  docker-compose --version
  ```

  The output should be similar to the following:

  ```
  docker-compose version 1.29.2, build 1110ad01
  ```
- Install Node.js; a possible approach is sketched below.
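
  One way to install Node.js 8.x on CentOS is via the NodeSource setup script. This is only a sketch: the prerequisites above ask for Node v8.9.0, and since Node 8 is end-of-life the setup_8.x script may no longer be served, so adjust the version as needed.

  ```sh
  # Add the NodeSource repository for Node.js 8.x, then install nodejs (which bundles npm).
  curl -sL https://rpm.nodesource.com/setup_8.x | sudo bash -
  sudo yum install -y nodejs
  ```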
- Verify npm is installed:

  ```sh
  npm --version
  ```

  The output should be similar to the following:

  ```
  3.10.10
  ```
- Create a .env file with all the needed variables, for example:

  ```sh
  cat > ./.env <<ENV_FILE
  GOOGLE_APPLICATION_CREDENTIALS="service_account.json"
  env="dev"
  PORT=3030
  MAILER_PASS="test"
  JOB_CONCURRENCY=10
  ENV_FILE
  ```
- Get your service_account.json file from the Firebase console and place it in the root directory of the repository.
- Run docker-compose:

  ```sh
  sudo docker-compose up -d
  ```
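
  Optionally, check that all containers in the stack came up:

  ```sh
  # All services should show State "Up" (or a healthy status).
  sudo docker-compose ps
  ```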
- Install npm dependencies:

  ```sh
  npm install sqlite3
  npm install
  ```
- Build the backend (the build uses Gulp for TypeScript compilation):

  ```sh
  npm run build
  ```
- Run the backend:

  ```sh
  npm run start
  ```

  or, with pm2:

  ```sh
  pm2 start ./build/src/server.js
  ```
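
  pm2 is not listed in the prerequisites; if you choose the pm2 option, a possible setup is sketched here (assuming npm is already installed):

  ```sh
  # Hypothetical setup if pm2 is not already available on the login node.
  sudo npm install -g pm2
  # After starting the server with pm2, follow the backend logs with:
  pm2 logs
  ```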
- Wait for a minute so that all microservices are up and running.
- When the backend is fully booted, you should see a log saying:

  ```
  BOOT :: <> <> <> <> <> <> <> <> <> <> Listening on 0.0.0.0:3030 <> <> <> <> <> <> <> <> <> <>
  ```
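
  You can also probe the listening port; the exact routes are defined by the backend, so this only checks that something is answering on port 3030 (a sketch):

  ```sh
  # Any HTTP response (even a 404) confirms the server is listening on port 3030.
  curl -i http://localhost:3030/
  ```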
The project structure is as follows:

```
config
└───prod
│       prod_config
└───test
│       test_config
└───uat
│       uat_config
deployment
locales
│   english-us
logger
│   winston-logger-setup
src
└───boot
│   └───initializers
│   │       initializer-1
│   │       initializer-2
│   │       ...
│   boot-file
└───controllers
│       controller-1
│       controller-2
│       ...
└───middlewares
│       middleware-1
│       middleware-2
│       ...
└───models
│       model-1
│       model-2
│       ...
└───routes
│       route-1
│       route-2
│       ...
└───services
│       service-1
│       service-2
│       ...
└───utils
│       util-1
│       util-2
│       ...
└───tests
│       test-1
│       test-2
│       ...
```
The deployment includes the following services:

- Prometheus: running on localhost:9090, used for monitoring.
- Grafana: running on localhost:3000, used for monitoring.
- Cadvisor: running on localhost:8080, used for container resource monitoring.
- Mongodb
- Redis: running on localhost:6379, used as the in-memory database to store and process pending orders.
- Nginx
- Kafka: running on localhost:9092, used for microservices communication.
- Zookeeper: running on localhost:2181, used along with Kafka.
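
A quick way to confirm that the monitoring services listed above are reachable (a sketch; these are the standard Prometheus and Grafana health endpoints, not anything specific to this project):

```sh
# Prometheus health endpoint.
curl -s http://localhost:9090/-/healthy

# Grafana health endpoint (returns a small JSON document).
curl -s http://localhost:3000/api/health
```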
- While the backend was designed with a microservice architecture, all the services use one listening endpoint.
- The database is SQLite and local, so it does not scale to a large number of users.
- Khaled Soliman
- Under the supervision of:
- Dr. Mohamed Shalan
- Mohamed Kassem
- Some of the documentation is inspired by this Google documentation.