
simple-stack

A simple deployment with Kubernetes, including monitoring by Prometheus

Setup

With a local Kubernetes cluster available (one with an ingress controller enabled), it should be enough to apply the manifests:

# For messageapi
kubectl apply -f manifests/messageapi-deployment.yml
kubectl apply -f manifests/messageapi-service.yml
kubectl apply -f manifests/messageapi-ingress.yml

# For Prometheus
kubectl apply -f manifests/prometheus-deployment.yml
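
One quick way to confirm that the resources came up after applying the manifests is to list them:

# Check that the deployments, pods, service, and ingress exist
kubectl get deployments
kubectl get pods
kubectl get services
kubectl get ingress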

You won't be able to use the ./utility-scrips/build.sh helper as is, since it relies on my personal Docker Hub account.

To build the image under a different tag, you would run docker build -t my-prefix/message-api:latest . --no-cache. You'd then replace jrmngndr with my-prefix wherever the image is referenced, and finally push it to Docker Hub (or another registry) with docker push my-prefix/message-api:latest.

You would also need to change the image references in the manifests to use my-prefix; otherwise the cluster will not pull the image you've built and pushed.
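
As a minimal sketch of the full sequence (this assumes the image is referenced as jrmngndr/message-api:latest inside manifests/messageapi-deployment.yml):

# Build and push under your own prefix
docker build -t my-prefix/message-api:latest . --no-cache
docker push my-prefix/message-api:latest

# Point the deployment at your image (GNU sed; on macOS use sed -i '')
sed -i 's|jrmngndr/message-api:latest|my-prefix/message-api:latest|' manifests/messageapi-deployment.yml
kubectl apply -f manifests/messageapi-deployment.yml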

Metrics

The server exposes the default metrics captured and updated by the prometheus_fastapi_instrumentator package. One of those metrics is http_requests_total, which is simple enough to serve as a good example.
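
For example, with the port-forward described below in place, you can view the raw counter directly, assuming the instrumentator exposes its metrics at the package's default /metrics path:

# Requires the port-forward from "Testing the MessageAPI" below
curl -s localhost:8080/metrics | grep http_requests_total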

Testing the MessageAPI

Using the following command, you can expose the MessageAPI port on your local system: kubectl port-forward deployment/messageapi-deployment 8080:8080

You could also set <my-cluster-ip> local-aliased-domain.com in /etc/hosts or elsewhere in your name resolution path. This works because local-aliased-domain.com is configured in messageapi-ingress.yml as the host that routes traffic to the API.
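
With Minikube, one quick way to add that entry is to append the node IP to /etc/hosts (this assumes the ingress is reachable on the Minikube node IP):

# Append the Minikube node IP as an alias for the ingress host
echo "$(minikube ip) local-aliased-domain.com" | sudo tee -a /etc/hosts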

Whichever path you use, you can then interact with the API via cURL:

curl localhost:8080/message # loads default message
curl -X PUT localhost:8080/message -d 'My Manually Set Message'
curl localhost:8080/message # now shows the manually set message

You can also tail (JSON) logs from the service with kubectl logs -l app=messageapi -f.

Testing and accessing Prometheus

The easiest way to access Prometheus is with kubectl port-forward deployment/prometheus-deployment 9090:9090

Port 9090 on your local machine will then expose key Prometheus views, like /targets and /config, which are essential to verifying your Prometheus setup.
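
Beyond the UI, the same port-forward also exposes Prometheus's HTTP API, for example to check scrape targets or to query the http_requests_total metric mentioned above:

# List scrape targets
curl -s http://localhost:9090/api/v1/targets

# Run an instant query against the example metric
curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total'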

Troubleshooting

Internal Connectivity

Pick a pod, and confirm that it can fetch the MessageAPI message locally:

kubectl exec -it <pod_id> -- curl http://localhost:8080/message
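
To find a pod name to exec into, you can list pods by the app=messageapi label used for the logs command above:

kubectl get pods -l app=messageapi

Note that the exec command also assumes curl is available inside the container image.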

Service connectivity

When using Minikube, get the service IP with minikube service --all.

Then you can make requests directly to the service, such as curl http://<service_ip>:30000/message. Alternatively, port forwarding can be used.
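
To confirm the exposed NodePort (30000 in the example above), you can inspect the services; the messageapi-service name here is assumed from the manifest filename:

# Shows each service's type, cluster IP, and node port
kubectl get services
kubectl describe service messageapi-service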

Once the service manifest has been applied to the cluster, you can also use the service IP. You can find the IP with minikube ip, and then set an alias in /etc/hosts (as above):

<service_ip>  local-aliased-domain.com

You can then fetch the message from the API with curl http://local-aliased-domain.com:30000/message.

Ingress

If using Minikube, you must enable the ingress addon: minikube addons enable ingress
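
To verify that ingress is working, you can check for the controller pods and the ingress resource (recent Minikube versions install the addon into the ingress-nginx namespace):

# Ingress controller pods (namespace assumed for recent Minikube releases)
kubectl get pods -n ingress-nginx

# The MessageAPI ingress resource and the host it serves
kubectl get ingress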

Pending Improvements

A list of changes I would like to make to this project.

  • Use a Persistent Volume to hold metrics generated by Prometheus, so that they survive pod restarts.
  • Integrate Thanos with Prometheus for improved performance.
  • Set up CI for the Python code, so that it is built and verified with each pull request (PR). There should also be unit tests.
  • Set up CI for the Docker image, and automatically build and push to Docker Hub when a change is merged. Automatically build and push to a test registry when a PR is created.
  • As part of CI for Kubernetes itself, have the runner spawn a fresh Minikube and apply the manifests for each opened PR.
    • This helps to confirm that the manifests build a functional infrastructure.
    • Add automated functional and integration tests to run after that point (likely in Python). They would confirm that exposed endpoints work as expected, and make assertions about the state of the cluster.
