This module creates an Amazon Managed Service for Prometheus workspace, a Kubernetes cluster with a demo service that exposes Prometheus metrics, and a load generation script to generate traffic and metric data. Prometheus is deployed in the cluster and writes the gathered metrics to Amazon Managed Service for Prometheus. This data can then be visualized using the Grafana instance that is deployed into the cluster, which is configured to use the Amazon Managed Service for Prometheus workspace that is created.
This module requires the AWS CLI v2.
NOTE: This is a demo, meant to be used in development and testing. Please do not use this in a production deployment.
To create the resources, take a look at the example in `./examples/complete`. From the example directory, you can run `terraform apply`, or add the module to your own project. Refer to the documentation for more information on configuring the AWS provider.
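A minimal end-to-end run from the example directory might look like the following; the region and cluster name passed to the AWS CLI are assumptions — substitute the values from your own deployment:

```shell
cd examples/complete
terraform init
terraform apply

# Create a kubeconfig entry for the new EKS cluster.
# Region and cluster name are assumptions -- use your actual values.
aws eks update-kubeconfig --region us-west-2 --name <cluster-name>
```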
Once the resources are created, follow the documentation on creating a kubeconfig file in order to connect to the cluster. With the kubeconfig in place, you can connect to the Prometheus server by forwarding its port to your local machine, e.g.:
```shell
export POD_NAME=$(kubectl get pods --namespace observability-demo-prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace observability-demo-prometheus port-forward $POD_NAME 9090
```
Open http://localhost:9090 in a browser to access the Prometheus server.
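With the port-forward active, the server can also be queried over Prometheus's standard HTTP API; `up` is a metric Prometheus records for every scrape target, so it makes a convenient smoke test:

```shell
# Query the forwarded Prometheus server for the built-in `up` metric.
# Requires the port-forward from the previous step to be running.
curl -s 'http://localhost:9090/api/v1/query?query=up'
```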
To access the Grafana server, first get the admin password, e.g.:
```shell
kubectl get secret --namespace observability-demo-grafana observability-demo-complete-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
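The password is stored base64-encoded in the Secret's `data` field, which is why the command above pipes through `base64 --decode`. The decode step in isolation, with a stand-in value rather than the real password, looks like:

```shell
# Kubernetes Secrets store values base64-encoded; decoding recovers the
# original text. 'demo-password' is a stand-in, not the real password.
encoded=$(printf 'demo-password' | base64)
printf '%s' "$encoded" | base64 --decode ; echo
```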
Then forward the port to your local machine, e.g.:
```shell
export POD_NAME=$(kubectl get pods --namespace observability-demo-grafana -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=observability-demo-complete-grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace observability-demo-grafana port-forward $POD_NAME 3000
```
Opening http://localhost:3000 will bring up the Grafana login page. Log in with `admin` and the password from the previous step.
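As an optional sanity check, Grafana's `/api/health` endpoint can be hit through the same port-forward; the `GRAFANA_PASSWORD` variable here is an assumption standing in for the password retrieved earlier:

```shell
# Check that the forwarded Grafana instance is healthy.
# Requires the port-forward from the previous step to be running;
# GRAFANA_PASSWORD is assumed to hold the decoded admin password.
curl -s -u admin:"$GRAFANA_PASSWORD" http://localhost:3000/api/health
```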
Once you are done, you can run `terraform destroy` to clean up all created resources.
By default, the custom application exposes four metrics:
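The metric names are best inspected directly; once the cluster is reachable, you can dump everything the demo service exposes on its `/metrics` endpoint. The namespace, service name, and port below are assumptions — adjust them to match your deployment:

```shell
# Forward the demo service's port and dump its Prometheus metrics.
# Namespace, service name, and port are assumptions -- adjust as needed.
kubectl --namespace observability-demo-server port-forward svc/observability-demo-server 8080 &
curl -s http://localhost:8080/metrics
```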
The following requirements are needed by this module:
- terraform (>= 1.0.11)
- aws (>= 4.21.0)
- helm (>= 2.6.0)
- kubernetes (>= 2.12.1)
The following providers are used by this module:
- aws (>= 4.21.0)
- helm (>= 2.6.0)
- kubernetes (>= 2.12.1)
The following Modules are called:
- Source: terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks (version 5.2.0)
- Source: terraform-aws-modules/eks/aws (version 18.26.3)
- Source: terraform-aws-modules/vpc/aws (version ~> 3.0)
The following resources are used by this module:
- aws_prometheus_workspace.demo (resource)
- aws_security_group.additional (resource)
- helm_release.grafana (resource)
- helm_release.metrics_server (resource)
- helm_release.prometheus (resource)
- kubernetes_deployment.load (resource)
- kubernetes_deployment.server (resource)
- kubernetes_namespace.load (resource)
- kubernetes_namespace.server (resource)
- kubernetes_service.server (resource)
- aws_caller_identity.current (data source)
No required inputs.
The following input variables are optional (have default values):
- Description: The Kubernetes namespace to use. Type: `string`. Default: `"observability-demo"`
- Description: The name for this project. Type: `string`. Default: `""`
- Description: The region to target for the creation of resources. Type: `string`. Default: `"us-west-2"`
The following outputs are exported:
- Description: Base64 encoded certificate data required to communicate with the cluster
- Description: Endpoint for your Kubernetes API server
- Description: The name/id of the EKS cluster. Will block on cluster creation until the cluster is really ready
- Description: ARN of IAM role
- Description: Name of IAM role
- Description: Path of IAM role
- Description: Unique ID of IAM role