StreamsHub Console is a web application designed to facilitate interactions with Apache Kafka® instances, optionally leveraging the Strimzi Cluster Operator for Kafka® instances running on Kubernetes. It is composed of three main parts:
- a REST API backend developed with Java and Quarkus
- a user interface (UI) built with Next.js and PatternFly
- a Kubernetes operator developed with Java and Quarkus
The future goals of this project are to provide a user interface to interact with and manage additional data streaming components such as:
- Apicurio Registry for message serialization, de-serialization, and validation
- Kroxylicious for introducing additional behaviors to Kafka-based systems
- Apache Flink for processing real-time data streams and batch data sets
Contributions and discussions around use cases for these (and other relevant) components are both welcome and encouraged.
Deploy the console using one of the following methods:
- Through its dedicated operator using the Operator Lifecycle Manager (OLM)
- Using the operator with plain Kubernetes resources
- Directly with Kubernetes resources, without the operator
Note: if you are using minikube with the ingress addon as your Kubernetes cluster, SSL passthrough must be enabled on the nginx controller:
```shell
# Enable TLS passthrough on the ingress deployment
kubectl patch deployment -n ingress-nginx ingress-nginx-controller \
  --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value":"--enable-ssl-passthrough"}]'
```
The instructions below assume an existing Apache Kafka® cluster is available to use from the console. We recommend using Strimzi to create and manage your Apache Kafka® clusters; the console also provides additional features and insights for Strimzi-managed Apache Kafka® clusters.
If you already have Strimzi installed but would like to create an Apache Kafka® cluster for use with the console, example deployment resources are available to get started. The resources create an Apache Kafka® cluster in KRaft mode with SCRAM-SHA-512 authentication, a Strimzi `KafkaNodePool` resource to manage the cluster nodes, and a Strimzi `KafkaUser` resource that may be used to connect to the cluster.
Modify the `CLUSTER_DOMAIN` to match the base domain of your Kubernetes cluster (used for ingress configuration), use either `route` (OpenShift) or `ingress` (vanilla Kubernetes) for `LISTENER_TYPE`, and set `NAMESPACE` to the namespace where the Apache Kafka® cluster will be created.
```shell
export CLUSTER_DOMAIN=apps-crc.testing
export NAMESPACE=kafka
export LISTENER_TYPE=route
cat examples/kafka/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
```
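The cluster may take a few minutes to become ready. Assuming the example resources name the `Kafka` resource `console-kafka` (the name referenced by the `Console` example later in this document), readiness can be checked with a command like:

```shell
# Wait for Strimzi to report the Kafka cluster as Ready
kubectl wait kafka/console-kafka --for=condition=Ready --timeout=300s -n ${NAMESPACE}
```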
To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution. However, the minimum ACL types required are as follows:
- `DESCRIBE`, `DESCRIBE_CONFIGS` for the `CLUSTER` resource
- `READ`, `DESCRIBE`, `DESCRIBE_CONFIGS` for all `TOPIC` resources
- `READ`, `DESCRIBE` for all `GROUP` resources
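For Strimzi-managed clusters using `simple` authorization, these minimums translate to ACL rules like the following `KafkaUser` sketch (the user and cluster names are reused from the examples in this document; adjust the resource patterns to your own policy):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # DESCRIBE, DESCRIBE_CONFIGS on the cluster
      - resource:
          type: cluster
        operations: [ Describe, DescribeConfigs ]
      # READ, DESCRIBE, DESCRIBE_CONFIGS on all topics
      - resource:
          type: topic
          name: '*'
          patternType: literal
        operations: [ Read, Describe, DescribeConfigs ]
      # READ, DESCRIBE on all consumer groups
      - resource:
          type: group
          name: '*'
          patternType: literal
        operations: [ Read, Describe ]
```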
Prometheus is an optional dependency of the console if cluster metrics are to be displayed. The console supports gathering metrics in several ways.
- OpenShift-managed Prometheus instances. Monitoring of user-defined projects must be enabled in OpenShift.
- User-supplied Prometheus instances
- Private Prometheus instance for each `Console`. The operator creates a managed Prometheus deployment for use only by the console.
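For the first option, enabling monitoring of user-defined projects is standard OpenShift configuration, for example (see the OpenShift documentation for details):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```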
The console may be configured to use an OpenID Connect (OIDC) provider for user authentication. An example using dex for OIDC with an OpenShift identity provider is available in examples/dex-openshift.
The preferred way to deploy the console is using the Operator Lifecycle Manager (OLM). The sample install files in `install/operator-olm` will install the operator with cluster-wide scope. This means that `Console` instances may be created in any namespace. If you wish to limit the scope of the operator, the `OperatorGroup` resource may be modified to specify only the namespace that should be watched by the operator.
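For example, a namespace-scoped `OperatorGroup` might look like the following sketch (the metadata values here are placeholders; keep the name used by the sample install files):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: streamshub-operators   # placeholder; match the sample install files
  namespace: default
spec:
  targetNamespaces:
    - default                  # only this namespace will be watched
```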
This example will create the operator's OLM resources in the `default` namespace. Modify the `NAMESPACE` variable according to your needs.
```shell
export NAMESPACE=default
cat install/operator-olm/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
```
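Installation progress can be followed via the ClusterServiceVersion that OLM creates; the operator is ready once the CSV phase reports `Succeeded`:

```shell
# The PHASE column should eventually report Succeeded
kubectl get csv -n ${NAMESPACE}
```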
Once the operator is ready, you may then create a `Console` resource in the namespace where the console should be deployed. This example `Console` is based on the example Apache Kafka® cluster deployed above in the prerequisites section. Also see examples/console/010-Console-example.yaml.
```yaml
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: example
spec:
  hostname: example-console.cloud.example.com # Hostname where the console will be accessed via HTTPS
  metricsSources:
    # A `standalone` Prometheus instance must already exist and be accessible from the console Pod
    - name: custom-prometheus
      type: standalone
      url: https://custom-prometheus.cloud.example.com
      # Prometheus API authentication may also be provided
  kafkaClusters:
    - name: console-kafka        # Name of the `Kafka` CR representing the cluster
      namespace: kafka           # Namespace of the `Kafka` CR representing the cluster
      listener: secure           # Listener on the `Kafka` CR to connect from the console
      metricsSource: custom-prometheus
      properties:
        values: []               # Array of name/value for properties to be used for connections
                                 # made to this cluster
        valuesFrom: []           # Array of references to ConfigMaps or Secrets with properties
                                 # to be used for connections made to this cluster
      credentials:
        kafkaUser:
          name: console-kafka-user1 # Name of the `KafkaUser` resource used to connect to Kafka
                                    # This is optional if properties are used to configure the user
```
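The resource can then be applied and inspected with standard tooling; the namespace placeholder below is wherever you want the console deployed:

```shell
kubectl apply -n <console-namespace> -f examples/console/010-Console-example.yaml
kubectl get consoles -n <console-namespace>   # resource plural assumed to be "consoles"
```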
Deploying the operator without the use of OLM requires applying the component Kubernetes resources for the operator directly. These resources are bundled and attached to each StreamsHub Console release. The latest release can be found here. The resource file is named `streamshub-console-operator.yaml`.
This example will create the operator's resources in the `default` namespace. Modify the `NAMESPACE` variable according to your needs and set `VERSION` to the latest release.
```shell
export NAMESPACE=default
export VERSION=0.3.3
curl -sL https://github.com/streamshub/console/releases/download/${VERSION}/streamshub-console-operator.yaml \
  | envsubst \
  | kubectl apply -n ${NAMESPACE} -f -
```
Note: if you are not using the Prometheus operator, you may see an error about a missing `ServiceMonitor` custom resource type. This error may be ignored.
With the operator resources created, you may create a `Console` resource like the one shown in Console Custom Resource Example.
Running the console locally requires configuration of any Apache Kafka® clusters that will be accessed from the console and (optionally) the use of a Kubernetes cluster that hosts the Strimzi Kafka operator. To get started, you will need to provide a console configuration file and (optionally) credentials to connect to the Kubernetes cluster where Strimzi is operating.
1. Using the console-config-example.yaml file as an example, create your own configuration in a file `console-config.yaml` in the repository root. The `compose.yaml` file expects this location to be used and any difference in name or location requires an adjustment to the compose file.

2. Install the prerequisite software into the Kubernetes cluster:
   - Install the Strimzi operator
   - Install the Prometheus operator and create a `Prometheus` instance (optional, only if you want to see metrics in the console)
   - Create an Apache Kafka® cluster. See the example above. This step is only required if you do not already have an existing cluster you would like to use with the console.

3. (Skip this step if you are not using Kubernetes and Prometheus) Provide the Prometheus endpoint, the API server endpoint, and the service account token that you would like to use to connect to the Kubernetes cluster. These may be placed in a `compose.env` file that will be detected when starting the console:

   ```
   CONSOLE_API_SERVICE_ACCOUNT_TOKEN=<TOKEN>
   CONSOLE_API_KUBERNETES_API_SERVER_URL=https://my-kubernetes-api.example.com:6443
   ```

   The service account token may be obtained using the `kubectl create token` command. For example, to create a service account named "console-server" with the correct permissions and a token that expires in 1 year (yq required):

   ```shell
   export NAMESPACE=<service account namespace>
   kubectl apply -n ${NAMESPACE} -f ./install/console/010-ServiceAccount-console-server.yaml
   kubectl apply -n ${NAMESPACE} -f ./install/console/020-ClusterRole-console-server.yaml
   cat ./install/console/030-ClusterRoleBinding-console-server.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
   kubectl create token console-server -n ${NAMESPACE} --duration=$((365*24))h
   ```

4. By default, the provided configuration will use the latest console release container images. If you would like to build your own images with changes you've made locally, you may also set `CONSOLE_API_IMAGE` and `CONSOLE_UI_IMAGE` in your `compose.env` and build them with `make container-images` (see the consolidated `compose.env` sketch after this list).

5. Start the environment with `make compose-up`.

6. When finished with the local console process, you may run `make compose-down` to clean up.
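Putting the optional settings together, a hypothetical `compose.env` might look like this (all values are placeholders):

```shell
# compose.env (hypothetical values; include only the settings you need)
CONSOLE_API_SERVICE_ACCOUNT_TOKEN=<TOKEN>
CONSOLE_API_KUBERNETES_API_SERVER_URL=https://my-kubernetes-api.example.com:6443
# Only needed when building your own images with `make container-images`
CONSOLE_API_IMAGE=quay.io/example/console-api:dev
CONSOLE_UI_IMAGE=quay.io/example/console-ui:dev
```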
We welcome contributions of all forms. Please see the CONTRIBUTING file for how to get started. Join us in enhancing the capabilities of this console for Apache Kafka® on Kubernetes.
Each release requires an open milestone that includes the issues/pull requests that are part of the release. All issues in the release milestone must be closed. The name of the milestone must match the version number to be released.
The release action flow requires that the following secrets are configured in the repository:
- `IMAGE_REPO_HOSTNAME` - the host (optionally including a port number) of the image repository where images will be pushed
- `IMAGE_REPO_NAMESPACE` - namespace/library/user where the image will be pushed
- `IMAGE_REPO_USERNAME` - user name for authentication to server `IMAGE_REPO_HOSTNAME`
- `IMAGE_REPO_PASSWORD` - password for authentication to server `IMAGE_REPO_HOSTNAME`

These credentials will be used to push the release image to the repository configured in the `.github/workflows/release.yml` workflow.
Releases are performed by modifying the `.github/project.yml` file, setting `current-version` to the release version and `next-version` to the next SNAPSHOT version. Open a pull request with the changed `project.yml` to initiate the pre-release workflows. At this phase, the release milestone is checked to verify that no issues in it remain open, and the project's integration tests are run.
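As a sketch, the relevant portion of `.github/project.yml` would change like this (the version numbers are illustrative, and the exact layout of the file may differ):

```yaml
release:
  current-version: 0.4.0          # the version being released
  next-version: 0.5.0-SNAPSHOT    # the next development version
```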
Once the pull request is approved and merged, the release action will execute. This action runs the Maven release plugin to tag the release commit, build the application artifacts, create the build image, and push the image to (currently) quay.io. If successful, the action pushes the new tag to the GitHub repository and generates release notes listing all of the closed issues included in the milestone. Finally, the milestone is closed.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.