K8s Launch Kit (l8k) is a CLI tool for deploying and managing NVIDIA cloud-native solutions on Kubernetes. It provides flexible deployment workflows for optimal network performance with SR-IOV, RDMA, and other networking technologies.
A typical workflow:

1. Deploy a minimal Network Operator profile to automatically discover your cluster's network capabilities and hardware configuration. This step can be skipped if you provide your own configuration file.
2. Specify the desired deployment profile via CLI flags or with a natural-language prompt for the LLM.
3. Based on the discovered or provided configuration, generate a complete set of YAML deployment files tailored to your selected network profile.
Build from source:

```
git clone <repository-url>
cd launch-kubernetes
make build
```

The binary will be available at `build/l8k`.
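To verify the build, you can print the version (the `version` subcommand appears in the CLI reference below):

```
build/l8k version
```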
Build the Docker image:
```
make docker-build
```
### Discover Cluster Configuration
Deploy a minimal Network Operator profile to automatically discover your cluster's
network capabilities and hardware configuration by using --discover-cluster-config.
This phase requires --kubeconfig to be specified, and can be skipped if you provide
your own configuration file by using --user-config.
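For example, a minimal discovery run (paths are illustrative) looks like:

```
l8k --discover-cluster-config \
    --kubeconfig ~/.kube/config \
    --save-cluster-config ./cluster-config.yaml
```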
### Generate Deployment Files
Based on the discovered or provided configuration,
generate a complete set of YAML deployment files for the selected network profile.
Files can be saved to disk using --save-deployment-files.
The profile can be defined manually with the --fabric, --deployment-type, and --multirail flags,
or generated by an LLM-assisted profile generator with --prompt (requires --llm-api-key and --llm-vendor).
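For example, to generate files from a previously saved configuration without touching the cluster (file paths are illustrative):

```
l8k --user-config ./cluster-config.yaml \
    --fabric infiniband --deployment-type rdma_shared \
    --save-deployment-files ./deployments
```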
### Deploy to Cluster
Apply the generated deployment files to your Kubernetes cluster by using --deploy. This phase requires --kubeconfig and can be skipped if --deploy is not specified.
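For example, to generate and apply the manifests in one run (paths are illustrative):

```
l8k --user-config ./cluster-config.yaml \
    --fabric ethernet --deployment-type sriov \
    --deploy --kubeconfig ~/.kube/config
```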
```
Usage:
  l8k [flags]
  l8k [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  help        Help about any command
  version     Print the version number

Flags:
      --ai                            Enable AI deployment
      --deploy                        Deploy the generated files to the Kubernetes cluster
      --deployment-type string        Select the deployment type (sriov, rdma_shared, host_device)
      --discover-cluster-config      Deploy a thin Network Operator profile to discover cluster capabilities
      --enabled-plugins string        Comma-separated list of plugins to enable (default "network-operator")
      --fabric string                 Select the fabric type to deploy (infiniband, ethernet)
  -h, --help                          help for l8k
      --kubeconfig string             Path to kubeconfig file for cluster deployment (required when using --deploy)
      --llm-api-key string            API key for the LLM API (required when using --prompt)
      --llm-api-url string            API URL for the LLM API (required when using --prompt)
      --llm-vendor string             Vendor of the LLM API (required when using --prompt) (default "openai-azure")
      --log-level string              Log level (debug, info, warn, error) (default "info")
      --multirail                     Enable multirail deployment
      --prompt string                 Path to file with a prompt to use for LLM-assisted profile generation
      --save-cluster-config string    Save discovered cluster configuration to the specified path (default "/opt/nvidia/k8s-launch-kit/cluster-config.yaml")
      --save-deployment-files string  Save generated deployment files to the specified directory (default "/opt/nvidia/k8s-launch-kit/deployment")
      --spectrum-x                    Enable Spectrum X deployment
      --user-config string            Use provided cluster configuration file instead of auto-discovery (skips cluster discovery)

Use "l8k [command] --help" for more information about a command.
```
Discover cluster config, generate files, and deploy:
```
l8k --discover-cluster-config --save-cluster-config ./cluster-config.yaml \
    --fabric ethernet --deployment-type sriov --multirail \
    --save-deployment-files ./deployments \
    --deploy --kubeconfig ~/.kube/config
```

Discover cluster config only:

```
l8k --discover-cluster-config --save-cluster-config ./my-cluster-config.yaml
```

Generate and deploy with pre-existing config:

```
l8k --user-config ./existing-config.yaml \
    --fabric ethernet --deployment-type sriov --multirail \
    --deploy --kubeconfig ~/.kube/config
```

Generate deployment files without deploying:

```
l8k --user-config ./config.yaml \
    --fabric ethernet --deployment-type sriov --multirail \
    --save-deployment-files ./deployments
```

Generate with an LLM-assisted profile:

```
echo "I want to enable multirail networking in my AI cluster" > requirements.txt
l8k --user-config ./config.yaml \
    --prompt requirements.txt --llm-vendor openai-azure --llm-api-key <OPENAI_AZURE_KEY> \
    --save-deployment-files ./deployments
```

During the cluster discovery stage, K8s Launch Kit creates a configuration file, which it later uses to generate deployment manifests from the templates. You can edit this file to customize your deployment configuration, then provide the custom file to the tool using the --user-config CLI flag.
Example of the configuration file discovered from the cluster:
```yaml
networkOperator:
  version: v25.7.0
  componentVersion: network-operator-v25.7.0
  repository: nvcr.io/nvidia/mellanox
  namespace: nvidia-network-operator
nvIpam:
  poolName: nv-ipam-pool
  subnets:
    - subnet: 192.168.2.0/24
      gateway: 192.168.2.1
    - subnet: 192.168.3.0/24
      gateway: 192.168.3.1
    - subnet: 192.168.4.0/24
      gateway: 192.168.4.1
    - subnet: 192.168.5.0/24
      gateway: 192.168.5.1
    - subnet: 192.168.6.0/24
      gateway: 192.168.6.1
    - subnet: 192.168.7.0/24
      gateway: 192.168.7.1
    - subnet: 192.168.8.0/24
      gateway: 192.168.8.1
    - subnet: 192.168.9.0/24
      gateway: 192.168.9.1
    - subnet: 192.168.10.0/24
      gateway: 192.168.10.1
sriov:
  mtu: 9000
  numVfs: 8
  priority: 90
  resourceName: sriov_resource
  networkName: sriov_network
hostdev:
  resourceName: hostdev-resource
  networkName: hostdev-network
rdmaShared:
  resourceName: rdma_shared_resource
  hcaMax: 63
ipoib:
  networkName: ipoib-network
macvlan:
  networkName: macvlan-network
clusterConfig:
  capabilities:
    nodes:
      sriov: true
      rdma: true
      ib: true
  pfs:
    - rdmaDevice: mlx5_0
      pciAddress: "0000:03:00.0"
      networkInterface: enp3s0f0np0
      traffic: east-west
    - rdmaDevice: mlx5_1
      pciAddress: "0000:03:00.1"
      networkInterface: enp3s0f1np1
      traffic: east-west
    - rdmaDevice: mlx5_2
      pciAddress: "0000:81:00.0"
      networkInterface: enp129s0np0
      traffic: east-west
  workerNodes:
    - worker-node-1
    - worker-node-2
    - worker-node-3
  nodeSelector:
    feature.node.kubernetes.io/pci-15b3.present: "true"
```
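For example (a hypothetical edit, not tool output), you could tune the sriov section of the discovered file before regenerating deployment files, then pass the edited file back with --user-config so the generated manifests pick up your values:

```yaml
# Hypothetical customization of the discovered cluster-config.yaml:
sriov:
  mtu: 1500                  # assumption: fabric without jumbo frames (discovered value was 9000)
  numVfs: 4                  # assumption: fewer virtual functions per PF (discovered value was 8)
  priority: 90
  resourceName: sriov_resource
  networkName: sriov_network
```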
You can run the l8k tool as a Docker container:

```
docker run --net=host \
    -v ~/launch-kubernetes/user-prompt:/user-prompt \
    -v ~/remote-cluster/:/remote-cluster \
    -v /tmp:/output \
    harbor.mellanox.com/k8s-launch-kit:poc \
    --discover-cluster-config --kubeconfig /remote-cluster/kubeconf.yaml \
    --save-cluster-config /output/config.yaml --log-level debug \
    --save-deployment-files /output \
    --fabric infiniband --deployment-type rdma_shared --multirail
```

Don't forget to enable --net=host and mount the necessary directories for input and output files with -v.
```
make build         # Build for current platform
make build-all     # Build for all platforms
make clean         # Clean build artifacts

make test          # Run tests
make coverage      # Run tests with coverage

make lint          # Run linter
make lint-check    # Install and run linter

make docker-build  # Build Docker image
make docker-run    # Run Docker container
```