PLEASE NOTE: This project is considered ALPHA quality and should NOT be used for production, as it is currently in active development. Use at your own risk. APIs, configuration file formats, and functionality are all subject to change frequently. That said, please try it out in your development and test environments and let us know if it works. Contributions welcome! Thanks!
The LKE Karpenter Provider enables node autoprovisioning using Karpenter on your LKE cluster. Karpenter improves the efficiency and cost of running workloads on Kubernetes clusters by:
- Watching for pods that the Kubernetes scheduler has marked as unschedulable
- Evaluating scheduling constraints (resource requests, node selectors, affinities, tolerations, and topology spread constraints) requested by the pods
- Provisioning nodes that meet the requirements of the pods
- Removing the nodes when the nodes are no longer needed
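For a quick look at what Karpenter reacts to, you can list the pods the scheduler could not place (standard kubectl, shown here only as an illustration):

```bash
# Pods stuck in Pending with an Unschedulable condition are the ones
# Karpenter evaluates when deciding to provision new capacity.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```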
This provider supports two operating modes:
- LKE Mode (Default): Creates LKE Node Pools for each provisioned node. This is the simplest method and recommended for most users.
- Instance Mode: Creates standard Linode Instances. This offers granular control over instance settings (SSH keys, placement groups, etc.) but requires more manual configuration. This is currently in development and not yet fully functional.
See Configuration Documentation for full details on modes and available settings.
Install the tools this guide relies on before proceeding: kubectl, helm, and (optionally) the Linode CLI. Then:
- Create a new LKE cluster with any number of nodes in any region. This can be done easily in the Linode Cloud Manager or via the Linode CLI (see the sketch after this list).
- Download the cluster's kubeconfig when ready.
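If you prefer the CLI, a cluster can be created and its kubeconfig fetched roughly as follows. This is a sketch: the label, region, Kubernetes version, and node type are placeholder values, and the base64-decode step assumes the default kubeconfig-view output format.

```bash
# Create a small LKE cluster (adjust label, region, version, and plan to taste).
linode-cli lke cluster-create \
  --label karpenter-demo \
  --region us-east \
  --k8s_version 1.31 \
  --node_pools.type g6-standard-2 \
  --node_pools.count 3

# Look up the cluster ID, then download and decode its kubeconfig.
linode-cli lke clusters-list
linode-cli lke kubeconfig-view <cluster-id> --text --no-headers | base64 --decode > lke-kubeconfig.yaml
```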
The Karpenter Helm chart requires specific configuration values to work with an LKE cluster.
- Create a Linode PAT if you don't already have a LINODE_TOKEN env var set. Karpenter will use this for managing nodes in the LKE cluster.
- Set the variables:

  export CLUSTER_NAME=<cluster name>
  export KUBECONFIG=<path to your LKE kubeconfig>
  export KARPENTER_NAMESPACE=kube-system
  export LINODE_TOKEN=<your api token>
  # Optional: specify region explicitly (auto-discovered in LKE mode if not set)
  # export LINODE_REGION=<region>
  # Optional: Set mode directly (default is lke)
  # export KARPENTER_MODE=lke
Note: In LKE mode (default), Karpenter automatically discovers the cluster region from the Linode API using the cluster name. You can optionally set LINODE_REGION to override this behavior.
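Before installing, it can be worth sanity-checking the token and kubeconfig. This is an optional check; /v4/profile is just a convenient authenticated Linode API endpoint:

```bash
# Confirm the token is accepted by the Linode API.
curl -sf -H "Authorization: Bearer ${LINODE_TOKEN}" https://api.linode.com/v4/profile > /dev/null && echo "token OK"

# Confirm kubectl is pointed at the right LKE cluster.
kubectl get nodes
```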
Use the configured environment variables to install Karpenter using Helm:
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter-crd charts/karpenter-crd
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter charts/karpenter \
--set settings.clusterName=${CLUSTER_NAME} \
--set apiToken=${LINODE_TOKEN} \
  --wait

Optional Configuration:
- Region: Specify the region explicitly (only required for instance mode):

  --set region=${LINODE_REGION}

- Mode: Choose the operating mode (default is lke):
  - lke: Provisions nodes using LKE NodePools (recommended for LKE clusters)
  - instance: Provisions nodes as direct Linode instances

  --set settings.mode=lke
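Putting the optional flags together, an install in instance mode with an explicit region might look like the sketch below (same chart and values as above; the mode and region choices are only examples):

```bash
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter charts/karpenter \
  --set settings.clusterName=${CLUSTER_NAME} \
  --set apiToken=${LINODE_TOKEN} \
  --set region=${LINODE_REGION} \
  --set settings.mode=instance \
  --wait
```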
Check that Karpenter deployed successfully:

kubectl get pods --namespace "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter

Check its logs:

kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller

A single Karpenter NodePool is capable of handling many different pod shapes. Karpenter makes scheduling and provisioning decisions based on pod attributes such as labels and affinity. In other words, Karpenter eliminates the need to manage many different node groups.
Create a default NodePool using the command below. (Additional examples are available in the repository under examples/v1.) The consolidationPolicy set to WhenEmptyOrUnderutilized in the disruption block (the v1 default, so it is not spelled out in the example below) configures Karpenter to reduce cluster cost by removing and replacing nodes. As a result, consolidation will terminate any empty nodes on the cluster. This behavior can be disabled by setting consolidateAfter to Never, telling Karpenter that it should never consolidate nodes.
Note: This NodePool will create capacity as long as the sum of all created capacity is less than the specified limit.
cat <<EOF | kubectl apply -f -
---
apiVersion: karpenter.k8s.linode/v1alpha1
kind: LinodeNodeClass
metadata:
  name: default
spec:
  image: "linode/ubuntu22.04"
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
      nodeClassRef:
        group: karpenter.k8s.linode
        kind: LinodeNodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
EOF

Karpenter is now active and ready to begin provisioning nodes.
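To confirm the resources were accepted, you can list them (the plural resource names below assume the default CRD naming):

```bash
kubectl get nodepools
kubectl get linodenodeclasses
```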
This deployment uses the pause image and starts with zero replicas.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
          securityContext:
            allowPrivilegeEscalation: false
EOF
kubectl scale deployment inflate --replicas 5
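While the new pods are pending, you can watch Karpenter create capacity in a second terminal (nodeclaims comes from the Karpenter CRDs installed earlier):

```bash
# Watch NodeClaims being created, launched, and registered (Ctrl-C to stop).
kubectl get nodeclaims -w

# Or watch the new nodes join the cluster.
kubectl get nodes -w
```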
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller

Now, delete the deployment. After a short amount of time, Karpenter should terminate the empty nodes due to consolidation.
kubectl delete deployment inflate
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller

If you delete a node with kubectl, Karpenter will gracefully cordon, drain, and shut down the corresponding instance. Under the hood, Karpenter adds a finalizer to the node object, which blocks deletion until all pods are drained and the instance is terminated. Keep in mind, this only works for nodes provisioned by Karpenter.
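You can see that finalizer on a Karpenter-managed node before deleting it (a quick check; the exact finalizer string may differ between versions):

```bash
kubectl get node $NODE_NAME -o jsonpath='{.metadata.finalizers}'
```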
kubectl delete node $NODE_NAME

To avoid additional charges, remove the demo infrastructure from your Linode account.
helm uninstall karpenter --namespace "${KARPENTER_NAMESPACE}"
linode-cli lke cluster-delete --label "${CLUSTER_NAME}"

A duplicate LKE NodePool is temporarily provisioned until Karpenter detects that the original has registered successfully. The duplicate LKE NodePool is cleaned up after about a minute, once Karpenter realizes it is not needed. You will see something like this in the Karpenter controller logs:
{"level":"INFO","time":"2026-01-26T19:42:55.639Z","logger":"controller","message":"found provisionable pod(s)","commit":"237f3a9","controller":"provisioner","namespace":"","name":"","reconcileID":"aee4f6fd-867e-4221-a261-fdb49b9ff126","Pods":"default/inflate-7bb66b64f-ks9ll, default/inflate-7bb66b64f-5hfbb, default/inflate-7bb66b64f-dlnbm, default/inflate-7bb66b64f-lf7tb, default/inflate-7bb66b64f-9t27c","duration":"8.03501261s"}
{"level":"INFO","time":"2026-01-26T19:42:55.642Z","logger":"controller","message":"computed new nodeclaim(s) to fit pod(s)","commit":"237f3a9","controller":"provisioner","namespace":"","name":"","reconcileID":"aee4f6fd-867e-4221-a261-fdb49b9ff126","nodeclaims":1,"pods":5}
{"level":"INFO","time":"2026-01-26T19:42:55.656Z","logger":"controller","message":"created nodeclaim","commit":"237f3a9","controller":"provisioner","namespace":"","name":"","reconcileID":"aee4f6fd-867e-4221-a261-fdb49b9ff126","NodePool":{"name":"default"},"NodeClaim":{"name":"default-v2blg"},"requests":{"cpu":"5250m","pods":"8"},"instance-types":"g1-accelerated-netint-vpu-t1u1-m, g1-accelerated-netint-vpu-t1u1-s, g1-accelerated-netint-vpu-t1u2-s, g1-gpu-rtx6000-1, g1-gpu-rtx6000-2 and 49 other(s)"}
{"level":"INFO","time":"2026-01-26T19:42:57.156Z","logger":"controller","message":"launched nodeclaim","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e85e6c72-8da1-4fea-af26-1fb0e676d502","provider-id":"linode://90601036","instance-type":"g6-standard-6","zone":"","capacity-type":"on-demand","allocatable":{"cpu":"5915m","memory":"13590Mi","pods":"110"}}
{"level":"ERROR","time":"2026-01-26T19:44:22.686Z","logger":"controller","message":"node claim registration error","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e3ab7681-c8fb-489a-9f6b-63036fa52090","provider-id":"linode://90601036","taint":"karpenter.sh/unregistered","error":"missing taint prevents registration-related race conditions on Karpenter-managed nodes"}
{"level":"INFO","time":"2026-01-26T19:44:22.705Z","logger":"controller","message":"registered nodeclaim","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e3ab7681-c8fb-489a-9f6b-63036fa52090","provider-id":"linode://90601036","Node":{"name":"lke561146-819072-4a8e5fd50000"}}
Notice: Files in this source code originated from a fork of https://github.com/aws/karpenter-provider-aws which is under an Apache 2.0 license. Those files have been modified to reflect environmental requirements in LKE and Linode.
This project follows the Linode Community Code of Conduct.
Come discuss Karpenter in the #karpenter channel on the Kubernetes Slack!
Check out the Docs to learn more.