This is a Kubernetes device plugin implementation that enables the registration of AMD GPUs in a container cluster for compute workloads. With the appropriate hardware and this plugin deployed in your Kubernetes cluster, you will be able to run jobs that require an AMD GPU.
More information about RadeonOpenCompute (ROCm)
- ROCm capable machines
- kubeadm capable machines (if you are using kubeadm to deploy your k8s cluster)
- ROCm kernel (Installation guide) or latest AMD GPU Linux driver (Installation guide)
- A Kubernetes deployment
- --allow-privileged=true for both kube-apiserver and kubelet (only needed if the device plugin is deployed via DaemonSet, since the device plugin container requires a privileged security context to access /dev/kfd for the device health check); a minimal sketch of setting this flag follows this list
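As an illustration only, one common way to set this flag on a kubeadm-based cluster is to edit the kube-apiserver static pod manifest and pass an extra argument to the kubelet; the file paths below are typical kubeadm/Debian defaults and may differ on your distribution:

```shell
# kube-apiserver: add --allow-privileged=true to the command list in the
# static pod manifest generated by kubeadm (default kubeadm location)
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

# kubelet: pass the flag via KUBELET_EXTRA_ARGS (Debian/Ubuntu default path shown;
# some distributions use /etc/sysconfig/kubelet instead), then restart the kubelet
$ echo 'KUBELET_EXTRA_ARGS=--allow-privileged=true' | sudo tee /etc/default/kubelet
$ sudo systemctl restart kubelet
```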
- This is an early prototype/alpha.
- This plugin currently supports device plugin API v1 only, which means it will only work with Kubernetes v1.16+.
The device plugin needs to run on all the nodes that are equipped with an AMD GPU. The simplest way to do so is to create a Kubernetes DaemonSet, which runs a copy of a pod on all (or some) nodes in the cluster. We have a pre-built Docker image on DockerHub that you can use with your DaemonSet. This repository also contains a pre-defined yaml file named k8s-ds-amdgpu-dp.yaml. You can create a DaemonSet in your Kubernetes cluster by running this command:
$ kubectl create -f k8s-ds-amdgpu-dp.yaml
or pull it directly from the web using:
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/master/k8s-ds-amdgpu-dp.yaml
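For orientation, a minimal DaemonSet for this plugin might look like the sketch below; the names, namespace, and image tag here are illustrative, and the k8s-ds-amdgpu-dp.yaml file in this repository is the authoritative definition:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: amdgpu-device-plugin-daemonset   # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: amdgpu-dp-ds
  template:
    metadata:
      labels:
        name: amdgpu-dp-ds
    spec:
      containers:
      - name: amdgpu-dp-cntr
        image: rocm/k8s-device-plugin    # pre-built image on DockerHub (tag assumed)
        volumeMounts:
        - name: dp
          mountPath: /var/lib/kubelet/device-plugins   # kubelet device plugin socket directory
        - name: sys
          mountPath: /sys                              # used to enumerate AMD GPUs on the node
      volumes:
      - name: dp
        hostPath:
          path: /var/lib/kubelet/device-plugins
      - name: sys
        hostPath:
          path: /sys
```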
If you want to enable the experimental device health check, please use k8s-ds-amdgpu-dp-health.yaml instead, after --allow-privileged=true is set for kube-apiserver and kubelet.
You can restrict workloads to nodes with a GPU by adding resources.limits to the pod definition (a minimal sketch is shown at the end of this section). An example pod definition is provided in example/pod/alexnet-gpu.yaml. This pod runs the AlexNet timing benchmark on an AMD GPU and then goes to sleep. You can create the pod by running:
$ kubectl create -f alexnet-gpu.yaml
or
$ kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/master/example/pod/alexnet-gpu.yaml
and then check the pod status by running:
$ kubectl describe pods
After the pod is created and running, you can see the benchmark result by running:
$ kubectl logs alexnet-tf-gpu-pod alexnet-tf-gpu-container
For comparison, an example pod definition that runs the same benchmark on the CPU is provided in example/pod/alexnet-cpu.yaml.
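For reference, a minimal pod definition that requests one AMD GPU through resources.limits might look like the sketch below; the pod and container names and the image are placeholders, while amd.com/gpu is the resource name advertised by this plugin:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-example              # placeholder name
spec:
  containers:
  - name: gpu-container
    image: rocm/tensorflow:latest    # placeholder image
    resources:
      limits:
        amd.com/gpu: 1               # request one AMD GPU from this device plugin
```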
Please see AMD GPU Kubernetes Node Labeller for details. An example configuration is in k8s-ds-amdgpu-labeller.yaml:
$ kubectl create -f k8s-ds-amdgpu-labeller.yaml
or
$ kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/master/k8s-ds-amdgpu-labeller.yaml
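Once the labeller is running, the labels it publishes can be used to steer pods to specific GPU nodes. The sketch below uses a hypothetical label name and value purely for illustration; run kubectl get nodes --show-labels to see the labels actually present on your nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelled-gpu-pod-example     # placeholder name
spec:
  nodeSelector:
    beta.amd.com/gpu.family: AI      # hypothetical label/value; check your nodes for the real ones
  containers:
  - name: gpu-container
    image: rocm/tensorflow:latest    # placeholder image
    resources:
      limits:
        amd.com/gpu: 1
```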
- This plugin uses go dep for dependency management
- Please consult the Dockerfile for how to build and use this plugin independently of a Docker image; a rough build sketch follows this list
- Add proper GPU health check (a health check that does not require /dev/kfd access)
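As a rough sketch of building from source outside of Docker (the Dockerfile remains the authoritative reference, and the exact package layout may differ):

```shell
$ git clone https://github.com/RadeonOpenCompute/k8s-device-plugin.git
$ cd k8s-device-plugin
$ dep ensure        # fetch dependencies with go dep
$ go build ./...    # build the plugin; compare with the steps in the Dockerfile
```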