Kube-DNS is a Kubernetes cluster add-on that is launched as part of cluster creation. It schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell other pods and containers to use the DNS service’s IP to resolve DNS names. This DNS server uses the libraries from SkyDNS to serve DNS requests for Kubernetes Pods and Services.
There is no tight binding between the kubelet and Kube-DNS, since Kube-DNS runs as just another service in Kubernetes. You only need to pass the DNS service IP address and domain to the kubelet, and Kubernetes doesn’t care who actually services the requests at that IP. In addition, CoreDNS implements the spec defined for Kubernetes DNS-based service discovery. This allows Kube-DNS to be replaced by CoreDNS.
Read CoreDNS for Kubernetes Service Discovery and CoreDNS for Kubernetes Service Discovery, Take 2 for more details.
CoreDNS integrates with Kubernetes via the Kubernetes plugin.
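CoreDNS is configured through a Corefile. As a rough sketch only (the actual configuration ships in the templates used later in this chapter; the plugin set, cluster domain, and zones below are assumptions), a Corefile that serves a cluster via the kubernetes plugin might look like this:

.:53 {
    errors
    log
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}

Here the kubernetes plugin answers queries for the cluster.local domain, prometheus exposes metrics on port 9153, and everything else is proxied to the upstream resolvers listed in /etc/resolv.conf.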
This chapter will explain how to update an existing Kubernetes cluster to use CoreDNS instead of the default Kube-DNS.
To perform the exercises in this chapter, you’ll need to deploy configurations to a Kubernetes cluster. To create an EKS-based Kubernetes cluster, use the AWS CLI (recommended). If you wish to create a Kubernetes cluster without EKS, you can use kops instead.
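If you go the kops route, cluster creation looks something along these lines (the cluster name and zone are placeholders, and a kops state store must already be configured via KOPS_STATE_STORE):

$ kops create cluster --name coredns.workshop.k8s.local --zones us-west-1a --yes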
All configuration files for this chapter are in the coredns directory. Make sure you change to that directory before running any commands in this chapter.
Each Kubernetes cluster has a service named kube-dns that forwards all DNS requests to the pods responsible for handling them. This service can be seen as follows:
$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   100.64.0.10   <none>        53/UDP,53/TCP   8m
More details about the service can be seen with kubectl describe:
$ kubectl describe svc kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-addon=kube-dns.addons.k8s.io
                   k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-addon":"kube-dns.addons.k8s.io","k8s-app":"kube-dns","kubernetes.io/clu...
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                100.64.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         100.96.3.2:53,100.96.4.2:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         100.96.3.2:53,100.96.4.2:53
Session Affinity:  None
Events:            <none>
Let’s look at the list of Deployments:
$ kubectl get deployments -n kube-system -l k8s-app=kube-dns
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2         2         2            2           8m
The output shows that two replicas of the Kube-DNS pod are running.
Let’s get more details about the pods:
$ kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-1311260920-d8bkh   3/3       Running   0          8m
kube-dns-1311260920-v06zv   3/3       Running   0          7m
Each Kube-DNS pod runs three containers (kubedns, dnsmasq, and sidecar), which is why READY shows 3/3. Get the logs of the kubedns container from one of the pods:
$ kubectl logs -n kube-system kube-dns-1311260920-d8bkh --container kubedns
To validate that the current DNS setup is working, we first have to deploy a busybox pod:
$ kubectl create -f templates/busybox.yaml
pod "busybox" created
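For reference, templates/busybox.yaml is presumably a minimal pod that just sleeps so that we can exec into it, roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
  restartPolicy: Always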
Let’s resolve the kubernetes service in the default namespace:
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 100.64.0.10
Address 1: 100.64.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 100.64.0.1 kubernetes.default.svc.cluster.local
The output shows that the kube-dns service answered the request and that the kubernetes service can be found at 100.64.0.1.
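Service names follow the <service>.<namespace>.svc.<cluster domain> scheme, so the fully qualified name resolves as well:

$ kubectl exec -ti busybox -- nslookup kubernetes.default.svc.cluster.local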
Let’s look at the logs from the sidecar container of one of the pods:
$ kubectl logs -n kube-system kube-dns-1311260920-v06zv --container sidecar
ERROR: logging before flag.Parse: I1103 01:55:35.006798 1 main.go:48] Version v1.14.4-2-g5584e04
ERROR: logging before flag.Parse: I1103 01:55:35.006838 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I1103 01:55:35.006926 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I1103 01:55:35.007006 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
It shows the sidecar starting health probes that periodically resolve kubernetes.default.svc.cluster.local against the kubedns and dnsmasq containers.
And now the logs from the other pod:
$ kubectl logs -n kube-system kube-dns-1311260920-d8bkh --container sidecar
ERROR: logging before flag.Parse: I1103 01:55:33.917264 1 main.go:48] Version v1.14.4-2-g5584e04
ERROR: logging before flag.Parse: I1103 01:55:33.917308 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I1103 01:55:33.917386 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I1103 01:55:33.917455 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
It shows similar output.
Let’s look at resolv.conf inside the busybox pod:
$ kubectl exec busybox cat /etc/resolv.conf
nameserver 100.64.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
As expected, the nameserver IP corresponds to the IP of the kube-dns service. Additionally, the search domains reflect the expected Kubernetes schema.
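For example, because svc.cluster.local is in the search list, a service in another namespace can be resolved with the short <service>.<namespace> form:

$ kubectl exec -ti busybox -- nslookup kube-dns.kube-system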
The CoreDNS deployment relies on information that is specific to your cluster: the CIDR of the Kubernetes service network and the cluster domain. The CIDR differs depending on whether your cluster was created with kops or minikube.
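Both values can be read from the running cluster: the service CIDR must contain the clusterIP of the kube-dns service, and the cluster domain appears in the pods’ search path:

$ kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
100.64.0.10
$ kubectl exec busybox cat /etc/resolv.conf | grep search
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal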
For kops, CoreDNS can be deployed using the following command:
$ kubectl create -f templates/coredns-kops.yaml
serviceaccount "coredns" created
clusterrole "system:coredns" created
clusterrolebinding "system:coredns" created
configmap "coredns" created
deployment "coredns" created
Note: If you use minikube, deploy the minikube variant of the CoreDNS template instead, and replace the clusterIP in templates/coredns-service.yaml with the result of kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}.
Look at the list of deployments:
$ kubectl get deployment -n kube-system
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns               2         2         2            2           1m
dns-controller        1         1         1            1           23m
kube-dns              2         2         2            2           23m
kube-dns-autoscaler   1         1         1            1           23m
Note that two replicas of the coredns pod are running, alongside the existing kube-dns deployment.
Wait for the pods to start:
$ kubectl get pods -n kube-system -l k8s-app=coredns
NAME                       READY     STATUS    RESTARTS   AGE
coredns-3986650266-1pxb6   1/1       Running   0          1m
coredns-3986650266-rmfgm   1/1       Running   0          1m
Logs from one of the pods can be seen as follows:
$ kubectl logs coredns-3986650266-1pxb6 -n kube-system -f
2017/11/03 01:32:33 [INFO] CoreDNS-0.9.9
2017/11/03 01:32:33 [INFO] linux/amd64, go1.9.1, 4d6e9c38
.:53
CoreDNS-0.9.9
linux/amd64, go1.9.1, 4d6e9c38
The other pod’s logs show similar output.
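The Corefile that configures these pods is stored in the coredns ConfigMap created earlier, and can be inspected with:

$ kubectl get configmap coredns -n kube-system -o yaml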
We need to update the Kube-DNS service to use our CoreDNS pod:
$ kubectl apply -f templates/coredns-service.yaml
service "kube-dns" configured
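templates/coredns-service.yaml presumably keeps the existing service name and clusterIP but points the selector and labels at the CoreDNS pods, roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
spec:
  selector:
    k8s-app: coredns
  clusterIP: 100.64.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP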
Now, when you describe the kube-dns service, it should look something like this:
$ kubectl describe svc kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=coredns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"coredns","kubernetes.io/cluster-service":"true","kubernetes.io/na...
Selector:          k8s-app=coredns
Type:              ClusterIP
IP:                100.64.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         100.96.6.2:53,100.96.7.2:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         100.96.6.2:53,100.96.7.2:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         100.96.6.2:9153,100.96.7.2:9153
Session Affinity:  None
Events:            <none>
The IP address of our CoreDNS pod should match the endpoint IPs in the kube-dns service:
$ kubectl get po -l k8s-app=coredns -n kube-system -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
coredns-3986650266-1pxb6   1/1       Running   0          3m        100.96.6.2   ip-172-20-35-120.us-west-1.compute.internal
coredns-3986650266-rmfgm   1/1       Running   0          3m        100.96.7.2   ip-172-20-83-89.us-west-1.compute.internal
Awesome, this fits nicely!
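You can also check the Endpoints object behind the service directly:

$ kubectl get endpoints kube-dns -n kube-system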
To confirm, let’s run the nslookup command again:
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 100.64.0.10
Address 1: 100.64.0.10
Name: kubernetes.default
Address 1: 100.64.0.1
This is giving the same output as earlier, so that is good!
The logs of the two pods are now updated with the query entries:
$ kubectl logs coredns-3986650266-rmfgm -n kube-system -f
2017/11/03 02:15:44 [INFO] CoreDNS-0.9.9
2017/11/03 02:15:44 [INFO] linux/amd64, go1.9.1, 4d6e9c38
.:53
CoreDNS-0.9.9
linux/amd64, go1.9.1, 4d6e9c38
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "PTR IN 10.0.64.100.in-addr.arpa. udp 42 false 512" NXDOMAIN qr,rd,ra 101 1.812126ms
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 112 55.755859ms
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 107 143.14µs
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 112 625.301µs
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "PTR IN 1.0.64.100.in-addr.arpa. udp 41 false 512" NXDOMAIN qr,rd,ra 100 1.434604ms
And the second pod:
$ kubectl logs coredns-3986650266-1pxb6 -n kube-system -f
.:53
CoreDNS-0.9.9
linux/amd64, go1.9.1, 4d6e9c38
2017/11/03 02:15:43 [INFO] CoreDNS-0.9.9
2017/11/03 02:15:43 [INFO] linux/amd64, go1.9.1, 4d6e9c38
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd,ra 115 203.42µs
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd,ra 115 151.898µs
100.96.5.3 - [03/Nov/2017:02:20:05 +0000] "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 70 129.637µs
The output shows that queries for kubernetes.default.svc.cluster.local are load balanced across the two pod replicas.
You can revert to using Kube-DNS instead of CoreDNS with the following command:
$ kubectl apply -f templates/kubedns-kops.yaml
This just changes the selector label from k8s-app: coredns back to k8s-app: kube-dns.
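The relevant part of templates/kubedns-kops.yaml is presumably just the service selector (and matching labels), along these lines:

spec:
  selector:
    k8s-app: kube-dns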
If you no longer need Kube-DNS, then the Deployment can be deleted:
$ kubectl delete deployment kube-dns -n kube-system
This will delete the Kube-DNS pods, and the CoreDNS pods will continue to serve all DNS requests.
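You can verify this with the same lookup as before:

$ kubectl exec -ti busybox -- nslookup kubernetes.default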
You are now ready to continue on with the workshop!