
KubeDNS: Headless Service - Records are overwritten when using deployment replicas #52278

Closed
gunboe opened this issue Sep 11, 2017 · 5 comments
Assignees
Labels
kind/bug: Categorizes issue or PR as related to a bug.
kind/feature: Categorizes issue or PR as related to a new feature.
needs-sig: Indicates an issue or PR lacks a `sig/foo` label and requires one.

Comments


gunboe commented Sep 11, 2017

/area dns

What happened:

KubeDNS overwrites the previous replica's records (IP/host) in the subdomain instead of appending the new records.

What you expected to happen:

I'm trying to do DNS "load balancing" with a Headless Service (https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) backed by replicas from a Deployment.
When resolving $hostname.$subdomain.$namespace.svc.cluster.local through KubeDNS, I expect to receive a list of IPs for the same $hostname, for example:

Address 1: 192.168.103.153 hostXYZ.subdomain.nspace.svc.cluster.local
Address 3: 192.168.62.195 hostXYZ.subdomain.nspace.svc.cluster.local
Address 4: 192.168.80.144  hostXYZ.subdomain.nspace.svc.cluster.local
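The intent is classic DNS round-robin: once all replicas publish A records under the same name, a client can pick any of the returned addresses. A minimal Python sketch of that client-side selection, using the IPs from the expected output above (`resolve_all` is a stand-in for a real resolver call such as `socket.getaddrinfo`; it is not part of any Kubernetes API):

```python
import random

def resolve_all(name):
    """Return every A record for `name`.

    In a real cluster this would be a DNS query, e.g.
    {info[4][0] for info in socket.getaddrinfo(name, None, socket.AF_INET)}.
    Here it returns the records the reporter expected KubeDNS to serve.
    """
    return [
        "192.168.103.153",
        "192.168.62.195",
        "192.168.80.144",
    ]

def pick_backend(name):
    """Client-side 'DNS load balancing': choose one of the A records."""
    addresses = resolve_all(name)
    return random.choice(addresses)

backend = pick_backend("hostXYZ.subdomain.nspace.svc.cluster.local")
print(backend)
```

With multiple A records present, repeated lookups spread connections across the replicas; the bug below is that only one record ever exists.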

How to reproduce it (as minimally and precisely as possible):

Here is the YAML file:

apiVersion: extensions/v1beta1  # the original paste omitted this line; extensions/v1beta1 matches a v1.6 cluster
kind: Deployment
metadata:
  labels:
    run: busy2
    name: busybox
  name: busy2
  namespace: gsopi-p
spec:
  replicas: 3
  selector:
    matchLabels:
      run: busy2
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busy2
        name: busybox
    spec:
      hostname: busy2
      subdomain: default-subdomain
      containers:
      - args:
        - sh
        image: busybox
        imagePullPolicy: Always
        name: busy2
        resources: {}
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 80
    targetPort: 80

Testing:

$ kubectl get pods -o wide  
NAME                                 READY     STATUS    RESTARTS   AGE       IP               NODE
busy1-4067030300-vb0vj               1/1       Running   0          4d        192.168.51.172   kn00006
busy2-4281529631-5j3rv               1/1       Running   0          2h        192.168.90.231   kn00005
busy2-4281529631-df66c               1/1       Running   0          2h        192.168.80.146   kn00004
busy2-4281529631-jc4rg               1/1       Running   0          2h        192.168.62.14    kn00007
$ kubectl exec -it busy1-4067030300-vb0vj sh  
/ # nslookup default-subdomain.gsopi-p.svc.cluster.local
Server:    10.36.192.10
Address 1: 10.36.192.10 kube-dns.kube-system.svc.cluster.local

Name:      default-subdomain.gsopi-p.svc.cluster.local
Address 1: 192.168.90.231 busy2.default-subdomain.gsopi-p.svc.cluster.local
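The single returned record is consistent with an overwrite: if the DNS backend keys records by `$hostname.$subdomain` alone, each replica's registration replaces the previous one. A hypothetical sketch of the two behaviors, using the three busy2 pod IPs from the `kubectl get pods -o wide` output above (the real KubeDNS data structures differ):

```python
# Pod IPs of the three busy2 replicas, all sharing hostname "busy2"
# and subdomain "default-subdomain".
replicas = ["192.168.90.231", "192.168.80.146", "192.168.62.14"]

# Reported behavior: one record slot per name, so each registration
# overwrites the previous one and only a single pod IP survives.
overwriting = {}
for ip in replicas:
    overwriting["busy2.default-subdomain"] = ip

# Expected behavior: append, so a lookup returns every replica's A record.
appending = {}
for ip in replicas:
    appending.setdefault("busy2.default-subdomain", []).append(ip)

print(overwriting["busy2.default-subdomain"])  # a single IP
print(appending["busy2.default-subdomain"])    # all three IPs
```

In the appending case, an nslookup against the headless service would list all three replicas, which is what the docs describe for pods with matching hostname and subdomain.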

Environment:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.8", GitCommit:"d74e09bb4e4e7026f45becbed8310665ddcb8514", GitTreeState:"clean", BuildDate:"2017-08-03T18:12:08Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.8+coreos.0", GitCommit:"fc34f797fe56c4ab78bdacc29f89a33ad8662f8c", GitTreeState:"clean", BuildDate:"2017-08-05T00:01:34Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 11, 2017
@k8s-github-robot

@gunboe
There are no sig labels on this issue. Please add a sig label by:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <label>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. You can find the group list here and the label list here.
The <group-suffix> in method 1 must be replaced with one of: bugs, feature-requests, pr-reviews, test-failures, proposals

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Sep 11, 2017
@guillaumerose

This issue is caused by the DNS implementation. See kubernetes/dns#116.

@gunboe
Author

gunboe commented Sep 11, 2017

So, it would be nice to have this implemented...
/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 11, 2017
@dims
Member

dims commented Oct 23, 2017

We don't need to track it in both repos; let's close this one, since we have issue 116 in the kubernetes/dns repo.

/assign

@dims
Member

dims commented Oct 23, 2017

/close
