Real IP not forwarded, tcp-services, metallb #9711

Closed
todeb opened this issue Mar 9, 2023 · 29 comments
Labels
kind/support · needs-priority · needs-triage

Comments

@todeb

todeb commented Mar 9, 2023

Environment:

OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14

Metallb: image: quay.io/metallb/controller:v0.12.1

NIC:

  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress-test
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
    helm.sh/chart: ingress-nginx-4.5.2

Installation:
NIC:

helm upgrade --install nginx-ingress-test ingress-nginx/ingress-nginx -n todetest \
--set controller.kind=DaemonSet \
--set controller.ingressClassResource.name=nginx-test \
--set controller.ingressClass=nginx-test \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-test" \
--set controller.ingressClassByName=true \
--set controller.service.loadBalancerIP=10.92.3.42 \
--set controller.service.externalTrafficPolicy=Local \
--set tcp.8080="todetest/clusterip:8080"

TESTAPP:

kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4 -n todetest
kubectl expose deployment source-ip-app --name=clusterip --port=8080 --target-port=8080 -n todetest

Deployed:

kubectl get po -n todetest -o wide
NAME                                                READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
nginx-ingress-test-ingress-nginx-controller-4n6cg   1/1     Running   0          15m   10.42.6.221   rke-dev-7-dc3-wrk   <none>           <none>
nginx-ingress-test-ingress-nginx-controller-5tfw2   1/1     Running   0          15m   10.42.8.222   rke-dev-9-dc1-wrk   <none>           <none>
nginx-ingress-test-ingress-nginx-controller-ghcn4   1/1     Running   0          13m   10.42.7.53    rke-dev-8-dc3-wrk   <none>           <none>
nginx-ingress-test-ingress-nginx-controller-hdbdq   1/1     Running   0          16m   10.42.4.92    rke-dev-6-dc3-wrk   <none>           <none>
nginx-ingress-test-ingress-nginx-controller-m6bz8   1/1     Running   0          13m   10.42.5.84    rke-dev-4-dc1-wrk   <none>           <none>
nginx-ingress-test-ingress-nginx-controller-rqhpl   1/1     Running   0          14m   10.42.3.157   rke-dev-5-dc2-wrk   <none>           <none>
source-ip-app-57cdb58c68-p2649                      1/1     Running   0          46m   10.42.6.158   rke-dev-7-dc3-wrk   <none>           <none>

Testing:
curl http://10.92.3.42:8080

Result:

CLIENT VALUES:
client_address=10.42.6.221
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.92.3.42:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.92.3.42:8080
user-agent=curl/7.58.0
BODY:
-no body in request-

Or from: kubectl logs -n todetest deployment/source-ip-app
10.42.6.221 - - [09/Mar/2023:12:44:28 +0000] "GET / HTTP/1.1" 200 388 "-" "curl/7.58.0"

Expected result:
My IP

Comment:
I see that the returned client_address=10.42.6.221 is the IP of nginx-ingress-test-ingress-nginx-controller-4n6cg, although I expected to see the IP of the client from which I sent the request.

I had also tried adding these additional parameters, although that did not help either:

  real-ip-header: proxy_protocol
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
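
(Those keys go into the controller's ConfigMap; as a sketch, this is roughly how I applied them, assuming the ConfigMap name created by this Helm release:)

kubectl -n todetest patch configmap nginx-ingress-test-ingress-nginx-controller \
  --type merge \
  -p '{"data":{"real-ip-header":"proxy_protocol","use-forwarded-headers":"true","use-proxy-protocol":"true"}}'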
@todeb todeb added the kind/bug Categorizes issue or PR as related to a bug. label Mar 9, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 9, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and providing further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan
Contributor

@todeb the new-issue template asks questions, and the answers are used as data for analyzing the issue. Please look at the template for a new issue and answer those questions, then re-open this issue.

Info such as the details of the controller installation (service, version, logs) is useful when it is extracted from the live state of the cluster and can be related to the curl or other request sent to the ingress controller.

/remove-kind bug
/kind support
/close

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Mar 9, 2023
@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.


@todeb
Author

todeb commented Mar 9, 2023

What happened:

ingress nginx forwards controller IP

What you expected to happen:

ingress nginx forwards client real IP

NGINX Ingress controller version

NGINX Ingress controller
  Release:       v1.6.4
  Build:         69e8833858fb6bda12a44990f1d5eaa7b13f4b75
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

Kubernetes version (use kubectl version):
v1.21.12

Environment:

  • Cloud provider or hardware configuration: baremetal
  • OS (e.g. from /etc/os-release): Ubuntu 20.04.4 LTS
  • Kernel (e.g. uname -a): 5.4.0-135-generic
  • Install tools:
    • rke1 custom cluster
  • Basic cluster related info:
    • kubectl version
    • kubectl get nodes -o wide
NAME                 STATUS   ROLES               AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
rke-dev-1-dc1-etcd   Ready    controlplane,etcd   293d   v1.21.12   10.92.3.11    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-2-dc2-etcd   Ready    controlplane,etcd   293d   v1.21.12   10.92.3.12    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-3-dc3-etcd   Ready    controlplane,etcd   293d   v1.21.12   10.92.3.13    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-4-dc1-wrk    Ready    worker              293d   v1.21.12   10.92.3.14    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-5-dc2-wrk    Ready    worker              293d   v1.21.12   10.92.3.15    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-6-dc3-wrk    Ready    worker              293d   v1.21.12   10.92.3.16    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-7-dc3-wrk    Ready    worker              258d   v1.21.12   10.92.3.17    <none>        Ubuntu 20.04.5 LTS   5.4.0-137-generic   docker://20.10.22
rke-dev-8-dc3-wrk    Ready    worker              91d    v1.21.12   10.92.3.18    <none>        Ubuntu 20.04.4 LTS   5.4.0-135-generic   docker://20.10.14
rke-dev-9-dc1-wrk    Ready    worker              50d    v1.21.12   10.92.3.19    <none>        Ubuntu 20.04.5 LTS   5.4.0-137-generic   docker://20.10.22
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress
 nginx-ingress-test              todetest                        3               2023-03-09 13:29:41.0295324 +0100 CET           deployed        ingress-nginx-4.5.2     1.6.4
  • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
controller:
  ingressClass: nginx-test
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-test
    enabled: true
    name: nginx-test
  kind: DaemonSet
  service:
    externalTrafficPolicy: Local
    loadBalancerIP: 10.92.3.42
tcp:
  "8080": todetest/clusterip:8080
  • Current State of the controller:
    • kubectl describe ingressclasses
Name:         nginx-test
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=nginx-ingress-test
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.6.4
              helm.sh/chart=ingress-nginx-4.5.2
Annotations:  meta.helm.sh/release-name: nginx-ingress-test
              meta.helm.sh/release-namespace: todetest
Controller:   k8s.io/ingress-nginx-test
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
pod/nginx-ingress-test-ingress-nginx-controller-4n6cg   1/1     Running   0          52m   10.42.6.221   rke-dev-7-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-5tfw2   1/1     Running   0          52m   10.42.8.222   rke-dev-9-dc1-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-ghcn4   1/1     Running   0          50m   10.42.7.53    rke-dev-8-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-hdbdq   1/1     Running   0          53m   10.42.4.92    rke-dev-6-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-m6bz8   1/1     Running   0          50m   10.42.5.84    rke-dev-4-dc1-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-rqhpl   1/1     Running   0          51m   10.42.3.157   rke-dev-5-dc2-wrk   <none>           <none>
pod/source-ip-app-57cdb58c68-p2649                      1/1     Running   0          83m   10.42.6.158   rke-dev-7-dc3-wrk   <none>           <none>

NAME                                                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                     AGE    SELECTOR
service/clusterip                                               ClusterIP      10.43.130.17    <none>        8080/TCP                                                    52m    app=source-ip-app
service/nginx-ingress-test-ingress-nginx-controller             LoadBalancer   10.43.188.35    10.92.3.42    80:30645/TCP,443:32068/TCP,8080:30136/TCP,54321:32218/TCP   161m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx
service/nginx-ingress-test-ingress-nginx-controller-admission   ClusterIP      10.43.140.228   <none>        443/TCP                                                     161m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx

NAME                                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS   IMAGES   SELECTOR
daemonset.apps/nginx-ingress-test-ingress-nginx-controller   6         6         6       6            6           kubernetes.io/os=linux   161m   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                           SELECTOR
deployment.apps/source-ip-app   1/1     1            1           83m   echoserver   registry.k8s.io/echoserver:1.4   app=source-ip-app

NAME                                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                           SELECTOR
replicaset.apps/source-ip-app-57cdb58c68   1         1         1       83m   echoserver   registry.k8s.io/echoserver:1.4   app=source-ip-app,pod-template-hash=57cdb58c68
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name:             nginx-ingress-test-ingress-nginx-controller-ghcn4
Namespace:        todetest
Priority:         0
Service Account:  nginx-ingress-test-ingress-nginx
Node:             rke-dev-8-dc3-wrk/10.92.3.18
Start Time:       Thu, 09 Mar 2023 13:29:35 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=nginx-ingress-test
                  app.kubernetes.io/name=ingress-nginx
                  controller-revision-hash=64c8bcb68f
                  pod-template-generation=4
Annotations:      cni.projectcalico.org/containerID: 58fe143549f1637e4eeb3ceb2f3c1090f7d968e9e073680042fad85a247836a5
                  cni.projectcalico.org/podIP: 10.42.7.53/32
                  cni.projectcalico.org/podIPs: 10.42.7.53/32
                  kubectl.kubernetes.io/restartedAt: 2023-03-09T12:26:38+01:00
Status:           Running
IP:               10.42.7.53
IPs:
  IP:           10.42.7.53
Controlled By:  DaemonSet/nginx-ingress-test-ingress-nginx-controller
Containers:
  controller:
    Container ID:  docker://4a05f4f8f93f6bd7f54a70505fbc0070a6dbc6530e2a773e48c847a95b8cf567
    Image:         registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
    Image ID:      docker-pullable://registry.k8s.io/ingress-nginx/controller@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
    Ports:         80/TCP, 443/TCP, 8443/TCP, 8080/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/nginx-ingress-test-ingress-nginx-controller
      --election-id=nginx-ingress-test-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx-test
      --ingress-class=nginx-test
      --configmap=$(POD_NAMESPACE)/nginx-ingress-test-ingress-nginx-controller
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-test-ingress-nginx-tcp
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --ingress-class-by-name=true
    State:          Running
      Started:      Thu, 09 Mar 2023 13:29:36 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-test-ingress-nginx-controller-ghcn4 (v1:metadata.name)
      POD_NAMESPACE:  todetest (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-95mtm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-test-ingress-nginx-admission
    Optional:    false
  kube-api-access-95mtm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From                      Message
  ----    ------     ----  ----                      -------
  Normal  Scheduled  50m   default-scheduler         Successfully assigned todetest/nginx-ingress-test-ingress-nginx-controller-ghcn4 to rke-dev-8-dc3-wrk
  Normal  Pulled     50m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f" already present on machine
  Normal  Created    50m   kubelet                   Created container controller
  Normal  Started    50m   kubelet                   Started container controller
  Normal  RELOAD     50m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:              clusterip
Namespace:         todetest
Labels:            app=source-ip-app
Annotations:       <none>
Selector:          app=source-ip-app
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.130.17
IPs:               10.43.130.17
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.42.6.158:8080
Session Affinity:  None
Events:            <none>


Name:                     nginx-ingress-test-ingress-nginx-controller
Namespace:                todetest
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=nginx-ingress-test
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.6.4
                          helm.sh/chart=ingress-nginx-4.5.2
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["10.92.3.42"],"port":80,"protocol":"TCP","serviceName":"todetest:nginx-ingress-test-ingress-nginx-controller","allNodes":fa...
                          meta.helm.sh/release-name: nginx-ingress-test
                          meta.helm.sh/release-namespace: todetest
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.188.35
IPs:                      10.43.188.35
IP:                       10.92.3.42
LoadBalancer Ingress:     10.92.3.42
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30645/TCP
Endpoints:                10.42.3.157:80,10.42.4.92:80,10.42.5.84:80 + 3 more...
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32068/TCP
Endpoints:                10.42.3.157:443,10.42.4.92:443,10.42.5.84:443 + 3 more...
Port:                     8080-tcp  8080/TCP
TargetPort:               8080-tcp/TCP
NodePort:                 8080-tcp  30136/TCP
Endpoints:                10.42.3.157:8080,10.42.4.92:8080,10.42.5.84:8080 + 3 more...
Port:                     proxied-tcp-54321  54321/TCP
TargetPort:               54321/TCP
NodePort:                 proxied-tcp-54321  32218/TCP
Endpoints:                10.42.3.157:54321,10.42.4.92:54321,10.42.5.84:54321 + 3 more...
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31775
Events:
  Type    Reason        Age                  From             Message
  ----    ------        ----                 ----             -------
  Normal  nodeAssigned  51m (x18 over 86m)   metallb-speaker  announcing from node "rke-dev-7-dc3-wrk"
  Normal  nodeAssigned  21m (x10 over 112m)  metallb-speaker  announcing from node "rke-dev-9-dc1-wrk"
  Normal  nodeAssigned  19m (x2 over 21m)    metallb-speaker  announcing from node "rke-dev-7-dc3-wrk"
  Normal  nodeAssigned  14m (x2 over 14m)    metallb-speaker  announcing from node "rke-dev-7-dc3-wrk"


Name:              nginx-ingress-test-ingress-nginx-controller-admission
Namespace:         todetest
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=nginx-ingress-test
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.6.4
                   helm.sh/chart=ingress-nginx-4.5.2
Annotations:       meta.helm.sh/release-name: nginx-ingress-test
                   meta.helm.sh/release-namespace: todetest
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.140.228
IPs:               10.43.140.228
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.42.3.157:8443,10.42.4.92:8443,10.42.5.84:8443 + 3 more...
Session Affinity:  None
Events:            <none>
  • Others:
    • Any other related information, like:
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

helm upgrade --install nginx-ingress-test ingress-nginx/ingress-nginx -n todetest \
--set controller.kind=DaemonSet \
--set controller.ingressClassResource.name=nginx-test \
--set controller.ingressClass=nginx-test \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-test" \
--set controller.ingressClassByName=true \
--set controller.service.loadBalancerIP=10.92.3.42 \
--set controller.service.externalTrafficPolicy=Local \
--set tcp.8080="todetest/clusterip:8080"
kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4 -n todetest
kubectl expose deployment source-ip-app --name=clusterip --port=8080 --target-port=8080 -n todetest
curl http://10.92.3.42:8080

@longwuyuan @k8s-ci-robot Could you reopen this issue?
/open

@longwuyuan
Contributor

/reopen

@longwuyuan
Contributor

/re-open

@k8s-ci-robot k8s-ci-robot reopened this Mar 9, 2023
@k8s-ci-robot
Contributor

@longwuyuan: Reopened this issue.


@longwuyuan
Contributor

I can get client_real_ip so I don't think there is a problem with the controller. See screenshot


@todeb
Author

todeb commented Mar 9, 2023

I don't really understand your configuration and output.
That said, I do not see anything else in between that could cause issues. I will check if I can reproduce it with minikube.

@todeb
Author

todeb commented Mar 9, 2023

I even changed the service type from LoadBalancer to ClusterIP with:
--set controller.service.type=ClusterIP

so:

k get all -n todetest -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
pod/nginx-ingress-test-ingress-nginx-controller-2k7kh   1/1     Running   0          2m50s   10.42.6.171   rke-dev-7-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-4bc5n   1/1     Running   0          3m25s   10.42.8.8     rke-dev-9-dc1-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-ghcn4   1/1     Running   0          10h     10.42.7.53    rke-dev-8-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-hdbdq   1/1     Running   0          10h     10.42.4.92    rke-dev-6-dc3-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-m6bz8   1/1     Running   0          10h     10.42.5.84    rke-dev-4-dc1-wrk   <none>           <none>
pod/nginx-ingress-test-ingress-nginx-controller-rqhpl   1/1     Running   0          10h     10.42.3.157   rke-dev-5-dc2-wrk   <none>           <none>
pod/source-ip-app-57cdb58c68-ztkrx                      1/1     Running   0          9h      10.42.8.233   rke-dev-9-dc1-wrk   <none>           <none>

NAME                                                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                             AGE   SELECTOR
service/clusterip                                               ClusterIP   10.43.130.17    <none>        8080/TCP                            10h   app=source-ip-app
service/nginx-ingress-test-ingress-nginx-controller             ClusterIP   10.43.188.35    <none>        80/TCP,443/TCP,8080/TCP,54321/TCP   12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx
service/nginx-ingress-test-ingress-nginx-controller-admission   ClusterIP   10.43.140.228   <none>        443/TCP                             12h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx

NAME                                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES   SELECTOR
daemonset.apps/nginx-ingress-test-ingress-nginx-controller   6         6         6       6            6           kubernetes.io/os=linux   12h   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress-test,app.kubernetes.io/name=ingress-nginx

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                           SELECTOR
deployment.apps/source-ip-app   1/1     1            1           10h   echoserver   registry.k8s.io/echoserver:1.4   app=source-ip-app

NAME                                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                           SELECTOR
replicaset.apps/source-ip-app-57cdb58c68   1         1         1       10h   echoserver   registry.k8s.io/echoserver:1.4   app=source-ip-app,pod-template-hash=57cdb58c68

and did:
k exec deployment/source-ip-app -n todetest -- curl 10.43.188.35:8080
but the client IP in the result is still the ingress controller's...

k exec deployment/source-ip-app -n todetest -- curl 10.43.188.35:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   293    0   293    0     0   4596      0 --:--:-- --:--:-- --:--:--  472
CLIENT VALUES:
client_address=10.42.4.92
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.43.188.35:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.43.188.35:8080
user-agent=curl/7.47.0
BODY:
-no body in request-5

@todeb
Author

todeb commented Mar 9, 2023

And when I execute curl directly against the clusterip svc, it shows the correct client IP.

k exec deployment/source-ip-app -n todetest -- curl 10.43.130.17:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
CLIENT VALUES:
client_address=10.92.3.19
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.43.130.17:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.43.130.17:8080
user-agent=curl/7.47.0
BODY:
100   293    0   293    0     0   4600      0 --:--:-- --:--:-- --:--:--  4803

So in the failing case, the only additional hop is the ingress controller.

@todeb
Author

todeb commented Mar 9, 2023

I just replicated the same behavior in minikube.
To reproduce:

kubectl create ns todetest
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm upgrade --install nginx-ingress-test ingress-nginx/ingress-nginx -n todetest \
--set controller.kind=DaemonSet \
--set controller.ingressClassResource.name=nginx-test \
--set controller.ingressClass=nginx-test \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-test" \
--set controller.ingressClassByName=true \
--set controller.service.type=ClusterIP \
--set tcp.8080="todetest/clusterip:8080"

kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4 -n todetest
kubectl expose deployment source-ip-app --name=clusterip --port=8080 --target-port=8080 -n todetest
kubectl get po -n todetest -o wide
NAME                                                READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
nginx-ingress-test-ingress-nginx-controller-svsqr   1/1     Running   0          5m9s    10.244.0.4   minikube   <none>           <none>
source-ip-app-75dbbff4f-6h9hv                       1/1     Running   0          4m34s   10.244.0.6   minikube   <none>           <none>
kubectl get svc -n todetest
NAME                                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
clusterip                                               ClusterIP   10.99.189.18     <none>        8080/TCP                  5m37s
nginx-ingress-test-ingress-nginx-controller             ClusterIP   10.102.76.61     <none>        80/TCP,443/TCP,8080/TCP   6m16s
nginx-ingress-test-ingress-nginx-controller-admission   ClusterIP   10.111.225.152   <none>        443/TCP                   6m16s

Curl against the ClusterIP service of the ingress-nginx controller returns the ingress-nginx controller IP.

kubectl exec deployment/source-ip-app -n todetest -- curl nginx-ingress-test-ingress-nginx-controller:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
CLIENT VALUES:
client_address=10.244.0.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://nginx-ingress-test-ingress-nginx-controller:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=nginx-ingress-test-ingress-nginx-controller:8080
user-agent=curl/7.47.0
BODY:
100   355    0   355    0     0  71285      0 --:--:-- --:--:-- --:--:-- 88750

@todeb
Author

todeb commented Mar 10, 2023

kubectl get no -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube   Ready    control-plane   12m   v1.26.1   192.168.49.2   <none>        Ubuntu 20.04.5 LTS   5.4.0-135-generic   docker://20.10.23

@longwuyuan
Contributor

longwuyuan commented Mar 10, 2023 via email

@Azbesciak

@longwuyuan Are you sure that this is not connected with #9685?

BTW the title you set there is not relevant IMO.

@longwuyuan
Contributor

@Azbesciak post the output of commands like kubectl describe ..., kubectl logs, curl, etc., and point your comments at that data, and I can comment back on it.

@todeb
Author

todeb commented Mar 10, 2023

@longwuyuan I deployed the ClusterIP service just to show you that the source IP is replaced even when you don't use a LoadBalancer.
With the LoadBalancer in minikube it is replaced as well.

Here is the installation of MetalLB:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml

kubectl apply -f - <<EOF
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool1
  namespace: metallb-system
spec:
  addresses:
  - 192.168.49.10/24

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool1
EOF

Here is the NIC upgraded to use the LoadBalancer:

helm upgrade --install nginx-ingress-test ingress-nginx/ingress-nginx -n todetest \
--set controller.kind=DaemonSet \
--set controller.ingressClassResource.name=nginx-test \
--set controller.ingressClass=nginx-test \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx-test" \
--set controller.ingressClassByName=true \
--set controller.service.loadBalancerIP=192.168.49.10 \
--set controller.service.externalTrafficPolicy=Local \
--set tcp.8080="todetest/clusterip:8080"

Here is the result of curl from the host that is running minikube:

curl 192.168.49.10:8080
CLIENT VALUES:
client_address=10.244.0.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.49.10:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=192.168.49.10:8080
user-agent=curl/7.68.0

IP of the host running minikube:

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ab:cb:5b brd ff:ff:ff:ff:ff:ff
    inet 10.92.100.20/24 brd 10.92.100.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feab:cb5b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e6:36:d1:a8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e6ff:fe36:d1a8/64 scope link
       valid_lft forever preferred_lft forever
4: br-11abc36a829c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:37:4c:71:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-11abc36a829c
       valid_lft forever preferred_lft forever
    inet6 fe80::42:37ff:fe4c:71d4/64 scope link
       valid_lft forever preferred_lft forever
10: veth4559d3a@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-11abc36a829c state UP group default
    link/ether 1a:8d:bf:6b:18:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::188d:bfff:fe6b:186e/64 scope link
       valid_lft forever preferred_lft forever

It is reproducible every time, and I have given you all the commands to reproduce it.

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 10, 2023
@todeb
Author

todeb commented Mar 10, 2023

You can also check the documentation, which says ClusterIP traffic is not source-NATed, and the same holds for LoadBalancer with service.spec.externalTrafficPolicy=Local:
https://kubernetes.io/docs/tutorials/services/source-ip/

So to me it does look like a bug in the NIC that it replaces the original source IP with its own.

@todeb
Author

todeb commented Mar 10, 2023

clusterIP is not reachable from a client outside the cluster.

You can deploy source-ip-app in the same cluster as I do and execute curl from it:
kubectl exec deployment/source-ip-app -n todetest -- curl nginx-ingress-test-ingress-nginx-controller:8080

Or, as a second option, deploy the LoadBalancer as in the example above and then curl the IP and port that point through the NIC to source-ip-app:
curl 192.168.49.10:8080

@longwuyuan
Contributor

longwuyuan commented Mar 10, 2023 via email

@todeb
Author

todeb commented Mar 10, 2023

What do you mean by that? I can run any command if you provide one.
In the controller logs the IPs look correct.

kubectl logs nginx-ingress-test-ingress-nginx-controller-svsqr -n todetest
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.6.4
  Build:         69e8833858fb6bda12a44990f1d5eaa7b13f4b75
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

W0309 23:50:12.926879       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0309 23:50:12.927245       7 main.go:209] "Creating API client" host="https://10.96.0.1:443"
I0309 23:50:12.934712       7 main.go:253] "Running in Kubernetes cluster" major="1" minor="26" git="v1.26.1" state="clean" commit="8f94681cd294aa8cfd3407b8191f6c70214973a4" platform="linux/amd64"
I0309 23:50:13.294427       7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0309 23:50:13.329453       7 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0309 23:50:13.341832       7 nginx.go:261] "Starting NGINX Ingress controller"
I0309 23:50:13.348680       7 store.go:520] "adding ingressclass as ingress-class-by-name is configured" ingressclass="nginx-test"
I0309 23:50:13.351278       7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"todetest", Name:"nginx-ingress-test-ingress-nginx-controller", UID:"0b866832-a305-4cca-8137-660dab0cecff", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap todetest/nginx-ingress-test-ingress-nginx-controller
I0309 23:50:13.357042       7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"todetest", Name:"nginx-ingress-test-ingress-nginx-tcp", UID:"a08f694d-8bdc-4abd-9d91-af0640e0e75e", APIVersion:"v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap todetest/nginx-ingress-test-ingress-nginx-tcp
I0309 23:50:14.543461       7 nginx.go:304] "Starting NGINX process"
I0309 23:50:14.543635       7 leaderelection.go:248] attempting to acquire leader lease todetest/nginx-ingress-test-ingress-nginx-leader...
I0309 23:50:14.543965       7 nginx.go:324] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0309 23:50:14.544448       7 controller.go:188] "Configuration changes detected, backend reload required"
I0309 23:50:14.568886       7 leaderelection.go:258] successfully acquired lease todetest/nginx-ingress-test-ingress-nginx-leader
I0309 23:50:14.569262       7 status.go:84] "New leader elected" identity="nginx-ingress-test-ingress-nginx-controller-svsqr"
I0309 23:50:14.631555       7 controller.go:205] "Backend successfully reloaded"
I0309 23:50:14.631800       7 controller.go:216] "Initial sync, sleeping for 1 second"
I0309 23:50:14.632263       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"todetest", Name:"nginx-ingress-test-ingress-nginx-controller-svsqr", UID:"24f3b71c-ed09-4e0e-aa8d-193a50780851", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0309 23:50:41.143642       7 controller.go:188] "Configuration changes detected, backend reload required"
I0309 23:50:41.234270       7 controller.go:205] "Backend successfully reloaded"
I0309 23:50:41.234659       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"todetest", Name:"nginx-ingress-test-ingress-nginx-controller-svsqr", UID:"24f3b71c-ed09-4e0e-aa8d-193a50780851", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
[10.244.0.6] [09/Mar/2023:23:51:55 +0000] TCP 200 547 81 0.008
[10.244.0.6] [09/Mar/2023:23:55:45 +0000] TCP 200 547 81 0.004
[10.244.0.6] [09/Mar/2023:23:57:17 +0000] TCP 200 609 112 0.001
[192.168.49.1] [10/Mar/2023:10:58:34 +0000] TCP 200 549 82 0.002
[192.168.49.1] [10/Mar/2023:11:46:21 +0000] TCP 200 549 82 0.003

But they don't seem to be propagated/forwarded to the source-ip-app.

@longwuyuan
Contributor

I think that is because TCP connections are not implemented like HTTP/HTTPS; the ingress-nginx controller just opens a port to the backend.

If it were a feature-rich LB like AWS's, there might be options/flags in the LB to retain the real client IP all the way to the backend pod. Also, I think a developer expert has to comment on whether other well-known options like X-Forwarded-For and Proxy Protocol work with TCP.
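
For plain TCP there may still be something to experiment with: if I remember the tcp-services format correctly, the ConfigMap value accepts optional PROXY flags, <namespace>/<service>:<port>:[PROXY]:[PROXY], where the first toggles decoding the PROXY header from the client and the second toggles sending it to the backend. Please verify that against the docs for your controller version, as I am quoting from memory; a sketch:

kubectl -n todetest patch configmap nginx-ingress-test-ingress-nginx-tcp \
  --type merge -p '{"data":{"8080":"todetest/clusterip:8080::PROXY"}}'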

@todeb
Author

todeb commented Mar 10, 2023

If I just telnet to the LB IP:

telnet 192.168.49.10 8080
Trying 192.168.49.10...
Connected to 192.168.49.10.
Escape character is '^]'.

I also see that the connection is established from the nginx ingress controller IP and not the real client IP.

netstat -tupan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      1/nginx: master pro
tcp        0      0 10.244.0.6:8080         10.244.0.20:37566       ESTABLISHED -
tcp        0      0 10.244.0.6:58712        91.189.91.38:80         TIME_WAIT   -
tcp        0      0 10.244.0.6:40650        91.189.91.38:80         TIME_WAIT   -

OK, so I am waiting for a developer comment on that.

@Azbesciak

@todeb I am not an nginx master, but did you look into the generated nginx config inside the controller? Maybe you will spot something there.

@todeb
Author

todeb commented Mar 10, 2023

What I can see in the config files:

stream {
        lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";

        lua_shared_dict tcp_udp_configuration_data 5M;

        resolver 169.254.20.10 valid=30s ipv6=off;

        init_by_lua_block {
                collectgarbage("collect")

                -- init modules
                local ok, res

                ok, res = pcall(require, "configuration")
                if not ok then
                error("require failed: " .. tostring(res))
                else
                configuration = res
                end

                ok, res = pcall(require, "tcp_udp_configuration")
                if not ok then
                error("require failed: " .. tostring(res))
                else
                tcp_udp_configuration = res
                tcp_udp_configuration.prohibited_localhost_port = '10246'

                end

                ok, res = pcall(require, "tcp_udp_balancer")
                if not ok then
                error("require failed: " .. tostring(res))
                else
                tcp_udp_balancer = res
                end
        }

        init_worker_by_lua_block {
                tcp_udp_balancer.init_worker()
        }

        lua_add_variable $proxy_upstream_name;

        log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';

        access_log /var/log/nginx/access.log log_stream ;

        error_log  /var/log/nginx/error.log notice;

        upstream upstream_balancer {
                server 0.0.0.1:1234; # placeholder

                balancer_by_lua_block {
                        tcp_udp_balancer.balance()
                }
        }

        server {
                listen 127.0.0.1:10247;

                access_log off;

                content_by_lua_block {
                        tcp_udp_configuration.call()
                }
        }

        # TCP services

        server {
                preread_by_lua_block {
                        ngx.var.proxy_upstream_name="tcp-todetest-clusterip-8080";
                }

                listen                  8080;

                proxy_timeout           600s;
                proxy_next_upstream     on;
                proxy_next_upstream_timeout 600s;
                proxy_next_upstream_tries   3;

                proxy_pass              upstream_balancer;

        }
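
What I notice is that there is no proxy_protocol directive anywhere in this TCP server block, which would explain why the backend only ever sees the controller's address. From my reading of the nginx stream module docs (just a sketch, not taken from the controller), a proxy-protocol-enabled variant of that block would look roughly like:

        server {
                # decode the PROXY header arriving from the client
                listen                  8080 proxy_protocol;

                # re-emit the PROXY header towards the backend
                proxy_protocol          on;

                proxy_pass              upstream_balancer;
        }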

And this is probably the function being executed, although I am only guessing; I do not know how those .lua scripts work.

function _M.balance()
  local balancer = get_balancer()
  if not balancer then
    return
  end

  local peer = balancer:balance()
  if not peer then
    ngx.log(ngx.WARN, "no peer was returned, balancer: " .. balancer.name)
    return
  end

  if peer:match(PROHIBITED_PEER_PATTERN) then
    ngx.log(ngx.ERR, "attempted to proxy to self, balancer: ", balancer.name, ", peer: ", peer)
    return
  end

  ngx_balancer.set_more_tries(1)

  local ok, err = ngx_balancer.set_current_peer(peer)
  if not ok then
    ngx.log(ngx.ERR, "error while setting current upstream peer ", peer,
            ": ", err)
  end
end

@longwuyuan
Contributor

@todeb, your expectation is well understood, but kindly do not mark this as a bug.

The reason is that there is no data here showing that client information traverses to the backend pod under the current TCP/UDP implementation, from a client outside the cluster to a backend pod inside the cluster.

There are 2 simple facts that matter here.

(1) When the request is over HTTP/HTTPS and the client information is in a header of the client request, those headers can be retained by terminating HTTP/HTTPS on the controller and sending the connection directly to the backend pod, without the extra hops that kube-proxy introduces (externalTrafficPolicy: Local).

(2) If proxy-protocol (https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol) is enabled on all the proxies between the client and the backend pod, then client information such as the source IP address is not lost as the packet travels over the different hops/proxies.

In this issue, I have seen you post curl over HTTPS and also telnet, so I am not even certain whether you are looking for the real client IP address for an HTTP/HTTPS request or for a TCP connection.

And on top of that, there is no data here showing that you enabled proxy-protocol on all the proxies/hops. MetalLB is not even capable of speaking proxy-protocol. As far as my assessment goes, several users get the real client IP address by enabling proxy-protocol, so there is no problem with the controller code in that regard. If getting the real client IP address had been broken by any recent release of the controller, many users would be reporting it.
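
For (2), the controller-side switch is a single ConfigMap key; a minimal sketch using the release names from this thread (and it is only meaningful if everything in front of the controller actually speaks proxy-protocol; MetalLB in L2 mode does not terminate connections, so it neither adds nor strips the header):

kubectl -n todetest patch configmap nginx-ingress-test-ingress-nginx-controller \
  --type merge -p '{"data":{"use-proxy-protocol":"true"}}'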

@longwuyuan
Contributor

/remove-kind bug

@k8s-ci-robot k8s-ci-robot removed the kind/bug Categorizes issue or PR as related to a bug. label Mar 10, 2023
@todeb
Author

todeb commented Mar 13, 2023

In this issue, I have seen you post curl over HTTPS and also telnet, so I am not even certain whether you are looking for the real client IP address for an HTTP/HTTPS request or for a TCP connection.

I'm doing curl via HTTP, and also telnet, although both went through port 8080, which is configured as a TCP stream, not an HTTP proxy, on ingress-nginx.

And on top of that, there is no data here showing that you enabled proxy-protocol on all the proxies/hops. MetalLB is not even capable of speaking proxy-protocol. As far as my assessment goes, several users get the real client IP address by enabling proxy-protocol, so there is no problem with the controller code in that regard. If getting the real client IP address had been broken by any recent release of the controller, many users would be reporting it.

MetalLB is not the issue here, as I'm using externalTrafficPolicy=Local, which naturally preserves the original IP.
If I were using the Cluster policy, then I agree it would need proxy-protocol support.

So from what you are saying, we should use the proxy protocol, which means the backend app has to support it, and there is no default behavior that preserves the real IP through the ingress.

Just to be sure, is it just a matter of setting
use-proxy-protocol: "true"
in the ingress ConfigMap? Or are more properties needed?

@todeb
Author

todeb commented Mar 13, 2023

It seems that when using the proxy protocol, the client IP is kept. There is no need even to set:
use-proxy-protocol: "true"
e.g.:

curl --haproxy-protocol 10.92.3.42:8080

and the backend config (nginx example):

log_format proxy '$proxy_protocol_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log  /var/log/nginx/access.log  proxy;

server {
  listen 8080 proxy_protocol;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    proxy_set_header Proxy-Protocol $proxy_protocol_addr;
    try_files $uri $uri/ /index.html;
  }
}
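
One caveat (my understanding of nginx's listen ... proxy_protocol parameter, worth verifying): once it is set, the PROXY header becomes mandatory on that port, so a plain client that does not send it gets a broken-header error. E.g.:

# sends the PROXY header; the backend logs the real client IP
curl --haproxy-protocol http://10.92.3.42:8080/

# hypothetical failure: no PROXY header, so nginx rejects the request
curl http://10.92.3.42:8080/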
