
Unable to replicate nginx.conf in a docker compose environment to ingress resource in a kubernetes (GKE) environment #12022

Open
LaraibSaleem opened this issue Sep 26, 2024 · 7 comments
Labels
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
needs-priority
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@LaraibSaleem

I have the following nginx.conf, which works perfectly fine in a Docker Compose setup; that is, /v1/users requests are proxied to https://DNS.com.

worker_processes 1;

user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
  worker_connections 1024; # increase if you have lots of clients
  accept_mutex off; # set to 'on' if nginx worker_processes > 1
}
error_log /dev/stdout info;
http {
      include mime.types;
      default_type application/octet-stream;
      access_log /tmp/nginx.access.log combined;
      access_log /dev/stdout;
      sendfile off;
      server_tokens off;

      upstream app_server {
        server unix:/tmp/gunicorn.sock fail_timeout=0;
      }

     server {
        # if no Host match, close the connection to prevent host spoofing
        listen 8008 default_server;
        return 444;
      }
     server {
        listen 80 default_server;
        listen 443 ssl; # comment this line for local deployment
        ssl_certificate           /etc/ssl/certs/server.pem;
        ssl_certificate_key       /etc/ssl/certs/my-server.key.pem;
        ssl_session_cache  builtin:1000  shared:SSL:10m;
        ssl_protocols  TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 5;
        underscores_in_headers on;
            location /v1/users/ {
                client_max_body_size 4M;
                proxy_connect_timeout 600s;
                proxy_send_timeout 600s;
                proxy_read_timeout 600s;
                send_timeout 600s;
                proxy_redirect off;
                proxy_pass https://DNS.com;
                proxy_ssl_server_name on;
                proxy_ssl_verify_depth 2;
                proxy_request_buffering off;
            }

            location /v1/web/ {
                client_max_body_size 4M;
                proxy_connect_timeout 600s;
                proxy_send_timeout 600s;
                proxy_read_timeout 600s;
                send_timeout 600s;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://web-container:8008;
                proxy_ssl_verify_depth 2;
                proxy_request_buffering off;
            }

            location / {
                client_max_body_size 4M;
                proxy_connect_timeout 600s;
                proxy_send_timeout 600s;
                proxy_read_timeout 600s;
                send_timeout 600s;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://fe-container:3000; 
                proxy_ssl_verify_depth 2;
                proxy_request_buffering off;
            }

     }
}

I'm deploying this app on GKE and using ingress-nginx as the ingress controller.
Below is my Ingress resource with all its annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  namespace: ingress
  annotations:
    kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "4m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # Allow both HTTP and HTTPS
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  tls:
  - hosts: 
    - {{ .Values.host }}
    secretName: ingress-tls-secrets
  ingressClassName: nginx
  rules:
  - host: {{ .Values.host }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-external-service
            port:
              number: 3000
      - path: /v1/web
        pathType: Prefix
        backend:
          service:
            name: web-external-service
            port:
              number: 8008
      - path: /v1/users
        pathType: Prefix
        backend:
          service:
            name: dns-external-service
            port:
              number: 443

Below is the ExternalName service:

kind: Service
apiVersion: v1
metadata:
  name: dns-external-service
  namespace: ingress
spec:
  type: ExternalName
  externalName: dns.com
  ports:
  - port: 443 

The issue I'm currently facing with all these configurations is:
400 Bad Request: The plain HTTP request was sent to HTTPS port (error page served by Cloudflare)

What's expected:
Requests to /v1/users on the ingress host/domain should be forwarded to https://DNS.com, as they are in the Docker Compose setup.

Notes:
1. My ingress controller's external IP and dns.com are registered under the same domain on Cloudflare, so I'm using the same TLS certs in both the nginx.conf and the Ingress TLS secret.
2. The web and FE containers are deployed on the same cluster.
3. dns.com is itself a private IKS (IBM Cloud Kubernetes Service) cluster.
@k8s-ci-robot added the needs-triage label Sep 26, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-kind and needs-priority labels Sep 26, 2024
@longwuyuan
Contributor

You have not provided more details, so does the suggestion below work for you?

  • Remove all these annotations:
    kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "4m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # Allow both HTTP and HTTPS
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"

@LaraibSaleem
Author

When I access the /v1/users endpoint for authentication, I receive a 302 redirect, after which the authentication process completes and I'm returned to our ingress host/domain.
I do not want to redirect everything to https://DNS.com. I have tried the change you suggested, but it does not solve the problem.

@longwuyuan
Contributor

There seem to be similarities between your setup, which sends auth requests outside the cluster, and the external-auth example here: https://kubernetes.github.io/ingress-nginx/examples/customization/external-auth-headers/
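For reference, the basic shape of that external-auth approach is the auth-url / auth-signin annotations on the Ingress; the URLs below are placeholders, not values taken from this issue:

  annotations:
    # Placeholder endpoints: each request is first validated against auth-url,
    # and unauthenticated clients are redirected to auth-signin.
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/signin"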

@Gacko
Member

Gacko commented Sep 28, 2024

Hello @LaraibSaleem,

I assume https://DNS.com is pointing to something hosted behind Cloudflare. The error message you're receiving indicates that you're trying to send a plain, unencrypted HTTP request to the HTTPS port of your target host. Some servers still reply over plain HTTP in that case to tell you that you first need to establish an SSL/TLS connection.

Ingress NGINX by default connects to backends without SSL/TLS. To enable HTTPS on your backend, you need to set the following annotation:

https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol

TIL: There is AUTO_HTTP as a value for this annotation. According to the code this would lead to using the same protocol as you're using on the frontend. As you are defining 443 in your external service, you'll probably need to use HTTPS here.
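For illustration, a minimal sketch of that annotation added to the Ingress from this issue (an assumption to try, not a verified fix):

  annotations:
    # Assumption: make ingress-nginx connect to the backend over TLS instead of plain HTTP.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Already set in the issue; keeps SNI working for the external HTTPS backend.
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"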

Regards
Marco

@LaraibSaleem
Author

LaraibSaleem commented Oct 3, 2024

Hey @Gacko
Thanks for your input.
When I tried the suggested annotation and accessed my ingress host in the browser, I received a 502.
Regarding the logs on the ingress-nginx pod:
when I use HTTPS, I get errors for the frontend upstream server, but with the default, i.e. HTTP, I get errors for dns-external-service.
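Note: the backend-protocol annotation applies to every path of a single Ingress resource, which would explain the behaviour above (HTTPS breaks the plain-HTTP frontend, plain HTTP breaks the external HTTPS service). A minimal sketch of one possible workaround is to move the HTTPS backend into its own Ingress; the Ingress name and host below are placeholders, while the service name is taken from this issue:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-users   # placeholder name for the split-out Ingress
  namespace: ingress
  annotations:
    # Only this Ingress talks HTTPS to its backend; the / and /v1/web paths stay
    # in a separate Ingress without backend-protocol and keep using plain HTTP.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
spec:
  ingressClassName: nginx
  rules:
  - host: example.host   # placeholder for {{ .Values.host }}
    http:
      paths:
      - path: /v1/users
        pathType: Prefix
        backend:
          service:
            name: dns-external-service
            port:
              number: 443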


github-actions bot commented Nov 3, 2024

This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue ASAP. If you have any question or request to prioritize this, please reach out in #ingress-nginx-dev on the Kubernetes Slack.

@github-actions bot added the lifecycle/frozen label Nov 3, 2024