Upstream Prematurely Closed Connection While Reading Response Header from Upstream - 502 Gateway error #12285

Closed
anjanaprasd opened this issue Nov 3, 2024 · 2 comments
Labels
- needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
- needs-priority
- needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@anjanaprasd

Hi Team,

I'm running an OpenSearch cluster and trying to expose it through an NGINX Ingress resource. When I attempt to access the OpenSearch cluster via the NGINX Ingress, I get a 502 Bad Gateway error. However, if I access the service through port-forwarding, it works without any issues—although there is a bit of latency, the login page loads and functions correctly.
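
For reference, the port-forward I'm using is roughly the following (a sketch only; the namespace is a placeholder, the service name and port match the Ingress manifest below):

```bash
# Rough sketch of the working port-forward path; <namespace> is a placeholder.
kubectl port-forward svc/new-dashboards 5601:5601 -n <namespace>
# then browse to https://localhost:5601 (the backend serves HTTPS on 5601)
```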

I've tried various solutions, but none have worked so far. My NGINX Ingress setup seems fine, as I deployed a simple web server and was able to access it without any problems.

I also checked the Ingress logs and found an error message, but I'm not sure how to resolve this. Any help would be greatly appreciated!

Ingress Logs
```
2024/11/03 06:09:29 [error] 184#184: *94 upstream prematurely closed connection while reading response header from upstream, client: 192.168.58.11, server: opensearch-dashboard.hi.local, request: "GET / HTTP/1.1", upstream: "http://10.244.3.84:5601/", host: "opensearch-dashboard.hi.local"
192.168.58.11 - - [03/Nov/2024:06:09:29 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
2024/11/03 06:09:30 [error] 184#184: *94 upstream prematurely closed connection while reading response header from upstream, client: 192.168.58.11, server: opensearch-dashboard.hi.local, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.244.3.84:5601/favicon.ico", host: "opensearch-dashboard.hi.local", referrer: "https://demo-dashboard.example.com"
192.168.58.11 - - [03/Nov/2024:06:09:30 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "https://demo-dashboard.example.com" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" "-"
```

Then I logged into one of my Ingress controller pods and tried to reach the endpoint directly; this is the same upstream address (10.244.3.84:5601) that the Ingress is sending requests to.

```
nginx@nginx-ingress-6mvw9:/$ curl -k https://10.244.3.84:5601
nginx@nginx-ingress-6mvw9:/$ curl -kI https://10.244.3.84:5601
HTTP/1.1 302 Found
location: /app/login?
osd-name: new-dashboards
cache-control: private, no-cache, no-store, must-revalidate
set-cookie: security_authentication=; Max-Age=0; Expires=Thu, 01 Jan 1970 00:00:00 GMT; HttpOnly; Path=/
content-length: 0
Date: Sun, 03 Nov 2024 06:25:45 GMT
Connection: keep-alive
Keep-Alive: timeout=120

nginx@nginx-ingress-6mvw9:/$ curl -kI https://10.244.3.84:5601/app/login
HTTP/1.1 200 OK
set-cookie: security_authentication=; Max-Age=0; Expires=Thu, 01 Jan 1970 00:00:00 GMT; HttpOnly; Path=/
set-cookie: security_authentication=; Max-Age=0; Expires=Thu, 01 Jan 1970 00:00:00 GMT; HttpOnly; Path=/
content-security-policy: script-src 'unsafe-eval' 'self'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
osd-name: new-dashboards
content-type: text/html; charset=utf-8
cache-control: private, no-cache, no-store, must-revalidate
content-length: 117378
vary: accept-encoding
Date: Sun, 03 Nov 2024 06:25:48 GMT
Connection: keep-alive
Keep-Alive: timeout=120

nginx@nginx-ingress-6mvw9:/$
```
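
A way to cross-check what the controller actually renders for this host (in particular whether it proxies the upstream over HTTP or HTTPS) is to dump the generated nginx configuration from the controller pod. This is only a sketch; the pod and namespace names are placeholders:

```bash
# Dump the rendered nginx config and show the server block for the dashboard host.
# <controller-pod> and the ingress-nginx namespace are placeholders for my setup.
kubectl exec -n ingress-nginx <controller-pod> -- nginx -T 2>/dev/null \
  | grep -B 2 -A 20 'opensearch-dashboard'
```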


My Ingress configuration file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opensearch
  annotations:
    # Timeout and Buffer Settings
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-buffers: "64 64k"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/large-client-header-buffers: "8 256k"

    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /app/login
    nginx.ingress.kubernetes.io/service-upstream: "true"

    # Session Persistence (Sticky Sessions)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"

    nginx.ingress.kubernetes.io/keep-alive: "300"
    nginx.ingress.kubernetes.io/proxy-connection-header: "keep-alive"

    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "5s"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - opensearch-dashboard.example.com
      secretName: opensearch-tls-secret
  rules:
    - host: opensearch-dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: new-dashboards
                port:
                  number: 5601
```
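
For completeness, a sketch of how this is applied and sanity-checked (the manifest filename is a placeholder; the endpoint check is just to confirm the Service resolves to the pod IP seen in the error log):

```bash
kubectl apply -f opensearch-ingress.yaml   # filename is a placeholder
kubectl describe ingress opensearch        # confirm annotations and backend were picked up
kubectl get endpoints new-dashboards       # should list 10.244.3.84:5601
```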



I appreciate any feedback or suggestions you might have. Thank you!
k8s-ci-robot added the `needs-triage` label on Nov 3, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot added the `needs-kind` and `needs-priority` labels on Nov 3, 2024
@anjanaprasd
Author

Duplicate ticket: #12286
