Ingress nginx TCP service endpoint 400 Bad Request #12171
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@creeram If you provide the information asked for in the issue template, readers won't have to ask follow-up questions to get information that is actionable for analysis. Please answer the questions as asked in the template; the feature works fine for everyone else, so your problem is likely caused by an environment factor. /remove-kind bug And in case you have not opened those ports on the LoadBalancer, you obviously need to do that.
@longwuyuan Why would the ports need to be opened manually on a load balancer managed by the k8s cluster? And the issue is not that the ports are unreachable; the endpoint is returning a 400 Bad Request error.
So there is no data to analyze. All that is known is that you sent a telnet packet and received an HTTP 400 response. I say this because I opened a TCP port for postgres for testing on minikube, and it works just fine. So the controller is not broken for sure.
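For readers following along: the minikube test described above maps to the chart's top-level `tcp` values. A minimal sketch, assuming a `postgres` Service in the `default` namespace (both names are hypothetical):

```yaml
# values.yaml fragment (assumed names): expose a postgres Service
# on TCP port 5432 through the ingress-nginx controller.
# Format: "<external-port>": "<namespace>/<service>:<service-port>"
tcp:
  "5432": "default/postgres:5432"
```

The chart renders this into the controller's tcp-services ConfigMap and adds port 5432 to the controller Service.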
I'm not sure if the issue is specific to OVH cloud, as the same configuration works fine on my local kind Kubernetes cluster with a MetalLB load balancer. |
If you post data that can be analyzed, I can comment now, and others will comment as soon as they see it, I guess. The feature is not broken for sure.
@longwuyuan I was able to fix the issue by updating the values.yaml file with the config below. I added :PROXY at the end of the TCP mapping.
Thank you very much for your update. It helps other bitcoin workload users. You could also consider not setting admissionWebhooks to false; disabling it is not encouraged, as it removes validation. Quoting your config here for other readers:
```yaml
tcp:
  "18332": "crypto-nodes/bitcoin:18332:PROXY"
controller:
  kind: DaemonSet
  admissionWebhooks:
    enabled: false
  extraArgs:
    default-ssl-certificate: "default/<certificate-name>"
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "***********"
```
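For context on why this fixes the 400: with proxy protocol enabled, the load balancer prepends a header to every TCP connection before the application payload. The `:PROXY` suffix (and `use-proxy-protocol: "true"`) tells the listener to expect and strip that header; without it, the header is parsed as application data and the connection fails. The OVH annotation above requests the binary v2 format, but the v1 text form is easier to read, so here is a sketch of it (all addresses below are made up):

```python
def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    """Build a PROXY protocol v1 header (the human-readable text variant).

    A proxy-protocol-enabled load balancer prepends a header like this to
    every TCP connection. A backend not configured to expect it will try
    to parse these bytes as application data and reject the connection.
    """
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# Hypothetical client and backend addresses:
header = proxy_v1_header("203.0.113.7", "10.0.0.5", 51234, 18332)
print(header)  # b'PROXY TCP4 203.0.113.7 10.0.0.5 51234 18332\r\n'
```

This is also why the feature must be enabled on both sides: proxy protocol on the LB with a plain TCP listener (or vice versa) produces exactly the kind of immediate error seen in this issue.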
And if the issue does not need support anymore, please close it or confirm that the issue is solved.
What happened:
What you expected to happen:
NGINX Ingress controller version: 1.11.3
Kubernetes version: v1.29.6
Environment:
Deployed as a DaemonSet; node details below
Deployed using the Helm chart
Helm chart values:
Helm created k8s manifests:
Configmaps:
Daemonset:
Service: