
HTTP 1.0, assume close after body < HTTP/1.0 503 Service Unavailable #2526

Open
aditivw opened this issue Sep 17, 2024 · 0 comments


aditivw commented Sep 17, 2024

System Info

We have deployed 'ghcr.io/huggingface/text-generation-inference:2.2.0' on Kubernetes, and the application logs say that the server has successfully connected:

2024-09-17T09:21:04.948326Z INFO text_generation_router: router/src/main.rs:357: Using config Some(Llama)
2024-09-17T09:21:04.948342Z WARN text_generation_router: router/src/main.rs:384: Invalid hostname, defaulting to 0.0.0.0
2024-09-17T09:21:05.378802Z INFO text_generation_router::server: router/src/server.rs:1572: Warming up model
2024-09-17T09:21:26.517495Z INFO text_generation_launcher: Cuda Graphs are enabled for sizes [32, 16, 8, 4, 2, 1]
2024-09-17T09:21:28.936537Z INFO text_generation_router::server: router/src/server.rs:1599: Using scheduler V3
2024-09-17T09:21:28.936563Z INFO text_generation_router::server: router/src/server.rs:1651: Setting max batch total tokens to 37568
2024-09-17T09:21:28.997788Z INFO text_generation_router::server: router/src/server.rs:1889: Connected

But when we hit the route/endpoint, we are unable to consume the LLM.
We have checked the route and service configurations thoroughly and they appear to be correct.

When we hit the route, the error output is:

  • IPv6: (none)
  • IPv4: 11.241.137.145
  • Trying 11.241.137.145:443...
  • Connected to *********************************.apps.ocp4.dlocp.prd.eu.bp.aws.cloud.vwgroup.com (11.241.137.145) port 443
  • schannel: disabled automatic use of client certificate
  • ALPN: curl offers http/1.1
  • ALPN: server did not agree on a protocol. Uses default.
  • using HTTP/1.x

GET / HTTP/1.1
Host: *******************************.apps.ocp4.dlocp.prd.eu.bp.aws.cloud.vwgroup.com
User-Agent: curl/8.8.0
Accept: */*

  • Request completely sent off

  • schannel: remote party requests renegotiation

  • schannel: renegotiating SSL/TLS connection

  • schannel: SSL/TLS connection renegotiated

  • schannel: remote party requests renegotiation

  • schannel: renegotiating SSL/TLS connection

  • schannel: SSL/TLS connection renegotiated

  • schannel: failed to decrypt data, need more data

  • schannel: server close notification received (close_notify)

  • HTTP 1.0, assume close after body
    < HTTP/1.0 503 Service Unavailable

  • schannel: server indicated shutdown in a prior call

  • Closing connection

  • schannel: shutting down SSL/TLS connection with ***************************.apps.ocp4.dlocp.prd.eu.bp.aws.cloud.vwgroup.com port 443
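Since the router logged "Connected" but the route returns 503, it may help to probe the TGI pod directly (e.g. after `kubectl port-forward` to it) and compare against the route: TGI serves a `GET /health` endpoint that returns 200 once the model is loaded. The sketch below only builds the request; the `localhost:8080` address is an assumption and should be replaced with whatever the port-forward or in-cluster service exposes.

```python
import urllib.request

# Assumed local address, e.g. after `kubectl port-forward <tgi-pod> 8080:80`;
# adjust the port to whatever the TGI container actually listens on.
BASE = "http://localhost:8080"

def health_request(base: str = BASE) -> urllib.request.Request:
    """Build a GET request for TGI's /health endpoint.

    A 200 from the pod here, combined with a 503 through the route,
    would point at the OpenShift route/service wiring rather than
    at the TGI server itself.
    """
    return urllib.request.Request(base + "/health", method="GET")

req = health_request()
```

Sending it with `urllib.request.urlopen(req)` (or plain `curl http://localhost:8080/health`) and comparing the status code against the route's response should narrow down where the 503 originates.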

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

  1. We followed the steps at https://github.com/rh-aiservices-bu/llm-on-openshift to deploy TGI
  2. curl http://(your-host)
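For reference, once the route is reachable, a generation call against TGI's `/generate` endpoint takes a JSON body with `inputs` and `parameters`. This is a minimal sketch of the request shape; the host is a placeholder to be replaced with the actual route, and the prompt/token values are arbitrary examples.

```python
import json
import urllib.request

# Placeholder host; substitute the actual OpenShift route URL.
TGI_URL = "http://localhost:8080"

def generate_request(prompt: str, max_new_tokens: int = 20) -> urllib.request.Request:
    """Build a POST request for TGI's /generate endpoint."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return urllib.request.Request(
        TGI_URL + "/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = generate_request("What is deep learning?")
```

The equivalent curl would be a POST with `-H 'Content-Type: application/json'` and that JSON body, rather than a bare GET against `/`.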

Expected behavior

Establish a connection to the LLM endpoint
