
idleTimeout does not work with Jetty backend #3

Open
mossprescott opened this issue Jul 9, 2015 · 8 comments
Labels
bug, module:jetty-server

Comments

@mossprescott

When a service yields a task that takes a long time to produce a response (see /slow in the code below), the Blaze backend will drop the socket after idleTimeout has elapsed. The Jetty backend:

  1. Keeps the connection open for 30 seconds, ignoring idleTimeout.
  2. Then returns a 200 OK with no body.
import scala.concurrent.duration._

import scalaz.concurrent._

import org.http4s.dsl._
import org.http4s.server._

object JettyTest extends App {
  val service = HttpService {
    case GET -> Root / "slow" => for {
      _    <- Task.delay { Thread.sleep(45*1000) }
      resp <- InternalServerError("finally!")
    } yield resp

    case GET -> Root / "fail" => Task.fail(new RuntimeException)
  }

  val js = org.http4s.server.jetty.JettyBuilder
    .withIdleTimeout(1.second)
    .bindHttp(8080, "0.0.0.0")
    .mountService(service)

  val bs = org.http4s.server.blaze.BlazeBuilder
    .withIdleTimeout(1.second)
    .bindHttp(8180, "0.0.0.0")
    .mountService(service)

  js.run
  bs.run
}

Using http4s 0.8.2.

@bryce-anderson added the bug label Jul 9, 2015
mossprescott referenced this issue in mossprescott/quasar Jul 9, 2015
Fixes #833.

The timeout is now a parameter when running the server, which is hard-coded to infinity for real life, and set to 1 second in tests.

This required switching to the Blaze backend (probably a good thing anyway), because of a bug in the Jetty backend that we were using (see http4s/http4s#336). That required a workaround for the bug in the Blaze backend that was the reason we went with the Jetty backend in the first place (see http4s/http4s#337).

Disabled our `FailSafe` middleware, which was only necessary due to another bug in the Jetty backend (see http4s/http4s#338).

Moved the http4s dependencies to the web sub-project, and bumped the version to 0.8.3 for good measure.
@rossabaker self-assigned this Jul 10, 2015
@rossabaker
Member

I get this same behavior mounting a raw HttpServlet that sleeps in service, so we can exonerate Http4sServlet.
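
For reference, a minimal sketch (not taken from this issue) of the kind of raw servlet that reproduces this: it just blocks in service, so whatever timeout handling is observed comes from Jetty itself, not from Http4sServlet.

import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

// A servlet that holds the request thread well past any configured idle timeout.
class SleepyServlet extends HttpServlet {
  override def service(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
    Thread.sleep(45 * 1000)
    resp.setStatus(200)
    resp.getWriter.print("finally!")
  }
}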

rossabaker referenced this issue in http4s/http4s Jul 10, 2015
This used to work, but was lost along the way.  We need to set it
on the async context, and then actually run the timeout renderer.

Relates to #336.
@rossabaker
Member

This seems more like a use case for asyncTimeout, which controls how long the Task has to complete. That setting had a regression, addressed by http4s/http4s#341.

I can't make the idle timeout work, even removing http4s from the picture. That setting doesn't seem to do what I'm reading it to do.
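
If the goal is to bound how long the Task has to complete, the configuration would be something like the sketch below, assuming the Jetty builder exposes a withAsyncTimeout setter as implied here (the exact setter name may differ between versions):

import scala.concurrent.duration._

// Sketch only: bound the time the Task has to complete, rather than socket idleness.
// withAsyncTimeout is an assumption based on the discussion above.
val js = org.http4s.server.jetty.JettyBuilder
  .withAsyncTimeout(10.seconds)
  .bindHttp(8080, "0.0.0.0")
  .mountService(service)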

@mossprescott
Author

Could be. I observed that idleTimeout was being passed through to the Jetty connector instance, but it didn't seem to affect anything. On the Blaze side, setting idleTimeout does cause the connection to be dropped if the Task hangs.

Part of the problem here is the lack of any documentation of what these parameters are meant to do; the other part is the inconsistent behavior across backends.

To be safe, should I be using asyncTimeout on Blaze?

@bryce-anderson
Member

@mossprescott, I don't think the blaze server supports the notion of an asyncTimeout: I believe that is a servlet concern when initiating an async context. As far as I can tell, idleTimeout is the only way to time out on blaze.

@rossabaker
Member

The idea is that idleTimeout is for inactivity on the socket connection, and asyncTimeout is for a failure to complete, even if activity is ongoing. asyncTimeout's name is definitely of servlet origin, but I can't offhand see why Blaze couldn't support something similar. This would probably look a lot like the timeout middleware, which races a timeout task against the response and returns whichever completes first. (Note: I have a beef with the default status code: http4s/http4s#345.)
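
A rough sketch of that race, using scalaz primitives available in the 0.8 era; the names raceAgainstTimeout and timeoutResponse are illustrative, not an existing http4s API:

import scala.concurrent.duration._

import scalaz.{-\/, \/-, Nondeterminism}
import scalaz.concurrent.Task

import org.http4s.Response

// Race the service's response Task against a timer; whichever completes first wins.
// Note that the losing Task is not interrupted, it just has nowhere to deliver its result.
def raceAgainstTimeout(resp: Task[Response],
                       timeout: FiniteDuration,
                       timeoutResponse: Task[Response]): Task[Response] = {
  val timer = Task.schedule((), timeout).flatMap(_ => timeoutResponse)
  Nondeterminism[Task].choose(resp, timer).map {
    case -\/((winner, _)) => winner  // the service completed first
    case \/-((_, winner)) => winner  // the timer fired first
  }
}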

This is going to be a constant struggle with abstracting over the backends. The servlet specification does not have much to say about connection lifecycles, which naturally gives rise to some differences between Jetty and Tomcat. Blaze, being under our control, can mimic a servlet container's capabilities where desired. But then which container's? The idea is for the server builders to identify similar behaviors between backends and give them the same name. We'll fill the holes in this abstraction as best we can, but I think it's doomed to leak at least a little.

@bryce-anderson
Member

We do need a 'took too long to initiate the response' timeout for blaze that returns a default response of some form, preferably with a sensible default that the user can override.

I don't know if that is what the asyncTimeout is technically about; I was under the impression it was for situations where you start your async context and forget to use it (or hit an error and forget to close it), so that the servlet container doesn't let it float in the ether indefinitely.

@rossabaker
Member

The asyncTimeout is about taking too long to finish the response, not initiate it. This is good if you have a hung process, and sucks if you have a process that's taking its time but making satisfactory progress.

@mossprescott
Author

Actually, our requirement was just to effectively disable anything like asyncTimeout (see slamdata/slamengine#815). So whatever parameters are available, we need a way to say "let this response go for as long as x", where x is at least several minutes. We would probably eventually like that to apply only to particular requests that we know can run for a long time, but that seems to be covered by http4s/http4s#282.
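
Concretely, and assuming the withAsyncTimeout/withIdleTimeout setters discussed above, that requirement amounts to a sketch like this, pushing both timeouts far beyond any expected request:

import scala.concurrent.duration._

// Sketch only: make both timeouts far longer than any real request,
// effectively disabling them for long-running responses.
val longRunning = org.http4s.server.jetty.JettyBuilder
  .withAsyncTimeout(1.day)    // assumed setter, per the discussion above
  .withIdleTimeout(1.day)
  .bindHttp(8080, "0.0.0.0")
  .mountService(service)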

From the http4s user's point of view, it would be helpful if there were a consistent and documented set of tuning parameters, even if they aren't all supported by every backend.

Separately, the Jetty backend's behavior of returning 200 OK after timing out is confusing and wrong (see also http4s/http4s#337). Hopefully that's something that can be addressed in http4s.

@rossabaker removed their assignment Apr 5, 2021
rossabaker referenced this issue in http4s/http4s-servlet Apr 3, 2022
This used to work, but was lost along the way.  We need to set it
on the async context, and then actually run the timeout renderer.

Relates to http4s/http4s#336.
rossabaker referenced this issue in http4s/http4s-servlet Apr 4, 2022
This used to work, but was lost along the way.  We need to set it
on the async context, and then actually run the timeout renderer.

Relates to http4s/http4s#336.
@rossabaker transferred this issue from http4s/http4s Apr 14, 2022