
Streaming output #136

Open · itowlson wants to merge 7 commits into main

Conversation

@itowlson (Contributor) commented Nov 22, 2021

Fixes #115.

The current draft de-unifies the ways we call `_routes` and the handler entrypoint, because one is convenient to have as a vector, gathered in a blocking way, while the other we want as a stream to hand off to hyper to hook up to the HTTP response network stream. It would be desirable to at least partially re-unify these.

The key component here is the `StreamWriter` type (which should have a better name). This exists because we need something that is `Write` - so that WASI `WritePipe` can wrap it - but can also be `Stream` so that `hyper::Body` can stream it. This is gnarly because `Write` is strictly synchronous. The current solution runs the Wasm module (and hence the `Write` calls) in a child task, and then uses `async-stream` over a polling loop to present the content as a stream in the main request-handling task.
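To make that shape concrete, here is a minimal sketch of the write side - illustrative names and details, not the exact code in this PR - assuming a shared buffer behind a mutex and a tokio `watch` channel carrying the buffer length as the change notification:

```rust
use std::io::Write;
use std::sync::{Arc, Mutex};

/// Illustrative only: a writer that WASI's WritePipe can wrap, feeding a
/// buffer that the stream side (sketched further down) drains.
pub struct StreamWriter {
    buffer: Arc<Mutex<Vec<u8>>>,
    // Each write publishes the new buffer length as a change notification.
    write_index: tokio::sync::watch::Sender<usize>,
}

impl Write for StreamWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        let len = {
            let mut buffer = self.buffer.lock().unwrap();
            buffer.extend_from_slice(buf);
            buffer.len()
        };
        // An error here just means the stream side has gone away
        // (e.g. the client disconnected), so it is safe to ignore.
        let _ = self.write_index.send(len);
        Ok(buf.len())
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}
```

Because the buffer sits behind a plain `std::sync::Mutex` and the lock is never held across an `.await`, the write side stays strictly synchronous, which is what `WritePipe` needs.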

If the phrase "polling loop" gives you jitters, I'm with you. I have left in some comments showing how I tried to have the async runtime wake the task when new data was written. But what this ended up doing was serving some of the response, then blocking on a change notification until Wasm finished writing. So it looked like you were getting a quick time to first byte, but your time to, say, the hundred-and-first byte was not so good. I would really like to understand what was going on here and rework it to avoid the polling; I'm partly anxious that the polling will cause undue load when there are a lot of requests, but also it is just so clearly not right.

The solution to this, which is probably not ideal but seems like an adequate compromise after banging my head against tokio for several days, is as follows (there is a code sketch after the list):

  • The Write side synchronously appends to a buffer and sends a change notification over a tokio watch channel.
  • The Stream side runs a loop where, on each iteration, it looks to see if the buffer has anything in it:
    • If it does, it waits for a fraction of a microsecond, then yields the buffered data and continues looping.
    • If it does not, and the object has received a signal that all input (writes) are done, it exits the loop (and thereby ends the stream).
    • If it does not, and it hasn't had the done signal, it waits for a fraction of a microsecond and then waits for a change notification.
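Here is the promised sketch of the stream side, again with assumed names (`body_stream`, the `done` flag) rather than the PR's actual code. It uses the `async-stream` crate, drains the shared buffer from the sketch above, and applies the tiny waits described in the list:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;

use futures::stream::Stream;

/// Hypothetical stream side of the shared buffer. `done` is assumed to be
/// set by whoever runs the guest, once all writes have finished.
fn body_stream(
    buffer: Arc<Mutex<Vec<u8>>>,
    mut changed: tokio::sync::watch::Receiver<usize>,
    done: Arc<AtomicBool>,
) -> impl Stream<Item = Result<Vec<u8>, std::io::Error>> {
    async_stream::stream! {
        loop {
            // Drain whatever has accumulated; drop the lock before awaiting
            // so the stream stays Send (hyper requires this).
            let pending = {
                let mut buf = buffer.lock().unwrap();
                std::mem::take(&mut *buf)
            };
            if !pending.is_empty() {
                // The tiny wait described above, then yield the chunk.
                tokio::time::sleep(Duration::from_nanos(100)).await;
                yield Ok(pending);
            } else if done.load(Ordering::SeqCst) {
                // All writes finished and the buffer is drained:
                // end the stream.
                break;
            } else {
                // Tiny wait, then park until the next change notification.
                tokio::time::sleep(Duration::from_nanos(100)).await;
                let _ = changed.changed().await;
            }
        }
    }
}
```

The resulting stream can be handed off with `hyper::Body::wrap_stream(body_stream(buffer, rx, done))`. Note that in a real implementation the done signal should also trigger a change notification, so that the final `changed().await` cannot hang.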

Why all the waiting for fractions of microseconds? Well, in my tests, if I simply went straight to "wait for change notification," the first few waits would work fine, but then something would stall, and it would wait until the end of input. This caused a very noticeable hiccup in the HTTP response. I haven't been able to work out why it stalled - if anyone has any insight, I would love to learn from you! But adding those tiny waits allowed the pipeline to keep flowing.

Incidentally, the tiny waits alone were enough, but I was concerned that they would result in a lot of no-op polls of the buffer. Checking the change notification means that if data is piling up quickly it will be served (the wait-for-change will complete almost synchronously), but we won't hammer the loop if there's nothing to do.

We should absolutely bear this in mind when we load test WAGI with large numbers of requests. It will be important to include large and slow responses in that testing.

There was also some error handling that needed tidying up; I think this is now done.

On the plus side, large or slow responses start arriving a lot faster now. Here are the curl stats from before, for a simple guest that just prints 100,000 lines of the form `<p>Line 123</p>`:

```
$ curl -w "Connect time: %{time_connect} Time to first byte: %{time_starttransfer} Total time: %{time_total} \n" -o /dev/null localhost:3000/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1746k  100 1746k    0     0   686k      0  0:00:02  0:00:02 --:--:--  686k
Connect time: 0.000545 Time to first byte: 2.543517 Total time: 2.543792
                                           ^^^^^^^^
```

And here are the stats with this PR applied (a debug build on my local machine, so highly unscientific):

```
$ curl -w "Connect time: %{time_connect} Time to first byte: %{time_starttransfer} Total time: %{time_total} \n" -o /dev/null localhost:3000/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1746k    0 1746k    0     0   636k      0 --:--:--  0:00:02 --:--:--  636k
Connect time: 0.000615 Time to first byte: 0.333633 Total time: 2.743841
                                           ^^^^^^^^
```
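For reference, a guest along the lines of the one measured above (hypothetical, not the actual test module) is just a CGI-style program writing to stdout: headers, a blank line, then the body:

```rust
// Hypothetical WAGI guest, compiled to wasm32-wasi. WAGI follows the CGI
// convention: headers first, then a blank line, then the response body.
fn main() {
    println!("Content-Type: text/html");
    println!();
    for i in 1..=100_000 {
        println!("<p>Line {}</p>", i);
    }
}
```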

@itowlson itowlson marked this pull request as ready for review November 23, 2021 02:51
Successfully merging this pull request may close these issues: Stream chunked data out of guest modules instead of buffering