fetch() performance parity #1203
Comments
Do you have a reproducible benchmark we can use?
I took a look at the Hacker News comment, and it did not include benchmarks either. I'm inclined to close this unless somebody would like to prepare a cross-runtime benchmark suite.
Fair. I'll take a stab at extracting the relevant parts from my server to demonstrate this. I have to note that the slowness I observed was in a production env (running in a VM in a container behind a network load balancer) that also sees a tonne of other IO.
I wrote up a tiny DoH load-tester in Python and Go (don't ask), which I intend to publish as a GitHub Action by tomorrow. In the meantime, here's a run from my laptop:
After fighting a bit with GitHub Actions, a poor man's profiler is finally up. From the current run, here's Node:
The profiler forwards as many DoH requests as it can. Deno's serveDoh doesn't slow down at all, and blazes through 6300+ requests. Node's serverHTTPS (+ undici), otoh, manages 1200+ requests; I must note that the final batch of 100 queries, served after the ~1100th request, took 20s+ to complete. Just a note: I haven't used Node's built-in perf module to profile, since that is un-importable on Deno (from what I can tell)... hand-rolled a minimal and inefficient perf-counter of my own, instead.
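For context, a hand-rolled counter along those lines might look like the sketch below (illustrative only, not the repo's actual profiler; `timed`, `report`, and `dohUrl` are made-up names). It runs on both Node and Deno since it needs no platform-specific imports:

```js
// Record per-request latencies; report simple percentiles at the end.
const samples = []

async function timed (fn) {
  const start = Date.now()
  try {
    return await fn()
  } finally {
    samples.push(Date.now() - start)
  }
}

function report () {
  const sorted = [...samples].sort((a, b) => a - b)
  const pct = (p) => sorted[Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))]
  console.log(`n=${sorted.length} p50=${pct(50)}ms p90=${pct(90)}ms p99=${pct(99)}ms`)
}

// usage: await timed(() => fetch(dohUrl)) inside the request loop, then report()
```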
I might be a bit lost in your codebase, but I could not find undici being used anywhere. I would note that there are plenty of things happening in that module. How have you identified the problem as being in undici's fetch implementation?
If by any chance you are testing the http/2 implementation, consider that https://github.com/serverless-dns/serverless-dns/blob/b36a9cdb5786b78a7dd8e22a2f0a767e9e340ab1/src/plugins/dns-operation/dnsResolver.js#L398 is highly inefficient, as you will be incurring significant overhead creating those http/2 connections. You have to use a connection pool.
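To illustrate the pooling point, here's a hedged sketch (not the project's code; the endpoint is just an example) contrasting a fresh HTTP/2 connection per request with one long-lived, multiplexed session:

```js
const http2 = require('node:http2')

// Inefficient: every call pays a fresh TCP + TLS + HTTP/2 handshake.
function requestPerConnection (path) {
  return new Promise((resolve, reject) => {
    const session = http2.connect('https://cloudflare-dns.com')
    const req = session.request({ ':path': path })
    const chunks = []
    req.on('data', (c) => chunks.push(c))
    req.on('end', () => {
      session.close()
      resolve(Buffer.concat(chunks))
    })
    req.on('error', reject)
    req.end()
  })
}

// Better: one long-lived session multiplexes many concurrent streams.
const session = http2.connect('https://cloudflare-dns.com')
function requestPooled (path) {
  return new Promise((resolve, reject) => {
    const req = session.request({ ':path': path })
    const chunks = []
    req.on('data', (c) => chunks.push(c))
    req.on('end', () => resolve(Buffer.concat(chunks)))
    req.on('error', reject)
    req.end()
  })
}
```

A real pool would also need to handle session errors, GOAWAY frames, and reconnects.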
Dang, we moved to fetch. Here are results from Node + undici:
What's measured is just this method. We then rolled our own http2 client instead.
Of course, a standalone benchmark would be great to have, but I wanted to demonstrate the issue I spotted with the kind of workloads we specifically run (lots and lots of DoH requests), and that bypassing fetch avoids it.
I don't think anyone has performed an optimization pass on either fetch or Node's WHATWG Streams implementation, so this is a good point and there is likely a lot of margin there. Are there some instructions on how to run the benchmarks? Being able to get some good diagnostics would definitely help.
Here are a few results from our own benchmarks:
There is likely a lot of room for improvement.
I will do so over the coming days and report back.
Apart from running the GitHub Action, here's how to reproduce locally:

```sh
# clone serverless-dns
git clone https://github.com/serverless-dns/serverless-dns.git
cd serverless-dns
# download deps
npm i
# install q, a golang doh client
# https://github.com/natesales/q
echo "deb [trusted=yes] https://repo.natesales.net/apt /" | sudo tee /etc/apt/sources.list.d/natesales.list > /dev/null
sudo apt-get update
sudo apt-get install q
# export QDOH, where the 'run' script goes looking for `q`
export QDOH=q
# install node v15+ / deno v1.17+, if missing
...
# doh query, <random-str>.dnsleaktest.com, is sent to cloudflare-dns.com
# run doh, on node + undici
timeout 60s ./run node p1
# doh query, <random-str>.dnsleaktest.com, is sent to cloudflare-dns.com
# run doh, on deno
timeout 60s ./run deno p1
# dns query, <random-str>.dnsleaktest.com, is sent to 1.1.1.1
# run udp/tcp dns, on node
timeout 60s ./run node p3
```
Thanks. Sorry if this is a noob question, but: how do I generate such logs myself for my tests?
Check out #1214 for the fetch benchmarks. You can run our benchmarks from the undici repo.
Hello folks! Recently, I spent some time working on undici fetch performance. TL;DR: after some analysis, I came up with a few PRs improving fetch performance.
However, the major bottleneck is elsewhere. Firstly, the benchmark was somewhat unfair to undici.fetch. After several analyses, I have found that the WHATWG streams machinery dominates the cost of each response.

**Avoiding WebStreams in response**

```diff
diff --git a/lib/fetch/index.js b/lib/fetch/index.js
index 9047976..778ad82 100644
--- a/lib/fetch/index.js
+++ b/lib/fetch/index.js
@@ -50,6 +50,7 @@ const {
const { kHeadersList } = require('../core/symbols')
const EE = require('events')
const { Readable, pipeline } = require('stream')
+const Readable2 = require('../api/readable')
const { isErrored, isReadable } = require('../core/util')
const { dataURLProcessor } = require('./dataURL')
const { kIsMockActive } = require('../mock/mock-symbols')
@@ -964,7 +965,7 @@ async function fetchFinale (fetchParams, response) {
})
// 4. Set response’s body to the result of piping response’s body through transformStream.
- response.body = { stream: response.body.stream.pipeThrough(transformStream) }
+ // response.body = { stream: response.body.stream.pipeThrough(transformStream) }
}
// 6. If fetchParams’s process response consume body is non-null, then:
@@ -1731,9 +1732,8 @@ async function httpNetworkFetch (
})()
}
+ const { body, status, statusText, headersList } = await dispatch({ body: requestBody })
try {
- const { body, status, statusText, headersList } = await dispatch({ body: requestBody })
-
const iterator = body[Symbol.asyncIterator]()
fetchParams.controller.next = () => iterator.next()
@@ -1797,7 +1797,7 @@ async function httpNetworkFetch (
// 17. Run these steps, but abort when the ongoing fetch is terminated:
// 1. Set response’s body to a new body whose stream is stream.
- response.body = { stream }
+ response.body = { stream: body }
// 2. If response is not a network error and request’s cache mode is
// not "no-store", then update response in httpCache for request.
@@ -1957,7 +1957,7 @@ async function httpNetworkFetch (
headers.append(key, val)
}
- this.body = new Readable({ read: resume })
+ this.body = new Readable2(resume, this.abort, headers.get('content-type'))
  const decoders = []
```

If you apply the above git diff and profile the application, the WebStreams frames drop out of the hot path, and these changes improve throughput significantly. So, avoiding the WebStreams overhead in the response looked promising.

**Avoiding AbortSignal**

```diff
diff --git a/lib/fetch/index.js b/lib/fetch/index.js
index 1fbf29b..322e5ae 100644
--- a/lib/fetch/index.js
+++ b/lib/fetch/index.js
@@ -50,6 +50,7 @@ const {
const { kHeadersList } = require('../core/symbols')
const EE = require('events')
const { Readable, pipeline } = require('stream')
+const Readable2 = require('../api/readable')
const { isErrored, isReadable } = require('../core/util')
const { dataURLProcessor } = require('./dataURL')
const { kIsMockActive } = require('../mock/mock-symbols')
@@ -957,14 +958,14 @@ async function fetchFinale (fetchParams, response) {
// 3. Set up transformStream with transformAlgorithm set to identityTransformAlgorithm
// and flushAlgorithm set to processResponseEndOfBody.
- const transformStream = new TransformStream({
- start () {},
- transform: identityTransformAlgorithm,
- flush: processResponseEndOfBody
- })
+ // const transformStream = new TransformStream({
+ // start () {},
+ // transform: identityTransformAlgorithm,
+ // flush: processResponseEndOfBody
+ // })
// 4. Set response’s body to the result of piping response’s body through transformStream.
- response.body = { stream: response.body.stream.pipeThrough(transformStream) }
+ // response.body = { stream: response.body.stream.pipeThrough(transformStream) }
}
// 6. If fetchParams’s process response consume body is non-null, then:
@@ -1731,9 +1732,8 @@ async function httpNetworkFetch (
})()
}
+ const { body, status, statusText, headersList } = await dispatch({ body: requestBody })
try {
- const { body, status, statusText, headersList } = await dispatch({ body: requestBody })
-
const iterator = body[Symbol.asyncIterator]()
fetchParams.controller.next = () => iterator.next()
@@ -1775,29 +1775,29 @@ async function httpNetworkFetch (
// 16. Set up stream with pullAlgorithm set to pullAlgorithm,
// cancelAlgorithm set to cancelAlgorithm, highWaterMark set to
// highWaterMark, and sizeAlgorithm set to sizeAlgorithm.
- if (!ReadableStream) {
- ReadableStream = require('stream/web').ReadableStream
- }
-
- const stream = new ReadableStream(
- {
- async start (controller) {
- fetchParams.controller.controller = controller
- },
- async pull (controller) {
- await pullAlgorithm(controller)
- },
- async cancel (reason) {
- await cancelAlgorithm(reason)
- }
- },
- { highWaterMark: 0 }
- )
+ // if (!ReadableStream) {
+ // ReadableStream = require('stream/web').ReadableStream
+ // }
+
+ // const stream = new ReadableStream(
+ // {
+ // async start (controller) {
+ // fetchParams.controller.controller = controller
+ // },
+ // async pull (controller) {
+ // await pullAlgorithm(controller)
+ // },
+ // async cancel (reason) {
+ // await cancelAlgorithm(reason)
+ // }
+ // },
+ // { highWaterMark: 0 }
+ // )
// 17. Run these steps, but abort when the ongoing fetch is terminated:
// 1. Set response’s body to a new body whose stream is stream.
- response.body = { stream }
+ response.body = { stream: body }
// 2. If response is not a network error and request’s cache mode is
// not "no-store", then update response in httpCache for request.
@@ -1870,10 +1870,10 @@ async function httpNetworkFetch (
fetchParams.controller.controller.enqueue(new Uint8Array(bytes))
// 8. If stream is errored, then terminate the ongoing fetch.
- if (isErrored(stream)) {
- fetchParams.controller.terminate()
- return
- }
+ // if (isErrored(stream)) {
+ // fetchParams.controller.terminate()
+ // return
+ // }
// 9. If stream doesn’t need more data ask the user agent to suspend
// the ongoing fetch.
@@ -1891,16 +1891,16 @@ async function httpNetworkFetch (
response.aborted = true
// 2. If stream is readable, error stream with an "AbortError" DOMException.
- if (isReadable(stream)) {
- fetchParams.controller.controller.error(new AbortError())
- }
+ // if (isReadable(stream)) {
+ // fetchParams.controller.controller.error(new AbortError())
+ // }
} else {
// 3. Otherwise, if stream is readable, error stream with a TypeError.
- if (isReadable(stream)) {
- fetchParams.controller.controller.error(new TypeError('terminated', {
- cause: reason instanceof Error ? reason : undefined
- }))
- }
+ // if (isReadable(stream)) {
+ // fetchParams.controller.controller.error(new TypeError('terminated', {
+ // cause: reason instanceof Error ? reason : undefined
+ // }))
+ // }
}
// 4. If connection uses HTTP/2, then transmit an RST_STREAM frame.
@@ -1957,7 +1957,7 @@ async function httpNetworkFetch (
headers.append(key, val)
}
- this.body = new Readable({ read: resume })
+ this.body = new Readable2(resume, this.abort, headers.get('content-type'))
  const decoders = []
```

Then, I came up with avoiding AbortController as well.

**Avoiding AbortController**

```diff
diff --git a/lib/fetch/request.js b/lib/fetch/request.js
index 0f10e67..ae9c37c 100644
--- a/lib/fetch/request.js
+++ b/lib/fetch/request.js
@@ -29,9 +29,9 @@ let TransformStream
const kInit = Symbol('init')
-const requestFinalizer = new FinalizationRegistry(({ signal, abort }) => {
- signal.removeEventListener('abort', abort)
-})
+// const requestFinalizer = new FinalizationRegistry(({ signal, abort }) => {
+// signal.removeEventListener('abort', abort)
+// })
// https://fetch.spec.whatwg.org/#request-class
class Request {
@@ -355,7 +355,7 @@ class Request {
// 28. Set this’s signal to a new AbortSignal object with this’s relevant
// Realm.
- const ac = new AbortController()
+ const ac = { signal: { addEventListener: () => {} } }
this[kSignal] = ac.signal
this[kSignal][kRealm] = this[kRealm]
@@ -376,7 +376,7 @@ class Request {
} else {
const abort = () => ac.abort()
signal.addEventListener('abort', abort, { once: true })
- requestFinalizer.register(this, { signal, abort })
+ // requestFinalizer.register(this, { signal, abort })
}
}
@@ -741,7 +741,8 @@ class Request {
clonedRequestObject[kHeaders][kRealm] = this[kHeaders][kRealm]
// 4. Make clonedRequestObject’s signal follow this’s signal.
- const ac = new AbortController()
+ const ac = { signal: { addEventListener: () => {}, abort: () => {} } }
+ // const ac = new AbortController()
if (this.signal.aborted) {
ac.abort()
      } else {
```

**Additional Notes**

In the first iteration, the numbers looked off. There is probably a bad configuration out there, and I have found that undici.fetch produces a lot of TCP retransmissions under load.
TCP retransmissions using undici.fetch, traced with bcc's tcpretrans:

```
root@rafaelgss-desktop:/usr/share/bcc/tools# ./tcpretrans -c
Tracing retransmits ... Hit Ctrl-C to end
^C
LADDR:LPORT RADDR:RPORT RETRANSMITS
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52390 99
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52400 99
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52398 99
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52402 99
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52414 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52424 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52412 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52392 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52420 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52396 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52422 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52404 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52410 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52428 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52418 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52408 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52426 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52416 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52406 100
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52394 101
```

For comparison, here's how the retransmissions look using undici.request:

```
root@rafaelgss-desktop:/usr/share/bcc/tools# ./tcpretrans -c
Tracing retransmits ... Hit Ctrl-C to end
^C
LADDR:LPORT RADDR:RPORT RETRANSMITS
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52526 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52532 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52506 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52514 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52524 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52512 1
[::ffff:127.0.0.1]#3001 <-> [::ffff:127.0.0.1]#52518 1
root@rafaelgss-desktop:/usr/share/bcc/tools# ./tcpretrans -c
Tracing retransmits ... Hit Ctrl-C to end
^C
EMPTY
LADDR:LPORT RADDR:RPORT RETRANSMITS
```

Unfortunately, I haven't found out why it's happening. I tried, I swear :P.
@benjamingr why is creating abort signals so expensive?
I would guess it is because createAbortSignal + makeTransferable do a lot of prototype-mutation stuff and a call into C++, which is basically the bane of JS engine performance. I think we could move the logic of createAbortSignal into the AbortSignal constructor, gated behind a private symbol, something like this:

```js
const kDoThing = Symbol('kDoThing');
// kAborted/kReason stand in for Node's internal state symbols,
// and ERR_ILLEGAL_CONSTRUCTOR for its internal error class.
const kAborted = Symbol('kAborted');
const kReason = Symbol('kReason');
function createAbortSignal(aborted, reason) {
return new AbortSignal(aborted, reason, kDoThing);
}
class AbortSignal extends EventTarget {
constructor(aborted, reason, thing) {
if (thing !== kDoThing) throw new ERR_ILLEGAL_CONSTRUCTOR();
super();
this[kAborted] = aborted;
this[kReason] = reason;
}
}
```

The makeTransferable part is harder cuz C++, but I feel like we could at least make it more static than it is now.
@devsnek I didn't mention the analysis I did on Node.js WebStreams (planning to open an issue in Node.js any time soon), but indeed, that path showed up as expensive there too.
The way it works is this: the structured clone algorithm only works by default with ordinary JavaScript objects with enumerable properties. Things like JavaScript classes are not supported. In order to make platform objects transferable, they have to be recognized as Host objects. When a Host object is passed into the structured clone algorithm implementation, v8 will call out to the serialization/deserialization delegate implementation that Node.js provides. Node.js' delegate implementation will check to see what kind of Host object it is. If the Host object is just a native C++ class, the delegate can handle it directly. But we have a problem here: classes like AbortSignal and the WHATWG streams are implemented in pure JavaScript with no C++ backing, so v8 cannot recognize them as Host objects on its own. The JSTransferable superclass exists to bridge that gap, and setting it up is a big part of the cost measured here.
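A small illustration of that first limitation, i.e. own enumerable properties survive a clone but class identity does not:

```js
class Point {
  constructor (x, y) { this.x = x; this.y = y }
  norm () { return Math.hypot(this.x, this.y) }
}

// structuredClone copies own enumerable properties...
const clone = structuredClone(new Point(3, 4))
console.log(clone)                   // { x: 3, y: 4 }
// ...but the prototype chain (the class itself) is lost.
console.log(clone instanceof Point)  // false
console.log(typeof clone.norm)       // 'undefined'
```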
Looking at this more, it seems like we just need to teach v8 how to recognize non-C++-backed "host" objects, and then we wouldn't need to add the JSTransferable superclass.

Oop, thank you for the writeup, James!
Yes, it would be ideal if the structured clone algorithm provided a way to allow arbitrary classes to define how they would be serialized / deserialized. I've raised this conversation with the WHATWG (whatwg/html#7428) before and, unfortunately, it received a pretty chilly response. An approach based on well-known Symbols (e.g. hooks the class itself implements) would be my preference. Unfortunately, without these changes at either the standards (TC39, WHATWG) level or the v8 level, we're stuck with the JSTransferable approach.
I don't think we need to do anything at the standards level; if v8 just had an API to tag constructors with a private symbol or similar, it could forward the tagged objects back to the delegate.
Yeah, doing it entirely within v8 is possible. My concern there would be interop. I would much prefer the mechanism to be standardized so that there's a better hope that it would work consistently across runtimes.
Within v8, the key limitation is that, inside v8's implementation, only C++-backed objects can be identified as host objects, and only host objects are passed into the delegate to be handled, and since a host object can be nested arbitrarily deep in object graphs, we have to depend on v8 identifying them as it walks the graph being serialized. If we can start there -- teaching v8 to recognize when a pure JavaScript object is to be treated as a host object -- then we've gone a long way towards solving the problem.
@RafaelGSS The TCP retransmissions might be related to tcpNoDelay. Have you tried disabling that?
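For reference, Nagle's algorithm is toggled per socket in Node; a minimal sketch of what disabling it looks like (the port/host are just the benchmark's values from the logs above):

```js
const net = require('node:net')

const socket = net.connect(3001, '127.0.0.1', () => {
  // Disable Nagle's algorithm: flush small writes immediately instead of
  // coalescing them (coalescing can interact badly with delayed ACKs).
  socket.setNoDelay(true)
  socket.write('ping')
})
```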
Waiting for 19.5.0 to test fetch results. Just saw @RafaelGSS's PR 👍
That's a good point. I haven't.
Using the v20 nightly build, I can see no difference between Deno and Node fetch; in fact it looks better than Deno... but Bun is slightly faster. Good job guys!
fetch seems to have gotten much slower in v19.7.0 compared to v19.6.1.

v19.6.1:

v19.7.0:
cc @anonrig
I don't think this is related to URL, but to stream performance. The ReadableStream/WritableStream abort-signal change might have caused this regression (nodejs/node#46273). cc @debadree25 @ronag
The abort-signal connection feels unlikely; in 19.7 none of the streams PRs seem to introduce any significant change inside WHATWG streams, it's all interop work. Could it be something else?
Can someone do a bisect?
The likely candidate is the PR that made byte readable streams transferable. However, undici doesn't use byte readable streams, so I'm unsure how it would affect performance this drastically on my machine. Has anyone else run the benchmarks on 19.6.1 vs 19.7.0?
Benchmarks look similar on my machine too. Attempting to do the git bisect, but it will take some time 😅
The difference in results between v19.6.1 and v19.7.0 is small:

v19.6.1:

```
[bench:run] │ Tests               │ Samples │ Result          │ Tolerance │ Difference with slowest │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - fetch      │ 1       │ 809.35 req/sec  │ ± 0.00 %  │ -                       │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ http - no keepalive │ 1       │ 1244.84 req/sec │ ± 0.00 %  │ + 53.81 %               │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ http - keepalive    │ 1       │ 1578.42 req/sec │ ± 0.00 %  │ + 95.02 %               │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - pipeline   │ 1       │ 2047.72 req/sec │ ± 0.00 %  │ + 153.01 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - request    │ 1       │ 2312.49 req/sec │ ± 0.00 %  │ + 185.72 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - stream     │ 1       │ 3077.30 req/sec │ ± 0.00 %  │ + 280.22 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - dispatch   │ 1       │ 3571.34 req/sec │ ± 0.00 %  │ + 341.26 %              │
```

v19.7.0:

```
[bench:run] │ Tests               │ Samples │ Result          │ Tolerance │ Difference with slowest │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - fetch      │ 1       │ 781.15 req/sec  │ ± 0.00 %  │ -                       │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ http - no keepalive │ 1       │ 1284.11 req/sec │ ± 0.00 %  │ + 64.39 %               │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ http - keepalive    │ 1       │ 1593.15 req/sec │ ± 0.00 %  │ + 103.95 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - pipeline   │ 1       │ 2129.34 req/sec │ ± 0.00 %  │ + 172.59 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - request    │ 1       │ 2427.34 req/sec │ ± 0.00 %  │ + 210.74 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - stream     │ 1       │ 3134.43 req/sec │ ± 0.00 %  │ + 301.26 %              │
[bench:run] |─────────────────────|─────────|─────────────────|───────────|─────────────────────────|
[bench:run] │ undici - dispatch   │ 1       │ 3457.92 req/sec │ ± 0.00 %  │ + 342.67 %              │
```
Refactoring JS transferable to move away from the prototype mutation: nodejs/node#47956. Reviews welcome.
NodeJS 20.3.0:
After nodejs/node#47956:
It would probably be better to have one of the folks who already shared benchmarks re-run them, but this is just to showcase that the PR helped improve fetch by some margin.
@RafaelGSS if you have time to refresh your analysis, folks could move to the next bottleneck.
Planning to do it soon!
Here is some good news: fetch got even faster after Node v20.6.0. On Node 20.5.0, I got a nice 877.08 req/s out of fetch, while on v20.6.0 I got 932 req/s in my benchmark, a consistent and measurable ~6% improvement. Given all the latest improvements, we have been able to reduce the gap between fetch and the other clients.
Updated my benchmarks; it does look like this small improvement is visible. Bun and axios are still significantly faster, though.
A few updates. The latest benchmarks show that, while axios is still faster, undici.fetch() now passes both node-fetch and got. I also took the liberty of doing a flamegraph, and apparently the vast majority of the cost is due to the initialization of the WHATWG streams. I can see two ways to speed this up.
@KhafraDev wdyt?
There is another alternative, as @RafaelGSS experimented with earlier: move webstreams to node streams when they're only used internally. As long as we can match the methods to look somewhat similar to webstreams (as in, _read = pull, etc., by extending the class), I wouldn't be against it. Honestly, we could probably change response.body to a node stream and convert to a webstream when accessed, which happens very rarely. I should probably do a better write-up of how the body mixin methods work, because there are a TON of optimizations we could do without infringing on maintainability.
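A hedged sketch of that lazy-conversion idea (names like `ResponseLike` are hypothetical, not undici's internals): keep a cheap Node stream internally and only materialize a WHATWG stream if `.body` is actually touched:

```js
const { Readable } = require('node:stream')

class ResponseLike {
  #nodeBody          // cheap internal node stream
  #webBody = null    // expensive WHATWG stream, built lazily

  constructor (nodeBody) {
    this.#nodeBody = nodeBody
  }

  get body () {
    // Only pay for the WHATWG stream when a consumer asks for it.
    this.#webBody ??= Readable.toWeb(this.#nodeBody)
    return this.#webBody
  }

  async text () {
    // Fast path for the common case: drain the node stream directly.
    const chunks = []
    for await (const chunk of this.#nodeBody) chunks.push(chunk)
    return Buffer.concat(chunks).toString('utf8')
  }
}
```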
After more than two years, I'm extremely happy to confirm that we have achieved performance parity with the alternatives.
There is still some way to go to reach undici.request performance.
This would solve...
Undici fetch isn't as fast as the implementations in other runtimes like Firefox / Deno / Workers.
The implementation should look like...
Measure and improve (e2e latencies, op/sec, conns/sec, etc?) to match up with other impls.
I have also considered...
I hand-rolled an http2 nodejs stdlib client and it was an order of magnitude faster than undici fetch, even without connection pooling.
I must note though, the conns were to Cloudflare and Google endpoints, which responded with 1KB or less per fetch (basically, POST and uncached GET DNS-over-HTTPS connections).

Additional context
This is a follow-up from: https://news.ycombinator.com/item?id=30164925
Sorry that I didn't really benchmark the difference between Deno / Firefox / Undici / Cloudflare Workers, but I knew instantly that it was the cause of higher latencies when I deployed the same code in Node v16 LTS.
Possibly related: #22, #208, #952, #902
Other projects: szmarczak/http2-wrapper/issues/71