
fix: helia-docker container dies when running in tiros #18

Closed
3 of 4 tasks
SgtPooki opened this issue Oct 19, 2023 · 35 comments

@SgtPooki
Member

SgtPooki commented Oct 19, 2023

We need to move away from using an in-memory datastore because there is no way to limit the amount of memory used. @wemeetagain mentioned that Lodestar uses an in-memory datastore that flushes to the file system. The code for that is https://github.com/ChainSafe/lodestar/blob/unstable/packages/beacon-node/src/network/peers/datastore.ts#L71

If Lodestar has a datastore we can use, we should use that; otherwise we should migrate to using the file system directly and then update to use the Lodestar strategy.

from #18 (comment)
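For illustration, a minimal sketch (not the actual helia-docker code) of pointing Helia at file-system backed stores instead of the in-memory defaults; the paths are placeholders and it assumes the blockstore-fs and datastore-fs packages:

import { createHelia } from 'helia'
import { FsBlockstore } from 'blockstore-fs'
import { FsDatastore } from 'datastore-fs'

// Replace Helia's default in-memory stores with file-system backed ones so
// block and peer data live on disk instead of growing the JS heap unbounded.
const helia = await createHelia({
  blockstore: new FsBlockstore('./data/blocks'),
  datastore: new FsDatastore('./data/datastore')
})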



log-events-viewer-result.csv

Action items before close

@SgtPooki
Member Author

After some research last week, it looks like Helia is adding too many event listeners to user-provided signals, and re-calling any-signal multiple times with the same signal in https://github.com/libp2p/js-libp2p/blob/master/packages/libp2p/src/connection-manager/dial-queue.ts

Overall it is just mismanaging the AbortSignals in use (shutdown controller, per-dial timeout signal, user shutdown controller, combined signal that adds more event listeners), and this is causing the docker container to fall over.

js-libp2p (in Node.js) currently sets the max event listeners to Infinity as a workaround for this, but I don't imagine that's a great way to handle things and need to investigate further.
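For context, a minimal sketch of the failure mode and the cleanup any-signal exposes; the dial function and signal names here are illustrative, not the real dial-queue code:

import { anySignal } from 'any-signal'

const shutdownController = new AbortController()

// Illustrative stand-in for a dial. Every call combines the long-lived
// shutdown signal with a fresh per-dial timeout signal; anySignal adds an
// "abort" listener to each input signal, so if the combined signal is never
// cleared the listeners accumulate on shutdownController.signal until Node
// emits MaxListenersExceededWarning.
async function dial (userSignal: AbortSignal): Promise<void> {
  const signal = anySignal([shutdownController.signal, AbortSignal.timeout(10_000), userSignal])

  try {
    // ... perform the dial with `signal` ...
  } finally {
    // any-signal v4 returns a ClearableSignal; clear() removes the listeners
    // it attached to the input signals.
    signal.clear()
  }
}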

@SgtPooki
Member Author

Error from my container running for 12 hours locally:

2023-10-22 21:50:33 Server listening on http://0.0.0.0:8080
2023-10-22 23:03:55 (node:26) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 relay:removed listeners added to ReservationStore. Use events.setMaxListeners() to increase limit
2023-10-22 23:03:55     at [kNewListener] (node:internal/event_target:553:17)
2023-10-22 23:03:55     at ReservationStore.addEventListener (node:internal/event_target:666:23)
2023-10-22 23:03:55     at ReservationStore.addEventListener (file:///app/node_modules/@libp2p/interface/dist/src/events.js:19:15)
2023-10-22 23:03:55     at new CircuitRelayTransportListener (file:///app/node_modules/libp2p/dist/src/circuit-relay/transport/listener.js:18:25)
2023-10-22 23:03:55     at createListener (file:///app/node_modules/libp2p/dist/src/circuit-relay/transport/listener.js:75:12)
2023-10-22 23:03:55     at CircuitRelayTransport.createListener (file:///app/node_modules/libp2p/dist/src/circuit-relay/transport/index.js:192:16)
2023-10-22 23:03:55     at DefaultTransportManager.listen (file:///app/node_modules/libp2p/dist/src/transport-manager.js:147:44)
2023-10-22 23:03:55     at reserveQueue.add.peerId.peerId (file:///app/node_modules/libp2p/dist/src/circuit-relay/transport/reservation-store.js:144:45)
2023-10-22 23:03:55     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2023-10-22 23:03:55     at async file:///app/node_modules/p-queue/dist/index.js:118:36
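For reference, the workaround the warning message itself suggests, sketched against a generic EventTarget (the target here is a stand-in, not the actual ReservationStore):

import { setMaxListeners } from 'node:events'

const target = new EventTarget() // stand-in for e.g. the ReservationStore above
// Node's default cap is 10 listeners; the warning fires on the 11th. Raising
// the cap per-target silences the warning but does not fix the listener leak.
setMaxListeners(100, target)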

@SgtPooki
Member Author

Starting up helia-http-gateway outside of Docker and requesting /ipns/blog.libp2p.io:

╰─ ✔ ❯ PORT=8888 DEBUG="helia-server*" npm run start:dev                                                                                               9.35   21.4G   46%   100%  ─╯

> [email protected] start:dev
> npm run build && node --trace-warnings dist/src/index.js


> [email protected] build
> aegir build --bundle false

[09:39:17] tsc [started]
[09:39:18] tsc [completed]
process.env.DEBUG:  helia-server*
  helia-server:express:helia-fetch Initializing helia-fetch... +0ms
  helia-server:express:helia-fetch Initialized +723ms
Helia Started!
  helia-server:express Initialized +0ms
Server listening on http://0.0.0.0:8888
  helia-server:express:helia-fetch Parsing path: /ipns/assets/css/0.styles.4520169f.css +14s
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'assets',
  relativePath: '/css/0.styles.4520169f.css'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +14s
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +1ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/assets/css/0.styles.4520169f.css +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/site.webmanifest +3ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'site.webmanifest',
  relativePath: ''
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +3ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/site.webmanifest +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/assets/css/0.styles.4520169f.css +19s
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'assets',
  relativePath: '/css/0.styles.4520169f.css'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +19s
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +1ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/assets/css/0.styles.4520169f.css +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/assets/js/app.429d2de8.js +2ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'assets',
  relativePath: '/js/app.429d2de8.js'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +2ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/assets/js/3.2eedf0e2.js +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'assets',
  relativePath: '/js/3.2eedf0e2.js'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/assets/js/3.2eedf0e2.js +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +122ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +122ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +0ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
  helia-server:express:helia-fetch Fetching from Delegate Routing: blog.libp2p.io +1ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/js/app.429d2de8.js +815ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/js/app.429d2de8.js +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
(node:92038) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at [kNewListener] (node:internal/event_target:527:17)
    at [kNewListener] (node:internal/abort_controller:239:24)
    at EventTarget.addEventListener (node:internal/event_target:638:23)
    at anySignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/any-signal/dist/src/index.js:21:20)
    at IpniContentRouting.findProviders (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/@libp2p/ipni-content-routing/dist/src/index.js:51:24)
    at findProviders.next (<anonymous>)
    at file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/it-merge/dist/src/index.js:27:38
    at Array.map (<anonymous>)
    at file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/it-merge/dist/src/index.js:26:43
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
  helia-server:express:helia-fetch Fetching from IPFS: {
  cid: CID(QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN),
  type: 'file',
  options: { path: '/assets/js/app.429d2de8.js' }
} +7s
  helia-server:express:helia-fetch Getting file response: {
  cid: CID(QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN),
  options: { path: '/assets/js/app.429d2de8.js' }
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/js/app.429d2de8.js +2s
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/js/app.429d2de8.js'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/favicon-16x16.png +496ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'favicon-16x16.png',
  relativePath: ''
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +10s
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/favicon-16x16.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/go-2022.b6c02c94.png +58ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/go-2022.b6c02c94.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +58ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/go-2022.b6c02c94.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/go-2022.b6c02c94.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/go-2022.b6c02c94.png +1ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/go-2022.b6c02c94.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/go-2022.b6c02c94.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/go-2022.b6c02c94.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/libp2p_http.d5e34487.png +805ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_http.d5e34487.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +806ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_http.d5e34487.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_http.d5e34487.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_http.d5e34487.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/libp2p_http.d5e34487.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/libp2p_http.d5e34487.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/crypto_tls_header.21e9628e.png +459ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/crypto_tls_header.21e9628e.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +459ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/crypto_tls_header.21e9628e.png +1ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/crypto_tls_header.21e9628e.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/crypto_tls_header.21e9628e.png +1ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/crypto_tls_header.21e9628e.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/crypto_tls_header.21e9628e.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/crypto_tls_header.21e9628e.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/crypto_tls_header.21e9628e.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/crypto_tls_header.21e9628e.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/crypto_tls_header.21e9628e.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/crypto_tls_header.21e9628e.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/crypto_tls_header.21e9628e.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +393ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +393ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +1ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +1ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/blog-post-placeholder.af417eb0.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/blog-post-placeholder.af417eb0.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/blog-post-placeholder.af417eb0.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/metrics-in-go-libp2p-header.83e639cd.png +467ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/metrics-in-go-libp2p-header.83e639cd.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +467ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/metrics-in-go-libp2p-header.83e639cd.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/metrics-in-go-libp2p-header.83e639cd.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/metrics-in-go-libp2p-header.83e639cd.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/metrics-in-go-libp2p-header.83e639cd.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/metrics-in-go-libp2p-header.83e639cd.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/interop-testing.efd81a25.png +415ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/interop-testing.efd81a25.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +415ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/interop-testing.efd81a25.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/interop-testing.efd81a25.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/interop-testing.efd81a25.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/interop-testing.efd81a25.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/interop-testing.efd81a25.png'
} +0ms
  helia-server:express:helia-fetch Fetching from IPFS: {
  cid: CID(QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN),
  type: 'file',
  options: { path: '/assets/img/blog-post-placeholder.af417eb0.png' }
} +22s
  helia-server:express:helia-fetch Getting file response: {
  cid: CID(QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN),
  options: { path: '/assets/img/blog-post-placeholder.af417eb0.png' }
} +1ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/blog-post-placeholder.af417eb0.png +25ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/blog-post-placeholder.af417eb0.png'
} +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/site.webmanifest +596ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'site.webmanifest',
  relativePath: ''
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +22s
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +0ms
  helia-server:express Redirecting to final URL: http://localhost:8888/ipns/blog.libp2p.io/site.webmanifest +0ms
(node:92038) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at [kNewListener] (node:internal/event_target:527:17)
    at [kNewListener] (node:internal/abort_controller:239:24)
    at EventTarget.addEventListener (node:internal/event_target:638:23)
    at anySignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/any-signal/dist/src/index.js:21:20)
    at DialQueue.createDialAbortControllers (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:200:24)
    at DialQueue.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:123:29)
    at DefaultConnectionManager.openConnection (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/index.js:312:49)
    at Libp2pNode.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/libp2p.js:224:50)
    at Network.connectTo (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/ipfs-bitswap/dist/src/network.js:186:29)
    at MsgQueue.send (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/ipfs-bitswap/dist/src/want-manager/msg-queue.js:47:32)
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +800ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png'
} +0ms
  helia-server:express Referer found: http://localhost:8888/ipns/blog.libp2p.io +800ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: ''
} +1ms
  helia-server:express Requesting content from helia: /ipns/blog.libp2p.io/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +1ms
  helia-server:express Fetching from Helia: /ipns/blog.libp2p.io/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Fetching: /ipns/blog.libp2p.io/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipns/blog.libp2p.io/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipns',
  address: 'blog.libp2p.io',
  relativePath: '/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png'
} +0ms
  helia-server:express:helia-fetch Final IPFS path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Fetching: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Parsing path: /ipfs/QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png +0ms
  helia-server:express:helia-fetch Parsed path: [Object: null prototype] {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png'
} +0ms
  helia-server:express:helia-fetch Processing Fetch: {
  namespace: 'ipfs',
  address: 'QmZUq695FkZgbBYYvaTXvjD3LeZPSkfxhWD8BFGQTZBhvN',
  relativePath: '/assets/img/libp2p_WebRTC_blog_header.7f5f55e7.png'
} +0ms
(node:92038) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at [kNewListener] (node:internal/event_target:527:17)
    at [kNewListener] (node:internal/abort_controller:239:24)
    at EventTarget.addEventListener (node:internal/event_target:638:23)
    at anySignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/any-signal/dist/src/index.js:21:20)
    at DialQueue.createDialAbortControllers (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:200:24)
    at DialQueue.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:123:29)
    at DefaultConnectionManager.openConnection (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/connection-manager/index.js:312:49)
    at Libp2pNode.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/libp2p/dist/src/libp2p.js:224:50)
    at Network.connectTo (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/ipfs-bitswap/dist/src/network.js:186:29)
    at MsgQueue.send (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-docker/node_modules/ipfs-bitswap/dist/src/want-manager/msg-queue.js:47:32)

@SgtPooki
Member Author

The docker container seems more stable now, but we might want to recommend running it in tiros with USE_LIBP2P=false USE_BITSWAP=false for now, to keep memory and CPU usage low.

@SgtPooki
Member Author

SgtPooki commented Nov 7, 2023

I was able to get an error just in the node runtime by running npm run start in one terminal, and then the following script in another:

#!/usr/bin/env bash

# Query all endpoints until failure
# This script is intended to be run from the root of the helia-http-gateway repository

mkdir -p test-output

wget -T 180 -O test-output/blog.ipfs.tech http://localhost:8080/ipns/blog.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/blog.libp2p.io http://localhost:8080/ipns/blog.libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/consensuslab.world http://localhost:8080/ipns/consensuslab.world
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/docs.ipfs.tech http://localhost:8080/ipns/docs.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/docs.libp2p.io http://localhost:8080/ipns/docs.libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/drand.love http://localhost:8080/ipns/drand.love
curl -X POST http://localhost:8080/api/v0/repo/gc

# wget -T 180 -O test-output/fil.org http://localhost:8080/ipns/fil.org
# curl -X POST http://localhost:8080/api/v0/repo/gc
#
wget -T 180 -O test-output/filecoin.io http://localhost:8080/ipns/filecoin.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/green.filecoin.io http://localhost:8080/ipns/green.filecoin.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/ipfs.tech http://localhost:8080/ipns/ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/ipld.io http://localhost:8080/ipns/ipld.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/libp2p.io http://localhost:8080/ipns/libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/n0.computer http://localhost:8080/ipns/n0.computer
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/probelab.io http://localhost:8080/ipns/probelab.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/protocol.ai http://localhost:8080/ipns/protocol.ai
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/research.protocol.ai http://localhost:8080/ipns/research.protocol.ai
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/singularity.storage http://localhost:8080/ipns/singularity.storage
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/specs.ipfs.tech http://localhost:8080/ipns/specs.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/strn.network http://localhost:8080/ipns/strn.network
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/web3.storage http://localhost:8080/ipns/web3.storage
curl -X POST http://localhost:8080/api/v0/repo/gc

rm -rf test-output

The error output was:

[10:22:36.275] INFO (57554): helia-http-gateway:fastifyincoming request
    reqId: "req-f"
    req: {
      "method": "GET",
      "url": "/ipns/green.filecoin.io",
      "hostname": "localhost:8080",
      "remoteAddress": "127.0.0.1",
      "remotePort": 58478
    }
(node:57554) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/webrtc/dist/src/private-to-private/handler.js:21
            connectedPromise.reject(new CodeError('Timed out while trying to connect', 'ERR_TIMEOUT'));
                                    ^

CodeError: Timed out while trying to connect
    at signal.onabort (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/webrtc/dist/src/private-to-private/handler.js:21:37)
    at EventTarget.eventHandler (node:internal/event_target:1093:12)
    at [nodejs.internal.kHybridDispatch] (node:internal/event_target:807:20)
    at EventTarget.dispatchEvent (node:internal/event_target:742:26)
    at abortSignal (node:internal/abort_controller:369:10)
    at Timeout._onTimeout (node:internal/abort_controller:126:7)
    at listOnTimeout (node:internal/timers:573:17)
    at process.processTimers (node:internal/timers:514:7) {
  code: 'ERR_TIMEOUT',
  props: {}
}

@SgtPooki SgtPooki self-assigned this Nov 7, 2023
@SgtPooki
Member Author

SgtPooki commented Nov 7, 2023

I'm running the above script, with a minor edit, using https://www.npmjs.com/package/until-death, to try to kill the helia-http-gateway with npx -y until-death ./test-gateway.sh

The script is as follows:

#!/usr/bin/env bash
# test-gateway.sh

# Query all endpoints until failure
# This script is intended to be run from the root of the helia-http-gateway repository

# If localhost:8080 is not listening, then exit with non-zero error code
if ! nc -z localhost 8080; then
  echo "localhost:8080 is not listening"
  exit 1
fi

mkdir -p test-output

wget -T 180 -O test-output/blog.ipfs.tech http://localhost:8080/ipns/blog.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/blog.libp2p.io http://localhost:8080/ipns/blog.libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/consensuslab.world http://localhost:8080/ipns/consensuslab.world
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/docs.ipfs.tech http://localhost:8080/ipns/docs.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/docs.libp2p.io http://localhost:8080/ipns/docs.libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/drand.love http://localhost:8080/ipns/drand.love
curl -X POST http://localhost:8080/api/v0/repo/gc

# wget -T 180 -O test-output/fil.org http://localhost:8080/ipns/fil.org
# curl -X POST http://localhost:8080/api/v0/repo/gc
#
wget -T 180 -O test-output/filecoin.io http://localhost:8080/ipns/filecoin.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/green.filecoin.io http://localhost:8080/ipns/green.filecoin.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/ipfs.tech http://localhost:8080/ipns/ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/ipld.io http://localhost:8080/ipns/ipld.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/libp2p.io http://localhost:8080/ipns/libp2p.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/n0.computer http://localhost:8080/ipns/n0.computer
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/probelab.io http://localhost:8080/ipns/probelab.io
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/protocol.ai http://localhost:8080/ipns/protocol.ai
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/research.protocol.ai http://localhost:8080/ipns/research.protocol.ai
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/singularity.storage http://localhost:8080/ipns/singularity.storage
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/specs.ipfs.tech http://localhost:8080/ipns/specs.ipfs.tech
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/strn.network http://localhost:8080/ipns/strn.network
curl -X POST http://localhost:8080/api/v0/repo/gc

wget -T 180 -O test-output/web3.storage http://localhost:8080/ipns/web3.storage
curl -X POST http://localhost:8080/api/v0/repo/gc

rm -rf test-output

@SgtPooki
Member Author

SgtPooki commented Nov 7, 2023

I was able to kill it with the above method, getting a JS heap out of memory error:

[10:41:38.968] INFO (62069): helia-http-gateway:fastifyincoming request
    reqId: "req-1l"
    req: {
      "method": "GET",
      "url": "/ipns/filecoin.io",
      "hostname": "localhost:8080",
      "remoteAddress": "127.0.0.1",
      "remotePort": 61865
    }
(node:62069) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
(node:62069) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
[10:41:49.224] INFO (62069): helia-http-gateway:fastifyrequest completed
    reqId: "req-1l"
    res: {
      "statusCode": 200
    }
    responseTime: 10255.209916949272

<--- Last few GCs --->

[62069:0x128008000]   736550 ms: Scavenge 3985.7 (4127.4) -> 3974.5 (4127.4) MB, 10.92 / 0.12 ms  (average mu = 0.204, current mu = 0.017) allocation failure;
[62069:0x128008000]   736573 ms: Scavenge 3988.0 (4127.6) -> 3977.8 (4128.6) MB, 11.12 / 0.08 ms  (average mu = 0.204, current mu = 0.017) allocation failure;
[62069:0x128008000]   737515 ms: Mark-Compact 3990.1 (4128.6) -> 3897.5 (4129.4) MB, 935.08 / 0.00 ms  (average mu = 0.342, current mu = 0.458) allocation failure; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x1044060f0 node::Abort() [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 2: 0x1044062d8 node::ModifyCodeGenerationFromStrings(v8::Local<v8::Context>, v8::Local<v8::Value>, bool) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 3: 0x10458ce98 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 4: 0x10476167c v8::internal::Heap::GarbageCollectionReasonToString(v8::internal::GarbageCollectionReason) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 5: 0x104765530 v8::internal::Heap::CollectGarbageShared(v8::internal::LocalHeap*, v8::internal::GarbageCollectionReason) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 6: 0x104761f94 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 7: 0x10475fd1c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 8: 0x104756970 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
 9: 0x1047571d0 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
10: 0x10473c1d4 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
11: 0x104b23be4 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
12: 0x104e80c44 Builtins_CEntry_Return1_ArgvOnStack_NoBuiltinExit [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
13: 0x104ed00e8 Builtins_NumberPrototypeToString [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
14: 0x10a00d09c
15: 0x109fe513c
16: 0x10a2b4398
17: 0x109e9ee60
18: 0x104e2f210 Builtins_AsyncFunctionAwaitResolveClosure [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
19: 0x104edcfb8 Builtins_PromiseFulfillReactionJob [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
20: 0x104e1eb94 Builtins_RunMicrotasks [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
21: 0x104df63f4 Builtins_JSRunMicrotasksEntry [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
22: 0x1046ceae4 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
23: 0x1046cefd0 v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
24: 0x1046cf1ac v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
25: 0x1046f6268 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
26: 0x1046f6a04 v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
27: 0x104df9ef0 Builtins_CallApiCallback [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
28: 0x10a1b7614
29: 0x104df650c Builtins_JSEntryTrampoline [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
30: 0x104df61f4 Builtins_JSEntry [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
31: 0x1046ceb0c v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
32: 0x1046cdf58 v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
33: 0x1045a8824 v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
34: 0x104338d3c node::InternalCallbackScope::Close() [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
35: 0x10433901c node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
36: 0x10434f48c node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
37: 0x1044d2ff4 node::StreamBase::CallJSOnreadMethod(long, v8::Local<v8::ArrayBuffer>, unsigned long, node::StreamBase::StreamBaseJSChecks) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
38: 0x1044d4688 node::EmitToJSStreamListener::OnStreamRead(long, uv_buf_t const&) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
39: 0x1044d897c node::LibuvStreamWrap::OnUvRead(long, uv_buf_t const*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
40: 0x1044d9100 node::LibuvStreamWrap::ReadStart()::$_1::__invoke(uv_stream_s*, long, uv_buf_t const*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
41: 0x104de2dc0 uv__stream_io [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
42: 0x104dea6c4 uv__io_poll [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
43: 0x104dd8bb0 uv_run [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
44: 0x104339754 node::SpinEventLoopInternal(node::Environment*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
45: 0x104449114 node::NodeMainInstance::Run(node::ExitCode*, node::Environment*) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
46: 0x104448eb0 node::NodeMainInstance::Run() [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
47: 0x1043d160c node::Start(int, char**) [/Users/sgtpooki/.asdf/installs/nodejs/20.8.1/bin/node]
48: 0x1a0d4bf28 start [/usr/lib/dyld]
[1]    62030 abort      DEBUG="helia-server" npm run start                                  [12m19.054s]
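If it helps reproduce this, a generic (not gateway-specific) way to watch heap growth before the crash is to log heap statistics periodically:

import { getHeapStatistics } from 'node:v8'

// Print heap usage every 10s; useful for spotting steady growth long before
// the mark-compact failures shown in the crash output above.
setInterval(() => {
  const { used_heap_size, heap_size_limit } = getHeapStatistics()
  console.log(`heap ${(used_heap_size / 1048576).toFixed(1)} MiB of ${(heap_size_limit / 1048576).toFixed(1)} MiB`)
}, 10_000)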

@SgtPooki
Member Author

SgtPooki commented Nov 8, 2023

I've got those scripts in a debugging folder and will be pushing them up in a PR shortly with some additional fixes.

@SgtPooki
Member Author

SgtPooki commented Nov 8, 2023

Seems like the container and Node.js process have been significantly improved on my local branch. Changes coming after an event at my son's school. Here's some output:

Requesting http://localhost:8080/ipns/blog.ipfs.tech
http://localhost:8080/ipns/blog.ipfs.tech: HTTP_000 in 60.001766 seconds
Connection to localhost port 8080 [tcp/http-alt] succeeded!
Requesting http://localhost:8080/ipns/blog.ipfs.tech
http://localhost:8080/ipns/blog.ipfs.tech: HTTP_200 in 11.146163 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.996113 seconds
Requesting http://localhost:8080/ipns/blog.libp2p.io
http://localhost:8080/ipns/blog.libp2p.io: HTTP_200 in 5.834325 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.561062 seconds
Requesting http://localhost:8080/ipns/consensuslab.world
http://localhost:8080/ipns/consensuslab.world: HTTP_200 in 2.157051 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.445670 seconds
Requesting http://localhost:8080/ipns/docs.ipfs.tech
http://localhost:8080/ipns/docs.ipfs.tech: HTTP_200 in 4.262585 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.278778 seconds
Requesting http://localhost:8080/ipns/docs.libp2p.io
http://localhost:8080/ipns/docs.libp2p.io: HTTP_200 in 2.769344 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.544346 seconds
Requesting http://localhost:8080/ipns/drand.love
http://localhost:8080/ipns/drand.love: HTTP_200 in 3.396503 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.447386 seconds
Requesting http://localhost:8080/ipns/fil.org
http://localhost:8080/ipns/fil.org: HTTP_200 in 5.396954 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.358178 seconds
Requesting http://localhost:8080/ipns/filecoin.io
http://localhost:8080/ipns/filecoin.io: HTTP_200 in 4.708792 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.511910 seconds
Requesting http://localhost:8080/ipns/green.filecoin.io
http://localhost:8080/ipns/green.filecoin.io: HTTP_200 in 3.592724 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.468012 seconds
Requesting http://localhost:8080/ipns/ipfs.tech
http://localhost:8080/ipns/ipfs.tech: HTTP_200 in 3.181904 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.545069 seconds
Requesting http://localhost:8080/ipns/ipld.io
http://localhost:8080/ipns/ipld.io: HTTP_200 in 2.365077 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.540125 seconds
Requesting http://localhost:8080/ipns/libp2p.io
http://localhost:8080/ipns/libp2p.io: HTTP_200 in 1.943017 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.511259 seconds
Requesting http://localhost:8080/ipns/n0.computer
http://localhost:8080/ipns/n0.computer: HTTP_200 in 1.756028 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.508440 seconds
Requesting http://localhost:8080/ipns/probelab.io
http://localhost:8080/ipns/probelab.io: HTTP_200 in 2.947130 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.596525 seconds
Requesting http://localhost:8080/ipns/protocol.ai
http://localhost:8080/ipns/protocol.ai: HTTP_200 in 3.955738 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.498988 seconds
Requesting http://localhost:8080/ipns/research.protocol.ai
http://localhost:8080/ipns/research.protocol.ai: HTTP_200 in 4.346949 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.509931 seconds
Requesting http://localhost:8080/ipns/singularity.storage
http://localhost:8080/ipns/singularity.storage: HTTP_200 in 3.208631 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.367448 seconds
Requesting http://localhost:8080/ipns/specs.ipfs.tech
http://localhost:8080/ipns/specs.ipfs.tech: HTTP_200 in 1.827266 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.462614 seconds
Requesting http://localhost:8080/ipns/strn.network
http://localhost:8080/ipns/strn.network: HTTP_200 in 7.334869 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.755269 seconds
Requesting http://localhost:8080/ipns/web3.storage
http://localhost:8080/ipns/web3.storage: HTTP_200 in 4.347730 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.150839 seconds
Connection to localhost port 8080 [tcp/http-alt] succeeded!
Requesting http://localhost:8080/ipns/blog.ipfs.tech
http://localhost:8080/ipns/blog.ipfs.tech: HTTP_200 in 49.758429 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.991985 seconds
Requesting http://localhost:8080/ipns/blog.libp2p.io
http://localhost:8080/ipns/blog.libp2p.io: HTTP_200 in 14.113579 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 2.558124 seconds
Requesting http://localhost:8080/ipns/consensuslab.world
http://localhost:8080/ipns/consensuslab.world: HTTP_200 in 5.889396 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.970621 seconds
Requesting http://localhost:8080/ipns/docs.ipfs.tech
http://localhost:8080/ipns/docs.ipfs.tech: HTTP_200 in 4.888529 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.765097 seconds
Requesting http://localhost:8080/ipns/docs.libp2p.io
http://localhost:8080/ipns/docs.libp2p.io: HTTP_200 in 4.747674 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.754912 seconds
Requesting http://localhost:8080/ipns/drand.love
http://localhost:8080/ipns/drand.love: HTTP_200 in 3.807516 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.403479 seconds
Requesting http://localhost:8080/ipns/fil.org
http://localhost:8080/ipns/fil.org: HTTP_200 in 5.218964 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.413936 seconds
Requesting http://localhost:8080/ipns/filecoin.io
http://localhost:8080/ipns/filecoin.io: HTTP_200 in 4.279419 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 1.098515 seconds
Requesting http://localhost:8080/ipns/green.filecoin.io
http://localhost:8080/ipns/green.filecoin.io: HTTP_200 in 3.925621 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.499890 seconds
Requesting http://localhost:8080/ipns/ipfs.tech
http://localhost:8080/ipns/ipfs.tech: HTTP_200 in 3.069305 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.459704 seconds
Requesting http://localhost:8080/ipns/ipld.io
http://localhost:8080/ipns/ipld.io: HTTP_200 in 1.957165 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.431254 seconds
Requesting http://localhost:8080/ipns/libp2p.io
http://localhost:8080/ipns/libp2p.io: HTTP_200 in 1.677803 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.361421 seconds
Requesting http://localhost:8080/ipns/n0.computer
http://localhost:8080/ipns/n0.computer: HTTP_200 in 1.866869 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.343904 seconds
Requesting http://localhost:8080/ipns/probelab.io
http://localhost:8080/ipns/probelab.io: HTTP_200 in 3.967132 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.490225 seconds
Requesting http://localhost:8080/ipns/protocol.ai
http://localhost:8080/ipns/protocol.ai: HTTP_200 in 5.048833 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.792142 seconds
Requesting http://localhost:8080/ipns/research.protocol.ai
http://localhost:8080/ipns/research.protocol.ai: HTTP_200 in 4.113244 seconds
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.870508 seconds

@SgtPooki
Member Author

SgtPooki commented Nov 8, 2023

Setting USE_LIBP2P=false and USE_BITSWAP=false significantly improves TTFB metrics:

Requesting http://localhost:8080/ipns/blog.ipfs.tech
http://localhost:8080/ipns/blog.ipfs.tech: HTTP_200 in 33.389076 seconds (TTFB: 33.388933, rediect: 0.005276)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.201580 seconds
Requesting http://localhost:8080/ipns/blog.libp2p.io
http://localhost:8080/ipns/blog.libp2p.io: HTTP_200 in 0.538736 seconds (TTFB: 0.538635, rediect: 0.012172)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.004971 seconds
Requesting http://localhost:8080/ipns/consensuslab.world
http://localhost:8080/ipns/consensuslab.world: HTTP_200 in 0.383609 seconds (TTFB: 0.383476, rediect: 0.005460)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.007073 seconds
Requesting http://localhost:8080/ipns/docs.ipfs.tech
http://localhost:8080/ipns/docs.ipfs.tech: HTTP_200 in 0.392974 seconds (TTFB: 0.392633, rediect: 0.005721)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.005923 seconds
Requesting http://localhost:8080/ipns/docs.libp2p.io
http://localhost:8080/ipns/docs.libp2p.io: HTTP_200 in 0.620038 seconds (TTFB: 0.619875, rediect: 0.005116)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.007380 seconds
Requesting http://localhost:8080/ipns/drand.love
http://localhost:8080/ipns/drand.love: HTTP_200 in 0.393777 seconds (TTFB: 0.393676, rediect: 0.005175)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.004635 seconds
Requesting http://localhost:8080/ipns/fil.org
http://localhost:8080/ipns/fil.org: HTTP_200 in 0.321622 seconds (TTFB: 0.321465, rediect: 0.003620)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.028887 seconds
Requesting http://localhost:8080/ipns/filecoin.io
http://localhost:8080/ipns/filecoin.io: HTTP_200 in 0.887135 seconds (TTFB: 0.886964, rediect: 0.004776)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.035565 seconds
Requesting http://localhost:8080/ipns/green.filecoin.io
http://localhost:8080/ipns/green.filecoin.io: HTTP_200 in 0.558634 seconds (TTFB: 0.558494, rediect: 0.004851)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.431983 seconds
Requesting http://localhost:8080/ipns/ipfs.tech
http://localhost:8080/ipns/ipfs.tech: HTTP_200 in 0.516234 seconds (TTFB: 0.516087, rediect: 0.008784)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.203987 seconds
Requesting http://localhost:8080/ipns/ipld.io
http://localhost:8080/ipns/ipld.io: HTTP_200 in 0.486826 seconds (TTFB: 0.486535, rediect: 0.009419)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.010074 seconds
Requesting http://localhost:8080/ipns/libp2p.io
http://localhost:8080/ipns/libp2p.io: HTTP_200 in 0.301976 seconds (TTFB: 0.301752, rediect: 0.005911)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.191732 seconds
Requesting http://localhost:8080/ipns/n0.computer
http://localhost:8080/ipns/n0.computer: HTTP_200 in 0.460659 seconds (TTFB: 0.460371, rediect: 0.009132)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.007510 seconds
Requesting http://localhost:8080/ipns/probelab.io
http://localhost:8080/ipns/probelab.io: HTTP_200 in 0.395654 seconds (TTFB: 0.395505, rediect: 0.006490)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.184738 seconds
Requesting http://localhost:8080/ipns/protocol.ai
http://localhost:8080/ipns/protocol.ai: HTTP_200 in 0.298125 seconds (TTFB: 0.297968, rediect: 0.006809)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.084095 seconds
Requesting http://localhost:8080/ipns/research.protocol.ai
http://localhost:8080/ipns/research.protocol.ai: HTTP_200 in 0.611435 seconds (TTFB: 0.611299, rediect: 0.006365)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.487283 seconds
Requesting http://localhost:8080/ipns/singularity.storage
http://localhost:8080/ipns/singularity.storage: HTTP_200 in 0.866055 seconds (TTFB: 0.865264, rediect: 0.019014)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.007395 seconds
Requesting http://localhost:8080/ipns/specs.ipfs.tech
http://localhost:8080/ipns/specs.ipfs.tech: HTTP_200 in 0.368673 seconds (TTFB: 0.368238, rediect: 0.006296)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.013496 seconds
Requesting http://localhost:8080/ipns/strn.network
http://localhost:8080/ipns/strn.network: HTTP_200 in 0.661367 seconds (TTFB: 0.661232, rediect: 0.008505)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.522674 seconds
Requesting http://localhost:8080/ipns/web3.storage
http://localhost:8080/ipns/web3.storage: HTTP_200 in 0.385475 seconds (TTFB: 0.385260, rediect: 0.007811)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_200 in 0.392009 seconds

@SgtPooki
Member Author

We need to move away from using an in-memory datastore because there is no way to limit the amount of memory it uses. @wemeetagain mentioned that Lodestar uses an in-memory datastore that flushes to the file system.

If Lodestar has a datastore we can use, we should use that; otherwise we should migrate to using the file system directly and then update to the Lodestar strategy. A rough sketch of that strategy is below.
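
Not having pulled the Lodestar code in yet, the general shape of that strategy would be something like this. This is only a minimal sketch assuming the datastore-core/interface-datastore style of API (BaseDatastore, Key, Uint8Array values; signatures vary a little between versions), and the class name, byte budget, and flush policy here are all made up:

import { BaseDatastore } from 'datastore-core'
import { FsDatastore } from 'datastore-fs'
import { Key } from 'interface-datastore'

// Hypothetical wrapper: serve reads and writes from memory, but spill to the
// file system once a byte budget is exceeded so the JS heap stays bounded.
class BoundedMemoryDatastore extends BaseDatastore {
  private readonly memory = new Map<string, Uint8Array>()
  private bytesUsed = 0

  constructor (
    private readonly disk: FsDatastore,
    private readonly maxBytes = 64 * 1024 * 1024 // made-up 64 MiB budget
  ) {
    super()
  }

  async put (key: Key, val: Uint8Array): Promise<Key> {
    this.memory.set(key.toString(), val)
    this.bytesUsed += val.byteLength

    if (this.bytesUsed > this.maxBytes) {
      await this.flush()
    }

    return key
  }

  async get (key: Key): Promise<Uint8Array> {
    // fall back to disk for anything already flushed
    return this.memory.get(key.toString()) ?? await this.disk.get(key)
  }

  // write everything held in memory to the file system and reset the counter
  private async flush (): Promise<void> {
    for (const [k, v] of this.memory) {
      await this.disk.put(new Key(k), v)
    }
    this.memory.clear()
    this.bytesUsed = 0
  }
}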

@SgtPooki
Member Author

SgtPooki commented Dec 5, 2023

The crash is reproducing in some CI tests: https://github.com/ipfs/helia-http-gateway/actions/runs/7104233205/job/19338737826?pr=59#step:6:27

@SgtPooki
Member Author

SgtPooki commented Dec 6, 2023

I believe this is fixed: ipfs/helia#275 (comment)

@SgtPooki
Member Author

SgtPooki commented Dec 8, 2023

Not quite fixed, see https://github.com/plprobelab/probelab-infra/issues/87. We have hanging promises, probably because signals are not being passed through properly; the failure mode is illustrated below.
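
For context, the failure mode looks roughly like this. A self-contained illustration with hypothetical names, not the actual gateway code:

// stand-in for a slow network operation that supports cancellation
async function innerQuery (name: string, opts?: { signal?: AbortSignal }): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => { resolve(`resolved:${name}`) }, 60_000)
    opts?.signal?.addEventListener('abort', () => {
      clearTimeout(timer)
      reject(new Error('aborted'))
    })
  })
}

async function resolveName (name: string, opts?: { signal?: AbortSignal }): Promise<string> {
  // BUG: the caller's signal is never forwarded, so aborting the outer request
  // leaves this promise pending until the timer fires (or forever, for
  // operations with no timeout) -- exactly the hanging-promise symptom.
  return innerQuery(name /* { signal: opts?.signal } is missing */)
}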

@SgtPooki
Member Author

SgtPooki commented Mar 27, 2024

There are additional errors reported in Slack: https://ipshipyard.slack.com/archives/C06C9LWQZC3/p1711097813106169?thread_ts=1710996462.233149&cid=C06C9LWQZC3 (private access)

Some info from @dennis-tra:

I just had another look at the different Helia experiment deployments:

  • Helia (TG): works super well 👍 there are no anomalies in the logs or experiment durations (e.g., a very short measurement run that would indicate a crash or oom)
  • Helia (DR): the first screenshot shows that sometimes the container just stops after a few minutes. I attached the logs as well.
  • Helia: Seems to consistently run out of memory after ~30m: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory (logs also attached). However, that’s weird because our monitoring shows that all containers in the task only approach the 4GB mark of memory usage. The available resources are 8GB :thinking_face: (screenshot also attached)

heliadr-logs.txt

helia-logs.txt

image

image

Solution ideas

I believe the update to the latest verified-fetch, which handles aborted signals, should help, along with pulling in the stores from ipfs/js-stores#287 to prevent excess memory usage. A sketch of the store swap is below.
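
For the stores piece, the swap would presumably look something like the following. This is a hedged sketch using blockstore-fs / datastore-fs; the paths are hypothetical stand-ins for the gateway's FILE_BLOCKSTORE_PATH / FILE_DATASTORE_PATH config:

import { createHelia } from 'helia'
import { FsBlockstore } from 'blockstore-fs'
import { FsDatastore } from 'datastore-fs'

// Back Helia with file-system stores instead of the default in-memory ones,
// so block data is not held on the JS heap. Paths are illustrative only.
const helia = await createHelia({
  blockstore: new FsBlockstore('/data/blockstore'),
  datastore: new FsDatastore('/data/datastore')
})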

@SgtPooki
Member Author

FYI: probe-lab/tiros#12 is out to help make testing helia-http-gateway against tiros easier.

@SgtPooki
Member Author

Currently running a battery of tests against a locally built helia-http-gateway with hyperfine: hyperfine --show-output './tiros run' &> battery.log. Will upload the log when it finishes.

@SgtPooki
Member Author

FYI, looking at the heliadr-logs.txt logfile, this seems key to me:

(node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-22T06:28:36.035Z helia-http-gateway:server request aborted by client
2024-03-22T06:28:36.044Z helia-http-gateway:server running `gc` on Helia node
node:events:496
      throw er; // Unhandled 'error' event
      ^
Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
    at Socket.onclose (node:internal/streams/end-of-stream:154:30)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12)
Emitted 'error' event on JSStreamSocket instance at:
    at Duplex.<anonymous> (node:internal/js_stream_socket:64:38)
    at Duplex.emit (node:events:518:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ERR_STREAM_PREMATURE_CLOSE'
}
Node.js v20.11.1

We are aborting the request in helia-http-gateway, but we were using @helia/verified-fetch@^1.1.2, see #81.

A lot changed between 1.1.2 and 1.3.2 of @helia/verified-fetch, including handling signals properly; a sketch of the wiring this enables follows the changelog excerpt below. From the changelog:

@helia/verified-fetch 1.3.2 (2024-03-25)

Bug Fixes

@helia/verified-fetch 1.3.1 (2024-03-22)

Bug Fixes

  • aborted signals throw libp2p AbortError (#30) (4575791)

@helia/verified-fetch 1.3.0 (2024-03-21)

Features

Bug Fixes
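
With those fixes in, the gateway can safely abort the in-flight fetch when the client disconnects, which is roughly the wiring below. A hedged sketch, not the exact helia-http-gateway code; the no-arg createVerifiedFetch() and the URL are just examples:

import { createVerifiedFetch } from '@helia/verified-fetch'
import type { IncomingMessage } from 'node:http'

const verifiedFetch = await createVerifiedFetch()

// Abort content retrieval as soon as the client goes away, so no work
// (bitswap sessions, routing queries) is left running for a dead request.
async function handle (raw: IncomingMessage): Promise<Response> {
  const controller = new AbortController()
  raw.on('close', () => { controller.abort() })

  return verifiedFetch('ipns://blog.libp2p.io', { signal: controller.signal })
}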

@SgtPooki
Member Author

For helia-logs.txt, it appears to be a legitimate heap allocation failure. Note that the GC trace below shows mark-compact cycles topping out around 1.9GB "near heap limit", so the process is hitting V8's heap ceiling well before the container's 8GB, which may explain why it dies while task-level monitoring still shows headroom.

I think the stores from ipfs/js-stores#287 will fix this.

(node:6) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
<--- Last few GCs --->
[6:0x5dad190]  1021958 ms: Mark-Compact 1900.0 (2084.3) -> 1885.1 (2087.5) MB, 2381.49 / 0.49 ms  (average mu = 0.144, current mu = 0.072) allocation failure; scavenge might not succeed
[6:0x5dad190]  1023782 ms: Mark-Compact 1901.6 (2087.8) -> 1887.5 (2090.5) MB, 1708.10 / 0.02 ms  (average mu = 0.109, current mu = 0.063) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
 1: 0xca5580 node::Abort() [node]
 2: 0xb781f9  [node]
 3: 0xeca4d0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 4: 0xeca7b7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 5: 0x10dc505  [node]
 6: 0x10dca94 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]
 7: 0x10f3984 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [node]
 8: 0x10f419c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0x10ca4a1 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
10: 0x10cb635 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
11: 0x10a8c86 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
12: 0x1503a16 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
13: 0x7fb01d699ef6

@SgtPooki
Member Author

@achingbrain do you have any other ideas about what could be happening here, or why we would be running into issues when run in a Linux container environment?

@dennis-tra I wonder if we could reproduce this more effectively by running the tiros job locally in a container. Any ideas? Also, what are the machine specs used by the IPFS container?

@SgtPooki SgtPooki self-assigned this Mar 27, 2024
@SgtPooki
Member Author

SgtPooki commented Mar 27, 2024

Bah. Apparently my helia-dr and helia tests weren't being hit locally... that's why memory never seemed to increase.

I'm modifying the tiros run command to not test HTTP, and to throw if ERR_NAME_NOT_RESOLVED happens.

@SgtPooki
Member Author

Fixed with #81 (comment).

Will be running tests again.

@SgtPooki
Member Author

SgtPooki commented Mar 27, 2024

FYI: subdomains of Docker hosts are not resolving properly, even for the Kubo container.

The below is run from the chrome container (obtained from the tiros repo) when running docker compose up -d:

$ curl http://ipfs:8080/ipfs/bafkqaddimvwgy3zao5xxe3debi
hello world
$ curl http://bafkqaddimvwgy3zao5xxe3debi.ipfs.ipfs:8080
curl: (6) Could not resolve host: bafkqaddimvwgy3zao5xxe3debi.ipfs.ipfs
$

Edit: so we should probably be setting USE_SUBDOMAINS=false for tiros.

@SgtPooki
Member Author

This is probably an issue with my Mac's local network, because subdomains were working for tiros in the deployed environment. But checking responses from Kubo, it looks like setting USE_SUBDOMAINS=false gives us the same result:

Kubo:

INFO[0010] Start probing libp2p.io IPFS
DEBU[0010] Connecting to browser...                      url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0010] Initialize incognito browser                  url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0010] Clearing browser cookies                      url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0010] Opening new page                              url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0010] Attaching javascript                          url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0010] Navigating...                                 timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0011] Waiting for onload event ...                  timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0011] Waiting for network idle event ...            timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0011] Running polyfill JS ...                       url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0011] Running measurement ...                       url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0016] Getting Performance Entry measurement ...     url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0017] Closing page...                               url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0017] Closing browser connection...                 url="http://ipfs:8080/ipns/libp2p.io"
INFO[0017] Probed website libp2p.io                      error="<nil>" fcp=281.1999999284744 lcp=281.1999999284744 status=301 ttfb=257 tti=275
INFO[0017] Handling probe result                         url="http://ipfs:8080/ipns/libp2p.io"

helia-http-gateway:

INFO[0027] Start probing libp2p.io IPFS
DEBU[0027] Connecting to browser...                      url="http://ipfs:8080/ipns/libp2p.io"
INFO[0027] Awaiting Provider or Probe result...
DEBU[0027] Initialize incognito browser                  url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0027] Clearing browser cookies                      url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0027] Opening new page                              url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0027] Attaching javascript                          url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0027] Navigating...                                 timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0027] Waiting for onload event ...                  timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0028] Waiting for network idle event ...            timeout=15s url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0028] Running polyfill JS ...                       url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0028] Running measurement ...                       url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0033] Getting Performance Entry measurement ...     url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0033] Closing page...                               url="http://ipfs:8080/ipns/libp2p.io"
DEBU[0033] Closing browser connection...                 url="http://ipfs:8080/ipns/libp2p.io"
INFO[0033] Probed website libp2p.io                      error="<nil>" fcp=337.90000009536743 lcp=337.90000009536743 status=301 ttfb=310.10000002384186 tti=323
INFO[0033] Handling probe result                         url="http://ipfs:8080/ipns/libp2p.io"

So I'll just continue my testing with this flag disabled.

@SgtPooki
Member Author

SgtPooki commented Mar 27, 2024

I was able to get the ERR_STREAM_PREMATURE_CLOSE error locally after only a few tries with tiros:

2024-03-27 15:46:47 2024-03-27T22:46:47.544Z helia-http-gateway:server heliaServer Started!
2024-03-27 15:46:47 2024-03-27T22:46:47.544Z helia-http-gateway:server initialized
2024-03-27 15:46:53 2024-03-27T22:46:53.526Z helia-http-gateway:server fetching Helia version info
2024-03-27 15:46:53 2024-03-27T22:46:53.527Z helia-http-gateway:server helia version string: 4.1.0
2024-03-27 15:46:53 (node:7) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
2024-03-27 15:46:53 (Use `node --trace-warnings ...` to show where the warning was created)
2024-03-27 15:46:53 2024-03-27T22:46:53.728Z helia-http-gateway:server helia version info: { Version: '4.1.0', Commit: '44e6195' }
2024-03-27 15:47:04 2024-03-27T22:47:04.284Z helia-http-gateway:server fetch request /ipns/n0.computer
2024-03-27 15:47:04 2024-03-27T22:47:04.284Z helia-http-gateway:server handling entry:  { address: 'n0.computer', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:04 2024-03-27T22:47:04.284Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:04 2024-03-27T22:47:04.284Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:04 2024-03-27T22:47:04.284Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/n0.computer" with @helia/verified-fetch
2024-03-27 15:47:04 2024-03-27T22:47:04.484Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:04 2024-03-27T22:47:04.523Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:04 2024-03-27T22:47:04.523Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:04 2024-03-27T22:47:04.523Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:10 2024-03-27T22:47:10.326Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:10 2024-03-27T22:47:10.556Z helia-http-gateway:server fetch request /ipns/blog.libp2p.io
2024-03-27 15:47:10 2024-03-27T22:47:10.556Z helia-http-gateway:server handling entry:  {
2024-03-27 15:47:10   address: 'blog.libp2p.io',
2024-03-27 15:47:10   namespace: 'ipns',
2024-03-27 15:47:10   relativePath: undefined
2024-03-27 15:47:10 }
2024-03-27 15:47:10 2024-03-27T22:47:10.556Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:10 2024-03-27T22:47:10.556Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:10 2024-03-27T22:47:10.556Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/blog.libp2p.io" with @helia/verified-fetch
2024-03-27 15:47:10 2024-03-27T22:47:10.798Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:10 2024-03-27T22:47:10.844Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:10 2024-03-27T22:47:10.844Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:10 2024-03-27T22:47:10.845Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:16 2024-03-27T22:47:16.607Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:16 2024-03-27T22:47:16.849Z helia-http-gateway:server fetch request /ipns/web3.storage
2024-03-27 15:47:16 2024-03-27T22:47:16.849Z helia-http-gateway:server handling entry:  { address: 'web3.storage', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:16 2024-03-27T22:47:16.849Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:16 2024-03-27T22:47:16.849Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:16 2024-03-27T22:47:16.849Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/web3.storage" with @helia/verified-fetch
2024-03-27 15:47:16 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:17 2024-03-27T22:47:17.086Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:17 2024-03-27T22:47:17.169Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:17 2024-03-27T22:47:17.169Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:17 2024-03-27T22:47:17.170Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:22 2024-03-27T22:47:22.886Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:23 2024-03-27T22:47:23.148Z helia-http-gateway:server fetch request /ipns/singularity.storage
2024-03-27 15:47:23 2024-03-27T22:47:23.148Z helia-http-gateway:server handling entry:  {
2024-03-27 15:47:23   address: 'singularity.storage',
2024-03-27 15:47:23   namespace: 'ipns',
2024-03-27 15:47:23   relativePath: undefined
2024-03-27 15:47:23 }
2024-03-27 15:47:23 2024-03-27T22:47:23.148Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:23 2024-03-27T22:47:23.148Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:23 2024-03-27T22:47:23.148Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/singularity.storage" with @helia/verified-fetch
2024-03-27 15:47:23 2024-03-27T22:47:23.627Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:23 2024-03-27T22:47:23.759Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:23 2024-03-27T22:47:23.759Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:23 2024-03-27T22:47:23.760Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:29 2024-03-27T22:47:29.185Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:29 2024-03-27T22:47:29.428Z helia-http-gateway:server fetch request /ipns/drand.love
2024-03-27 15:47:29 2024-03-27T22:47:29.428Z helia-http-gateway:server handling entry:  { address: 'drand.love', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:29 2024-03-27T22:47:29.428Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:29 2024-03-27T22:47:29.428Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:29 2024-03-27T22:47:29.428Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/drand.love" with @helia/verified-fetch
2024-03-27 15:47:29 2024-03-27T22:47:29.506Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:29 2024-03-27T22:47:29.520Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:29 2024-03-27T22:47:29.520Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:29 2024-03-27T22:47:29.521Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:35 2024-03-27T22:47:35.506Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:35 2024-03-27T22:47:35.767Z helia-http-gateway:server fetch request /ipns/consensuslab.world
2024-03-27 15:47:35 2024-03-27T22:47:35.767Z helia-http-gateway:server handling entry:  {
2024-03-27 15:47:35   address: 'consensuslab.world',
2024-03-27 15:47:35   namespace: 'ipns',
2024-03-27 15:47:35   relativePath: undefined
2024-03-27 15:47:35 }
2024-03-27 15:47:35 2024-03-27T22:47:35.768Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:35 2024-03-27T22:47:35.768Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:35 2024-03-27T22:47:35.768Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/consensuslab.world" with @helia/verified-fetch
2024-03-27 15:47:35 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:36 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:36 2024-03-27T22:47:36.509Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:36 2024-03-27T22:47:36.622Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:36 2024-03-27T22:47:36.622Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:36 2024-03-27T22:47:36.623Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:41 2024-03-27T22:47:41.807Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:42 2024-03-27T22:47:42.079Z helia-http-gateway:server fetch request /ipns/filecoin.io
2024-03-27 15:47:42 2024-03-27T22:47:42.079Z helia-http-gateway:server handling entry:  { address: 'filecoin.io', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:42 2024-03-27T22:47:42.079Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:42 2024-03-27T22:47:42.079Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:42 2024-03-27T22:47:42.079Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/filecoin.io" with @helia/verified-fetch
2024-03-27 15:47:42 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:42 2024-03-27T22:47:42.522Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:42 2024-03-27T22:47:42.648Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:42 2024-03-27T22:47:42.648Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:42 2024-03-27T22:47:42.652Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:45 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:46 2024-03-27T22:47:46.133Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:46 2024-03-27T22:47:46.133Z helia-http-gateway:server fetching url "http://localhost:8080/ipfs-sw-sw.js" with @helia/verified-fetch
2024-03-27 15:47:46 2024-03-27T22:47:46.134Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:48 2024-03-27T22:47:48.150Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:48 2024-03-27T22:47:48.409Z helia-http-gateway:server fetch request /ipns/filecoin.io
2024-03-27 15:47:48 2024-03-27T22:47:48.409Z helia-http-gateway:server handling entry:  { address: 'filecoin.io', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:48 2024-03-27T22:47:48.409Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:48 2024-03-27T22:47:48.409Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:48 2024-03-27T22:47:48.409Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/filecoin.io" with @helia/verified-fetch
2024-03-27 15:47:48 2024-03-27T22:47:48.916Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:49 2024-03-27T22:47:49.041Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:49 2024-03-27T22:47:49.041Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:49 2024-03-27T22:47:49.041Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:54 2024-03-27T22:47:54.462Z helia-http-gateway:server running `gc` on Helia node
2024-03-27 15:47:54 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:54 2024-03-27T22:47:54.702Z helia-http-gateway:server fetch request /ipns/probelab.io
2024-03-27 15:47:54 2024-03-27T22:47:54.702Z helia-http-gateway:server handling entry:  { address: 'probelab.io', namespace: 'ipns', relativePath: undefined }
2024-03-27 15:47:54 2024-03-27T22:47:54.702Z helia-http-gateway:server subdomains are disabled, fetching without subdomain
2024-03-27 15:47:54 2024-03-27T22:47:54.702Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:54 2024-03-27T22:47:54.702Z helia-http-gateway:server fetching url "http://ipfs:8080/ipns/probelab.io" with @helia/verified-fetch
2024-03-27 15:47:54 (node:7) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
2024-03-27 15:47:55 2024-03-27T22:47:55.263Z helia-http-gateway:server verified-fetch response not ok:  301
2024-03-27 15:47:55 2024-03-27T22:47:55.449Z helia-http-gateway:server request.query: {}
2024-03-27 15:47:55 2024-03-27T22:47:55.449Z helia-http-gateway:server fetching url "http://ipfs:8080/favicon.ico" with @helia/verified-fetch
2024-03-27 15:47:55 2024-03-27T22:47:55.449Z helia-http-gateway:server verified-fetch response not ok:  400
2024-03-27 15:47:58 node:events:496
2024-03-27 15:47:58       throw er; // Unhandled 'error' event
2024-03-27 15:47:58       ^
2024-03-27 15:47:58 
2024-03-27 15:47:58 Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
2024-03-27 15:47:58     at Socket.onclose (node:internal/streams/end-of-stream:154:30)
2024-03-27 15:47:58     at Socket.emit (node:events:530:35)
2024-03-27 15:47:58     at TCP.<anonymous> (node:net:337:12)
2024-03-27 15:47:58 Emitted 'error' event on JSStreamSocket instance at:
2024-03-27 15:47:58     at Duplex.<anonymous> (node:internal/js_stream_socket:65:38)
2024-03-27 15:47:58     at Duplex.emit (node:events:518:28)
2024-03-27 15:47:58     at emitErrorNT (node:internal/streams/destroy:169:8)
2024-03-27 15:47:58     at emitErrorCloseNT (node:internal/streams/destroy:128:3)
2024-03-27 15:47:58     at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
2024-03-27 15:47:58   code: 'ERR_STREAM_PREMATURE_CLOSE'
2024-03-27 15:47:58 }

This is on the latest @helia/verified-fetch, which does pass and handle signals. I will investigate further to see if I can reproduce this outside of tiros.

FYI: this error also caused tiros to continue running in the background, still listening on port 6666, which caused issues when trying to start it up again:

╰─ ✔ ❯ go run . run
DEBU[0000] Starting telemetry endpoint                   addr="localhost:6666"
INFO[0000] Starting Tiros run...
INFO[0000] Initializing database client                  host=localhost name=tiros_test port=5432 ssl=disable user=tiros_test
WARN[0000] Error serving prometheus                      error="listen tcp 127.0.0.1:6666: bind: address already in use"
INFO[0002] Stopped Tiros run.
ERRO[0002] error: init db connection: pinging database: read tcp 127.0.0.1:59623->127.0.0.1:5432: read: connection reset by peer
exit status 1

@SgtPooki
Member Author

I've made some edits to helia-http-gateway locally; running the npm run debug:until-death script shows a significant reduction in the libp2p dial-queue and kad-dht running-queries metrics:

Before changes

image image

Changes made:

diff --git a/debugging/until-death.sh b/debugging/until-death.sh
index cef57b1..b622471 100755
--- a/debugging/until-death.sh
+++ b/debugging/until-death.sh
@@ -30,7 +30,7 @@ start_gateway() {
 start_gateway & process_pid=$!

 ensure_gateway_running() {
-  npx wait-on "tcp:$PORT" -t 1000 || exit 1
+  npx wait-on "tcp:$PORT" -t 5000 || exit 1
 }


diff --git a/src/helia-server.ts b/src/helia-server.ts
index 8b7ef77..f9809f2 100644
--- a/src/helia-server.ts
+++ b/src/helia-server.ts
@@ -251,12 +251,27 @@ export class HeliaServer {

     const opController = new AbortController()
     setMaxListeners(Infinity, opController.signal)
-    request.raw.on('close', () => {
-      if (request.raw.aborted) {
-        this.log('request aborted by client')
-        opController.abort()
+    const cleanupFn = (): void => {
+      if (request.raw.readableAborted) {
+        this.log.trace('request aborted by client for url "%s"', url)
+      } else if (request.raw.destroyed) {
+        this.log.trace('request destroyed for url "%s"', url)
+      } else if (request.raw.complete) {
+        this.log.trace('request closed or ended in completed state for url "%s"', url)
+      } else {
+        this.log.trace('request closed or ended gracefully for url "%s"', url)
       }
-    })
+      // we want to stop all further processing because the request is closed
+      opController.abort()
+    }
+    /**
+     * The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.
+     * A Readable stream will always emit the 'close' event if it is created with the emitClose option.
+     *
+     * @see https://nodejs.org/api/stream.html#event-close_1
+     */
+    request.raw.on('close', cleanupFn)
+
     await this.isReady
     const resp = await this.heliaFetch(url, { signal: opController.signal, redirect: 'manual' })
     await this.#convertVerifiedFetchResponseToFastifyReply(resp, reply)

After changes:

image image

However, I still ran into the ERR_STREAM_PREMATURE_CLOSE error when running gc after about 5 minutes of non-stop requests:

  helia-http-gateway:server fetching url "http://ipfs-tech.ipns.localhost:8080/" with @helia/verified-fetch +0ms
  helia-http-gateway:server:trace request destroyed for url "http://ipfs-tech.ipns.localhost:8080/" +358ms
http://localhost:8080/ipns/ipfs.tech: HTTP_200 in 0.384512 seconds (TTFB: 0.384370, rediect: 0.027293)
running GC
node:events:496
      throw er; // Unhandled 'error' event
      ^

Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
    at Socket.onclose (node:internal/streams/end-of-stream:154:30)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12)
Emitted 'error' event on JSStreamSocket instance at:
    at Duplex.<anonymous> (node:internal/js_stream_socket:64:38)
    at Duplex.emit (node:events:518:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ERR_STREAM_PREMATURE_CLOSE'
}

Also, I am deliberately making this problem worse by setting export NODE_OPTIONS=--max_old_space_size=1024 so that it pops up more quickly. cc @achingbrain
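
A quick way to watch how close the heap gets to that cap during a run is a snippet like this in server startup. Generic Node code, not currently in the gateway as far as I know:

// Log heap usage every 10s so OOM progression is visible in container logs.
const toMB = (n: number): string => `${(n / 1024 / 1024).toFixed(1)}MB`

setInterval(() => {
  const { heapUsed, heapTotal, rss } = process.memoryUsage()
  console.log(`heapUsed=${toMB(heapUsed)} heapTotal=${toMB(heapTotal)} rss=${toMB(rss)}`)
}, 10_000).unref() // unref so the timer doesn't keep the process alive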

@SgtPooki
Member Author

SgtPooki commented Mar 28, 2024

So I just finished a run of DEBUG="helia*,helia*:trace,libp2p*,libp2p*:trace" npm run debug:until-death &> until-death.log with export NODE_OPTIONS=--max_old_space_size=512.

It took 2 hours to fail, so until-death.log is 15G (du -hs until-death.log), but here's the tail and some graphs:

╰─ ✔ ❯ tail -200 until-death.log
  protocol: '/noise',
  crypto: [Object],
  prologue: Uint8Array(0) [],
  staticKey: [Object],
  extensions: undefined,
  metrics: [Object],
  components: [DefaultComponents]
}
2024-03-28T02:09:05.593Z libp2p:noise:xxhandshake:trace Stage 0 - Initiator starting to send first message.
2024-03-28T02:09:05.593Z libp2p:noise:xxhandshake:trace Stage 0 - Initiator finished sending first message.
2024-03-28T02:09:05.593Z libp2p:noise:xxhandshake:trace Stage 1 - Initiator waiting to receive first message from responder...
2024-03-28T02:09:05.593Z libp2p:tcp:socket 161.97.115.198:4001 socket close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379 closing connection to /ip4/161.97.115.198/tcp/4001/p2p/12D3KooWAYzrPKoGwP7BsSt12yc6d68MaJqcpPGmWZnSU6rmEpNy
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace closing all streams
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace closing underlying transport
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace close muxer
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace close maconn
2024-03-28T02:09:05.593Z libp2p:tcp:socket 161.97.115.198:4001 socket was already destroyed when trying to close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace closed maconn
2024-03-28T02:09:05.593Z libp2p:connection:outbound:1gha9t1711591720379:trace updating timeline with close time
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get await read lock
2024-03-28T02:09:05.593Z libp2p:connection-manager:auto-dial:trace have enough connections 299/50
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get got read lock
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get release read lock
2024-03-28T02:09:05.593Z libp2p:tcp:socket 66.63.188.131:4001 socket close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936 closing connection to /ip4/66.63.188.131/tcp/4001/p2p/12D3KooWDrNUXdYCGPMJs6GNBE6SfYDrAr4Ty8N7Nc2Xapmzvu1q
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace closing all streams
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace closing underlying transport
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace close muxer
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace close maconn
2024-03-28T02:09:05.593Z libp2p:tcp:socket 66.63.188.131:4001 socket was already destroyed when trying to close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace closed maconn
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebfrv91711591720936:trace updating timeline with close time
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get await read lock
2024-03-28T02:09:05.593Z libp2p:connection-manager:auto-dial:trace have enough connections 298/50
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get got read lock
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get release read lock
2024-03-28T02:09:05.593Z libp2p:tcp:socket 43.229.133.60:4001 socket close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316 closing connection to /ip4/43.229.133.60/tcp/4001/p2p/12D3KooWK7yGyNjM2kc1XPumorpKMupmNrj2452E2uzQwW11jUS9
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace closing all streams
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace closing underlying transport
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace close muxer
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace close maconn
2024-03-28T02:09:05.593Z libp2p:tcp:socket 43.229.133.60:4001 socket was already destroyed when trying to close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace closed maconn
2024-03-28T02:09:05.593Z libp2p:connection:outbound:ebezl21711591717316:trace updating timeline with close time
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get await read lock
2024-03-28T02:09:05.593Z libp2p:connection-manager:auto-dial:trace have enough connections 297/50
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get got read lock
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get release read lock
2024-03-28T02:09:05.593Z libp2p:tcp:socket 84.16.248.163:4001 socket close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486 closing connection to /ip4/84.16.248.163/tcp/4001/p2p/12D3KooWSicuu7keDHfgzydEdcn4txyz2zmjtJhCLwLQFEQd6LBF
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace closing all streams
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace closing underlying transport
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace close muxer
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace close maconn
2024-03-28T02:09:05.593Z libp2p:tcp:socket 84.16.248.163:4001 socket was already destroyed when trying to close
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace closed maconn
2024-03-28T02:09:05.593Z libp2p:connection:outbound:3vl2xn1711591722486:trace updating timeline with close time
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get await read lock
2024-03-28T02:09:05.593Z libp2p:connection-manager:auto-dial:trace have enough connections 296/50
2024-03-28T02:09:05.593Z libp2p:peer-store:trace get got read lock
2024-03-28T02:09:05.594Z libp2p:peer-store:trace get release read lock
2024-03-28T02:09:05.594Z libp2p:tcp:socket 165.84.147.38:20675 socket close
2024-03-28T02:09:05.594Z libp2p:tcp:socket 165.84.147.38:20675 socket closed
2024-03-28T02:09:05.594Z libp2p:connection-manager:dial-queue:error dial failed to /ip4/165.84.147.38/tcp/20675/p2p/QmdRK95EX78ftKdvhsvV587K97nYLsMNSrjUSPPsAqvQhy CodeError: error:068000DD:asn1 encoding routines::illegal padding
    at DefaultUpgrader._encryptOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:520:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async DefaultUpgrader.upgradeOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:199:21)
    at async TCP.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/tcp/dist/src/index.js:79:22)
    at async DefaultTransportManager.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/transport-manager.js:81:20)
    at async queue.add.peerId.peerId [as fn] (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:154:38)
    at async raceSignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/race-signal/dist/src/index.js:28:16)
    at async Job.run (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/utils/dist/src/queue/job.js:56:28) {
  code: 'ERR_ENCRYPTION_FAILED',
  props: {}
}
2024-03-28T02:09:05.594Z libp2p:peer-store:trace patch await write lock
2024-03-28T02:09:05.594Z libp2p:peer-store:trace patch got write lock
2024-03-28T02:09:05.594Z libp2p:peer-store:trace patch release write lock
2024-03-28T02:09:05.594Z libp2p:connection-manager:dial-queue loading multiaddrs for 12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG
2024-03-28T02:09:05.594Z libp2p:peer-store:trace get await read lock
2024-03-28T02:09:05.594Z libp2p:connection-manager:dial-queue:error error in dial queue CodeError: error:068000DD:asn1 encoding routines::illegal padding
    at DefaultUpgrader._encryptOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:520:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async DefaultUpgrader.upgradeOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:199:21)
    at async TCP.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/tcp/dist/src/index.js:79:22)
    at async DefaultTransportManager.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/transport-manager.js:81:20)
    at async queue.add.peerId.peerId [as fn] (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:154:38)
    at async raceSignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/race-signal/dist/src/index.js:28:16)
    at async Job.run (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/utils/dist/src/queue/job.js:56:28) {
  code: 'ERR_ENCRYPTION_FAILED',
  props: {}
}
2024-03-28T02:09:05.594Z libp2p:peer-store:trace get got read lock
2024-03-28T02:09:05.594Z libp2p:kad-dht:network:error could not send FIND_NODE to QmdRK95EX78ftKdvhsvV587K97nYLsMNSrjUSPPsAqvQhy CodeError: error:068000DD:asn1 encoding routines::illegal padding
    at DefaultUpgrader._encryptOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:520:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async DefaultUpgrader.upgradeOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:199:21)
    at async TCP.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/tcp/dist/src/index.js:79:22)
    at async DefaultTransportManager.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/transport-manager.js:81:20)
    at async queue.add.peerId.peerId [as fn] (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:154:38)
    at async raceSignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/race-signal/dist/src/index.js:28:16)
    at async Job.run (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/utils/dist/src/queue/job.js:56:28) {
  code: 'ERR_ENCRYPTION_FAILED',
  props: {}
}
2024-03-28T02:09:05.594Z libp2p:peer-store:trace get release read lock
2024-03-28T02:09:05.594Z libp2p:connection-manager:dial-queue loaded multiaddrs for 12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG [
  '/ip4/10.116.0.152/tcp/4001',
  '/ip4/10.116.0.152/udp/4001/quic',
  '/ip4/127.0.0.1/tcp/4001',
  '/ip4/127.0.0.1/udp/4001/quic',
  '/ip4/18.141.49.119/tcp/4001',
  '/ip4/18.141.49.119/udp/4001/quic',
  '/ip6/::1/tcp/4001',
  '/ip6/::1/udp/4001/quic',
  '/ip6/64:ff9b::128d:3177/tcp/4001',
  '/ip6/64:ff9b::128d:3177/udp/4001/quic',
  '/ip6/64:ff9b::128d:3177/tcp/4001',
  '/ip6/64:ff9b::128d:3177/udp/4001/quic'
]
2024-03-28T02:09:05.595Z libp2p:kad-dht:peer-routing closerPeersSingle ciqnguecodiqhsbhikce7taucrkq7surww257747n2t3jubzgmafljq from 12D3KooWEQWNvKYokYFRSCnx7f1Y7zmdVKSxiKsMQQ4cu1jY35UU
2024-03-28T02:09:05.595Z libp2p:kad-dht:network sending FIND_NODE to 12D3KooWEQWNvKYokYFRSCnx7f1Y7zmdVKSxiKsMQQ4cu1jY35UU
2024-03-28T02:09:05.595Z libp2p:kad-dht:query:QmcZXwHfd8CXLdzM2dbEmiQQBh5LutrsPoCPB4V7RHLmcH:error query error CodeError: error:068000DD:asn1 encoding routines::illegal padding
    at DefaultUpgrader._encryptOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:520:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async DefaultUpgrader.upgradeOutbound (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/upgrader.js:199:21)
    at async TCP.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/tcp/dist/src/index.js:79:22)
    at async DefaultTransportManager.dial (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/transport-manager.js:81:20)
    at async queue.add.peerId.peerId [as fn] (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/libp2p/dist/src/connection-manager/dial-queue.js:154:38)
    at async raceSignal (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/race-signal/dist/src/index.js:28:16)
    at async Job.run (file:///Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/@libp2p/utils/dist/src/queue/job.js:56:28) {
  code: 'ERR_ENCRYPTION_FAILED',
  props: {}
}
2024-03-28T02:09:05.595Z libp2p:connection-manager:dial-queue:trace addresses for 12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG before filtering [
  '/ip4/10.116.0.152/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip4/10.116.0.152/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip4/127.0.0.1/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip4/18.141.49.119/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip4/18.141.49.119/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/::1/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/::1/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/64:ff9b::128d:3177/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/64:ff9b::128d:3177/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/64:ff9b::128d:3177/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/64:ff9b::128d:3177/udp/4001/quic/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG'
]
2024-03-28T02:09:05.595Z libp2p:connection-manager:dial-queue:trace addresses for 12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG after filtering [
  '/ip4/18.141.49.119/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG',
  '/ip6/64:ff9b::128d:3177/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG'
]
2024-03-28T02:09:05.595Z libp2p:tcp dialing /ip4/18.141.49.119/tcp/4001/p2p/12D3KooWBzif5zQoa2eXLZRunBvi11nwMK7qRf3oVNpL6228bSqG
2024-03-28T02:09:05.595Z libp2p:connection-manager dial 12D3KooWEQWNvKYokYFRSCnx7f1Y7zmdVKSxiKsMQQ4cu1jY35UU
2024-03-28T02:09:05.596Z libp2p:connection-manager:dial-queue creating dial target for 12D3KooWEQWNvKYokYFRSCnx7f1Y7zmdVKSxiKsMQQ4cu1jY35UU []
node:events:496
      throw er; // Unhandled 'error' event
      ^

Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
    at Socket.onclose (node:internal/streams/end-of-stream:154:30)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12)
Emitted 'error' event on JSStreamSocket instance at:
    at Duplex.<anonymous> (node:internal/js_stream_socket:64:38)
    at Duplex.emit (node:events:518:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ERR_STREAM_PREMATURE_CLOSE'
}

Node.js v20.11.1
http://localhost:8080/ipns/protocol.ai: HTTP_301 in 3.465685 seconds (TTFB: 3.465626, rediect: 1.281103)
running GC
http://localhost:8080/api/v0/repo/gc: HTTP_000 in 0.000451 seconds
Error: Timed out waiting for: tcp:8080
    at /Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/wait-on/lib/wait-on.js:131:31
    at doInnerSub (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/operators/mergeInternals.js:22:31)
    at outerNext (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/operators/mergeInternals.js:17:70)
    at OperatorSubscriber._this._next (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/operators/OperatorSubscriber.js:33:21)
    at Subscriber.next (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/Subscriber.js:51:18)
    at AsyncAction.work (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/observable/timer.js:28:28)
    at AsyncAction._execute (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/scheduler/AsyncAction.js:79:18)
    at AsyncAction.execute (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/scheduler/AsyncAction.js:67:26)
    at AsyncScheduler.flush (/Users/sgtpooki/code/work/protocol.ai/ipfs/helia-http-gateway/node_modules/rxjs/dist/cjs/internal/scheduler/AsyncScheduler.js:38:33)
    at listOnTimeout (node:internal/timers:573:17)

Finding the last helia-http-gateway: log item before the above error

╰─ ✔ PIPE|0|0 ❯ tac until-death.log | grep -a -F 'helia-http-gateway:' -m1 -B 1 | tac
2024-03-28T02:09:05.589Z helia-http-gateway:server fetching url "http://protocol-ai.ipns.localhost:8080/" with @helia/verified-fetch
2024-03-28T02:09:05.589Z helia:verified-fetch fetch http://protocol-ai.ipns.localhost:8080/

Graphs:

[nine metrics graph screenshots]

I'm not sure what to make of all this just yet; I need to investigate further.

@SgtPooki
Member Author

doing some processing of the log...

╰─ ✔ ❯ grep -a -o 'ERR_[A-Z_]\+' until-death.log | sort | uniq -c | sort
      3 ERR_SOCKET_READ_TIMEOUT
      6 ERR_UNREQUESTED_PING
      7 ERR_SOCKET_CLOSE_TIMEOUT
     20 ERR_UNSUPPORTED_PROTOCOL
     27 ERR_TOO_MANY_ADDRESSES
     29 ERR_MAX_RECURSIVE_DEPTH_REACHED
     32 ERR_RELAYED_DIAL
     59 ERR_CONNECTION_FAILED
     80 ERR_TRANSIENT_CONNECTION
    109 ERR_TOO_MANY_INBOUND_PROTOCOL_STREAMS
    284 ERR_HOP_REQUEST_FAILED
    308 ERR_CONNECTION_BEING_CLOSED
    936 ERR_TIMEOUT
   1474 ERR_MUXER_UNAVAILABLE
   3038 ERR_TRANSPORT_DIAL_FAILED
   3731 ERR_DIALED_SELF
  16747 ERR_STREAM_RESET
  18733 ERR_ENCRYPTION_FAILED
  49152 ERR_MUXER_LOCAL_CLOSED
 105748 ERR_STREAM_PREMATURE_CLOSE
 120758 ERR_NO_VALID_ADDRESSES
 280800 ERR_UNEXPECTED_EOF

@SgtPooki
Member Author

helia-wg notes 2024-03-28

  • look into whether delegated routing is returning the same peers for multiple requests for the same probelab websites
  • TLS bug fix coming from libp2p; IPv4 vs IPv6 seem to behave differently in Node.js
  • try restricting memory/CPU via Docker to reproduce
  • try dialing "faulty" peers immediately to reproduce the TCP/stream reset error?

@SgtPooki
Member Author

FYI: with my "helia-all" configuration, I can usually repro this error within a few minutes using the npm run debug:until-death command. However, I am now 22 minutes into a run (more than a few cycles through all the websites) and have yet to crash helia-http-gateway. The only changes I made were setting USE_TRUSTLESS_GATEWAYS=false and commenting out FILE_DATASTORE_PATH and FILE_BLOCKSTORE_PATH.

@SgtPooki
Member Author

SgtPooki commented Mar 29, 2024

Ok, I've been working on this quite a bit over the past few days. I've got a script running now that executes until-death.sh for a given number of iterations for each permutation of $USE_SUBDOMAINS, $USE_BITSWAP, $USE_TRUSTLESS_GATEWAYS, $USE_LIBP2P, and $USE_DELEGATED_ROUTING, filtering out configurations where fetching content would be impossible:

  if [ "$USE_BITSWAP" = false ]; then
    echo "Skipping test for configuration: $config_id"
    return
  fi
  if [ "$USE_LIBP2P" = false ] && [ "$USE_DELEGATED_ROUTING" = false ]; then
    echo "Skipping test for configuration: $config_id"
    return
  fi

I'm enabling all debug logs for helia & libp2p with export DEBUG="helia*,helia*:trace,libp2p*,libp2p*:trace", and tracking the output for each run in a log file.

I'm running my script with ./time-permutations.sh 5m 100 right now, which:

  1. runs until-death.sh for each permutation of the above env vars
  2. does 100 iterations of the above

until-death.sh is given 5 minutes to run; if it times out, the run is recorded as successful and its log file is deleted. If it is unsuccessful (until-death.sh fails before the timeout), the log file is kept and the script continues to the next permutation or iteration. time-permutations.sh also writes a results.csv file with the time each run took, all env var values, and whether the run was successful.
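For reference, the driver loop amounts to something like this hypothetical Node re-implementation (a sketch only; the real driver is the time-permutations.sh shell script, and names like FLAGS and ITERATIONS are invented here):

// Hypothetical Node sketch of what time-permutations.sh does: run
// until-death.sh under every viable env permutation with a timeout,
// and append one line per run to results.csv.
import { spawnSync } from 'node:child_process'
import { appendFileSync } from 'node:fs'

const FLAGS = ['USE_SUBDOMAINS', 'USE_BITSWAP', 'USE_TRUSTLESS_GATEWAYS', 'USE_LIBP2P', 'USE_DELEGATED_ROUTING']
const TIMEOUT_MS = 5 * 60 * 1000 // the "5m" argument
const ITERATIONS = 100

for (let i = 0; i < ITERATIONS; i++) {
  for (let mask = 0; mask < 2 ** FLAGS.length; mask++) {
    const env = Object.fromEntries(FLAGS.map((flag, bit) => [flag, String(Boolean(mask & (1 << bit)))]))
    // Same filters as the snippet above: skip configurations where fetching is impossible
    if (env.USE_BITSWAP === 'false') continue
    if (env.USE_LIBP2P === 'false' && env.USE_DELEGATED_ROUTING === 'false') continue

    const start = Date.now()
    const result = spawnSync('./until-death.sh', { env: { ...process.env, ...env }, timeout: TIMEOUT_MS })
    // Hitting the timeout means the gateway survived the whole window: success
    const survived = (result.error as NodeJS.ErrnoException | undefined)?.code === 'ETIMEDOUT'
    appendFileSync('results.csv', `${Date.now() - start},${FLAGS.map(f => env[f]).join(',')},${survived}\n`)
  }
}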

I've also got a local Grafana + Prometheus setup, plus some code for folks to reuse when standing up their own, with provisioned dashboards, a datasource, and a docker-compose file for easy running.

I've also updated the test-gateways.sh script with a few extra CIDs to test against.

I've discovered a few things while repeatedly bashing on this (sorry to all the peers getting bombarded with requests for the same content).

Problems that need to be fixed

  1. No easy-to-view metrics for debugging things like this. Polishing up a default dashboard for investigating libp2p issues would be nice.
  2. No handling of aborted requests for calls to helia.gc.
  3. No catching of uncaughtException (this halts the whole Node process, breaking Tiros); see the sketch after this list.
  4. No catching of unhandledRejection (could cause more issues in the future).
  5. Setting FILE_DATASTORE_PATH or FILE_BLOCKSTORE_PATH to an empty string causes issues.
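Items 3 and 4 amount to something like the following, assuming the handlers live in src/index.ts (a minimal sketch; the real handlers may also want to flush logs/metrics or trigger a graceful shutdown rather than just logging):

// Process-level safety nets. Without these, a single uncaught error halts
// the whole Node process and Tiros sees the gateway as dead.
process.on('uncaughtException', (err) => {
  console.error('uncaughtException:', err)
})

process.on('unhandledRejection', (reason) => {
  console.error('unhandledRejection:', reason)
})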

Actions to take:

  • Create a bulk PR including all changes I have made thus far so they don't get lost; check the changes to make sure nothing crazy is in there before proceeding.
  • Make the Grafana + Prometheus setup reusable.
  • Catch uncaughtException in src/index.ts.
  • Catch unhandledRejection in src/index.ts.
  • Add new debugging scripts and a PR with fixes to existing scripts.
  • Pull requestAwareAbortSignal out into a function that can be reused by the helia-server.ts fetch and gc methods (see the sketch after this list).
  • Add the tiered-limit stores from feat: create new core data & blockstore that acts as a cache/buffered store js-stores#287 to prevent excess memory usage.
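For the requestAwareAbortSignal item, the reusable helper might look roughly like this (assumptions: the server is Fastify-based, Node >= 20.3 so AbortSignal.any is available, and getRequestAwareSignal is an invented name):

import type { FastifyRequest } from 'fastify'

// Returns a signal that aborts when the client disconnects (and optionally
// after a timeout), so long-running work like fetch and helia.gc can bail out.
export function getRequestAwareSignal (request: FastifyRequest, timeoutMs?: number): AbortSignal {
  const controller = new AbortController()
  // request.raw is the underlying node:http IncomingMessage; 'close' fires
  // when the connection ends, including premature client disconnects.
  request.raw.on('close', () => {
    controller.abort()
  })
  return timeoutMs == null
    ? controller.signal
    : AbortSignal.any([controller.signal, AbortSignal.timeout(timeoutMs)])
}

Both the fetch and gc handlers could then pass the signal down, e.g. await helia.gc({ signal }), assuming Helia's gc accepts standard AbortOptions.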

@SgtPooki
Member Author

SgtPooki commented Mar 29, 2024

**New hypothesis**: attempting to dial IPv6 on a network that doesn't support it isn't handled properly.

IPv6 doesn't work on my network. (Update: apparently it does...?)

2024-03-29T20:24:17.704Z libp2p:tcp dialing /ip6/2a02:c206:2171:9087::1/tcp/4001/p2p/QmNvRp6XESHAZKUL6oN3np1Rn75ehfAbe8WPEKxhKipnZq
node:events:496
      throw er; // Unhandled 'error' event
      ^

Error [ERR_STREAM_PREMATURE_CLOSE]: Premature close
    at Socket.onclose (node:internal/streams/end-of-stream:154:30)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12)
Emitted 'error' event on JSStreamSocket instance at:
    at Duplex.<anonymous> (node:internal/js_stream_socket:64:38)
    at Duplex.emit (node:events:518:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ERR_STREAM_PREMATURE_CLOSE'
}

I'm going to test this more thoroughly once my time-permutations script run is done.
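One way to isolate this (a sketch, assuming js-libp2p's standard connectionGater hook; the rest of the node config is omitted) is to refuse all IPv6 dials and see whether the crash still happens:

import { createLibp2p } from 'libp2p'

// Deny every dial to an /ip6/ multiaddr; swapping 'ip6' for 'ip4' gives the
// inverse experiment mentioned below.
const node = await createLibp2p({
  connectionGater: {
    denyDialMultiaddr: async (ma) => ma.protoNames().includes('ip6')
  }
  // ... transports, stream muxers, etc. as in the normal gateway config
})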

Nope... blocking all IPv4 addrs works fine, so IPv6 dialing by itself isn't the problem.
