
Issues with kubernetes readiness probe #5

Open
SelfHostedJawn opened this issue Mar 20, 2021 · 2 comments
SelfHostedJawn commented Mar 20, 2021

I am following along with the official guide to deploying a cluster on Kubernetes, but I can't get it to set up the 3 pods the StatefulSet specifies. It hangs at 1 pod.

I'm getting the following message in my log:
Readiness probe failed: Get "http://10.1.70.53:9094/id": dial tcp 10.1.70.53:9094: connect: connection refused

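For context, the probe that produces this error is an HTTP GET against the cluster REST API on port 9094, path /id (both taken from the error message above). A minimal sketch of a probe of that shape; the delay/period values are illustrative, not taken from the actual manifest:

```yaml
# Readiness probe of the shape implied by the error above:
# an HTTP GET to <pod IP>:9094/id.
readinessProbe:
  httpGet:
    path: /id
    port: 9094
  initialDelaySeconds: 5
  periodSeconds: 10
```

Note that kubelet dials the pod IP, not localhost, so the service behind the probe port has to accept connections on the pod's interface.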
The IPFS container doesn't report any errors and shows the following:

Changing user to ipfs
ipfs version 0.8.0
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.8.0-ce693d7
Repo version: 11
System version: amd64/linux
Golang version: go1.14.4
2021/03/20 18:21:30 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
Swarm listening on /ip4/10.1.70.53/tcp/4001
Swarm listening on /ip4/10.1.70.53/udp/4001/quic
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/127.0.0.1/udp/4001/quic
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /ip6/::1/udp/4001/quic
Swarm listening on /p2p-circuit
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip4/127.0.0.1/udp/4001/quic
Swarm announcing /ip6/::1/tcp/4001
Swarm announcing /ip6/::1/udp/4001/quic
API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready

The IPFS Cluster container shows the following:

2021-03-20T18:21:30.984Z INFO config Saving configuration
configuration written to /data/ipfs-cluster/service.json.
2021-03-20T18:21:30.985Z INFO config Saving identity
new identity written to /data/ipfs-cluster/identity.json
new empty peerstore written to /data/ipfs-cluster/peerstore.
2021-03-20T18:21:32.022Z INFO service Initializing. For verbose output run with "-l debug". Please wait...
2021-03-20T18:21:32.046Z INFO cluster IPFS Cluster v0.13.1-next+git88cfcf62fc6c5c3a4f168ce7e1b57bbb5923f8f5 listening on:
/ip4/10.1.70.53/tcp/9096/p2p/bootstrap-peer-id
/ip4/127.0.0.1/tcp/9096/p2p/bootstrap-peer-id
2021-03-20T18:21:32.046Z INFO restapi REST API (HTTP): /ip4/127.0.0.1/tcp/9094
2021-03-20T18:21:32.046Z INFO ipfsproxy IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001
2021-03-20T18:21:32.046Z INFO crdt crdt Datastore created. Number of heads: 0. Current max-height: 0
2021-03-20T18:21:32.046Z INFO crdt 'trust all' mode enabled. Any peer in the cluster can modify the pinset.
2021-03-20T18:21:32.050Z INFO cluster Cluster Peers (without including ourselves):
2021-03-20T18:21:32.050Z INFO cluster - No other peers
2021-03-20T18:21:32.050Z INFO cluster ** IPFS Cluster is READY **

I'm stumped as to what to do next to get IPFS Cluster up and running on my Kubernetes cluster.
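One detail visible in the logs above: the REST API is listening on /ip4/127.0.0.1/tcp/9094 only, while the probe dials the pod IP (10.1.70.53:9094), so the connection refusal is consistent with the API being bound to loopback. A sketch of the relevant service.json fragment with the listen address widened to all interfaces (key names per the ipfs-cluster configuration format; verify against the version you deploy):

```json
{
  "api": {
    "restapi": {
      "http_listen_multiaddress": "/ip4/0.0.0.0/tcp/9094"
    }
  }
}
```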

@ChrisLahaye

@SelfHostedJawn we're having the same issue; did you find a solution?

@SelfHostedJawn (Author)

@ChrisLahaye I did not. We decided not to run our own IPFS clusters.
