feat: proxy http requests to downstream pods #11
Conversation
Co-authored-by: Charlie <[email protected]>
- command:
    - ./stacks-devnet-api
  name: stacks-devnet-api-container
  image: quay.io/hirosystems/stacks-devnet-api:latest
Why Quay and not Docker Hub? Is this temporary?
Yes, we will eventually move it over. From what Ludo says, there are some extra hurdles to deploying to our Docker Hub, so this is temporary while we're testing.
@lgalabru Any context on this? We now have unlimited private repos on Docker Hub, so I'm not sure what the hurdle is here. I'd like to halt our usage of Quay where possible to standardize our pipeline and remove redundancies. So if there's a hurdle, maybe we can help remove it?
Sure, the use case here is that we want to manually push some images to a Docker repo. IIRC that was not an option with our Hiro Docker account in the past because we were restricted on the number of collaborators; that's also why I've been pushing the chainhook images to my personal Docker account.
If you can add us to the hirosystems Docker org, then yeah, we can ditch these workarounds.
You're right, I recall this being a pain point. I believe it should be resolved now that we have a Docker Hub service account for all of engineering to use! You should be able to use those credentials to push to our Docker Hub. I created the two repos in Docker Hub and gave that account read/write permissions, so it should be good to go.
-        image: stacks-network
-        imagePullPolicy: Never
+        image: quay.io/hirosystems/stacks-network-orchestrator:latest
+        imagePullPolicy: IfNotPresent
FYI, this could cause some unexpected issues, especially in a local development environment where you could be re-testing the same Docker tag repeatedly.
Unless you change the Docker tag to something more specific that won't get overwritten, I would suggest changing this to "Always" to ensure the kubelet at least checks the registry. If the local image is confirmed to have the same hash as the remote one, it shouldn't re-download it.
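A minimal sketch of that suggestion, assuming the orchestrator container spec from the diff above (the container name is an illustrative placeholder, not from the PR):

```yaml
containers:
  - name: stacks-network-orchestrator   # illustrative container name
    image: quay.io/hirosystems/stacks-network-orchestrator:latest
    # "Always" makes the kubelet contact the registry on every pod start; if the
    # cached image already matches the remote digest, the layers are not re-downloaded.
    imagePullPolicy: Always
```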
I like it, I'll make that change.
When we actually deploy this to production, do we set this to `Never`? I figure if we load all images into the cluster beforehand we:
- never have to download images again, so that's a performance boon, and
- don't have to worry about accidental breakages from a "latest" push before we're ready
It depends on how we decide to configure the Docker tag. If we set it to a Docker tag that gets overwritten, like `latest` or `main` (named after a branch), then we should set it to "Always" for the same reasons as here. If instead we set it to a specific Docker tag that does not get overwritten, like `v1.2.3`, then we could set the image pull policy to "IfNotPresent" for the best performance, and it would be easier to tell exactly what version each customer is using. However, that may make it more difficult to update everyone's chain orchestrator image, so it's a bit of a tradeoff either way.
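For comparison, a minimal sketch of the pinned-tag option (the `v1.2.3` tag is a placeholder rather than a real release, and the container name is illustrative):

```yaml
containers:
  - name: stacks-network-orchestrator   # illustrative container name
    image: quay.io/hirosystems/stacks-network-orchestrator:v1.2.3   # placeholder immutable tag
    # Safe with an immutable tag: the cached image can never silently diverge from
    # what the tag points to, so skipping the registry check costs nothing.
    imagePullPolicy: IfNotPresent
```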
don't have to worry about accidental breakages from a "latest" push before we're ready
This is another great reason why it's a best practice to not use `latest` or branch-named tags in production :) If everything is locked to a specific tag, then upgrades are always deliberate.
I figure if we load all images into the cluster beforehand we
With cloud-provider k8s clusters, pre-loading images is typically reserved for the rarest of use cases, and it comes with a myriad of downsides like extra overhead and increased costs. We would be better off having the images pulled on-demand like all our other workloads. VMs cache images anyway; after an image is pulled the first time, subsequent pulls on the same VM are super fast.
It's looking great, thanks @MicaiahReid!
I think I have some reservations about the HTTP server at play. Not a blocker, but the current approach seems a bit low-level.
Could we also set up a logging infrastructure? Happy to help on that end.
This is great work, big shout-out on your Rust learning curve :)
@lgalabru I took some time to look into using rocket. It's so close to working for us, but there are a few key features it's missing. I could make it work, but it would require some of the same path parsing logic that I have here, so I think for now I'll keep it as-is. I've saved my current changes for implementing rocket, and I might try again later. All of your other comments are documented in issues that I'll get started on once this is merged! Let me know if you have any other thoughts!
## 1.0.0 (2023-11-16)

### Features

* add `HEAD /api/v1/network/{network}` route (#41) (1bf329f)
* add logging and network info route (#20) (2af0bab), closes #21
* proxy http requests to downstream pods (#11) (6ecdf0f)
* release develop (#84) (89a1a1b)

### Bug Fixes

* add access_control_allow_credentials header (a482a93)
* add cors settings; refactor http responses (#42) (c46db4c), closes #21
* assert more general error msg (#48) (926e3a0)
* create namespace in deploy api script (f5ff5e0)
🎉 This PR is included in version 1.0.0 🎉
The release is available on GitHub release.
Your semantic-release bot 📦🚀
This PR:
- updates the `stacks-network` image to use a hosted version (quay.io/hirosystems/stacks-network-orchestrator)
- removes the `NodePort` type from services, so they are no longer exposed to localhost (a sketch of the resulting service shape follows)
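A minimal sketch of what removing `NodePort` looks like for one of these services; the service name, selector, and ports below are illustrative assumptions rather than values from the PR:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: devnet-pod-service        # illustrative name
spec:
  # type: NodePort                # removed: no longer exposed on node / localhost ports
  type: ClusterIP                 # the default; reachable only from inside the cluster
  selector:
    app: devnet-pod               # illustrative selector
  ports:
    - port: 3999                  # illustrative port
      targetPort: 3999
```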