
feat: proxy http requests to downstream pods #11

Merged: 48 commits into main, Jun 20, 2023

Conversation

@MicaiahReid (Contributor) commented Jun 7, 2023

This PR:

  • switches to a hyper server rather than reqwest
  • updates the stacks-network image to use a hosted version (quay.io/hirosystems/stacks-network-orchestrator)
  • dockerizes this API
  • adds general routing for the server and proxies requests to downstream pods
  • removes the NodePort type from services (so they are no longer exposed to localhost)
    • instead, since this API is now a Docker image that can be deployed, the API is the only image with a port exposed to localhost
  • adds a .template.yaml file to deploy this API to k8s with some elevated permissions
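A rough sketch of what such a template might contain follows; the resource names, port number, and RBAC rules here are assumptions for illustration, not the contents of the PR's actual .template.yaml:

```yaml
# Hypothetical sketch: a service account with elevated permissions plus the API pod.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stacks-devnet-api
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stacks-devnet-api
rules:
  # elevated permissions so the API can manage downstream devnet resources
  - apiGroups: [""]
    resources: ["pods", "services", "namespaces"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: v1
kind: Pod
metadata:
  name: stacks-devnet-api
spec:
  serviceAccountName: stacks-devnet-api
  containers:
    - name: stacks-devnet-api-container
      image: quay.io/hirosystems/stacks-devnet-api:latest
      command: ["./stacks-devnet-api"]
      ports:
        - containerPort: 8477  # port number assumed; the only port exposed to localhost
```

A ClusterRoleBinding tying the role to the service account is omitted for brevity.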

@MicaiahReid MicaiahReid requested review from lgalabru and CharlieC3 June 7, 2023 19:00
@MicaiahReid MicaiahReid mentioned this pull request Jun 7, 2023
```yaml
- command:
    - ./stacks-devnet-api
  name: stacks-devnet-api-container
  image: quay.io/hirosystems/stacks-devnet-api:latest
```
Member
Why Quay and not Docker Hub? Is this temporary?

Contributor Author

Yes, we'll move to Docker Hub eventually. From what Ludo says, there are some extra hurdles to deploying to our Docker Hub, so this is temporary while we're testing.

Member

@lgalabru Any context on this? We now have unlimited private repos on Docker Hub, so I'm not sure what the hurdle is here. I'd like to halt our usage of Quay where possible to standardize our pipeline and remove redundancies. So if there's a hurdle, maybe we can help remove it?

@lgalabru (Contributor) commented Jun 8, 2023
Sure, the use case here is that we want to manually push some images to a Docker repo. IIRC that was not an option with our Hiro Docker account in the past because we were restricted on the number of collaborators; that's also why I've been pushing the chainhook images to my personal Docker account.
If you can add us to the hirosystems Docker org, then yeah, we can ditch these workarounds.

Member

You're right, I recall this being a pain point. I believe it should be resolved now that we have a Docker Hub service account for all of engineering to use! You should be able to use those credentials to push to our Docker Hub. I created the two repos on Docker Hub and gave that account read/write permissions, so it should be good to go.

.dockerignore (resolved)
Dockerfile (resolved)
```diff
- image: stacks-network
- imagePullPolicy: Never
+ image: quay.io/hirosystems/stacks-network-orchestrator:latest
+ imagePullPolicy: IfNotPresent
```
Member

FYI, this could cause some unexpected issues, especially in a local development environment where you may be re-testing the same Docker tag repeatedly.
Unless you change the Docker tag to something more specific that won't get overwritten, I'd suggest changing this to "Always" to ensure the kubelet at least checks the registry. If the local image is confirmed to have the same hash as the remote image, it won't be re-downloaded.
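Concretely, the suggestion amounts to a one-line policy swap on the container spec (the container name below is assumed for illustration):

```yaml
containers:
  - name: stacks-network-container  # name assumed
    image: quay.io/hirosystems/stacks-network-orchestrator:latest
    # "Always" makes the kubelet re-check the registry on pod start; if the
    # cached image's digest matches the remote, nothing is re-downloaded.
    imagePullPolicy: Always
```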

Contributor Author

I like it, I'll make that change.

When we actually deploy this to production, do we set this to Never? I figure if we load all images into the cluster beforehand we:

  1. never have to download images again so that's a performance boon, and
  2. don't have to worry about accidental breakages from a "latest" push before we're ready

Member

It depends on how we decide to configure the Docker tag. If we set it to a tag that gets overwritten, like latest or main (named after a branch), then we should set it to "Always" for the same reasons as here. If we could instead pin it to a specific tag that does not get overwritten, like v1.2.3, then we could set the image pull policy to "IfNotPresent" for the best performance, and it would be easier to tell exactly what version each customer is using. However, that may make it more difficult to update everyone's chain orchestrator image. So it's a bit of a tradeoff either way.

don't have to worry about accidental breakages from a "latest" push before we're ready

This is another great reason why it's a best practice to not use latest or branch-named tags in production :) If everything is locked to a specific tag, then upgrades are always deliberate.

I figure if we load all images into the cluster beforehand we

With cloud provider k8s clusters, pre-loading images is typically reserved for the rarest of use cases and it comes with a myriad of downsides like extra overhead and increased costs. We would be better off having the images pulled on-demand like all our other workloads. VMs cache images anyways; after it's pulled the first time, subsequent pulls on the same VM are super fast.
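The tag-pinning tradeoff described above can be sketched as two container-spec variants (the version number is illustrative, not a real published tag):

```yaml
# Option A: mutable tag; the kubelet checks the registry on every pod start
image: quay.io/hirosystems/stacks-network-orchestrator:latest
imagePullPolicy: Always
---
# Option B: immutable, versioned tag (version assumed); the local cache is trusted
image: quay.io/hirosystems/stacks-network-orchestrator:v1.2.3
imagePullPolicy: IfNotPresent
```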

templates/stacks-devnet-api.template.yaml (outdated, resolved)
@MicaiahReid MicaiahReid requested a review from CharlieC3 June 7, 2023 19:21
Cargo.toml (resolved)
Dockerfile (outdated, resolved)
src/lib.rs (resolved)
src/lib.rs (resolved)
@lgalabru (Contributor) left a comment

It's looking great, thanks @MicaiahReid!
I have some reservations about the HTTP server in play. Not a blocker, but the current approach seems a bit low-level.
Could we also set up a logging infrastructure? Happy to help on that end.
This is great work, big shout-out on your Rust learning curve :)

@MicaiahReid (Contributor Author)

@lgalabru Thanks Ludo!

I'm definitely interested in hearing your reservations and recommendations. I agree that it seems a bit low-level. I think I find it easier to write Rust code than to read Rust docs 😅

I've added #15 to set up logging infra.

I'm definitely starting to enjoy Rust a lot!

@MicaiahReid (Contributor Author)

@lgalabru I took some time to look into using Rocket. It's so close to working for us, but there are a few key features it's missing.

I could make it work, but it would require some of the same path-parsing logic that I have here, so for now I'll keep it as-is. I've saved my current changes for implementing Rocket, and I might try again later.

All of your other comments are documented in issues that I'll get started on once this is merged! Let me know if you have any other thoughts!
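As a rough illustration of the kind of manual path parsing mentioned above (hypothetical code, not the PR's actual implementation; the `/api/v1/network/{network}` route shape is taken from the changelog further down):

```rust
// Hypothetical sketch of manual route matching with only the standard library.
fn parse_network_path(path: &str) -> Option<&str> {
    let mut parts = path.trim_matches('/').split('/');
    // Expect exactly four segments: api / v1 / network / {network}
    match (parts.next(), parts.next(), parts.next(), parts.next(), parts.next()) {
        (Some("api"), Some("v1"), Some("network"), Some(network), None)
            if !network.is_empty() =>
        {
            Some(network)
        }
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_network_path("/api/v1/network/devnet-123"), Some("devnet-123"));
    assert_eq!(parse_network_path("/api/v1/other/devnet-123"), None);
    println!("ok");
}
```

A framework like Rocket would express this as a route attribute with a path parameter; the manual version is more verbose but has no dependency beyond `std`.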

@MicaiahReid MicaiahReid requested a review from lgalabru June 12, 2023 18:37
@MicaiahReid MicaiahReid merged commit 6ecdf0f into main Jun 20, 2023
github-actions bot pushed a commit that referenced this pull request Nov 16, 2023
## 1.0.0 (2023-11-16)

### Features

* add `HEAD /api/v1/network/{network}` route ([#41](#41)) ([1bf329f](1bf329f))
* add logging and network info route ([#20](#20)) ([2af0bab](2af0bab)), closes [#21](#21)
* proxy http requests to downstream pods ([#11](#11)) ([6ecdf0f](6ecdf0f))
* release develop ([#84](#84)) ([89a1a1b](89a1a1b))

### Bug Fixes

* add access_control_allow_credentials header ([a482a93](a482a93))
* add cors settings; refactor http responses ([#42](#42)) ([c46db4c](c46db4c)), closes [#21](#21)
* assert more general error msg ([#48](#48)) ([926e3a0](926e3a0))
* create namespace in deploy api script ([f5ff5e0](f5ff5e0))

🎉 This PR is included in version 1.0.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

3 participants