
Add option for passing extended resources in node labels in GKE #7604

Open · wants to merge 2 commits into base: master
Conversation

@mu-soliman commented Dec 13, 2024

/kind feature

What this PR does / why we need it:

On GKE, the cluster autoscaler reads extended resource information from kube-env->AUTOSCALER_ENV_VARS->extended_resources in the managed instance group template definition.

However, users have no way to add a variable to extended_resources; it is controlled from the GKE side. As a result, the cluster autoscaler does not know about extended resources and therefore does not support scaling up from zero for node pools that have extended resources (such as GPUs) on GKE.

On the other hand, node labels are passed from the node pool to the managed instance group template through kube-env->AUTOSCALER_ENV_VARS->node_labels.

This commit introduces the ability to pass extended resources to the cluster autoscaler as node labels with a defined prefix on GKE, similar to how the cluster autoscaler expects extended resources on AWS. This allows scaling from zero for node pools with extended resources.

On GKE, adding node labels that start with "clusterautoscaler-nodetemplate-resources-", with the value equal to the amount of the resource, allows the cluster autoscaler to detect extended resources and scale node pools up from zero.

On GCE, the cluster autoscaler reads extended resource information from kube-env->AUTOSCALER_ENV_VARS->extended_resources in the managed instance group template definition.

However, users have no way to add a variable to extended_resources; it is controlled from the GKE side. As a result, the cluster autoscaler does not support scaling up from zero for node pools that have extended resources (such as GPUs) on GCE.

Node labels, however, are passed from the node pool to the managed instance group template through kube-env->AUTOSCALER_ENV_VARS->node_labels.

This commit introduces the ability to pass extended resources as node labels with a defined prefix on GCE, similar to how the cluster autoscaler expects extended resources on AWS. This allows scaling from zero for node pools with extended resources.
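
Illustrative only: the label prefix below is taken from this PR's description, but the function, its name, and the parsing details are a hypothetical sketch of the idea rather than the actual code in this change. Roughly, a node label such as `clusterautoscaler-nodetemplate-resources-example.com-foo: "4"` would be read back as 4 units of an extended resource:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Prefix described in this PR; everything after it names the extended resource.
const extendedResourceLabelPrefix = "clusterautoscaler-nodetemplate-resources-"

// extractExtendedResources is a hypothetical helper: it scans node labels for
// keys carrying the prefix and returns the implied extended resources as
// name -> quantity. Label values that are not plain integers are skipped.
func extractExtendedResources(nodeLabels map[string]string) map[string]int64 {
	resources := make(map[string]int64)
	for key, value := range nodeLabels {
		if !strings.HasPrefix(key, extendedResourceLabelPrefix) {
			continue
		}
		name := strings.TrimPrefix(key, extendedResourceLabelPrefix)
		quantity, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			continue
		}
		resources[name] = quantity
	}
	return resources
}

func main() {
	labels := map[string]string{
		"clusterautoscaler-nodetemplate-resources-example.com-foo": "4",
		"cloud.google.com/gke-nodepool":                            "gpu-pool",
	}
	fmt.Println(extractExtendedResources(labels)) // map[example.com-foo:4]
}
```

With labels of this shape on a node pool, the node template the autoscaler builds for that pool can include the extended resource even when the pool currently has zero nodes, which is what enables scale-up from zero.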

linux-foundation-easycla bot commented Dec 13, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mu-soliman
Once this PR has been reviewed and has the lgtm label, please assign maciekpytel for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added area/provider/gce cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Dec 13, 2024
@k8s-ci-robot
Contributor

Hi @mu-soliman. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Dec 13, 2024
@k8s-ci-robot
Contributor

Welcome @mu-soliman!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Dec 13, 2024
@whisperity

However, users have no way to add a variable to extended_resources, they are controlled from GKE side. This results in cluster autoscaler not knowing about extended resources and in return not supporting scale up from zero for all node pools that have extended resources (like GPU) on GCE.

So is this for GKE (cloud-managed Kubernetes) or GCE (virtual machines with self-managed Kubernetes)? I am asking because I am having trouble with scaling up from 0 and am looking into ways to do it, but for me the kubelets are all self-installed, and I have no idea where the KUBE_ENV could be set.

@mu-soliman mu-soliman changed the title Add option for passing extended resources in node labels in GCE Add option for passing extended resources in node labels in GKE Dec 16, 2024
@mu-soliman
Author

However, users have no way to add a variable to extended_resources, they are controlled from GKE side. This results in cluster autoscaler not knowing about extended resources and in return not supporting scale up from zero for all node pools that have extended resources (like GPU) on GCE.

So is this for GKE (cloud-managed Kubernetes) or GCE (virtual machines with self-managed Kubernetes)? I am asking because I am having troubles with scaling-up from 0 and looking into ways, but for me the kubelets are all self-installed, and I have no idea where the KUBE_ENV could be set.

On GKE, the cluster autoscaler is configured with the cloudProvider parameter set to the value gce. I don't know why, probably for historical reasons, but the code change was made under the GCE subdirectory. The same cluster autoscaler code runs for both GKE and GCE.

The change I submitted was tested on GKE, so I updated the description and title to mention GKE, but I expect that it will run on GCE.

@whisperity

The change I submitted was tested on GKE, so I updated the description and title to mention GKE, but I expect that it will run on GCE.

@mu-soliman For reference, I found out that if someone like me uses "pure" GCE, it is possible to simulate the GKE behaviour (such as providing these extended resource flags) by setting kube-env under the VM's Metadata. That was nothing short of a godsend that spared me days of developing a patch for it.
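
For anyone following this kube-env route on plain GCE: the description above says the autoscaler reads kube-env->AUTOSCALER_ENV_VARS->extended_resources. The sample value and the parsing helper below are assumptions for illustration, not the autoscaler's actual parser or a guaranteed format; check the GCE cloud provider documentation for the exact syntax before relying on it.

```go
package main

import (
	"fmt"
	"strings"
)

// sampleAutoscalerEnvVars is an assumed example of the AUTOSCALER_ENV_VARS
// entry inside kube-env VM metadata; the exact keys and separators must be
// verified against the GCE cloud provider docs.
const sampleAutoscalerEnvVars = "node_labels=env=prod,team=ml;extended_resources=example.com/foo=4"

// parseAutoscalerEnvVars splits a semicolon-separated list of key=value
// entries into a map, e.g. "extended_resources" -> "example.com/foo=4".
func parseAutoscalerEnvVars(raw string) map[string]string {
	out := make(map[string]string)
	for _, entry := range strings.Split(raw, ";") {
		parts := strings.SplitN(entry, "=", 2)
		if len(parts) != 2 {
			continue
		}
		out[parts[0]] = parts[1]
	}
	return out
}

func main() {
	vars := parseAutoscalerEnvVars(sampleAutoscalerEnvVars)
	fmt.Println(vars["extended_resources"]) // example.com/foo=4
}
```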

Labels
area/cluster-autoscaler area/provider/gce cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.