diff --git a/charts/vngcloud-cluster-autoscaler/Chart.yaml b/charts/vngcloud-cluster-autoscaler/Chart.yaml
index 80b88c6..f089455 100644
--- a/charts/vngcloud-cluster-autoscaler/Chart.yaml
+++ b/charts/vngcloud-cluster-autoscaler/Chart.yaml
@@ -2,13 +2,13 @@ apiVersion: v2
 appVersion: 1.29.0
 description: Scales Kubernetes worker nodes within autoscaling groups.
 engine: gotpl
-home: https://github.com/kubernetes/autoscaler
+home: https://github.com/vngcloud/autoscaler
 icon: https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png
 maintainers:
-  - email: guyjtempleton@googlemail.com
-    name: gjtempleton
-name: cluster-autoscaler
+  - name: VNG Cloud
+    email: support@vngcloud.vn
+name: vngcloud-cluster-autoscaler
 sources:
-  - https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
+  - https://github.com/vngcloud/autoscaler
 type: application
-version: 9.35.0
+version: 0.0.0
diff --git a/charts/vngcloud-cluster-autoscaler/README.md b/charts/vngcloud-cluster-autoscaler/README.md
index f7aba27..1978ca5 100644
--- a/charts/vngcloud-cluster-autoscaler/README.md
+++ b/charts/vngcloud-cluster-autoscaler/README.md
@@ -1,18 +1,18 @@
-# cluster-autoscaler
+# vngcloud-cluster-autoscaler
 
 Scales Kubernetes worker nodes within autoscaling groups.
 
 ## TL;DR
 
 ```console
-$ helm repo add autoscaler https://kubernetes.github.io/autoscaler
+$ helm repo add vks-helm-charts https://vngcloud.github.io/vks-helm-charts
 
 # Method 1 - Using Autodiscovery
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
     --set 'autoDiscovery.clusterName'=<CLUSTER NAME>
 
 # Method 2 - Specifying groups manually
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
     --set "autoscalingGroups[0].name=your-asg-name" \
     --set "autoscalingGroups[0].maxSize=10" \
     --set "autoscalingGroups[0].minSize=1"
@@ -30,31 +30,6 @@ This chart bootstraps a cluster-autoscaler deployment on a [Kubernetes](http://k
 - Azure AKS specific Prerequisites:
   - Kubernetes 1.10+ with RBAC enabled.
 
-## Previous Helm Chart
-
-The previous `cluster-autoscaler` Helm chart hosted at [helm/charts](https://github.com/helm/charts) has been moved to this repository in accordance with the [Deprecation timeline](https://github.com/helm/charts#deprecation-timeline). Note that a few things have changed between this version and the old version:
-
-- This repository **only** supports Helm chart installations using Helm 3+ since the `apiVersion` on the charts has been marked as `v2`.
-- Previous versions of the Helm chart have not been migrated
-
-## Migration from 1.X to 9.X+ versions of this Chart
-
-**TL;DR:**
-You should choose to use versions >=9.0.0 of the `cluster-autoscaler` chart published from this repository; previous versions, and the `cluster-autoscaler-chart` with versioning 1.X.X published from this repository are deprecated.
-
-<details>
-  <summary>Previous versions of this chart - further details</summary>
-On initial migration of this chart from the `helm/charts` repository this chart was renamed from `cluster-autoscaler` to `cluster-autoscaler-chart` due to technical limitations. This affected all `1.X` releases of the chart, version 2.0.0 of this chart exists only to mark the [`cluster-autoscaler-chart` chart](https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler-chart) as deprecated.
-
-Releases of the chart from `9.0.0` onwards return the naming of the chart to `cluster-autoscaler` and return to following the versioning established by the chart's previous location at <https://github.com/helm/charts/tree/master/stable/cluster-autoscaler>.
-
-To migrate from a 1.X release of the chart to a `9.0.0` or later release, you should first uninstall your `1.X` install of the `cluster-autoscaler-chart` chart, before performing the installation of the new `cluster-autoscaler` chart.
-</details>
- -## Migration from 9.0 to 9.1 - -Starting from `9.1.0` the `envFromConfigMap` value is expected to contain the name of a ConfigMap that is used as ref for `envFrom`, similar to `envFromSecret`. If you want to keep the previous behaviour of `envFromConfigMap` you must rename it to `extraEnvConfigMaps`. - ## Installing the Chart **By default, no deployment is created and nothing will autoscale**. @@ -68,155 +43,12 @@ Either: To create a valid configuration, follow instructions for your cloud provider: -- [AWS](#aws---using-auto-discovery-of-tagged-instance-groups) -- [GCE](#gce) -- [Azure](#azure) -- [OpenStack Magnum](#openstack-magnum) - [Cluster API](#cluster-api) ### Templating the autoDiscovery.clusterName The cluster name can be templated in the `autoDiscovery.clusterName` variable. This is useful when the cluster name is dynamically generated based on other values coming from external systems like Argo CD or Flux. This also allows you to use global Helm values to set the cluster name, e.g., `autoDiscovery.clusterName=\{\{ .Values.global.clusterName }}`, so that you don't need to set it in more than 1 location in the values file. -### AWS - Using auto-discovery of tagged instance groups - -Auto-discovery finds ASGs tags as below and automatically manages them based on the min and max size specified in the ASG. `cloudProvider=aws` only. - -- Tag the ASGs with keys to match `.Values.autoDiscovery.tags`, by default: `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/` -- Verify the [IAM Permissions](#aws---iam) -- Set `autoDiscovery.clusterName=` -- Set `awsRegion=` -- Set (option) `awsAccessKeyID=` and `awsSecretAccessKey=` if you want to [use AWS credentials directly instead of an instance role](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials) - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set autoDiscovery.clusterName= \ - --set awsRegion= -``` - -Alternatively with your own AWS credentials - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set autoDiscovery.clusterName= \ - --set awsRegion= \ - --set awsAccessKeyID= \ - --set awsSecretAccessKey= -``` - -#### Specifying groups manually - -Without autodiscovery, specify an array of elements each containing ASG name, min size, max size. The sizes specified here will be applied to the ASG, assuming IAM permissions are correctly configured. - -- Verify the [IAM Permissions](#aws---iam) -- Either provide a yaml file setting `autoscalingGroups` (see values.yaml) or use `--set` e.g.: - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set "autoscalingGroups[0].name=your-asg-name" \ - --set "autoscalingGroups[0].maxSize=10" \ - --set "autoscalingGroups[0].minSize=1" -``` - -#### Auto-discovery - -For auto-discovery of instances to work, they must be tagged with the keys in `.Values.autoDiscovery.tags`, which by default are `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/`. - -The value of the tag does not matter, only the key. 
- -An example kops spec excerpt: - -```yaml -apiVersion: kops/v1alpha2 -kind: Cluster -metadata: - name: my.cluster.internal -spec: - additionalPolicies: - node: | - [ - {"Effect":"Allow","Action":["autoscaling:DescribeAutoScalingGroups","autoscaling:DescribeAutoScalingInstances","autoscaling:DescribeLaunchConfigurations","autoscaling:DescribeTags","autoscaling:SetDesiredCapacity","autoscaling:TerminateInstanceInAutoScalingGroup"],"Resource":"*"} - ] - ... ---- -apiVersion: kops/v1alpha2 -kind: InstanceGroup -metadata: - labels: - kops.k8s.io/cluster: my.cluster.internal - name: my-instances -spec: - cloudLabels: - k8s.io/cluster-autoscaler/enabled: "" - k8s.io/cluster-autoscaler/my.cluster.internal: "" - image: kops.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14 - machineType: r4.large - maxSize: 4 - minSize: 0 -``` - -In this example you would need to `--set autoDiscovery.clusterName=my.cluster.internal` when installing. - -It is not recommended to try to mix this with setting `autoscalingGroups`. - -See [autoscaler AWS documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup) for a more discussion of the setup. - -### GCE - -The following parameters are required: - -- `autoDiscovery.clusterName=any-name` -- `cloud-provider=gce` -- `autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefix[0].minSize=1` - -To use Managed Instance Group (MIG) auto-discovery, provide a YAML file setting `autoscalingGroupsnamePrefix` (see values.yaml) or use `--set` when installing the Chart - e.g. - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set "autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefi[0].minSize=1" \ - --set autoDiscovery.clusterName= \ - --set cloudProvider=gce -``` - -Note that `your-ig-prefix` should be a _prefix_ matching one or more MIGs, and _not_ the full name of the MIG. For example, to match multiple instance groups - `k8s-node-group-a-standard`, `k8s-node-group-b-gpu`, you would use a prefix of `k8s-node-group-`. - -In the event you want to explicitly specify MIGs instead of using auto-discovery, set members of the `autoscalingGroups` array directly - e.g. - -``` -# where 'n' is the index, starting at 0 ---set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE -``` - -### Azure - -The following parameters are required: - -- `cloudProvider=azure` -- `autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1` -- `azureClientID: "your-service-principal-app-id"` -- `azureClientSecret: "your-service-principal-client-secret"` -- `azureSubscriptionID: "your-azure-subscription-id"` -- `azureTenantID: "your-azure-tenant-id"` -- `azureResourceGroup: "your-aks-cluster-resource-group-name"` -- `azureVMType: "vmss"` - -### OpenStack Magnum - -`cloudProvider: magnum` must be set, and then one of - -- `magnumClusterName=` and `autoscalingGroups` with the names of node groups and min/max node counts -- or `autoDiscovery.clusterName=` with one or more `autoDiscovery.roles`. - -Additionally, `cloudConfigPath: "/etc/kubernetes/cloud-config"` must be set as this should be the location of the cloud-config file on the host. 
- -Example values files can be found [here](../../cluster-autoscaler/cloudprovider/magnum/examples). - -Install the chart with - -```console -$ helm install my-release autoscaler/cluster-autoscaler -f myvalues.yaml -``` - ### Cluster-API `cloudProvider: clusterapi` must be set, and then one or more of @@ -233,32 +65,6 @@ Additional config parameters available, see the `values.yaml` for more details - `clusterAPIWorkloadKubeconfigPath` - `clusterAPICloudConfigPath` -### Exoscale - -The following parameters are required: - -- `cloudProvider=exoscale` -- `autoDiscovery.clusterName=` - -Create an Exoscale API key with appropriate permissions as described in [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md). -A secret of name `-exoscale-cluster-autoscaler` needs to be created, containing the api key and secret, as well as the zone. - -```console -$ kubectl create secret generic my-release-exoscale-cluster-autoscaler \ - --from-literal=api-key="EXOxxxxxxxxxxxxxxxxxxxxxxxx" \ - --from-literal=api-secret="xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --from-literal=api-zone="ch-gva-2" -``` - -After creating the secret, the chart may be installed: - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set cloudProvider=exoscale \ - --set autoDiscovery.clusterName= -``` - -Read [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md) for further information on the setup without helm. - ## Uninstalling the Chart To uninstall `my-release`: @@ -271,52 +77,6 @@ The command removes all the Kubernetes components associated with the chart and > **Tip**: List all releases using `helm list` or start clean with `helm uninstall my-release` -## Additional Configuration - -### AWS - IAM - -The worker running the cluster autoscaler will need access to certain resources and actions depending on the version you run and your configuration of it. - -For the up-to-date IAM permissions required, please see the [cluster autoscaler's AWS Cloudprovider Readme](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#iam-policy) and switch to the tag of the cluster autoscaler image you are using. - -### AWS - IAM Roles for Service Accounts (IRSA) - -For Kubernetes clusters that use Amazon EKS, the service account can be configured with an IAM role using [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) to avoid needing to grant access to the worker nodes for AWS resources. - -In order to accomplish this, you will first need to create a new IAM role with the above mentions policies. Take care in [configuring the trust relationship](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html#iam-role-configuration) to restrict access just to the service account used by cluster autoscaler. - -Once you have the IAM role configured, you would then need to `--set rbac.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName` when installing. - -### Azure - Using azure workload identity - -You can use the project [Azure workload identity](https://github.com/Azure/azure-workload-identity), to automatically configure the correct setup for your pods to used federated identity with Azure. 
-
-You can also set the correct settings yourself instead of relying on this project.
-
-For example the following configuration will configure the Autoscaler to use your federated identity:
-
-```yaml
-azureUseWorkloadIdentityExtension: true
-extraEnv:
-  AZURE_CLIENT_ID: USER ASSIGNED IDENTITY CLIENT ID
-  AZURE_TENANT_ID: USER ASSIGNED IDENTITY TENANT ID
-  AZURE_FEDERATED_TOKEN_FILE: /var/run/secrets/tokens/azure-identity-token
-  AZURE_AUTHORITY_HOST: https://login.microsoftonline.com/
-extraVolumes:
-- name: azure-identity-token
-  projected:
-    defaultMode: 420
-    sources:
-    - serviceAccountToken:
-        audience: api://AzureADTokenExchange
-        expirationSeconds: 3600
-        path: azure-identity-token
-extraVolumeMounts:
-- mountPath: /var/run/secrets/tokens
-  name: azure-identity-token
-  readOnly: true
-```
-
 ## Troubleshooting
 
 The chart will succeed even if the container arguments are incorrect. A few minutes after starting `kubectl logs -l "app=aws-cluster-autoscaler" --tail=50` should loop through something like
diff --git a/charts/vngcloud-cluster-autoscaler/README.md.gotmpl b/charts/vngcloud-cluster-autoscaler/README.md.gotmpl
index c91d0d5..49e12ea 100644
--- a/charts/vngcloud-cluster-autoscaler/README.md.gotmpl
+++ b/charts/vngcloud-cluster-autoscaler/README.md.gotmpl
@@ -5,14 +5,14 @@
 ## TL;DR
 
 ```console
-$ helm repo add autoscaler https://kubernetes.github.io/autoscaler
+$ helm repo add vks-helm-charts https://vngcloud.github.io/vks-helm-charts
 
 # Method 1 - Using Autodiscovery
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
     --set 'autoDiscovery.clusterName'=<CLUSTER NAME>
 
 # Method 2 - Specifying groups manually
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
     --set "autoscalingGroups[0].name=your-asg-name" \
     --set "autoscalingGroups[0].maxSize=10" \
     --set "autoscalingGroups[0].minSize=1"
@@ -30,31 +30,6 @@ This chart bootstraps a cluster-autoscaler deployment on a [Kubernetes](http://k
 - Azure AKS specific Prerequisites:
   - Kubernetes 1.10+ with RBAC enabled.
 
-## Previous Helm Chart
-
-The previous `cluster-autoscaler` Helm chart hosted at [helm/charts](https://github.com/helm/charts) has been moved to this repository in accordance with the [Deprecation timeline](https://github.com/helm/charts#deprecation-timeline). Note that a few things have changed between this version and the old version:
-
-- This repository **only** supports Helm chart installations using Helm 3+ since the `apiVersion` on the charts has been marked as `v2`.
-- Previous versions of the Helm chart have not been migrated
-
-## Migration from 1.X to 9.X+ versions of this Chart
-
-**TL;DR:**
-You should choose to use versions >=9.0.0 of the `cluster-autoscaler` chart published from this repository; previous versions, and the `cluster-autoscaler-chart` with versioning 1.X.X published from this repository are deprecated.
-
-<details>
-  <summary>Previous versions of this chart - further details</summary>
-On initial migration of this chart from the `helm/charts` repository this chart was renamed from `cluster-autoscaler` to `cluster-autoscaler-chart` due to technical limitations. This affected all `1.X` releases of the chart, version 2.0.0 of this chart exists only to mark the [`cluster-autoscaler-chart` chart](https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler-chart) as deprecated.
-
-Releases of the chart from `9.0.0` onwards return the naming of the chart to `cluster-autoscaler` and return to following the versioning established by the chart's previous location at <https://github.com/helm/charts/tree/master/stable/cluster-autoscaler>.
-
-To migrate from a 1.X release of the chart to a `9.0.0` or later release, you should first uninstall your `1.X` install of the `cluster-autoscaler-chart` chart, before performing the installation of the new `cluster-autoscaler` chart.
-</details>
- -## Migration from 9.0 to 9.1 - -Starting from `9.1.0` the `envFromConfigMap` value is expected to contain the name of a ConfigMap that is used as ref for `envFrom`, similar to `envFromSecret`. If you want to keep the previous behaviour of `envFromConfigMap` you must rename it to `extraEnvConfigMaps`. - ## Installing the Chart **By default, no deployment is created and nothing will autoscale**. @@ -68,155 +43,12 @@ Either: To create a valid configuration, follow instructions for your cloud provider: -- [AWS](#aws---using-auto-discovery-of-tagged-instance-groups) -- [GCE](#gce) -- [Azure](#azure) -- [OpenStack Magnum](#openstack-magnum) - [Cluster API](#cluster-api) ### Templating the autoDiscovery.clusterName The cluster name can be templated in the `autoDiscovery.clusterName` variable. This is useful when the cluster name is dynamically generated based on other values coming from external systems like Argo CD or Flux. This also allows you to use global Helm values to set the cluster name, e.g., `autoDiscovery.clusterName=\{\{ .Values.global.clusterName }}`, so that you don't need to set it in more than 1 location in the values file. -### AWS - Using auto-discovery of tagged instance groups - -Auto-discovery finds ASGs tags as below and automatically manages them based on the min and max size specified in the ASG. `cloudProvider=aws` only. - -- Tag the ASGs with keys to match `.Values.autoDiscovery.tags`, by default: `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/` -- Verify the [IAM Permissions](#aws---iam) -- Set `autoDiscovery.clusterName=` -- Set `awsRegion=` -- Set (option) `awsAccessKeyID=` and `awsSecretAccessKey=` if you want to [use AWS credentials directly instead of an instance role](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials) - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set autoDiscovery.clusterName= \ - --set awsRegion= -``` - -Alternatively with your own AWS credentials - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set autoDiscovery.clusterName= \ - --set awsRegion= \ - --set awsAccessKeyID= \ - --set awsSecretAccessKey= -``` - -#### Specifying groups manually - -Without autodiscovery, specify an array of elements each containing ASG name, min size, max size. The sizes specified here will be applied to the ASG, assuming IAM permissions are correctly configured. - -- Verify the [IAM Permissions](#aws---iam) -- Either provide a yaml file setting `autoscalingGroups` (see values.yaml) or use `--set` e.g.: - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set "autoscalingGroups[0].name=your-asg-name" \ - --set "autoscalingGroups[0].maxSize=10" \ - --set "autoscalingGroups[0].minSize=1" -``` - -#### Auto-discovery - -For auto-discovery of instances to work, they must be tagged with the keys in `.Values.autoDiscovery.tags`, which by default are `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/`. - -The value of the tag does not matter, only the key. 
- -An example kops spec excerpt: - -```yaml -apiVersion: kops/v1alpha2 -kind: Cluster -metadata: - name: my.cluster.internal -spec: - additionalPolicies: - node: | - [ - {"Effect":"Allow","Action":["autoscaling:DescribeAutoScalingGroups","autoscaling:DescribeAutoScalingInstances","autoscaling:DescribeLaunchConfigurations","autoscaling:DescribeTags","autoscaling:SetDesiredCapacity","autoscaling:TerminateInstanceInAutoScalingGroup"],"Resource":"*"} - ] - ... ---- -apiVersion: kops/v1alpha2 -kind: InstanceGroup -metadata: - labels: - kops.k8s.io/cluster: my.cluster.internal - name: my-instances -spec: - cloudLabels: - k8s.io/cluster-autoscaler/enabled: "" - k8s.io/cluster-autoscaler/my.cluster.internal: "" - image: kops.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14 - machineType: r4.large - maxSize: 4 - minSize: 0 -``` - -In this example you would need to `--set autoDiscovery.clusterName=my.cluster.internal` when installing. - -It is not recommended to try to mix this with setting `autoscalingGroups`. - -See [autoscaler AWS documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup) for a more discussion of the setup. - -### GCE - -The following parameters are required: - -- `autoDiscovery.clusterName=any-name` -- `cloud-provider=gce` -- `autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefix[0].minSize=1` - -To use Managed Instance Group (MIG) auto-discovery, provide a YAML file setting `autoscalingGroupsnamePrefix` (see values.yaml) or use `--set` when installing the Chart - e.g. - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set "autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefi[0].minSize=1" \ - --set autoDiscovery.clusterName= \ - --set cloudProvider=gce -``` - -Note that `your-ig-prefix` should be a _prefix_ matching one or more MIGs, and _not_ the full name of the MIG. For example, to match multiple instance groups - `k8s-node-group-a-standard`, `k8s-node-group-b-gpu`, you would use a prefix of `k8s-node-group-`. - -In the event you want to explicitly specify MIGs instead of using auto-discovery, set members of the `autoscalingGroups` array directly - e.g. - -``` -# where 'n' is the index, starting at 0 ---set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE -``` - -### Azure - -The following parameters are required: - -- `cloudProvider=azure` -- `autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1` -- `azureClientID: "your-service-principal-app-id"` -- `azureClientSecret: "your-service-principal-client-secret"` -- `azureSubscriptionID: "your-azure-subscription-id"` -- `azureTenantID: "your-azure-tenant-id"` -- `azureResourceGroup: "your-aks-cluster-resource-group-name"` -- `azureVMType: "vmss"` - -### OpenStack Magnum - -`cloudProvider: magnum` must be set, and then one of - -- `magnumClusterName=` and `autoscalingGroups` with the names of node groups and min/max node counts -- or `autoDiscovery.clusterName=` with one or more `autoDiscovery.roles`. - -Additionally, `cloudConfigPath: "/etc/kubernetes/cloud-config"` must be set as this should be the location of the cloud-config file on the host. 
- -Example values files can be found [here](../../cluster-autoscaler/cloudprovider/magnum/examples). - -Install the chart with - -```console -$ helm install my-release autoscaler/cluster-autoscaler -f myvalues.yaml -``` - ### Cluster-API `cloudProvider: clusterapi` must be set, and then one or more of @@ -233,32 +65,6 @@ Additional config parameters available, see the `values.yaml` for more details - `clusterAPIWorkloadKubeconfigPath` - `clusterAPICloudConfigPath` -### Exoscale - -The following parameters are required: - -- `cloudProvider=exoscale` -- `autoDiscovery.clusterName=` - -Create an Exoscale API key with appropriate permissions as described in [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md). -A secret of name `-exoscale-cluster-autoscaler` needs to be created, containing the api key and secret, as well as the zone. - -```console -$ kubectl create secret generic my-release-exoscale-cluster-autoscaler \ - --from-literal=api-key="EXOxxxxxxxxxxxxxxxxxxxxxxxx" \ - --from-literal=api-secret="xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --from-literal=api-zone="ch-gva-2" -``` - -After creating the secret, the chart may be installed: - -```console -$ helm install my-release autoscaler/cluster-autoscaler \ - --set cloudProvider=exoscale \ - --set autoDiscovery.clusterName= -``` - -Read [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md) for further information on the setup without helm. - ## Uninstalling the Chart To uninstall `my-release`: @@ -271,52 +77,6 @@ The command removes all the Kubernetes components associated with the chart and > **Tip**: List all releases using `helm list` or start clean with `helm uninstall my-release` -## Additional Configuration - -### AWS - IAM - -The worker running the cluster autoscaler will need access to certain resources and actions depending on the version you run and your configuration of it. - -For the up-to-date IAM permissions required, please see the [cluster autoscaler's AWS Cloudprovider Readme](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#iam-policy) and switch to the tag of the cluster autoscaler image you are using. - -### AWS - IAM Roles for Service Accounts (IRSA) - -For Kubernetes clusters that use Amazon EKS, the service account can be configured with an IAM role using [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) to avoid needing to grant access to the worker nodes for AWS resources. - -In order to accomplish this, you will first need to create a new IAM role with the above mentions policies. Take care in [configuring the trust relationship](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html#iam-role-configuration) to restrict access just to the service account used by cluster autoscaler. - -Once you have the IAM role configured, you would then need to `--set rbac.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName` when installing. - -### Azure - Using azure workload identity - -You can use the project [Azure workload identity](https://github.com/Azure/azure-workload-identity), to automatically configure the correct setup for your pods to used federated identity with Azure. 
- -You can also set the correct settings yourself instead of relying on this project. - -For example the following configuration will configure the Autoscaler to use your federated identity: - -```yaml -azureUseWorkloadIdentityExtension: true -extraEnv: - AZURE_CLIENT_ID: USER ASSIGNED IDENTITY CLIENT ID - AZURE_TENANT_ID: USER ASSIGNED IDENTITY TENANT ID - AZURE_FEDERATED_TOKEN_FILE: /var/run/secrets/tokens/azure-identity-token - AZURE_AUTHORITY_HOST: https://login.microsoftonline.com/ -extraVolumes: -- name: azure-identity-token - projected: - defaultMode: 420 - sources: - - serviceAccountToken: - audience: api://AzureADTokenExchange - expirationSeconds: 3600 - path: azure-identity-token -extraVolumeMounts: -- mountPath: /var/run/secrets/tokens - name: azure-identity-token - readOnly: true -``` - ## Troubleshooting The chart will succeed even if the container arguments are incorrect. A few minutes after starting `kubectl logs -l "app=aws-cluster-autoscaler" --tail=50` should loop through something like diff --git a/charts/vngcloud-cluster-autoscaler/templates/clusterrole.yaml b/charts/vngcloud-cluster-autoscaler/templates/clusterrole.yaml index 356b9c0..18d8e7b 100644 --- a/charts/vngcloud-cluster-autoscaler/templates/clusterrole.yaml +++ b/charts/vngcloud-cluster-autoscaler/templates/clusterrole.yaml @@ -152,11 +152,13 @@ rules: {{- if and ( and ( eq .Values.cloudProvider "clusterapi" ) ( .Values.rbac.clusterScoped ) ( or ( eq .Values.clusterAPIMode "incluster-incluster" ) ( eq .Values.clusterAPIMode "kubeconfig-incluster" ) ))}} - apiGroups: - cluster.x-k8s.io + - infrastructure.cluster.x-k8s.io resources: - machinedeployments - machinepools - machines - machinesets + - vngcloudmachinetemplates verbs: - get - list diff --git a/charts/vngcloud-cluster-autoscaler/templates/deployment.yaml b/charts/vngcloud-cluster-autoscaler/templates/deployment.yaml index ccbe435..cbd267c 100644 --- a/charts/vngcloud-cluster-autoscaler/templates/deployment.yaml +++ b/charts/vngcloud-cluster-autoscaler/templates/deployment.yaml @@ -53,11 +53,11 @@ spec: command: - ./cluster-autoscaler - --cloud-provider={{ .Values.cloudProvider }} - {{- if and (eq .Values.cloudProvider "clusterapi") (eq .Values.clusterAPIMode "kubeconfig-incluster") }} - - --namespace={{ .Values.clusterAPIConfigMapsNamespace | default "kube-system" }} - {{- else }} - - --namespace={{ .Release.Namespace }} - {{- end }} + # {{- if and (eq .Values.cloudProvider "clusterapi") (eq .Values.clusterAPIMode "kubeconfig-incluster") }} + # - --namespace={{ .Values.clusterAPIConfigMapsNamespace | default "kube-system" }} + # {{- else }} + # - --namespace={{ .Release.Namespace }} + # {{- end }} {{- if .Values.autoscalingGroups }} {{- range .Values.autoscalingGroups }} - --nodes={{ .minSize }}:{{ .maxSize }}:{{ .name }} @@ -269,7 +269,8 @@ spec: {{- end }} {{- if .Values.clusterAPIKubeconfigSecret }} - name: cluster-api-kubeconfig - mountPath: {{ .Values.clusterAPIWorkloadKubeconfigPath | trimSuffix "/value" }} + mountPath: {{ .Values.clusterAPIWorkloadKubeconfigPath }} + subPath: {{ .Values.clusterAPIWorkloadKubeconfigSubPath }} {{- end }} {{- if .Values.extraVolumeMounts }} {{- toYaml .Values.extraVolumeMounts | nindent 12 }} diff --git a/charts/vngcloud-cluster-autoscaler/values.yaml b/charts/vngcloud-cluster-autoscaler/values.yaml index 0affb02..156047c 100644 --- a/charts/vngcloud-cluster-autoscaler/values.yaml +++ b/charts/vngcloud-cluster-autoscaler/values.yaml @@ -124,6 +124,8 @@ clusterAPIMode: incluster-incluster # incluster-incluster, 
incluster-kubeconfig
 
 # clusterAPIWorkloadKubeconfigPath -- Path to kubeconfig for connecting to the Cluster API managed workload cluster, only used if `clusterAPIMode=kubeconfig-kubeconfig or kubeconfig-incluster`
 clusterAPIWorkloadKubeconfigPath: /etc/kubernetes/value
+clusterAPIWorkloadKubeconfigSubPath: ""
+
 # containerSecurityContext -- [Security context for container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
 containerSecurityContext: {}
   # capabilities:
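For context on the `clusterAPIWorkloadKubeconfigSubPath` value added above, here is a minimal sketch of how it is meant to pair with `clusterAPIWorkloadKubeconfigPath`; the Secret name `workload-kubeconfig`, the key `value`, and the override file name are illustrative assumptions, not chart defaults:

```yaml
# values-clusterapi.yaml (hypothetical override file)
# Assumes a Secret "workload-kubeconfig" holding the workload cluster
# kubeconfig under the key named by clusterAPIWorkloadKubeconfigSubPath.
cloudProvider: clusterapi
clusterAPIMode: kubeconfig-incluster
clusterAPIKubeconfigSecret: workload-kubeconfig
clusterAPIWorkloadKubeconfigPath: /etc/kubernetes/value
clusterAPIWorkloadKubeconfigSubPath: value
```

With these values the deployment template renders `mountPath: /etc/kubernetes/value` plus `subPath: value`, projecting only that Secret key to the exact file path instead of mounting the whole Secret volume over the directory derived via `trimSuffix "/value"` as before; leaving the sub-path at its default `""` renders an empty `subPath`, which Kubernetes treats as unset. An install using such an override might look like:

```console
$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
    -f values-clusterapi.yaml
```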