⚡ update: customize to vngcloud

anngdinh committed Mar 12, 2024
1 parent c80f2b7 commit 50ee7d6

Showing 6 changed files with 24 additions and 499 deletions.
12 changes: 6 additions & 6 deletions charts/vngcloud-cluster-autoscaler/Chart.yaml
@@ -2,13 +2,13 @@ apiVersion: v2
 appVersion: 1.29.0
 description: Scales Kubernetes worker nodes within autoscaling groups.
 engine: gotpl
-home: https://github.com/kubernetes/autoscaler
+home: https://github.com/vngcloud/autoscaler
 icon: https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png
 maintainers:
-- email: [email protected]
-  name: gjtempleton
-name: cluster-autoscaler
+- name: VNG Cloud
+  email: [email protected]
+name: vngcloud-cluster-autoscaler
 sources:
-- https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
+- https://github.com/vngcloud/autoscaler
 type: application
-version: 9.35.0
+version: 0.0.0
248 changes: 4 additions & 244 deletions charts/vngcloud-cluster-autoscaler/README.md
@@ -1,18 +1,18 @@
-# cluster-autoscaler
+# vngcloud-cluster-autoscaler

Scales Kubernetes worker nodes within autoscaling groups.

## TL;DR

```console
-$ helm repo add autoscaler https://kubernetes.github.io/autoscaler
+$ helm repo add vks-helm-charts https://vngcloud.github.io/vks-helm-charts

# Method 1 - Using Autodiscovery
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
--set 'autoDiscovery.clusterName'=<CLUSTER NAME>

# Method 2 - Specifying groups manually
-$ helm install my-release autoscaler/cluster-autoscaler \
+$ helm install vngcloud-cluster-autoscaler vks-helm-charts/vngcloud-cluster-autoscaler \
--set "autoscalingGroups[0].name=your-asg-name" \
--set "autoscalingGroups[0].maxSize=10" \
--set "autoscalingGroups[0].minSize=1"
@@ -30,31 +30,6 @@ This chart bootstraps a cluster-autoscaler deployment on a [Kubernetes](http://k
- Azure AKS specific Prerequisites:
  - Kubernetes 1.10+ with RBAC-enabled.

## Previous Helm Chart

The previous `cluster-autoscaler` Helm chart hosted at [helm/charts](https://github.com/helm/charts) has been moved to this repository in accordance with the [Deprecation timeline](https://github.com/helm/charts#deprecation-timeline). Note that a few things have changed between this version and the old version:

- This repository **only** supports Helm chart installations using Helm 3+ since the `apiVersion` on the charts has been marked as `v2`.
- Previous versions of the Helm chart have not been migrated.

## Migration from 1.X to 9.X+ versions of this Chart

**TL;DR:**
Use versions >=9.0.0 of the `cluster-autoscaler` chart published from this repository; earlier versions, and the `cluster-autoscaler-chart` chart versioned 1.X.X published from this repository, are deprecated.

<details>
<summary>Previous versions of this chart - further details</summary>
On initial migration of this chart from the `helm/charts` repository, it was renamed from `cluster-autoscaler` to `cluster-autoscaler-chart` due to technical limitations. This affected all `1.X` releases of the chart; version 2.0.0 of this chart exists only to mark the [`cluster-autoscaler-chart` chart](https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler-chart) as deprecated.

Releases of the chart from `9.0.0` onwards return the naming of the chart to `cluster-autoscaler` and resume the versioning established at the chart's previous location, `helm/charts`.

To migrate from a 1.X release of the chart to a `9.0.0` or later release, you should first uninstall your `1.X` install of the `cluster-autoscaler-chart` chart, before performing the installation of the new `cluster-autoscaler` chart.
</details>

## Migration from 9.0 to 9.1

Starting from `9.1.0` the `envFromConfigMap` value is expected to contain the name of a ConfigMap that is used as ref for `envFrom`, similar to `envFromSecret`. If you want to keep the previous behaviour of `envFromConfigMap` you must rename it to `extraEnvConfigMaps`.
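
For example, under the `9.1.0+` semantics a values file would name the ConfigMap directly (a minimal sketch; `my-env` is a placeholder ConfigMap name, and the exact shape expected by `extraEnvConfigMaps` should be checked against values.yaml):

```yaml
# 9.1.0+ semantics: envFromConfigMap names a single ConfigMap that is used
# as an envFrom ref, mirroring envFromSecret.
envFromConfigMap: my-env
```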

## Installing the Chart

**By default, no deployment is created and nothing will autoscale**.
@@ -68,155 +43,12 @@ Either:

To create a valid configuration, follow instructions for your cloud provider:

- [AWS](#aws---using-auto-discovery-of-tagged-instance-groups)
- [GCE](#gce)
- [Azure](#azure)
- [OpenStack Magnum](#openstack-magnum)
- [Cluster API](#cluster-api)

### Templating the autoDiscovery.clusterName

The cluster name can be templated in the `autoDiscovery.clusterName` variable. This is useful when the cluster name is dynamically generated based on other values coming from external systems like Argo CD or Flux. It also allows you to use global Helm values to set the cluster name, e.g. `autoDiscovery.clusterName=\{\{ .Values.global.clusterName }}`, so that you don't need to set it in more than one location in the values file.
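
As a values-file sketch (assuming a hypothetical `global.clusterName` set elsewhere, e.g. by Argo CD or Flux):

```yaml
global:
  clusterName: prod-cluster-01          # single source of truth for the name

autoDiscovery:
  # Rendered by Helm at install time from the global value above.
  clusterName: "{{ .Values.global.clusterName }}"
```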

### AWS - Using auto-discovery of tagged instance groups

Auto-discovery finds ASGs whose tags match the keys below and automatically manages them based on the min and max size specified in the ASG. `cloudProvider=aws` only.

- Tag the ASGs with keys to match `.Values.autoDiscovery.tags`, by default: `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>`
- Verify the [IAM Permissions](#aws---iam)
- Set `autoDiscovery.clusterName=<YOUR CLUSTER NAME>`
- Set `awsRegion=<YOUR AWS REGION>`
- Optionally, set `awsAccessKeyID=<YOUR AWS KEY ID>` and `awsSecretAccessKey=<YOUR AWS SECRET KEY>` if you want to [use AWS credentials directly instead of an instance role](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set autoDiscovery.clusterName=<CLUSTER NAME> \
--set awsRegion=<YOUR AWS REGION>
```

Alternatively, with your own AWS credentials:

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set autoDiscovery.clusterName=<CLUSTER NAME> \
--set awsRegion=<YOUR AWS REGION> \
--set awsAccessKeyID=<YOUR AWS KEY ID> \
--set awsSecretAccessKey=<YOUR AWS SECRET KEY>
```

#### Specifying groups manually

Without autodiscovery, specify an array of elements, each containing an ASG name, min size and max size. The sizes specified here will be applied to the ASG, assuming IAM permissions are correctly configured.

- Verify the [IAM Permissions](#aws---iam)
- Either provide a yaml file setting `autoscalingGroups` (see values.yaml) or use `--set` e.g.:

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set "autoscalingGroups[0].name=your-asg-name" \
--set "autoscalingGroups[0].maxSize=10" \
--set "autoscalingGroups[0].minSize=1"
```
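
The values-file equivalent of the flags above would look like this (a sketch; the group name is a placeholder):

```yaml
autoscalingGroups:
  - name: your-asg-name             # ASG to manage
    maxSize: 10                     # upper bound applied to the ASG
    minSize: 1                      # lower bound applied to the ASG
```

Pass it at install time with `helm install my-release autoscaler/cluster-autoscaler -f myvalues.yaml`.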

#### Auto-discovery

For auto-discovery of instances to work, they must be tagged with the keys in `.Values.autoDiscovery.tags`, which by default are `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/<ClusterName>`.

The value of the tag does not matter, only the key.

An example kops spec excerpt:

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: my.cluster.internal
spec:
  additionalPolicies:
    node: |
      [
        {"Effect":"Allow","Action":["autoscaling:DescribeAutoScalingGroups","autoscaling:DescribeAutoScalingInstances","autoscaling:DescribeLaunchConfigurations","autoscaling:DescribeTags","autoscaling:SetDesiredCapacity","autoscaling:TerminateInstanceInAutoScalingGroup"],"Resource":"*"}
      ]
  ...
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my.cluster.internal
  name: my-instances
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/my.cluster.internal: ""
  image: kops.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14
  machineType: r4.large
  maxSize: 4
  minSize: 0
```

In this example you would need to `--set autoDiscovery.clusterName=my.cluster.internal` when installing.

It is not recommended to mix this with setting `autoscalingGroups`.

See the [autoscaler AWS documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup) for further discussion of the setup.

### GCE

The following parameters are required:

- `autoDiscovery.clusterName=any-name`
- `cloudProvider=gce`
- `autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefix[0].minSize=1`

To use Managed Instance Group (MIG) auto-discovery, provide a YAML file setting `autoscalingGroupsnamePrefix` (see values.yaml) or use `--set` when installing the Chart - e.g.

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set "autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefi[0].minSize=1" \
--set autoDiscovery.clusterName=<CLUSTER NAME> \
--set cloudProvider=gce
```

Note that `your-ig-prefix` should be a _prefix_ matching one or more MIGs, and _not_ the full name of the MIG. For example, to match multiple instance groups - `k8s-node-group-a-standard`, `k8s-node-group-b-gpu`, you would use a prefix of `k8s-node-group-`.
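
The values-file form of that prefix configuration (group names are illustrative):

```yaml
# Matches every MIG whose name starts with the prefix, e.g. both
# k8s-node-group-a-standard and k8s-node-group-b-gpu.
autoscalingGroupsnamePrefix:
  - name: k8s-node-group-           # prefix, not a full MIG name
    maxSize: 10
    minSize: 1
```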

In the event you want to explicitly specify MIGs instead of using auto-discovery, set members of the `autoscalingGroups` array directly - e.g.

```
# where 'n' is the index, starting at 0
--set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE
```
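
Putting that together, a complete install command for explicitly specified MIGs might look like this (a sketch; the project, zone and MIG name are placeholders):

```console
$ helm install my-release autoscaler/cluster-autoscaler \
    --set cloudProvider=gce \
    --set autoDiscovery.clusterName=<CLUSTER NAME> \
    --set "autoscalingGroups[0].name=https://content.googleapis.com/compute/v1/projects/<PROJECT ID>/zones/<ZONE>/instanceGroupManagers/<FULL MIG NAME>,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1"
```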

### Azure

The following parameters are required:

- `cloudProvider=azure`
- `autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `azureClientID: "your-service-principal-app-id"`
- `azureClientSecret: "your-service-principal-client-secret"`
- `azureSubscriptionID: "your-azure-subscription-id"`
- `azureTenantID: "your-azure-tenant-id"`
- `azureResourceGroup: "your-aks-cluster-resource-group-name"`
- `azureVMType: "vmss"`
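
A minimal install command under these parameters might look like this (a sketch; all identifiers are placeholders for your own Azure values):

```console
$ helm install my-release autoscaler/cluster-autoscaler \
    --set cloudProvider=azure \
    --set "autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" \
    --set azureClientID=<your-service-principal-app-id> \
    --set azureClientSecret=<your-service-principal-client-secret> \
    --set azureSubscriptionID=<your-azure-subscription-id> \
    --set azureTenantID=<your-azure-tenant-id> \
    --set azureResourceGroup=<your-aks-cluster-resource-group-name> \
    --set azureVMType=vmss
```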

### OpenStack Magnum

`cloudProvider: magnum` must be set, and then one of

- `magnumClusterName=<cluster name or ID>` and `autoscalingGroups` with the names of node groups and min/max node counts
- or `autoDiscovery.clusterName=<cluster name or ID>` with one or more `autoDiscovery.roles`.

Additionally, `cloudConfigPath: "/etc/kubernetes/cloud-config"` must be set, as this should be the location of the cloud-config file on the host.

Example values files can be found [here](../../cluster-autoscaler/cloudprovider/magnum/examples).
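
A minimal `myvalues.yaml` for the first option might look like this (a sketch; the cluster name and node group name are placeholders):

```yaml
cloudProvider: magnum
magnumClusterName: my-magnum-cluster        # cluster name or ID
cloudConfigPath: /etc/kubernetes/cloud-config

autoscalingGroups:
  - name: default-worker                    # Magnum node group name
    maxSize: 5
    minSize: 1
```
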
Install the chart with
```console
$ helm install my-release autoscaler/cluster-autoscaler -f myvalues.yaml
```

### Cluster-API

`cloudProvider: clusterapi` must be set, and then one or more of
@@ -233,32 +65,6 @@ Additional config parameters available, see the `values.yaml` for more details
- `clusterAPIWorkloadKubeconfigPath`
- `clusterAPICloudConfigPath`

### Exoscale

The following parameters are required:

- `cloudProvider=exoscale`
- `autoDiscovery.clusterName=<CLUSTER NAME>`

Create an Exoscale API key with appropriate permissions as described in [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md).
A secret named `<release-name>-exoscale-cluster-autoscaler` must be created, containing the API key and secret, as well as the zone.

```console
$ kubectl create secret generic my-release-exoscale-cluster-autoscaler \
    --from-literal=api-key="EXOxxxxxxxxxxxxxxxxxxxxxxxx" \
    --from-literal=api-secret="xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
    --from-literal=api-zone="ch-gva-2"
```

After creating the secret, the chart may be installed:

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set cloudProvider=exoscale \
--set autoDiscovery.clusterName=<CLUSTER NAME>
```

Read [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md) for further information on the setup without Helm.

## Uninstalling the Chart

To uninstall `my-release`:
@@ -271,52 +77,6 @@ The command removes all the Kubernetes components associated with the chart and

> **Tip**: List all releases using `helm list` or start clean with `helm uninstall my-release`

## Additional Configuration

### AWS - IAM

The worker running the cluster autoscaler will need access to certain resources and actions, depending on the version you run and how you configure it.

For the up-to-date IAM permissions required, please see the [cluster autoscaler's AWS Cloudprovider Readme](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#iam-policy) and switch to the tag of the cluster autoscaler image you are using.

### AWS - IAM Roles for Service Accounts (IRSA)

For Kubernetes clusters that use Amazon EKS, the service account can be configured with an IAM role using [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) to avoid needing to grant access to the worker nodes for AWS resources.

In order to accomplish this, you will first need to create a new IAM role with the above-mentioned policies. Take care in [configuring the trust relationship](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html#iam-role-configuration) to restrict access to just the service account used by the cluster autoscaler.

Once you have the IAM role configured, you would then need to `--set rbac.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName` when installing.
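
The values-file equivalent of that flag (the account ID and role name are placeholders):

```yaml
rbac:
  serviceAccount:
    annotations:
      # IAM role assumed by the cluster-autoscaler service account via IRSA
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/MyRoleName
```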

### Azure - Using Azure workload identity

You can use the [Azure Workload Identity](https://github.com/Azure/azure-workload-identity) project to automatically configure the correct setup for your pods to use federated identity with Azure.

You can also apply the correct settings yourself instead of relying on this project.

For example, the following configuration will configure the autoscaler to use your federated identity:

```yaml
azureUseWorkloadIdentityExtension: true
extraEnv:
  AZURE_CLIENT_ID: USER ASSIGNED IDENTITY CLIENT ID
  AZURE_TENANT_ID: USER ASSIGNED IDENTITY TENANT ID
  AZURE_FEDERATED_TOKEN_FILE: /var/run/secrets/tokens/azure-identity-token
  AZURE_AUTHORITY_HOST: https://login.microsoftonline.com/
extraVolumes:
  - name: azure-identity-token
    projected:
      defaultMode: 420
      sources:
        - serviceAccountToken:
            audience: api://AzureADTokenExchange
            expirationSeconds: 3600
            path: azure-identity-token
extraVolumeMounts:
  - mountPath: /var/run/secrets/tokens
    name: azure-identity-token
    readOnly: true
```

## Troubleshooting

The chart will succeed even if the container arguments are incorrect. A few minutes after starting, `kubectl logs -l "app=aws-cluster-autoscaler" --tail=50` should loop through something like