
list index out of range #89

Open · jaimehrubiks opened this issue Feb 2, 2021 · 1 comment
jaimehrubiks commented Feb 2, 2021

I am getting this error; any clue what might be causing it?

I have already removed the tags from the autoscaling group after the first failed run.

(venv) [centos@ip-10-16-35-7 eks-rolling-update]$ ./roll.sh
2021-02-02 19:05:45,792 INFO     Describing autoscaling groups...
2021-02-02 19:05:46,067 INFO     Pausing k8s autoscaler...
2021-02-02 19:05:46,164 INFO     K8s autoscaler modified to replicas: 0
2021-02-02 19:05:46,164 INFO     *** Checking for nodes older than 7 days in autoscaling group ms-dev-apps-general_purpose_xlarge20200617203556813100000014 ***
2021-02-02 19:05:46,353 INFO     Instance id i-051b2437c0caa8d09 : OK
2021-02-02 19:05:46,729 INFO     Instance id i-07f5e4c9aa84b8579 : OK
2021-02-02 19:05:46,860 INFO     Instance id i-0867d2b14b2ef7b95 : OK
2021-02-02 19:05:46,900 ERROR    list index out of range
2021-02-02 19:05:46,900 ERROR    *** Rolling update of ASG has failed. Exiting ***
2021-02-02 19:05:46,900 ERROR    AWS Auto Scaling Group processes will need resuming manually
2021-02-02 19:05:46,900 ERROR    Kubernetes Cluster Autoscaler will need resuming manually

This is my config:

export ASG_NAMES="ms-dev-apps-general_purpose_xlarge20200617203556813100000014" 
export K8S_AUTOSCALER_ENABLED=True 
export K8S_AUTOSCALER_NAMESPACE="kube-system" 
export K8S_AUTOSCALER_DEPLOYMENT="cluster-autoscaler-autodetect-aws-cluster-autoscaler"
export K8S_AUTOSCALER_REPLICAS=1
export EXTRA_DRAIN_ARGS="--delete-local-data=true --disable-eviction=true --force=true --grace-period=10 --ignore-daemonsets=true"
export MAX_ALLOWABLE_NODE_AGE=7
export RUN_MODE=4

I am using the latest version.

@chadlwilson (Contributor) commented

Hmm, I have a few questions which might help someone look into this:

  • How many, and which instances did you have in your ASG?
  • How many were you expecting it to detect as older than 7 days?
  • Was the problem reproducible?
  • Were there any actions going on within your ASG at the time you were running the tool? (e.g. it was scaling down or otherwise terminating instances?)
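
For what it's worth, one hypothetical way a `list index out of range` can surface in a scenario like this: if an instance in the ASG has no matching Kubernetes node (for example because it is still joining the cluster, or is mid-termination), then indexing `[0]` on an empty filtered list raises exactly this error. A minimal Python sketch, illustrative only and not the project's actual code (`find_node`, the field names, and the sample data are all made up):

```python
# Hypothetical sketch of how "list index out of range" can arise
# when matching an ASG instance to its Kubernetes node by hostname.

def find_node(instances, nodes, instance_id):
    # Look up the EC2 instance record for this instance id.
    instance = [i for i in instances if i["InstanceId"] == instance_id][0]
    hostname = instance["PrivateDnsName"]
    # If the instance has not registered as a Kubernetes node
    # (still booting, or being terminated), this filter is empty
    # and [0] raises IndexError("list index out of range").
    return [n for n in nodes if n["name"] == hostname][0]

instances = [{"InstanceId": "i-0abc", "PrivateDnsName": "ip-10-0-0-1.ec2.internal"}]
nodes = []  # no matching node registered in Kubernetes

try:
    find_node(instances, nodes, "i-0abc")
except IndexError as e:
    print(e)  # list index out of range
```

That would line up with the last question above: an instance terminating mid-run could disappear from one of the two lists between lookups.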
