To create snapshots on AWS, the pod needs the following IAM permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAvailabilityZones",
        "ec2:CreateTags",
        "ec2:DescribeTags",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DescribeSnapshots"
      ],
      "Resource": "*"
    }
  ]
}
```
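If your cluster is managed by kops, one way to grant these permissions to your nodes is through `additionalPolicies` in the cluster spec. A sketch, assuming the policy should be attached to the worker node role:

```yaml
# Sketch: grant the snapshot permissions to worker nodes via kops.
# Edit the cluster spec with `kops edit cluster`, then apply with
# `kops update cluster --yes`.
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeAvailabilityZones",
            "ec2:CreateTags",
            "ec2:DescribeTags",
            "ec2:DescribeVolumeAttribute",
            "ec2:DescribeVolumeStatus",
            "ec2:DescribeVolumes",
            "ec2:CreateSnapshot",
            "ec2:DeleteSnapshot",
            "ec2:DescribeSnapshots"
          ],
          "Resource": "*"
        }
      ]
```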
If no default credentials are injected into your nodes, or the default credentials lack the required access scope, you may need to configure these environment variables:
| Variable | Description |
| --- | --- |
| `AWS_ACCESS_KEY_ID` | AWS IAM Access Key ID used to authenticate. |
| `AWS_SECRET_ACCESS_KEY` | AWS IAM Secret Access Key used to authenticate. |
| `AWS_REGION` | Usually detected via the metadata service; set this to override the detected value. |
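One way to supply these variables is from a Kubernetes Secret referenced in the container spec of the k8s-snapshots Deployment. A minimal sketch, assuming a Secret named `aws-snapshot-credentials` with keys `access-key-id` and `secret-access-key` (both names are hypothetical):

```yaml
# Sketch: inject AWS credentials from a Secret into the container.
env:
- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: aws-snapshot-credentials   # hypothetical Secret name
      key: access-key-id
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: aws-snapshot-credentials
      key: secret-access-key
- name: AWS_REGION
  value: us-east-1   # optional override; normally auto-detected
```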
On older versions of kops, master nodes already had the required permissions, so a simple workaround is to run k8s-snapshots on a master node. To allow the pod to be scheduled there, add a toleration and a node selector to the above manifest for the k8s-snapshots Deployment:
```yaml
spec:
  ...
  template:
    ...
    spec:
      ...
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      nodeSelector:
        kubernetes.io/role: master
```