karmadactl unregister: force unregister pull mode member clusters with the flag --force #5935
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #5935      +/-   ##
==========================================
+ Coverage   48.10%   48.18%   +0.07%
==========================================
  Files         663      664       +1
  Lines       54769    54779      +10
==========================================
+ Hits        26347    26394      +47
+ Misses      26713    26672      -41
- Partials     1709     1713       +4
```
**local test**

set up the test env:

```shell
$ hack/local-up-karmada.sh
$ hack/create-cluster.sh member4 ~/.kube/member4.config
$ eval $(karmadactl token create --print-register-command --kubeconfig ~/.kube/karmada.config --karmada-context karmada-apiserver) --kubeconfig ~/.kube/member4.config --karmada-agent-image docker.io/karmada/karmada-agent:latest
$ karmadactl get cluster
NAME      CLUSTER   VERSION   MODE   READY   AGE   ADOPTION
member1   Karmada   v1.31.2   Push   True    18h   -
member2   Karmada   v1.31.2   Push   True    18h   -
member3   Karmada   v1.31.2   Pull   True    18h   -
member4   Karmada   v1.31.0   Pull   True    26s   -
```

To construct a scenario where the resources of a member cluster cannot be cleaned up, remove the clusterrole karmada-agent from the member cluster so that karmada-agent no longer has permission to clean up the member cluster's resources:

```shell
$ kubectl --kubeconfig ~/.kube/member4.config delete clusterrole karmada-agent
```
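
A quick sanity check (not part of the original test run) to confirm the agent has lost its cleanup permissions; the ServiceAccount name below is an assumption based on a default agent installation and may differ in other setups:

```shell
# Hypothetical verification: the clusterrole should now be gone.
$ kubectl --kubeconfig ~/.kube/member4.config get clusterrole karmada-agent
# Expected: Error from server (NotFound)

# Impersonate the agent's ServiceAccount (assumed name) and check a typical cleanup verb.
$ kubectl --kubeconfig ~/.kube/member4.config auth can-i delete deployments --all-namespaces \
    --as=system:serviceaccount:karmada-system:karmada-agent-sa
# Expected: no
```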

**unregister with karmada kubeconfig**

```shell
$ karmadactl unregister member4 --cluster-kubeconfig ~/.kube/member4.config --karmada-config ~/.kube/karmada.config --karmada-context karmada-apiserver -v=4 --wait 15s
E1211 09:52:37.770325 224302 unregister.go:327] Failed to delete works object. cluster name: member4, error: context deadline exceeded
error: context deadline exceeded
$ karmadactl unregister member4 --cluster-kubeconfig ~/.kube/member4.config --karmada-config ~/.kube/karmada.config --karmada-context karmada-apiserver -v=4 --wait 15s --force
... ...
W1211 09:53:13.152585 224519 work.go:56] Deleting the work object timed out. ExecutionSpace: karmada-es-member4, error: context deadline exceeded
I1211 09:53:13.152620 224519 work.go:57] Start forced deletion. Deleting finalizer of works in ExecutionSpace: karmada-es-member4
W1211 09:53:13.210987 224519 cluster.go:70] Deleting the cluster object timed out. cluster name: member4, error: context deadline exceeded
I1211 09:53:13.211021 224519 cluster.go:71] Start forced deletion. cluster name: member4
I1211 09:53:13.273867 224519 cluster.go:88] Forced deletion is complete.
... ...
$ karmadactl get cluster
NAME      CLUSTER   VERSION   MODE   READY   AGE   ADOPTION
member1   Karmada   v1.31.2   Push   True    18h   -
member2   Karmada   v1.31.2   Push   True    18h   -
member3   Karmada   v1.31.2   Pull   True    18h   -
```
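
Because the forced unregister skips the in-cluster cleanup that could not complete, some Karmada components are expected to remain on member4. A possible way to inspect the leftovers (not part of the original test output; karmada-system is the default namespace used when registering the agent and may differ in other setups):

```shell
# Hypothetical follow-up inspection: leftover agent resources on the member cluster.
$ kubectl --kubeconfig ~/.kube/member4.config get all -n karmada-system
```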

**unregister without karmada kubeconfig**

```shell
$ karmadactl unregister member4 --cluster-kubeconfig ~/.kube/member4.config -v=4 --wait 15s
... ...
09:56:47.645747 225967 unregister.go:327] Failed to delete works object. cluster name: member4, error: context deadline exceeded
error: context deadline exceeded
$ karmadactl unregister member4 --cluster-kubeconfig ~/.kube/member4.config -v=4 --wait 15s --force
W1211 09:57:43.268018 226391 work.go:56] Deleting the work object timed out. ExecutionSpace: karmada-es-member4, error: context deadline exceeded
I1211 09:57:43.268061 226391 work.go:57] Start forced deletion. Deleting finalizer of works in ExecutionSpace: karmada-es-member4
W1211 09:57:43.315950 226391 cluster.go:70] Deleting the cluster object timed out. cluster name: member4, error: context deadline exceeded
I1211 09:57:43.315978 226391 cluster.go:71] Start forced deletion. cluster name: member4
E1211 09:57:43.374522 226391 cluster.go:85] Force deletion. Failed to remove the finalizer of Cluster(member4), error: clusters.cluster.karmada.io "member4" is forbidden: User "system:karmada:agent:member4" cannot update resource "clusters" in API group "cluster.karmada.io" at the cluster scope
I1211 09:57:43.374572 226391 cluster.go:88] Forced deletion is complete.
... ...
$ karmadactl get cluster
NAME      CLUSTER   VERSION   MODE   READY   AGE   ADOPTION
member1   Karmada   v1.31.2   Push   True    18h   -
member2   Karmada   v1.31.2   Push   True    18h   -
member3   Karmada   v1.31.2   Pull   True    18h   -
```
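
To double-check that the forced unregister removed the stuck objects from the Karmada control plane, one can look up the Cluster object and its execution namespace afterwards. These commands are a suggested verification rather than part of the PR's recorded output; the resource and namespace names follow the logs above:

```shell
# Hypothetical verification on the control plane: both lookups should return NotFound
# once the finalizers have been removed and the objects deleted.
$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get clusters.cluster.karmada.io member4
$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get namespace karmada-es-member4
```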
What type of PR is this?
/kind feature
What this PR does / why we need it:
Introduce the flag `--force` to provide a force-deletion capability. If the Work objects are not deleted within the timeout period during the unregistering process, it is likely because the resources in the member cluster cannot be cleaned up. With force deletion, we try to clean up the Work objects by removing the finalizers from the related resources. This behavior may leave some resources behind in the member clusters (a rough sketch of the finalizer-removal step appears under "Special notes for your reviewer" below).

Which issue(s) this PR fixes:
Parts of #5477
Special notes for your reviewer:
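A rough manual equivalent of the forced cleanup described above, expressed as kubectl patches. This is only an illustrative sketch of the finalizer-removal mechanism, not the code path added by this PR; `<work-name>` is a placeholder, and the namespace follows the karmada-es-&lt;cluster&gt; convention seen in the logs:

```shell
# Illustrative sketch only: clearing finalizers so that stuck objects can be deleted,
# mirroring what --force automates. <work-name> is a placeholder.
$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver \
    -n karmada-es-member4 patch works.work.karmada.io <work-name> \
    --type=merge -p '{"metadata":{"finalizers":null}}'
$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver \
    patch clusters.cluster.karmada.io member4 \
    --type=merge -p '{"metadata":{"finalizers":null}}'
```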
Does this PR introduce a user-facing change?: