[fix #20271] protect backend usage from concurrent applySnapshot and defrag #20553
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: silentred. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Hi @silentred. Thanks for your PR. I'm waiting for an etcd-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This isn't correct. It can't prevent defragmentation from renaming the db file concurrently.
The defragmentation also needs to acquire this lock (see etcd/server/etcdserver/api/v3rpc/maintenance.go, lines 109 to 120 at 84ac605, and etcd/server/etcdserver/server.go, lines 2427 to 2431 at 84ac605).
You are right. I was thinking … is that supposed to work?
I think we can add a … Also cc @fuweid.
@@ -1029,6 +1029,9 @@ func (s *EtcdServer) applySnapshot(ep *etcdProgress, toApply *toApply) {
	// wait for raftNode to persist snapshot onto the disk
	<-toApply.notifyc

	// protect the backend from concurrent defrag
	s.bemu.Lock()
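To make the intent of the hunk above concrete, here is a hedged sketch of the wider critical section. The types and the `openSnapshotBackend` helper are illustrative assumptions; the real applySnapshot does considerably more work than shown.

```go
package main

import "sync"

// backend is a stand-in for etcd's storage backend.
type backend interface{ Close() error }

// etcdServer is a minimal stand-in for EtcdServer.
type etcdServer struct {
	bemu sync.RWMutex
	be   backend
}

// applySnapshotSketch holds bemu for the whole swap: opening the new backend
// from the snapshot (which renames snap.db over the current db file) and
// updating s.be. A concurrent Defragment blocks on the same mutex, so it can
// never rename the db file in the middle of this window.
// openSnapshotBackend is a hypothetical helper, not etcd's real function.
func (s *etcdServer) applySnapshotSketch(openSnapshotBackend func() (backend, error)) error {
	s.bemu.Lock()
	defer s.bemu.Unlock()

	newbe, err := openSnapshotBackend()
	if err != nil {
		return err
	}
	oldbe := s.be
	s.be = newbe
	if oldbe != nil {
		return oldbe.Close() // the real code closes the old backend asynchronously
	}
	return nil
}

func main() {}
```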
I just have a concern about increasing the scope of the critical section. Yeah, it's a simple fix, but in case we run into a deadlock, we probably need an e2e test to cover that.
I am not sure what this e2e test should do. My thought is that it might include:
- your e2e PR "DONOTMERGE: reproduce #20271" (#20503), which verifies concurrent defrag
- verifying other concurrent API-layer writes, as @ahrtr mentioned in #20553 (comment)
Any advice? Big thanks.
BTW, can I ask why we should not use SIGSTOP to pause the etcd process?
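A minimal sketch of the SIGSTOP idea, assuming a Unix target and a plain os/syscall helper; the pid handling and surrounding test harness are assumptions, not etcd's actual e2e framework.

```go
package main

import (
	"os"
	"syscall"
)

// pauseDuring suspends the target process with SIGSTOP, runs fn while it is
// stopped (e.g. issue a Defragment request against another member), then
// resumes the process with SIGCONT. Unix-only; the pid and fn come from the
// surrounding test, which is not shown here.
func pauseDuring(pid int, fn func() error) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGSTOP); err != nil {
		return err
	}
	ferr := fn()
	if err := proc.Signal(syscall.SIGCONT); err != nil {
		return err
	}
	return ferr
}

func main() {}
```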
Hi @ahrtr, thanks for your detailed explanation in #20585 (comment). I think we still need to align our understanding.
Yes, there are 2 backend instances …
I think they have different inodes.
I think after …
Except for #20585: this PR attempts to rename the snap.db file in the backend, so there is only one backend instance. It locks the whole process of renaming …
I didn't realize that you raised two separate PRs; please close #20585 to avoid confusion.
It's the reason why I said … it isn't true: there is also an API-layer write, i.e. the storage version update (see etcd/server/etcdserver/version/monitor.go, line 107 at 963a72e).
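A minimal sketch of why such an API-layer write matters here, using stand-in types rather than etcd's real monitor code: the write has to be excluded from the window in which applySnapshot or Defragment replaces the db file, which in this PR's approach means going through the same `bemu` mutex.

```go
package main

import "sync"

// Stand-ins for the pieces named above; these are assumptions, not etcd's types.
type batchTx interface {
	UnsafePut(bucket, key, value []byte)
}
type backend interface {
	BatchTx() batchTx
}

type etcdServer struct {
	bemu sync.RWMutex
	be   backend
}

// updateStorageVersion mimics the monitor's API-layer write. Holding the read
// lock for the whole write keeps it out of the window in which applySnapshot
// (or Defragment) holds bemu exclusively and replaces the db file.
func (s *etcdServer) updateStorageVersion(v []byte) {
	s.bemu.RLock()
	defer s.bemu.RUnlock()

	tx := s.be.BatchTx()
	tx.UnsafePut([]byte("meta"), []byte("storageVersion"), v)
}

func main() {}
```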
Thanks. I didn't notice. I will squash this PR's commits when it is ready to merge.
Commit: [fix #20271] protect backend usage from concurrent applySnapshot and defrag
Signed-off-by: shenmu.wy <[email protected]>
Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.
for #20271