Replace the pool when reopening. #18530
base: main
Conversation
Signed-off-by: Arthur Schreiber <[email protected]>
Review Checklist
Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.
General
Tests
Documentation
New flags
If a workflow is added or modified:
Backward compatibility
Signed-off-by: Arthur Schreiber <[email protected]>
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #18530 +/- ##
==========================================
- Coverage 67.50% 67.48% -0.02%
==========================================
Files 1607 1607
Lines 263093 263255 +162
==========================================
+ Hits 177593 177663 +70
- Misses 85500 85592 +92
☔ View full report in Codecov by Sentry.
Signed-off-by: Mohamed Hamza <[email protected]>
 func (pool *ConnPool[C]) Open(connect Connector[C], refresh RefreshCheck) *ConnPool[C] {
-	if pool.close != nil {
-		// already open
+	if !pool.state.CompareAndSwap(UNINITIALIZED, OPENED) {
+		// already open or closed
 		return pool
 	}
`Open` should probably be changed to return an error when it's called on a pool object that has been closed previously. Unfortunately, that'd require changing the signature and all call sites, which would bloat up this PR's size.
Maybe that can be done as a follow-up step.
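To illustrate the suggestion, here is a minimal sketch of what an error-returning `Open` could look like when the pool lifecycle is tracked in an atomic state field. The `Pool` type, the state constants, and `ErrPoolClosed` are hypothetical names for illustration only, not the actual vitess `ConnPool` API or signature.

```go
// A minimal, hypothetical sketch of the suggestion above, not the actual
// vitess ConnPool API: Open reports an error when called on a pool that was
// closed previously, instead of silently returning the pool. The state
// constants, ErrPoolClosed, and the simplified Pool type are assumptions.
package pool

import (
	"errors"
	"sync/atomic"
)

const (
	stateUninitialized int32 = iota
	stateOpened
	stateClosed
)

var ErrPoolClosed = errors.New("pool: cannot reopen a closed pool")

type Pool struct {
	state atomic.Int32
}

// Open transitions the pool from uninitialized to opened exactly once.
// Calling it again on an open pool is a no-op; calling it on a closed
// pool returns an error so the caller knows it must build a new pool.
func (p *Pool) Open() error {
	if p.state.CompareAndSwap(stateUninitialized, stateOpened) {
		return nil
	}
	if p.state.Load() == stateClosed {
		return ErrPoolClosed
	}
	return nil // already open
}
```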
Signed-off-by: Arthur Schreiber <[email protected]>
This is a backport of vitessio#18530
When closing the pool, there is a race condition between the closing of the `close` channel in `CloseWithContext` and the `maybeStarving` worker closing it. We'll remove the close in `CloseWithContext` and allow the worker to close it on its next run.
Signed-off-by: Mohamed Hamza <[email protected]>
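As an illustration of the pattern described in that commit message, here is a minimal sketch in which only the worker goroutine is allowed to close the channel, while the close path merely sets a flag. The names (`workerPool`, `requestClose`, `done`) are hypothetical and not the vitess implementation.

```go
// A minimal sketch of the pattern described above, using hypothetical names
// rather than the actual vitess types: only one goroutine, the worker, ever
// closes the channel, so the close path and the worker can no longer race
// on closing it twice.
package pool

import "sync/atomic"

type workerPool struct {
	closed atomic.Bool
	done   chan struct{} // closed by the worker goroutine only
}

// requestClose marks the pool as closed but does not touch the channel.
func (p *workerPool) requestClose() {
	p.closed.Store(true)
}

// worker runs on every tick; once it sees the closed flag it closes the
// channel on that run and exits, so the close happens exactly once.
func (p *workerPool) worker(tick <-chan struct{}) {
	for range tick {
		if p.closed.Load() {
			close(p.done)
			return
		}
		// ... normal per-tick work, e.g. waking starving waiters ...
	}
}
```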
Signed-off-by: Mohamed Hamza <[email protected]>
closing and normal operations
Signed-off-by: Mohamed Hamza <[email protected]>
Signed-off-by: Mohamed Hamza <[email protected]>
Signed-off-by: Mohamed Hamza <[email protected]>
Signed-off-by: Mohamed Hamza <[email protected]>
I'm fully in support of `Close()` making the pool non-reopenable, but it should still ensure that all connections are eventually returned and closed. Otherwise we might end up leaking connections.
Description
It's proven quite difficult to fix some shortcomings in the connection pool, like the blocking `Close()` call (which is quite problematic, as this call happens during `PlannedReparentShard` and makes the operation take longer than required) and race conditions around concurrent `Get()`/`put()`/`Close()` operations.

Instead, I want to pick up a previous idea: we disallow reopening a pool after `Close()` has been called and replace the whole pool object instead.

The `Close()` function becomes non-blocking - we close any connections that have been returned to the pool before the `Close()` call was made, and return. Any in-flight connections will be closed once `Recycle()` gets called on the connection, as sketched below.
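To make that lifecycle concrete, here is a minimal sketch under assumed names (`SimplePool`, `Conn`): a non-blocking `Close()` that only drains connections already idle in the pool, while in-flight connections are closed when `Recycle()` finds the pool closed. This is illustrative only, not the actual `ConnPool` implementation.

```go
// A minimal sketch of the behaviour described above, under assumed names:
// Close drains only the connections that are already idle in the pool and
// returns immediately; connections still checked out are closed later, when
// Recycle hands them back and finds the pool closed. This is illustrative,
// not the actual vitess ConnPool code.
package pool

import "sync"

type Conn interface{ Close() }

type SimplePool struct {
	mu     sync.Mutex
	closed bool
	idle   []Conn
}

// Close is non-blocking: it only closes connections currently idle in the pool.
func (p *SimplePool) Close() {
	p.mu.Lock()
	idle := p.idle
	p.idle = nil
	p.closed = true
	p.mu.Unlock()

	for _, c := range idle {
		c.Close()
	}
}

// Recycle returns an in-flight connection; if the pool was closed in the
// meantime, the connection is closed instead of being put back.
func (p *SimplePool) Recycle(c Conn) {
	p.mu.Lock()
	if p.closed {
		p.mu.Unlock()
		c.Close()
		return
	}
	p.idle = append(p.idle, c)
	p.mu.Unlock()
}
```

Under these assumptions, every checked-out connection is eventually either put back or closed on recycle, which is one way to address the leak concern raised in the review above.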
`SetCapacity()` should probably also be changed in a follow-up PR to disallow setting the capacity to 0 or lower, and to be non-blocking as well (we can just start closing connections once they get returned to the pool).

Related Issue(s)
Checklist
Deployment Notes