This repository has been archived by the owner on Sep 30, 2024. It is now read-only.

Discovery timeouts, emergency read operations throttled by discovery timeouts #1351

Open · wants to merge 7 commits into master

Conversation

shlomi-noach (Collaborator)
Follow-up to an issue reported outside this repo.

Scenario:

  • Instance becomes unavailable, causing connections to hang.
  • ReadTopologyInstanceBufferable (the main discovery function) runs again and again, totaling 3 concurrent executions and exhausting the connection pool limit for this instance.
  • Any further queries against this instance, such as api/discover/the-instance/3306, are blocked.
  • In an orchestrator/raft setup, someone calls api/discover/the-instance/3306.
  • Now the raft protocol is blocked.
  • All further raft communications are blocked.
  • No failovers are possible.

Description

Timeouts are already configured at the DSN level. We add:

  • Explicit context.WithTimeout() in ReadTopologyInstanceBufferable, which now uses db.QueryRowContext() and sqlutils.QueryRowsMapContext() (see the sketch following this list).
  • In topology_recovery.go, emergentlyReadTopologyInstance() now runs at most one read per given instance per MySQLDiscoveryReadTimeoutSeconds, to avoid further spamming the instance (see the throttling sketch further below).
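
As a rough illustration of the first bullet, here is a minimal sketch of the timeout pattern, assuming the configured value is available as a plain constant (the real code reads MySQLDiscoveryReadTimeoutSeconds from configuration) and using a made-up readServerUUID helper rather than the actual ReadTopologyInstanceBufferable body:

// Sketch only: how a per-read timeout can bound a discovery query. Names other
// than the context and database/sql APIs are assumptions, not the PR's code.
package discovery

import (
	"context"
	"database/sql"
	"time"
)

// mySQLDiscoveryReadTimeoutSeconds stands in for the configurable
// MySQLDiscoveryReadTimeoutSeconds (default: 10) mentioned in the description.
const mySQLDiscoveryReadTimeoutSeconds = 10

// readServerUUID illustrates the pattern: derive a context with a deadline and
// use the *Context query variants so a hung instance cannot block indefinitely.
func readServerUUID(db *sql.DB) (uuid string, err error) {
	ctx, cancel := context.WithTimeout(context.Background(),
		time.Duration(mySQLDiscoveryReadTimeoutSeconds)*time.Second)
	defer cancel()

	// QueryRowContext returns once the row is read or the context expires,
	// whichever comes first; a hung connection surfaces as a timeout error.
	err = db.QueryRowContext(ctx, "select @@global.server_uuid").Scan(&uuid)
	return uuid, err
}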

Expected result: in the worst case, an operation will hang for up to MySQLDiscoveryReadTimeoutSeconds (configurable, default: 10 seconds).
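
And a minimal sketch of the per-instance throttle from the second bullet; the map-plus-mutex bookkeeping and the function name shouldEmergentlyRead are illustrative placeholders, not necessarily how topology_recovery.go implements it:

// Sketch only: allow at most one emergency read per instance per interval.
package discovery

import (
	"sync"
	"time"
)

var (
	emergencyReadThrottle   = map[string]time.Time{} // instance key -> last read time
	emergencyReadThrottleMu sync.Mutex
)

// shouldEmergentlyRead reports whether an emergency read for instanceKey is
// allowed now, i.e. no read has run for that instance within the last interval.
func shouldEmergentlyRead(instanceKey string, interval time.Duration) bool {
	emergencyReadThrottleMu.Lock()
	defer emergencyReadThrottleMu.Unlock()

	if last, found := emergencyReadThrottle[instanceKey]; found && time.Since(last) < interval {
		return false // a read ran recently; skip to avoid spamming the instance
	}
	emergencyReadThrottle[instanceKey] = time.Now()
	return true
}

With such a check in place, emergentlyReadTopologyInstance() would consult it before issuing the discovery read, so repeated failure signals for the same instance collapse into a single read per MySQLDiscoveryReadTimeoutSeconds.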

@@ -45,6 +45,8 @@ func (applier *CommandApplier) ApplyCommand(op string, value []byte) interface{}
 		return applier.registerNode(value)
 	case "discover":
 		return applier.discover(value)
+	case "async-discover":
+		return applier.asyncDiscover(value)
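
For context on the new raft command above, a minimal sketch of why an async variant matters: a synchronous discover probes the instance while applying the raft log entry and can hang on an unreachable host, whereas an async variant only enqueues the request. The InstanceKey shape, channel, and handler below are assumptions, not the PR's actual implementation:

// Sketch only: keep the raft apply loop fast by handing discovery requests to
// a background worker via a buffered channel instead of probing inline.
package discovery

type InstanceKey struct {
	Hostname string
	Port     int
}

// discoveryRequests is an assumed queue drained by background discovery workers.
var discoveryRequests = make(chan InstanceKey, 1000)

// asyncDiscover enqueues the instance and returns immediately, so applying the
// raft log entry never blocks on a slow or hung MySQL instance.
func asyncDiscover(key InstanceKey) error {
	select {
	case discoveryRequests <- key:
	default:
		// Queue full: drop the request rather than block the raft apply loop.
	}
	return nil
}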
Contributor
@shlomi-noach should publishDiscoverMasters also use async-discover? A synchronous discover can slow down the raft log apply.
cc @yangeagle
