
multi: fix inconsistent state in gossip syncer #9424

Merged

merged 6 commits into lightningnetwork:master from fix-gossip-ann on Jan 23, 2025

Conversation

yyforyongyu (Member) commented:

Fixes a long-standing issue that also shows up intermittently in our itest:

     --- FAIL: TestLightningNetworkDaemon/tranche09/156-of-269/bitcoind/receiver_blinded_error (57.25s)
        harness_node.go:403: Starting node (name=Alice) with PID=14745
        harness_node.go:403: Starting node (name=Bob) with PID=14777
        harness_node.go:403: Starting node (name=Carol) with PID=14805
        harness_node.go:403: Starting node (name=Dave) with PID=14878
        harness_assertion.go:1879: 
            	Error Trace:	/home/runner/work/lnd/lnd/lntest/harness_assertion.go:1879
            	            				/home/runner/work/lnd/lnd/lntest/harness_assertion.go:1937
            	            				/home/runner/work/lnd/lnd/lntest/harness.go:2353
            	            				/home/runner/work/lnd/lnd/lntest/harness.go:2219
            	            				/home/runner/work/lnd/lnd/itest/lnd_route_blinding_test.go:373
            	            				/home/runner/work/lnd/lnd/itest/lnd_route_blinding_test.go:631
            	            				/home/runner/work/lnd/lnd/lntest/harness.go:297
            	            				/home/runner/work/lnd/lnd/itest/lnd_test.go:130
            	Error:      	Received unexpected error:
            	            	channel 90033bd9f64e9aeff15f8730927a56b7192c3d835ddfa6b0ad9fbd7fff2ec131:0 not found in graph: rpc error: code = Unknown desc = edge not found: op=90033bd9f64e9aeff15f8730927a56b7192c3d835ddfa6b0ad9fbd7fff2ec131:0
            	Test:       	TestLightningNetworkDaemon/tranche09/156-of-269/bitcoind/receiver_blinded_error
            	Messages:   	Dave: timeout finding channel in graph

The Issue

When we receive a query reply from the remote peer, we require the syncer to be in a waiting-for-reply state:

lnd/discovery/syncer.go

Lines 1515 to 1524 in e0a920a

// Reply messages should only be expected in states where we're waiting
// for a reply.
case *lnwire.ReplyChannelRange, *lnwire.ReplyShortChanIDsEnd:
	syncState := g.syncState()
	if syncState != waitingQueryRangeReply &&
		syncState != waitingQueryChanReply {

		return fmt.Errorf("received unexpected query reply "+
			"message %T", msg)
	}

And this state is only set once we finish sending the message to the peer:

lnd/discovery/syncer.go

Lines 509 to 518 in e0a920a

err = g.cfg.sendToPeer(queryRangeMsg)
if err != nil {
	log.Errorf("Unable to send chan range "+
		"query: %v", err)
	return
}

// With the message sent successfully, we'll transition
// into the next state where we wait for their reply.
g.setSyncState(waitingQueryRangeReply)

However, we may receive the reply from the remote peer before we finish setting the state above. The consequence is that we never leave the initial historical sync stage, which causes the node to refuse to propagate channel announcements to its peers.
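To make the window concrete, here is a minimal, self-contained Go sketch of the ordering bug (illustrative only, not lnd code; the state constants, channel, and sleeps are stand-ins for the syncer's state machine and network timing):

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

const (
	syncingChans int32 = iota
	waitingQueryRangeReply
)

func main() {
	var state atomic.Int32 // zero value is syncingChans

	querySent := make(chan struct{})

	// The "remote peer": it may reply the instant our query arrives.
	go func() {
		<-querySent

		// The reply handler rejects replies unless we're waiting.
		if state.Load() != waitingQueryRangeReply {
			fmt.Println("reply rejected: unexpected state")
		}
	}()

	// The syncer sends the query first...
	querySent <- struct{}{} // stands in for sendToPeer(queryRangeMsg)

	// ...and only then transitions its state. A fast reply can land
	// in this window and be rejected.
	time.Sleep(time.Millisecond)
	state.Store(waitingQueryRangeReply) // setSyncState(...) runs too late

	time.Sleep(time.Millisecond)
}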

We barely notice this because the error is ignored here (and the error returned from ProcessRemoteAnnouncement is not processed anyway... more future PRs):

lnd/discovery/gossiper.go

Lines 831 to 836 in e0a920a

// If we've found the message target, then we'll dispatch the
// message directly to it.
syncer.ProcessQueryMsg(m, peer.QuitSignal())

errChan <- nil
return errChan

The Fix

Quite a few debug logs have been added to help catch the bug. The fix is to hold a lock across the state change when sending messages to the remote peer, so that send msg -> update state is now atomic.
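A rough sketch of the shape of that fix (my reading of the idea, not the actual lnd diff; the toySyncer type, its fields, and the string messages are assumptions for illustration): one mutex guards both the send-plus-transition pair and the reply handler's state check.

package sketch

import (
	"fmt"
	"sync"
)

type syncerState int

const (
	syncingChans syncerState = iota
	waitingQueryRangeReply
)

// toySyncer demonstrates the fix: the same mutex serializes the
// send+transition pair against the reply handler's state check.
type toySyncer struct {
	mu    sync.Mutex
	state syncerState

	// sendToPeer is assumed to be non-blocking (like SendMessageLazy),
	// so holding the mutex across the call is cheap.
	sendToPeer func(msg string) error
}

// sendQueryAndTransition sends the query and enters the waiting state as
// one atomic step.
func (s *toySyncer) sendQueryAndTransition() error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if err := s.sendToPeer("QueryChannelRange"); err != nil {
		return fmt.Errorf("unable to send chan range query: %w", err)
	}

	// No reply can be checked between the send above and this state
	// change, because handleReply takes the same lock.
	s.state = waitingQueryRangeReply

	return nil
}

// handleReply rejects replies unless we're waiting for one.
func (s *toySyncer) handleReply(msg string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.state != waitingQueryRangeReply {
		return fmt.Errorf("received unexpected query reply %v", msg)
	}

	// ...collate the reply and eventually mark the graph synced...
	return nil
}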

This commit adds more logs around the ChannelUpdate->edge policy processing flow.

From staticcheck: QF1002 - Convert untagged switch to tagged switch.

This is a pure refactor to add a dedicated handler for when the syncer is in state syncingChans.

This commit fixes the following race:
1. syncer (state=syncingChans) sends QueryChannelRange.
2. remote peer replies with ReplyChannelRange.
3. ProcessQueryMsg fails to process the remote peer's msg, as the syncer's state is neither waitingQueryChanReply nor waitingQueryRangeReply.
4. syncer marks its new state as waitingQueryChanReply, but too late.

The historical sync will now fail, and the syncer will be stuck in this state. What's worse, it can no longer forward channel announcements to other connected peers, as it will skip broadcasting during the initial graph sync.

This is now fixed by making sure the following two steps are atomic:
1. syncer (state=syncingChans) sends QueryChannelRange.
2. syncer marks its new state waitingQueryChanReply.

The mocked peer used here blocks on `sendToPeer`, which is not the behavior of `SendMessageLazy` on `lnpeer.Peer`. To reflect reality, we now make sure `sendToPeer` is non-blocking in the tests.
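For illustration, a non-blocking stub might look like the following sketch (assumed shape only; the variadic func(...lnwire.Message) error signature and the buffer size are assumptions, not lnd's actual test code):

// msgChan collects outbound messages so the test can assert on them
// later, while the stub itself never blocks the syncer under test. This
// mirrors the fire-and-forget behavior of SendMessageLazy.
msgChan := make(chan lnwire.Message, 100)

sendToPeer := func(msgs ...lnwire.Message) error {
	for _, msg := range msgs {
		select {
		case msgChan <- msg:
		default:
			// Drop rather than block if the test isn't
			// draining the channel fast enough.
		}
	}

	return nil
}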
yyforyongyu added labels bug fix, gossip, size/micro (small bug fix or feature, less than 15 mins of review, less than 250) on Jan 17, 2025
ziggie1984 self-requested a review on January 17, 2025.
ziggie1984 (Collaborator) left a comment:

LGTM, nice catch

@@ -832,9 +832,13 @@ func (d *AuthenticatedGossiper) ProcessRemoteAnnouncement(msg lnwire.Message,

 	// If we've found the message target, then we'll dispatch the
 	// message directly to it.
-	syncer.ProcessQueryMsg(m, peer.QuitSignal())
+	err := syncer.ProcessQueryMsg(m, peer.QuitSignal())
ziggie1984 (Collaborator) commented:

Could you explain why the syncer remaining in waitingQueryChanReply will cause the channel announcements received directly from the peer to not be relayed?

yyforyongyu (Member, Author) replied:

Yeah, it's very intertwined... so we skip broadcasting channel anns from our peers here:

lnd/discovery/gossiper.go

Lines 1574 to 1580 in e0a920a

if newAnns != nil && shouldBroadcast {
	// TODO(roasbeef): exclude peer that sent.
	deDuped.AddMsgs(newAnns...)
} else if newAnns != nil {
	log.Trace("Skipping broadcast of announcements received " +
		"during initial graph sync")
}

and shouldBroadcast is determined here:

lnd/discovery/gossiper.go

Lines 1533 to 1535 in e0a920a

// We should only broadcast this message forward if it originated from
// us or it wasn't received as part of our initial historical sync.
shouldBroadcast := !nMsg.isRemote || d.syncMgr.IsGraphSynced()

which relies on this method:

// IsGraphSynced determines whether we've completed our initial historical
// sync. The initial historical sync is done to ensure we've ingested as
// much of the public graph as possible.
func (m *SyncManager) IsGraphSynced() bool {
	return atomic.LoadInt32(&m.initialHistoricalSyncCompleted) == 1
}

and the initialHistoricalSyncCompleted var is marked via:

// markGraphSynced allows us to report that the initial historical sync has
// completed.
func (m *SyncManager) markGraphSynced() {
	atomic.StoreInt32(&m.initialHistoricalSyncCompleted, 1)
}

and the above method is called inside processChanRangeReply here:

lnd/discovery/syncer.go

Lines 951 to 954 in e0a920a

// Ensure that the sync manager becomes aware that the
// historical sync completed so synced_to_graph is updated over
// rpc.
g.cfg.markGraphSynced()

and processChanRangeReply is called here:

lnd/discovery/syncer.go

Lines 528 to 537 in e0a920a

select {
case msg := <-g.gossipMsgs:
	// The remote peer is sending a response to our
	// initial query, we'll collate this response,
	// and see if it's the final one in the series.
	// If so, we can then transition to querying
	// for the new channels.
	queryReply, ok := msg.(*lnwire.ReplyChannelRange)
	if ok {
		err := g.processChanRangeReply(queryReply)

Note that it will be blocked on `case msg := <-g.gossipMsgs:`, as the msg is never sent to that channel here:

lnd/discovery/syncer.go

Lines 1515 to 1532 in e0a920a

// Reply messages should only be expected in states where we're waiting
// for a reply.
case *lnwire.ReplyChannelRange, *lnwire.ReplyShortChanIDsEnd:
	syncState := g.syncState()
	if syncState != waitingQueryRangeReply &&
		syncState != waitingQueryChanReply {

		return fmt.Errorf("received unexpected query reply "+
			"message %T", msg)
	}

	msgChan = g.gossipMsgs

default:
	msgChan = g.gossipMsgs
}

select {
case msgChan <- msg:

The `case *lnwire.ReplyChannelRange, *lnwire.ReplyShortChanIDsEnd:` above errors out because the syncer's state has not yet been updated to waitingQueryRangeReply.

ziggie1984 (Collaborator) replied:

Thanks for the detailed explanation, agree it's really nested.

docs/release-notes/release-notes-0.19.0.md (review thread resolved)
ellemouton (Collaborator) left a comment:

very nice investigation! 🔥 thanks for the detailed description

@@ -2430,7 +2433,8 @@ func (d *AuthenticatedGossiper) handleNodeAnnouncement(nMsg *networkMsg,

 	// TODO(roasbeef): get rid of the above

+	log.Debugf("Processed NodeAnnouncement: peer=%v, timestamp=%v, "+
ellemouton (Collaborator) commented:

I'll try to get around to updating the gossiper to use structured logging. That way we only need to add all this to the context once and can then just log.DebugS(ctx, "Processed NodeAnnouncement").

guggero merged commit 49affa2 into lightningnetwork:master on Jan 23, 2025, with 35 checks passed.

yyforyongyu deleted the fix-gossip-ann branch on January 23, 2025.