Add incremental ChannelManager persistence #4334
Draft
joostjager wants to merge 3 commits into lightningdevkit:main from
Conversation
Introduce ChannelManagerData<SP> as an intermediate DTO that holds all deserialized data from a ChannelManager before validation. This splits the read implementation into:

1. Stage 1: Pure deserialization into ChannelManagerData
2. Stage 2: Validation and reconstruction using the DTO

The existing validation and reconstruction logic remains unchanged; only the deserialization portion was extracted into the DTO's ReadableArgs implementation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Extract the stage 2 validation and reconstruction logic from the ReadableArgs implementation into a standalone pub(crate) function. This enables reuse of the ChannelManager construction logic from deserialized data.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
594c6a1 to 0d89953
This implements incremental persistence for ChannelManager, enabling more efficient persistence for nodes with many channels by only writing peer states that have changed since the last persist.

Key changes:

- Add `write_update` method to ChannelManager that writes only dirty peer states while always including global state (forward_htlcs, claimable_payments, pending_events, etc.)
- Track latest update_id via AtomicU64, serialized in TLV field 23
- Use byte comparison against `last_persisted_peer_bytes` to detect which peers have changed
- Add `apply_update` method to ChannelManagerData for merging incremental updates during recovery
- Extract `channel_manager_from_data` as a crate-public function for use by the persistence utilities
- Update background processor to persist incremental updates instead of full ChannelManager, with periodic consolidation (every 100 updates in production, 5 in tests)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
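A rough sketch of the byte-comparison dirty detection and the consolidation counter. Everything here is hypothetical except the field name `last_persisted_peer_bytes`, which comes from the PR description; real peer state is serialized by LDK's own encoding, not the raw byte maps used below.

```rust
use std::collections::HashMap;

// Illustrative threshold mirroring the "every 100 updates" consolidation.
const CONSOLIDATION_INTERVAL: u64 = 100;

struct IncrementalPersister {
	// peer id -> bytes last written for that peer
	last_persisted_peer_bytes: HashMap<u64, Vec<u8>>,
	updates_since_full_write: u64,
}

impl IncrementalPersister {
	fn new() -> Self {
		Self { last_persisted_peer_bytes: HashMap::new(), updates_since_full_write: 0 }
	}

	// Returns the peers whose serialized state differs from what was last
	// persisted; global state would always be written alongside them.
	fn dirty_peers(&mut self, current: &HashMap<u64, Vec<u8>>) -> Vec<u64> {
		let mut dirty = Vec::new();
		for (peer, bytes) in current {
			if self.last_persisted_peer_bytes.get(peer) != Some(bytes) {
				dirty.push(*peer);
				self.last_persisted_peer_bytes.insert(*peer, bytes.clone());
			}
		}
		dirty.sort_unstable();
		self.updates_since_full_write += 1;
		dirty
	}

	// After enough incremental updates, consolidate into a full write so
	// recovery does not have to replay an unbounded chain of updates.
	fn should_consolidate(&self) -> bool {
		self.updates_since_full_write >= CONSOLIDATION_INTERVAL
	}
}
```

Comparing whole serialized byte strings is what makes the scheme "correct by construction": any field change, present or future, shows up as a byte difference without per-field tracking.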
0d89953 to 986093a
Implements incremental persistence for ChannelManager, enabling efficient persistence for nodes with many channels by only writing peer states that have changed since the last persist.

In a test with 2 channels across 2 peers, incremental updates are 47% smaller when one peer changed (3.8KB vs 7.2KB full) and 96% smaller when no peers changed (307 bytes vs 7.2KB). The savings scale with the number of unchanged peers.
Changes
- `write_update` method that writes only changed peer states while always including global state
- `apply_update` for merging incremental updates during recovery via `read_manager_with_updates`

Design Decisions
Same serialization format for updates: Incremental updates use the exact same format as a full ChannelManager write; they just include fewer peers. This avoids inventing a new format and reuses all existing serialization code. Updates are partial snapshots that get merged on recovery.

Binary comparison for change detection: Each peer's state is serialized and compared to what was last persisted (stored in `last_persisted_peer_bytes`). Alternatives were considered, but binary comparison is simple, correct by construction, and requires no maintenance as new code is added.
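The merge-on-recovery step could look roughly like this. It is a hypothetical sketch: in the PR, `apply_update` operates on `ChannelManagerData` and updates also carry the global state, whereas here the per-peer state is reduced to a plain byte map.

```rust
use std::collections::HashMap;

// Merge one partial snapshot into the last full snapshot. An update
// contains only the peers that changed, in the same format as a full
// write, so merging simply overwrites those peers' entries. Updates
// are applied oldest-first so the newest state wins.
fn apply_update(
	snapshot: &mut HashMap<u64, Vec<u8>>, // peer id -> serialized peer state
	update: &HashMap<u64, Vec<u8>>,       // partial snapshot: changed peers only
) {
	for (peer, bytes) in update {
		snapshot.insert(*peer, bytes.clone());
	}
}
```

Because an update is just a smaller instance of the full format, recovery can deserialize each one with the ordinary read path before merging, rather than needing a separate update decoder.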