I propose that the cli:fetch utility, and by extension the fetchRemarks function, be refactored so that the dump file is written to periodically instead of waiting until every remark has been fetched. The cli:fetch tool keeps all fetched remarks in memory until it reaches the last block number (lastBlock). This is problematic when needing to catch up on a million blocks, for example.
Problem
I receive this error message after processing only about 160,000 blocks:
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Temporary solution
I have found that increasing Node.js's heap size helps. However, if anything goes wrong before the single write at the end, nothing is saved and I have to retry the entire fetch.
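For reference, this is roughly how I have been raising the heap limit. The 8192 MB value and the exact cli:fetch invocation are only illustrative; adjust both to your setup:

```sh
# Raise the V8 heap limit before running the fetch (value is illustrative).
# The exact cli:fetch invocation depends on how the package scripts are wired up.
NODE_OPTIONS="--max-old-space-size=8192" yarn cli:fetch
```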
Code update options
Two options come to mind:
1. Process and save after each remark is fetched from the Kusama blockchain.
2. Process and save remarks in batches. A callback could be executed after a given number of remarks have been fetched from the Kusama blockchain, so that those remarks are processed and saved in batches.
Option 2 may perform better since there would be less disk I/O. However, I am not certain whether the difference would be significant compared to the Kusama blockchain request time. I suggest assuming that batching the remarks does improve performance and implementing the update using option 2.
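Sketched below is one way option 2 could look. This is a rough sketch, not the current fetchRemarks signature: the Remark shape, the fetchWindow helper, the batch size, and the dump file name are hypothetical placeholders. The idea is simply to fetch a fixed window of blocks at a time, hand the resulting remarks to a callback, and append them to the dump file so memory use stays bounded and a crash only loses the current batch.

```ts
import { appendFile } from "fs/promises";

// Hypothetical shape of a fetched remark; the real type lives in rmrk-tools.
interface Remark {
  block: number;
  remark: string;
}

// Hypothetical helper: fetch one window of blocks and return its remarks.
type FetchWindow = (from: number, to: number) => Promise<Remark[]>;

async function fetchRemarksInBatches(
  fetchWindow: FetchWindow,
  firstBlock: number,
  lastBlock: number,
  batchSize: number,
  onBatch: (remarks: Remark[]) => Promise<void>
): Promise<void> {
  for (let from = firstBlock; from <= lastBlock; from += batchSize) {
    const to = Math.min(from + batchSize - 1, lastBlock);
    const remarks = await fetchWindow(from, to);
    // Hand the batch off for processing/saving, then let it go out of scope.
    await onBatch(remarks);
  }
}

// Example callback: append each batch to the dump file as JSON lines,
// so a crash mid-fetch only loses the current batch.
const appendBatch = async (remarks: Remark[]) => {
  const lines = remarks.map((r) => JSON.stringify(r)).join("\n");
  if (lines.length > 0) {
    await appendFile("remarks-dump.jsonl", lines + "\n");
  }
};
```

Appending JSON lines rather than rewriting one large JSON array keeps each write cheap; the trade-off is a small post-processing step if the dump needs to be read back as a single document.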
Affected files