Due to the way Nanocompore passes data between threads, when a transcript has a large amount of data associated with it (e.g. because the transcript has very high coverage or is very long), Nanocompore can get stuck trying to push the data through the queue.

We are working on a permanent solution. As a temporary workaround, if Nanocompore hangs indefinitely we recommend re-running it with `--downsample-high-coverage` set to a smaller value, e.g. 5000.
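For intuition, capping coverage simply means randomly subsampling the reads for a transcript down to a fixed maximum before they are queued. The sketch below is illustrative only: `downsample_reads` is a hypothetical helper, not Nanocompore's actual implementation of `--downsample-high-coverage`.

```python
import random

def downsample_reads(reads, max_coverage, seed=42):
    """Randomly subsample reads so at most `max_coverage` remain.

    Illustrative only: mimics the effect of a coverage cap like
    --downsample-high-coverage, reducing how much data per transcript
    must be pushed through the inter-worker queue.
    """
    reads = list(reads)
    if len(reads) <= max_coverage:
        return reads
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(reads, max_coverage)

# A transcript with very high coverage (20,000 reads) is capped at 5,000.
reads = [f"read_{i}" for i in range(20000)]
subset = downsample_reads(reads, 5000)
print(len(subset))  # 5000
```

Subsampling uniformly at random keeps the two conditions comparable while bounding the per-transcript payload size.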
On Nov 27, 2020, tleonardi changed the title from "Error processing large amounts of data in a single transcript" to "Nanocompore gets stuck due to large amounts of data for a single transcript".