
Nanocompore gets stuck due to large amounts of data for a single transcript #163

Open
tleonardi opened this issue Nov 27, 2020 · 0 comments
Labels: bug (Something isn't working), Work in progress (Someone is working on the issue)

tleonardi commented Nov 27, 2020

Due to the way Nanocompore passes data between workers, when a transcript has a large amount of data associated with it (e.g. because it has very high coverage or is very long), Nanocompore gets stuck trying to push that data through the queue.

We are working on a permanent solution. In the meantime, as a temporary fix when Nanocompore gets stuck indefinitely, we recommend re-running it with `--downsample-high-coverage` set to a smaller value, e.g. 5000.
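The effect of that flag can be sketched as capping per-transcript coverage by random sampling. This is a minimal illustration of the idea only; the function and parameter names below are hypothetical, not Nanocompore's API:

```python
import random

def downsample_reads(reads, max_coverage=5000, seed=42):
    """Cap the number of reads kept for one transcript.

    Transcripts at or below the cap are returned unchanged; transcripts
    above it are randomly subsampled down to `max_coverage` reads, so the
    payload pushed through the queue stays bounded.
    """
    if len(reads) <= max_coverage:
        return list(reads)
    rng = random.Random(seed)  # fixed seed for reproducible runs
    return rng.sample(reads, max_coverage)

# Example: a very high-coverage transcript with 20,000 reads.
reads = list(range(20_000))
kept = downsample_reads(reads)
print(len(kept))
```

A fixed random sample keeps the statistical comparison representative while bounding memory and queue traffic, which is why lowering the cap is a reasonable stopgap.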

@tleonardi tleonardi added bug Something isn't working Work in progress Someone is working on the issue labels Nov 27, 2020
@tleonardi tleonardi pinned this issue Nov 27, 2020
@tleonardi tleonardi changed the title Error processing large amounts of data in a single transcript Nanocompore gets stuck due to large amounts of data for a single transcript Nov 27, 2020
tleonardi added a commit that referenced this issue Nov 27, 2020
tleonardi added a commit that referenced this issue Nov 27, 2020
tleonardi added a commit that referenced this issue Dec 8, 2020
@a-slide a-slide unpinned this issue Dec 28, 2022