
GPU memory error #29

Open
malkaddour opened this issue Jul 15, 2020 · 1 comment

Hello, I've been able to set up the entire BP4D dataset for training using the correct txt files. However, a memory error always comes up after preprocessing. My GPU memory is capped at 8.1 GB, and even when I decrease the batch size to 4, it yields the same error.
With the batch size set to 4, the output is:
"RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 7.93 GiB total capacity; 7.22 GiB already allocated; 14.06 MiB free; 7.42 GiB reserved in total by PyTorch)".

I receive the same error even when I set the batch size to 1, and monitoring my GPU RAM shows it climbing close to its capacity before the error occurs.
I also tried running without a GPU on a server with 126 GB of RAM; RAM usage steadily increased until the training process was killed.
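Steadily growing memory at a fixed batch size (on both GPU and CPU) is often caused by accumulating the loss tensor itself across iterations, which keeps every step's autograd graph alive. I can't see this repository's training loop, so this is only a minimal sketch of that failure mode and its usual fix, using a hypothetical toy model:

```python
import torch
import torch.nn as nn

# Hypothetical minimal training loop (not this repo's actual code),
# illustrating a common cause of ever-growing memory: summing `loss`
# as a tensor retains each iteration's autograd graph, while summing
# `loss.item()` frees the graph after every step.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

running_loss = 0.0  # plain Python float, not a tensor
for step in range(5):
    inputs = torch.randn(4, 10)            # dummy batch (batch size 4)
    targets = torch.randint(0, 2, (4,))    # dummy labels

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # BAD:  running_loss += loss        -> graph retained every step
    # GOOD: running_loss += loss.item() -> graph freed immediately
    running_loss += loss.item()

print(running_loss / 5)
```

If the repo's loop logs or averages `loss` without `.item()` or `.detach()`, that alone would explain memory growing until the process is killed, independent of batch size.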

Is there something in the training scheme that I should change to prevent this? Many thanks in advance for taking the time to read this.
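Another frequent contributor to OOM in training scripts is running validation or logging forward passes with autograd still enabled, so activations are kept for a backward pass that never happens. Again as a hedged sketch against a hypothetical model, not this repo's code, the standard mitigation is `torch.no_grad()`:

```python
import torch
import torch.nn as nn

# Hypothetical model; the point is only the no_grad context.
model = nn.Linear(10, 2)
x = torch.randn(4, 10)

# Without no_grad, this forward pass would build an autograd graph
# and hold intermediate activations in memory; with it, PyTorch
# skips graph construction entirely.
with torch.no_grad():
    out = model(x)
```

If memory still climbs with batch size 1 even with these fixes, the leak is likely elsewhere (e.g. a growing Python list of tensors), and `torch.cuda.memory_allocated()` can help localize it.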

MStumpp commented Oct 8, 2020

Did you figure this out? Same problem.
