CUDA out of memory issue. #119
Comments
Exact same problem here. My point clouds have 60k points each, fewer than the typical point cloud in the 3DMatch dataset, so it cannot simply be a matter of the cloud being too large. The end of the error stack says:
I will try the proposed solution above, but I suspect something else is going on, since everything works for the larger clouds in the provided datasets.
This solution seems better, so I will try it first:
Well, instead of adjusting the parameter, I scaled the two clouds down so that each is about 5 units in size, which has the same effect of reducing the number of superpoints. The memory problem disappeared, but the alignment is way off; the objects do not even touch each other. I will continue to investigate. Any hint is welcome.
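One thing worth checking (not confirmed in this thread): if both clouds are uniformly scaled before registration, the estimated rotation is still valid in the original units, but the estimated translation is expressed in the scaled units and must be divided by the scale factor before the transform is applied to the unscaled clouds. A minimal sketch, assuming the clouds are (N, 3) NumPy arrays and a hypothetical register_fn that returns a 4x4 rigid transform:

```python
import numpy as np

def scale_register_rescale(src, ref, scale, register_fn):
    """Register two point clouds after uniform scaling, then map the
    estimated rigid transform back to the original units.

    src, ref   : (N, 3) float arrays in the original units
    scale      : uniform factor applied to both clouds (e.g. to shrink
                 each cloud to ~5 units in size)
    register_fn: hypothetical callable that aligns the scaled source to
                 the scaled reference and returns a 4x4 transform
    """
    # Uniform scaling about the origin keeps the shapes identical; it
    # only shrinks distances (and hence the number of superpoints).
    T_scaled = register_fn(src * scale, ref * scale)

    # For a rigid transform (R, t) estimated in scaled coordinates:
    #   R (s * p) + t ~ s * q   =>   R p + t / s ~ q
    # so the rotation carries over unchanged and the translation must
    # be divided by the scale factor.
    T = T_scaled.copy()
    T[:3, 3] /= scale
    return T
```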
Hi, I'm trying to run the 3DMatch demo.py on my own point clouds (around 100K points in each after downsampling to 0.025 m), but I get:
RuntimeError: CUDA out of memory. Tried to allocate 8.38 GiB (GPU 0; 47.43 GiB total capacity; 42.66 GiB already allocated; 2.91 GiB free; 43.22 GiB reserved in total by PyTorch)
I am running it on an A6000 GPU.
Thanks in advance.
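One common way to get under the memory limit (not specific to this repository) is to downsample more aggressively before running the demo: a coarser voxel grid cuts the number of points and, with it, the number of superpoints and the peak GPU memory. A minimal sketch, assuming Open3D is installed; the file names and the 0.05 m voxel size are placeholders:

```python
import numpy as np
import open3d as o3d

def load_downsampled(path, voxel_size=0.05):
    # Read the cloud and downsample it on a coarser voxel grid;
    # going from 0.025 m to 0.05 m roughly quarters the point count.
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel_size)
    pts = np.asarray(pcd.points, dtype=np.float32)
    print(f"{path}: {pts.shape[0]} points after {voxel_size} m downsampling")
    return pts

# Hypothetical input files; substitute your own clouds.
src = load_downsampled("src.ply")
ref = load_downsampled("ref.ply")
```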