Cannot use GPU on TensorFlow and display the game at the same time? #466
Comments
Hmm, I have not had trouble using CUDA-based libraries (Theano, TensorFlow, PyTorch) with ViZDoom like this in the past. I do not think it should lock out the GPU this way. Could you include the full code to replicate the issue so I can try it on my Linux machines, as well as the TensorFlow version you are using?
Same here, never had problems running it on GPU with Theano, TF1, or PyTorch. If I am not mistaken, TensorFlow is kind of greedy when it comes to reserving memory, but it has never caused this kind of behavior for me. Are you using TF 2?
@mihahauke TF "fixed" that issue in the latest 1.x and in 2.x versions, granted I have not tried TF2 with ViZDoom on Linux yet.
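Since TensorFlow's up-front GPU memory reservation came up: in TF 2.x you can ask for on-demand allocation instead. `tf.config.list_physical_devices` and `tf.config.experimental.set_memory_growth` are real TF 2.x APIs, but whether this interacts with the SDL error at all is an assumption to verify. The helper below takes the module as an argument only so it can be exercised without a GPU; that parameterization is illustrative, not standard practice.

```python
def enable_memory_growth(tf):
    """Ask TensorFlow to allocate GPU memory on demand instead of all at once.

    Must be called before any GPU op runs; returns the names of the
    physical GPU devices that were configured.
    """
    gpus = tf.config.list_physical_devices("GPU")
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    return [gpu.name for gpu in gpus]

# Assumed call site: enable_memory_growth(tensorflow) right after
# `import tensorflow`, before building any model.
```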
@Miffyli Good to know. Though I moved to PyTorch after TF 1.12.
@mihahauke Yes, I'm using TF 2.3.1. @Miffyli Here's the full code: if the tensorflow import is commented out like this, it works, but if I load it then I get the error.
I was able to run the code with and without the tensorflow import as expected (Python 3.8.5 [conda env], Ubuntu 20.04 [KDE Neon], TensorFlow 2.3.1 [tried with and without GPU], ViZDoom from current …)
Damn it... Do you have any idea what I can search for?
Some random suggestions:
Other than this I do not know what to look for as the error seems rather arbitrary (it should not happen). |
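One concrete thing to look for: "No available video device" is SDL's generic complaint when it cannot reach a display server. A stdlib-only heuristic check of the environment can rule out the common causes; the variable names (`SDL_VIDEODRIVER`, `DISPLAY`, `WAYLAND_DISPLAY`) are standard SDL/X11/Wayland ones, but the function itself is only a diagnostic sketch, not part of ViZDoom's API.

```python
import os

def diagnose_sdl_video_env(environ=None):
    """Heuristically explain why SDL might report 'No available video device'."""
    env = os.environ if environ is None else environ
    if env.get("SDL_VIDEODRIVER") == "dummy":
        return "SDL_VIDEODRIVER=dummy explicitly disables video output"
    if not env.get("DISPLAY") and not env.get("WAYLAND_DISPLAY"):
        return "no DISPLAY or WAYLAND_DISPLAY set (headless shell, SSH, tmux?)"
    return "video environment looks OK"
```

Running this in the exact shell (or notebook kernel) that fails can reveal, for example, a session where `DISPLAY` was never inherited.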
Are you running the test sample on the server? |
Is this normal behavior?
When I start the environment and display it, it works fine at first.
But as soon as I load tensorflow (which is using my GPU) I get the error:
"ViZDoomErrorException: Could not initialize SDL video:
No available video device"
So I'm guessing that tensorflow is blocking access to my GPU or something? Sorry, I'm really new to using GPUs for neural networks and I don't know if this behavior is intended or if I'm doing something wrong.
I'm using Python 3.8.3 on Linux Mint 20.
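The described failure is order-dependent: game init works until TensorFlow is loaded first. A skeleton of that sequence is below; the config name and the factory structure are assumptions based on the thread, not the poster's actual script, and the game object is injectable only so the control flow can be exercised without ViZDoom installed.

```python
def run_after_tf(game_factory, import_tf=True):
    """Mirror the reported ordering: load TensorFlow, then init the game."""
    if import_tf:
        import tensorflow  # noqa: F401 -- the step that reportedly breaks SDL init
    game = game_factory()
    game.init()   # raises ViZDoomErrorException in the reported setup
    game.close()
    return "initialized"

# With the real library (assumed usage):
#   import vizdoom as vzd
#   def factory():
#       g = vzd.DoomGame()
#       g.load_config("basic.cfg")    # assumed scenario config
#       g.set_window_visible(True)    # SDL window only needed when visible
#       return g
#   run_after_tf(factory)
```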