Defaulting to Running on CPU? #2
If you saw `safe_open(pth, framework="pt", device="cpu")`, that's because I load the weights into RAM (CPU) before moving them to the GPU/VRAM (CUDA). In theory it runs on the GPU if you have one.
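The load-on-CPU-then-move pattern the maintainer describes can be sketched roughly like this (a minimal illustration with a stand-in tensor dict, not the extension's actual code):

```python
import torch

# Stand-in for weights loaded from a safetensors file with device="cpu":
# they first live in CPU RAM.
state = {"weight": torch.zeros(4, 4)}

# Then move everything to the GPU only if CUDA is actually available.
device = "cuda" if torch.cuda.is_available() else "cpu"
state = {k: v.to(device) for k, v in state.items()}
```

On a machine with a working CUDA setup, `state["weight"].device.type` ends up as `"cuda"`; otherwise it silently stays `"cpu"`, which is exactly the fallback being debugged in this thread.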
On my GPU it does ~500 it / 30 s. If you're getting 1 it / 30 s, it's probably running on the CPU. The 30 seconds only applies to the loop that enhances the image; it doesn't include loading the model, and it doesn't interrupt the current iteration.
Printing out `torch.cuda.is_available()` at line 19 as you indicated outputs "True" at startup. I disabled 'CPU' as an option and just forced 'GPU' without the torch test. Results: generating an image I get 1.3 it/s, but running enhance on a 512x512 image yields ~14 s/it (note the inverted unit), and there's no visible change in GPU usage. I'm not sure what's happening.
10 s/it means it's running on the CPU; your CPU is probably more active when you click "enhance". You can try to force the GPU by editing these two lines: https://github.com/Whiax/sd-webui-nulienhance/blob/main/enhance_image.py#L19 and https://github.com/Whiax/sd-webui-nulienhance/blob/main/scripts/nul_image_enhancer.py#L50. Are you using an NVIDIA GPU? If you're using an Apple GPU, you can try replacing "cuda"/"cpu" with "mps".
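The suggested edit presumably replaces an automatic device check with a hard-coded device. A hedged sketch of what the before/after might look like (the exact lines in the two linked files may differ):

```python
import torch

# Automatic selection: falls back to CPU when CUDA isn't detected,
# which is the silent-fallback behavior being debugged here.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Forced selection, the maintainer's suggested experiment.
# On Apple Silicon, "mps" would be used instead of "cuda".
forced_device = "cuda"
```

Forcing `"cuda"` makes PyTorch raise an error immediately if the GPU genuinely can't be used, which is more informative than a quiet fallback to CPU.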
I edited both lines, forcing use of the GPU. Now when I run it, VRAM usage does increase (though it doesn't seem to be released afterwards; it also doesn't block subsequent SD usage, so maybe it doesn't matter), and CPU usage noticeably increases. However, the speed in s/it jumps all over the place (3-9 s/it) but is still the same order of magnitude as before. It's an NVIDIA GPU, running on Windows.
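On the VRAM-not-released observation: PyTorch's CUDA allocator caches freed memory rather than returning it to the driver, so VRAM can look "leaked" in a resource monitor even when it's reusable. A hypothetical cleanup helper (not part of the extension) would look like:

```python
import gc
import torch

def release_model(model):
    """Drop the last reference to a model, then return cached VRAM to the driver.

    Hypothetical helper for illustration: PyTorch caches CUDA allocations,
    so memory only shows as freed to the OS after empty_cache().
    """
    del model
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Example: create and release a throwaway module.
release_model(torch.nn.Linear(4, 4))
```

Because the cache is reused by later allocations, skipping `empty_cache()` usually doesn't prevent subsequent SD generations, matching what's reported above.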
I had the same error as the previous issue, and almost to the minute, you updated and fixed the repo, so that's solved now. However, while it was broken I noticed it listed the device as CPU. Now that it's running, it only gets through about one iteration in ~38 seconds before (I guess) timing out, even though I had the maximum execution time set to 30 seconds; perhaps it's counting from two different start times.
Anyway, is it running on CPU? I can't tell from the resource monitor, as there is no uptick in GPU usage, though there may be a slight uptick on one core of the processor (it's not dramatic).
I'm really interested in the extension and looking forward to using it, haha.
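One way to answer "is it actually on the GPU?" without relying on the resource monitor is to check whether CUDA allocations grow when work is placed on `"cuda"` (an assumption about how to diagnose this, not project code):

```python
import torch

if torch.cuda.is_available():
    # If allocating a tensor on "cuda" grows the allocator's bookkeeping,
    # the GPU is genuinely in use.
    before = torch.cuda.memory_allocated()
    x = torch.zeros(1024, 1024, device="cuda")
    gpu_in_use = torch.cuda.memory_allocated() > before
else:
    # No CUDA device visible to PyTorch: everything necessarily runs on CPU.
    gpu_in_use = False
```

If `torch.cuda.is_available()` itself returns `False` inside the webui's Python environment, the extension has no choice but to fall back to CPU, which would explain the ~38 s iterations.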