
[Bug]: RuntimeError: Torch is not able to use GPU;... etc #544

Open
SourSnappleOG opened this issue Oct 9, 2024 · 6 comments

@SourSnappleOG

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When following the instructions for downloading this for AMD:

git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml && cd stable-diffusion-webui-directml && git submodule init && git submodule update

then running webui-user.bat (on Windows), it simply states that Torch isn't able to use the GPU and recommends adding --skip-torch-cuda-test. Avoiding that AMD integration issue should be the main reason we use this fork. Any ideas?

Steps to reproduce the problem

Run the clone command above (into a new folder, etc.)

Run webui-user.bat

What should have happened?

It should have run and opened the UI

What browsers do you use to access the UI ?

No response

Sysinfo

... it won't even open...

Console logs

code\stable-diffusion-webui-directml\venv\lib\site-packages (22.2.1)
Collecting pip
  Using cached pip-24.2-py3-none-any.whl (1.8 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 22.2.1
    Uninstalling pip-22.2.1:
      Successfully uninstalled pip-22.2.1
Successfully installed pip-24.2
venv "C:\Users\Drew\Code\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-11-gefddd05e
Commit hash: efddd05e11d9cc5339a41192457e6ff8ad06ae00
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.3.1
  Using cached https://download.pytorch.org/whl/cu121/torch-2.3.1%2Bcu121-cp310-cp310-win_amd64.whl (2423.5 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.19.1%2Bcu121-cp310-cp310-win_amd64.whl (5.8 MB)
Collecting filelock (from torch==2.3.1)
  Using cached filelock-3.16.1-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch==2.3.1)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.3.1)
  Using cached sympy-1.13.3-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.3.1)
  Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.3.1)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.3.1)
  Using cached fsspec-2024.9.0-py3-none-any.whl.metadata (11 kB)
Collecting mkl<=2021.4.0,>=2021.1.1 (from torch==2.3.1)
  Using cached https://download.pytorch.org/whl/mkl-2021.4.0-py2.py3-none-win_amd64.whl (228.5 MB)
Collecting numpy (from torchvision)
  Using cached numpy-2.1.2-cp310-cp310-win_amd64.whl.metadata (59 kB)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
  Using cached torchvision-0.19.1-cp310-cp310-win_amd64.whl.metadata (6.1 kB)
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.19.0%2Bcu121-cp310-cp310-win_amd64.whl (5.8 MB)
  Using cached torchvision-0.19.0-1-cp310-cp310-win_amd64.whl.metadata (6.1 kB)
Collecting numpy<2 (from torchvision)
  Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl.metadata (61 kB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.18.1%2Bcu121-cp310-cp310-win_amd64.whl (5.7 MB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Using cached pillow-10.4.0-cp310-cp310-win_amd64.whl.metadata (9.3 kB)
Collecting intel-openmp==2021.* (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.1)
  Using cached https://download.pytorch.org/whl/intel_openmp-2021.4.0-py2.py3-none-win_amd64.whl (3.5 MB)
Collecting tbb==2021.* (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.1)
  Using cached tbb-2021.13.1-py3-none-win_amd64.whl.metadata (1.1 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.3.1)
  Using cached MarkupSafe-3.0.1-cp310-cp310-win_amd64.whl.metadata (4.1 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch==2.3.1)
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached tbb-2021.13.1-py3-none-win_amd64.whl (286 kB)
Using cached pillow-10.4.0-cp310-cp310-win_amd64.whl (2.6 MB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached filelock-3.16.1-py3-none-any.whl (16 kB)
Using cached fsspec-2024.9.0-py3-none-any.whl (179 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Using cached numpy-2.1.2-cp310-cp310-win_amd64.whl (12.9 MB)
Using cached sympy-1.13.3-py3-none-any.whl (6.2 MB)
Using cached MarkupSafe-3.0.1-cp310-cp310-win_amd64.whl (15 kB)
Installing collected packages: tbb, mpmath, intel-openmp, typing-extensions, sympy, pillow, numpy, networkx, mkl, MarkupSafe, fsspec, filelock, jinja2, torch, torchvision
Successfully installed MarkupSafe-3.0.1 filelock-3.16.1 fsspec-2024.9.0 intel-openmp-2021.4.0 jinja2-3.1.4 mkl-2021.4.0 mpmath-1.3.0 networkx-3.3 numpy-2.1.2 pillow-10.4.0 sympy-1.13.3 tbb-2021.13.1 torch-2.3.1+cu121 torchvision-0.18.1+cu121 typing-extensions-4.12.2
Traceback (most recent call last):


Additional information

No response

@CS1o

CS1o commented Oct 12, 2024

Hey, open up a cmd and run pip cache purge
Then delete the venv folder and relaunch the webui-user.bat

If that doesn't help, check out my install guides:
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides

And provide a full cmd log if you get an error again.
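
The steps above, as Windows commands (assuming the repo folder name from the clone command; adjust the path as needed):

```bat
:: Clear pip's download cache so stale wheels aren't reused.
pip cache purge

:: Delete the virtual environment so webui-user.bat recreates it from scratch.
cd stable-diffusion-webui-directml
rmdir /s /q venv

:: Relaunch; this reinstalls torch and the other dependencies.
webui-user.bat
```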

@lshqqytiger
Owner

You need to add --use-zluda or --use-directml for AMD GPUs. https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu?tab=readme-ov-file#installation-and-running

@Akolyte01

I'm confused by the above statement. The link states that the webui needs to be started with those options. However this error occurs when attempting to install the webui, not run it.

If I attempt to run webui.bat --use-directml, I get this error:

raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
AttributeError: module 'torch' has no attribute 'dml'

@CS1o

CS1o commented Oct 28, 2024

Hey, to install as well as to launch the webui you always have to use webui-user.bat.
You have to edit it (right-click → Edit) to add the args like --use-zluda to the set COMMANDLINE_ARGS= line.
For a detailed guide, check the link in my comment above.
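
A minimal sketch of the edited webui-user.bat (the surrounding lines are from the stock file; --use-zluda assumes a ZLUDA setup with the HIP SDK installed, otherwise use --use-directml):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
:: Add the AMD backend flag here; no quotes around the arguments.
set COMMANDLINE_ARGS=--use-zluda

call webui.bat
```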

@Akolyte01

Thank you for the clarification. However, this doesn't seem to be working.
If I update webui-user.bat as described, I now get a new error.

I have tried these variations

set COMMANDLINE_ARGS=--use-zluda --update-check --skip-ort
set COMMANDLINE_ARGS=--use-zluda
set COMMANDLINE_ARGS="--use-zluda"

In all cases I get this error:

Traceback (most recent call last):
  File "E:\StableDiffusion\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "E:\StableDiffusion\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "E:\StableDiffusion\stable-diffusion-webui-directml\modules\launch_utils.py", line 581, in prepare_environment
    from modules import zluda_installer
  File "E:\StableDiffusion\stable-diffusion-webui-directml\modules\zluda_installer.py", line 16, in <module>
    HIPSDK_TARGETS = ['rocblas.dll', 'rocsolver.dll', f'hiprtc{"".join([v.zfill(2) for v in rocm.version.split(".")])}.dll']
AttributeError: 'NoneType' object has no attribute 'split'

@lshqqytiger
Owner

lshqqytiger commented Oct 28, 2024

If torch is already installed, the --use-* arguments are ignored. In your case, an incompatible version of torch is already installed, so --use-directml and --use-zluda are ignored.
Wipe the venv folder or manually uninstall torch.
In addition, ZLUDA requires the HIP SDK as a dependency.
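
To illustrate the point: the traceback ends at rocm.version.split(".") because rocm.version is None when no HIP SDK is detected. A minimal sketch (the function name and guard are mine, not the repo's code):

```python
# Hypothetical sketch of the failing line in modules/zluda_installer.py:
# rocm.version is None when the HIP SDK is not installed, so .split() raises
# AttributeError: 'NoneType' object has no attribute 'split'.

def hiprtc_dll_name(rocm_version):
    """Build the hiprtc DLL name, e.g. '5.7' -> 'hiprtc0507.dll'."""
    if rocm_version is None:
        # Guard: without the HIP SDK there is no ROCm version to parse.
        raise RuntimeError("HIP SDK not found; install it before using --use-zluda")
    return f'hiprtc{"".join(v.zfill(2) for v in rocm_version.split("."))}.dll'

print(hiprtc_dll_name("5.7"))  # hiprtc0507.dll
```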

Labels: none · Projects: none · 4 participants