
[Bug]: I have no idea what this is, only that this isn't working #512

Open
2 of 6 tasks
DracoMan671 opened this issue Aug 4, 2024 · 1 comment
Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

txt2img is not working correctly.

Steps to reproduce the problem

Playing a game that uses Stable Diffusion to generate character images.

What should have happened?

It should have worked.

What browsers do you use to access the UI?

Mozilla Firefox, Microsoft Edge

Sysinfo

sysinfo-2024-08-04-07-08.json

Console logs

venv "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
fatal: detected dubious ownership in repository at 'C:/Users/draco/stable-diffusion-webui-amdgpu'
'C:/Users/draco/stable-diffusion-webui-amdgpu' is owned by:
        BUILTIN/Administrators (S-1-5-32-544)
but the current user is:
        THE-RIG/draco (S-1-5-21-4069348209-68958950-2233192803-1001)
To add an exception for this directory, call:

        git config --global --add safe.directory C:/Users/draco/stable-diffusion-webui-amdgpu
fatal: detected dubious ownership in repository at 'C:/Users/draco/stable-diffusion-webui-amdgpu'
'C:/Users/draco/stable-diffusion-webui-amdgpu' is owned by:
        BUILTIN/Administrators (S-1-5-32-544)
but the current user is:
        THE-RIG/draco (S-1-5-21-4069348209-68958950-2233192803-1001)
To add an exception for this directory, call:

        git config --global --add safe.directory C:/Users/draco/stable-diffusion-webui-amdgpu
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.10.1
Commit hash: <none>
Using ZLUDA in C:\Users\draco\stable-diffusion-webui-amdgpu\.zluda
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --no-half-vae --listen --port=7860 --api --cors-allow-origins null --skip-torch-cuda-test --use-zluda
C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\diffusers\models\vq_model.py:20: FutureWarning: `VQEncoderOutput` is deprecated and will be removed in version 0.31. Importing `VQEncoderOutput` from `diffusers.models.vq_model` is deprecated and this will be removed in a future version. Please use `from diffusers.models.autoencoders.vq_model import VQEncoderOutput`, instead.
  deprecate("VQEncoderOutput", "0.31", deprecation_message)
C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\diffusers\models\vq_model.py:25: FutureWarning: `VQModel` is deprecated and will be removed in version 0.31. Importing `VQModel` from `diffusers.models.vq_model` is deprecated and this will be removed in a future version. Please use `from diffusers.models.autoencoders.vq_model import VQModel`, instead.
  deprecate("VQModel", "0.31", deprecation_message)
ONNX: version=1.18.1 provider=CUDAExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
ZLUDA device failed to pass basic operation test: index=None, device_name=AMD Radeon RX 6950 XT [ZLUDA]
CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

[-] ADetailer initialized. version: 24.8.0, num models: 10
CivitAI Browser+: Aria2 RPC started
fatal: detected dubious ownership in repository at 'C:/Users/draco/stable-diffusion-webui-amdgpu'
'C:/Users/draco/stable-diffusion-webui-amdgpu' is owned by:
        BUILTIN/Administrators (S-1-5-32-544)
but the current user is:
        THE-RIG/draco (S-1-5-21-4069348209-68958950-2233192803-1001)
To add an exception for this directory, call:

        git config --global --add safe.directory C:/Users/draco/stable-diffusion-webui-amdgpu
ControlNet preprocessor location: C:\Users\draco\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\annotator\downloads
2024-08-04 02:58:06,617 - ControlNet - INFO - ControlNet v1.1.455
Loading weights [7eb674963a] from C:\Users\draco\stable-diffusion-webui-amdgpu\models\Stable-diffusion\hassakuHentaiModel_v13.safetensors
2024-08-04 02:58:08,734 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\Users\draco\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 21.1s (prepare environment: 14.5s, initialize shared: 2.4s, other imports: 0.1s, load scripts: 4.0s, create ui: 1.2s, gradio launch: 4.2s, add APIs: 1.0s).
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 871, in load_model
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)  # Reload embeddings after model load as they may or may not fit the model
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 228, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 365, in encode_embedding_init_text
    embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0)
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
    return F.embedding(
  File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\functional.py", line 2264, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.



Stable diffusion model failed to load
Using already loaded model hassakuHentaiModel_v13.safetensors [7eb674963a]: done in 0.0s
2024-08-04 02:58:15,012 - ControlNet - WARNING - No ControlNetUnit detected in args. It is very likely that you are having an extension conflict.Here are args received by ControlNet: ().
*** API error: POST: http://localhost:7860/sdapi/v1/txt2img {'error': 'RuntimeError', 'detail': '', 'body': '', 'errors': 'CUDA error: invalid argument\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n'}
    Traceback (most recent call last):
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\streams\memory.py", line 98, in receive
        return self.receive_nowait()
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\streams\memory.py", line 93, in receive_nowait
        raise WouldBlock
    anyio.WouldBlock

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
        message = await recv_stream.receive()
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\streams\memory.py", line 118, in receive
        raise EndOfStream
    anyio.EndOfStream

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\api\api.py", line 186, in exception_handling
        return await call_next(request)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
        raise app_exc
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
        await self.app(scope, receive_or_disconnect, send_no_error)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
        response = await self.dispatch_func(request, call_next)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\api\api.py", line 150, in log_and_time
        res: Response = await call_next(req)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
        raise app_exc
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
        await self.app(scope, receive_or_disconnect, send_no_error)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
        await self.simple_response(scope, receive, send, request_headers=headers)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
        await self.app(scope, receive, send)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
        await responder(scope, receive, send)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
        await self.app(scope, receive, self.send_with_gzip)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
        raise exc
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
        await self.app(scope, receive, sender)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
        raise e
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
        await self.app(scope, receive, send)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
        await route.handle(scope, receive, send)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\routing.py", line 276, in handle
        await self.app(scope, receive, send)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\routing.py", line 66, in app
        response = await func(request)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 237, in app
        raw_response = await run_endpoint_function(
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
        return await run_in_threadpool(dependant.call, **values)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
        return await anyio.to_thread.run_sync(func, *args)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
        result = context.run(func, *args)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\api\api.py", line 482, in text2imgapi
        processed = process_images(p)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\processing.py", line 1006, in process_images_inner
        model_hijack.embedding_db.load_textual_inversion_embeddings()
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 228, in load_textual_inversion_embeddings
        self.expected_shape = self.get_expected_shape()
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
        vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 365, in encode_embedding_init_text
        embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
        return F.embedding(
      File "C:\Users\draco\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\functional.py", line 2264, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: CUDA error: invalid argument
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


---

Additional information

No response
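Editor's note: the traceback above twice suggests re-running with `CUDA_LAUNCH_BLOCKING=1` so errors surface at the call that actually failed rather than asynchronously. A minimal sketch of doing that, assuming the standard AUTOMATIC1111 `launch.py` entry point (on Windows cmd, the equivalent would be `set CUDA_LAUNCH_BLOCKING=1` in `webui-user.bat` before the launch line):

```shell
# Enable synchronous CUDA error reporting, as the log itself recommends,
# so the stack trace points at the real failing kernel launch.
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
# Then launch as usual; the flags below are copied from the user's log:
# python launch.py --no-half-vae --listen --port=7860 --api --use-zluda
```

This does not fix the error, but it makes the next traceback trustworthy for debugging.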


CS1o commented Aug 6, 2024

Hey, you don't have a commit version, and you have ownership (permission) problems. These could be caused by moving the webui folder or by reinstalling the OS.

To fix this, you have to set up the webui fresh. Follow my AMD Automatic1111 with ZLUDA guide for that, from here:
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides
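Editor's note: before reinstalling, the `safe.directory` exception that git itself prints in the log is worth trying, since it directly addresses the "dubious ownership" errors. The sketch below demonstrates the command shape on a throwaway repo; the temporary path is a hypothetical stand-in for the real install directory in the log:

```shell
set -e
# Hypothetical demo repo standing in for the user's webui install path.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
# The fix git prints in the log: mark the directory as safe for this user.
git config --global --add safe.directory "$tmp/repo"
# With the exception in place, git commands in the repo succeed again.
result=$(git -C "$tmp/repo" status --porcelain; echo ok)
# Demo cleanup: removes ALL safe.directory entries -- fine in a sandbox,
# destructive on a real machine, so do not copy this line verbatim.
git config --global --unset-all safe.directory || true
rm -rf "$tmp"
echo "$result"
```

On the user's machine the argument would be the actual path, exactly as the log shows: `git config --global --add safe.directory C:/Users/draco/stable-diffusion-webui-amdgpu`. This would not fix the ZLUDA/CUDA failure, only the git errors.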
