Radeon RX 6800 XT Issues + GPU Selection Not Working #3750
Replies: 1 comment
Comment: Here is the result and console output from the exact same prompt with the eGPU disconnected, running on the built-in AMD Radeon Pro 5500 XT:
(fooocus) @ Fooocus % python entry_with_update.py
Hi Folks,
I've been using Fooocus for a while on my Intel-based Mac. It has an internal AMD Radeon Pro 5500 XT, and I have an external AMD Radeon RX 6800 XT (eGPU). For a long while everything worked great with the eGPU, but sometime in the past month or so it stopped working, and I have not been able to figure out why. I can see in Activity Monitor that it is running on the eGPU. When I disconnect the eGPU, it runs on the internal GPU and works correctly.
With the eGPU connected, it generates random noise like this:
It used to work perfectly with this same eGPU on the same Mac prior to some recent Apple updates. I've reinstalled everything clean and am running the default model with all the default settings. In this example I've just entered the prompt "Walking on the beach at sunset in Cancun". Here is the terminal output:
(fooocus) @-iMac Fooocus % python entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir /var/folders/dx/3xrbdwv9711cvk0x4t2r38j40000gn/T/fooocus
[Cleanup] Cleanup successful
Total VRAM 65536 MB, total RAM 65536 MB
Set vram state to: SHARED
Always offload VRAM
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
IMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.
Running on local URL: http://127.0.0.1:7865
To create a public link, set share=True in launch()
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: /Volumes//Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [/Volumes//Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/Volumes//Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/Volumes//Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Started worker with PID 45388
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 1134758884471800279
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] Walking on the beach at sunset in Cancun, beautiful dynamic cinematic light, shining new magic, glowing stunning detail, creative, positive, cheerful, unique, pleasant, generous, cute, best, pure, magical, detailed, calm, gorgeous, inspired, vibrant, intricate, innocent, pretty, inspiring, sharp focus, professional, winning, highly decorated, colorful
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 5.36 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/1 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 8.95 seconds
0%| | 0/30 [00:00<?, ?it/s]huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
/Volumes/Vault/Fooocus/modules/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [05:32<00:00, 11.07s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.22 seconds
[Fooocus] Saving image 1/1 to system ...
Image generated with private log at: /Volumes/*****/Fooocus/outputs/2024-11-22/log.html
Generating and saving time: 344.93 seconds
[Enhance] Skipping, preconditions aren't met
Processing time (total): 344.93 seconds
Total time: 408.74 seconds
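(A side note on the huggingface/tokenizers warning in the log above: it is harmless, but it can be silenced by setting the environment variable the warning names before any worker process is forked. A minimal sketch; putting it at the very top of the launch script, before other imports, is my assumption, not something Fooocus documents.)

```python
import os

# Disable tokenizer parallelism before any process forks, which
# silences the "The current process just got forked" warning.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```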
I've tried to tell Fooocus to use the internal GPU using the --gpu-device-id argument, but it doesn't seem to have any effect: running with device ID values of 0, 1, and 2 all still use the eGPU.
python entry_with_update.py --gpu-device-id 0
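(For what it's worth on the --gpu-device-id behavior: as far as I understand it, PyTorch's MPS backend exposes all Metal GPUs through a single logical "mps" device, so a numeric device index has nothing to select against, which would explain why 0, 1, and 2 all behave identically. One way to at least confirm whether the eGPU itself is producing the noise is to force CPU execution; I believe Fooocus accepts an --always-cpu flag for this, but treat the flag name as an assumption and check `python entry_with_update.py --help`.)

```shell
# Assumed flag: --always-cpu forces all computation onto the CPU,
# bypassing both the eGPU and the internal GPU. Slow, but a clean A/B test.
python entry_with_update.py --always-cpu
```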
Any help with this would be greatly appreciated!