My GPU is AMD's Radeon 780M, the integrated graphics of the Ryzen 8845 APU. Following the project instructions, I'm running this program on Windows in AMD graphics card mode. However, only 1024 MB of video memory is recognized, which forces me to fall back to CPU mode. In fact, I've allocated 8 GB of system memory to the integrated GPU as dedicated video memory, and I could allocate up to 16 GB. Is there a simple method, such as adding parameters to the .bat file, to force this program to run on the AMD integrated graphics card?
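For example, something like this in the .bat file (a sketch only; it assumes `--directml` accepts an optional adapter index, as in ComfyUI-style backends, and that index 0 is the 780M — I haven't confirmed either):

```shell
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml 0 --listen 0.0.0.0
pause
```
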
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.4.1
Uninstalling torch-2.4.1:
Successfully uninstalled torch-2.4.1
Found existing installation: torchvision 0.19.1
Uninstalling torchvision-0.19.1:
Successfully uninstalled torchvision-0.19.1
WARNING: Skipping torchaudio as it is not installed.
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip install torch-directml
Requirement already satisfied: torch-directml in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (0.2.5.dev240914)
Collecting torch==2.4.1 (from torch-directml)
Using cached torch-2.4.1-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision==0.19.1 (from torch-directml)
Using cached torchvision-0.19.1-cp310-cp310-win_amd64.whl.metadata (6.1 kB)
Requirement already satisfied: filelock in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.12.2)
Requirement already satisfied: typing-extensions>=4.8.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (4.12.2)
Requirement already satisfied: sympy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (1.12)
Requirement already satisfied: networkx in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.1)
Requirement already satisfied: jinja2 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (3.1.2)
Requirement already satisfied: fsspec in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.4.1->torch-directml) (2023.6.0)
Requirement already satisfied: numpy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.19.1->torch-directml) (1.26.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.19.1->torch-directml) (10.4.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from jinja2->torch==2.4.1->torch-directml) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from sympy->torch==2.4.1->torch-directml) (1.3.0)
Using cached torch-2.4.1-cp310-cp310-win_amd64.whl (199.4 MB)
Using cached torchvision-0.19.1-cp310-cp310-win_amd64.whl (1.3 MB)
Installing collected packages: torch, torchvision
WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\Fooocus_win64_2-5-0\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed torch-2.4.1 torchvision-0.19.1
[notice] A new release of pip is available: 24.1.2 -> 24.3.1
[notice] To update, run: D:\Fooocus_win64_2-5-0\python_embeded\python.exe -m pip install --upgrade pip
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --listen 0.0.0.0
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--directml', '--listen', '0.0.0.0']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
Failed to load config key: {"path_checkpoints": ["D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\checkpoints"]} is invalid or does not exist; will use {"path_checkpoints": ["../models/checkpoints/"]} instead.
Failed to load config key: {"path_loras": ["D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\loras"]} is invalid or does not exist; will use {"path_loras": ["../models/loras/"]} instead.
Failed to load config key: {"path_embeddings": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\embeddings"} is invalid or does not exist; will use {"path_embeddings": "../models/embeddings/"} instead.
Failed to load config key: {"path_vae_approx": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\vae_approx"} is invalid or does not exist; will use {"path_vae_approx": "../models/vae_approx/"} instead.
Failed to load config key: {"path_vae": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\vae"} is invalid or does not exist; will use {"path_vae": "../models/vae/"} instead.
Failed to load config key: {"path_upscale_models": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\upscale_models"} is invalid or does not exist; will use {"path_upscale_models": "../models/upscale_models/"} instead.
Failed to load config key: {"path_inpaint": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\inpaint"} is invalid or does not exist; will use {"path_inpaint": "../models/inpaint/"} instead.
Failed to load config key: {"path_controlnet": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\controlnet"} is invalid or does not exist; will use {"path_controlnet": "../models/controlnet/"} instead.
Failed to load config key: {"path_clip_vision": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\clip_vision"} is invalid or does not exist; will use {"path_clip_vision": "../models/clip_vision/"} instead.
Failed to load config key: {"path_fooocus_expansion": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\prompt_expansion\fooocus_expansion"} is invalid or does not exist; will use {"path_fooocus_expansion": "../models/prompt_expansion/fooocus_expansion"} instead.
Failed to load config key: {"path_wildcards": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\wildcards"} is invalid or does not exist; will use {"path_wildcards": "../wildcards/"} instead.
Failed to load config key: {"path_safety_checker": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\safety_checker"} is invalid or does not exist; will use {"path_safety_checker": "../models/safety_checker/"} instead.
Failed to load config key: {"path_sam": "D:\BaiduYunDownload\\u8f6f\u4ef6\u5b89\u88c5\u5305\Fooocus_win64_2-5-0\Fooocus\models\sam"} is invalid or does not exist; will use {"path_sam": "../models/sam/"} instead.
[Cleanup] Attempting to delete content of temp dir C:\Users\MRGUO6~1\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Using directml with device:
Total VRAM 1024 MB, total RAM 57166 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
Running on local URL: http://0.0.0.0:7865
model_type EPS
UNet ADM Dimension 2816
IMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
To create a public link, set share=True in launch().
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [D:\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus_win64_2-5-0\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Started worker with PID 7900
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
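To check which adapters DirectML can actually see, I can run a snippet like this with the same embedded Python (a sketch; it assumes the `torch_directml` package exposes `device_count()` and `device_name()` as in current releases, and degrades gracefully if the package is missing):

```python
try:
    import torch_directml  # DirectML backend for PyTorch, installed per the log above
except ImportError:
    torch_directml = None


def list_dml_adapters():
    """Return (index, name) pairs for every DirectML-capable adapter, or [] if unavailable."""
    if torch_directml is None:
        return []
    return [(i, torch_directml.device_name(i))
            for i in range(torch_directml.device_count())]


if __name__ == "__main__":
    for idx, name in list_dml_adapters():
        print(idx, name)
```

If the 780M shows up here at a particular index, that index would be the candidate to pass on the command line.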