How to boost performance on M-Macs #3447
-
It's a MacBook Air and you're already getting good results for what it is using the MPS backend. The libraries used have already been optimized in 2.5.0; further optimization requires #3084, but I haven't yet received any support on this.
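If you want to confirm that PyTorch can actually see the MPS backend on your machine, a minimal sketch using the standard PyTorch API (plain PyTorch, nothing specific to this project) is:

```python
# Minimal check of MPS availability via the standard PyTorch API.
import torch

print("MPS built:", torch.backends.mps.is_built())          # PyTorch compiled with MPS support
print("MPS available:", torch.backends.mps.is_available())  # macOS version + hardware can use it
```

If both print True, the MPS backend is usable and any remaining slowness is the hardware itself rather than a silent CPU fallback.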
-
On my Intel Mac I was getting about 14 s/it, 900 to 1400 seconds per image, and that was the best I could get. I have just done an eGPU setup at a cost of £350 and now it's under 1 minute per image.
-
I've tested some of the commands from #129
Results:
🐢
python entry_with_update.py --attention-pytorch
This is the worst: 120 s/it, 60 s to load the model.
🐱
python entry_with_update.py --always-cpu --disable-offload-from-vram --unet-in-fp8-e5m2
20-25 s/it, 45 s to load the model.
🐆
python entry_with_update.py --unet-in-fp16 --attention-split
Initially 20 s/it, then 14-15 s/it; 40 s to load the model.
🌟
python entry_with_update.py --all-in-fp16 --attention-pytorch --disable-offload-from-vram --always-high-vram --gpu-device-id 0 --async-cuda-allocation --unet-in-fp16 --vae-in-fp16 --clip-in-fp16
This is the best: 10 s/it and only 24 s to load the model.
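If you want to flip between these configurations without retyping the flags, here's a minimal launcher sketch; the profile names and the launch_profile.py filename are mine, but the flag sets are exactly the ones timed above:

```python
# launch_profile.py (hypothetical helper): relaunch entry_with_update.py
# with one of the flag combinations benchmarked above.
import subprocess
import sys

FLAG_SETS = {
    "pytorch-attn": ["--attention-pytorch"],
    "cpu-fp8": ["--always-cpu", "--disable-offload-from-vram", "--unet-in-fp8-e5m2"],
    "fp16-split": ["--unet-in-fp16", "--attention-split"],
    "all-fp16": [
        "--all-in-fp16", "--attention-pytorch", "--disable-offload-from-vram",
        "--always-high-vram", "--gpu-device-id", "0", "--async-cuda-allocation",
        "--unet-in-fp16", "--vae-in-fp16", "--clip-in-fp16",
    ],
}

def main() -> None:
    # Default to the combination that worked best for me.
    name = sys.argv[1] if len(sys.argv) > 1 else "all-fp16"
    cmd = [sys.executable, "entry_with_update.py", *FLAG_SETS[name]]
    print("Launching:", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    main()
```

Usage: python launch_profile.py fp16-split. The s/it numbers above were read off the console output, not measured by this script.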
My Configuration:
MacBook Air 13", M1 (2020), 16 GB RAM, 8-core CPU & 8-core GPU, macOS Sonoma 14.6
ATTENTION
Ambient temperature: 35°C
I used a cooling system (ice bricks) so the Mac wouldn't melt. Obviously I recommend finding a more suitable way to cool the computer, or simply not cooling it, which in my case cost at most a slowdown to 10 s/it.
Update: Using it on a Mac with low RAM can cause massive swap usage and accelerate disk wear.
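To keep an eye on swap while generating, a quick check (just wrapping the built-in macOS sysctl command from Python) is:

```python
# Rough macOS swap check: wraps the built-in `sysctl vm.swapusage` command.
import subprocess

out = subprocess.run(["sysctl", "vm.swapusage"], capture_output=True, text=True, check=True)
print(out.stdout.strip())  # e.g. vm.swapusage: total = ... used = ... free = ...
```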