Replies: 6 comments · 3 replies
-
To follow the tutorial at https://www.nextdiffusion.ai/tutorials/create-uncensored-videos-with-wan22-remix-in-comfyui-t2v and get a text-to-video workflow working, it was necessary to install a C compiler and the Python development package with the following commands:
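(The actual commands didn't survive the page capture. Assuming openSUSE, which the main post below uses, they were presumably something like this sketch; the package names are my assumption, not the commenter's.)

```bash
# Assumed reconstruction: install a C compiler and the Python headers on openSUSE.
sudo zypper install gcc python3-devel
```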
-
@donlaiq is there any chance you can add details on how to set up flash-attention, which should increase the speed of ComfyUI considerably?
-
Just tested it, and your instructions worked perfectly. Short of figuring out how to solve out-of-memory problems with ComfyUI in an LXC on Proxmox, this is actually the optimal install for this.
-
CON*** DE LA LoRA
Lately, I've been fiddling with several combinations of different workflows, models, and LoRAs. However, if you switch to a base Wan 2.2 model, for example, and disconnect the links between the lora output of the WanVideo Lora Select nodes (Optional Lightning LoRA's group) and the lora input of the WanVideo Model Loader nodes (Load Models group), then you'll get a blurry video like this one:
ComfyUI_00033_.mp4
On the other hand, if the right Lightning LoRA is connected and you use the same prompt with the same seed, then you'll get something like this:
ComfyUI_00032_.mp4
Supposedly, you should be able to omit this LoRA altogether, but then you should probably also increase the fields of the Steps group by a factor of 5x or more.
-
Tusi AI x Tongyi Wanxiang bring a brand-new WAN2.6 AI generation experience: https://mp.weixin.qq.com/s/iO9DZlIh5kB-XWZaa95zrA
-
Character Swapping
WanAnimate_00007-audio.webm
Mr. Torvalds is not as happy about the imminent arrival of the Linux 7.0 kernel as he is about the possibility of swapping characters using Wan 2.2 Animate, with these two workflows, on Linux, and with this hardware architecture:
That's the real truth about his dance. The previous video was made using the first workflow with just a few tweaks. However, it has a defect that I wasn't able to solve in this particular workflow: when there's more than one character in the scene, it always chooses the character nearest to the camera. For example, from this video of these lovely parents, https://www.youtube.com/shorts/qQ0mdELgmTQ , it always chooses the woman (left side), and it will produce an output like this one:
linus_next_diffusion.webm
This issue could be solved with the second workflow. In this case, the execution consists of two steps.
Once this step is done, you can click the Run button again, wait… and wait… and wait… 😴 …and finally, you'll get something like this:
linus_comfyui.mp4
By the way, if you want to learn to dance the Macarena or something else, you can watch the full video of this lady: https://www.youtube.com/watch?v=wBroZqTxhd8 This is me in a daring effort to try it:
ComfyUI_00179_.mp4
The solution should come from something like this: https://www.youtube.com/watch?v=742C1VAu0Eo . Unfortunately, I'm getting the following warning a couple of times before the process is gently killed:
At least, it doesn't freeze the machine with an irrecoverable black screen, which means I can keep using the OS.

-
🎅 Ho ho ho! Santa-donlaiq is here, and he brings some gifts! 🎅
DESCRIPTION
I got a new PC with hardware I had never worked with before (first time with AMD, first time without Intel and NVIDIA).
It came with Windows 11 installed, but the first thing I did when I got it was install a Fedora distro and try to run a ComfyUI video generation workflow. No Windows interaction whatsoever.
The experience was totally unsuccessful, so I tried my luck with an Ubuntu distro. This time, I felt like I was closer to making it work, but I couldn't.
I wanted to try this feature no matter what, so I decided to reinstall Windows 11.
I played around, learned the basics of ComfyUI, and finally came up with a working workflow.
When I was satisfied after trying the features of the new device, I decided to come back to Linux and try again.
Mr. Windows, I don't want to make it personal, but I’m not very fond of you.
FIRST STEPS
According to this analysis, https://www.phoronix.com/review/opensuse-tw-cachyos, CachyOS is more performant, but I had never tried openSUSE, and it seemed like a good opportunity to do so.
Long story short, I made it work: I could generate videos with ComfyUI. This is meant to be a tutorial for my future self, especially as a base if I try again with another distro.
"Ignore this or that error" (as I will suggest in the lines below) is not the best recommendation, but it's the best solution I found for the moment.
The first step is to modify the BIOS, changing the value at
Advanced -> AMD CBS -> Options -> NBIO Common -> GFX Configuration -> Dedicated Graphics Memory
from 96 to 64, to increase the RAM available to the OS.
I'm not quite sure which tutorial you should follow to install the ROCm drivers system-wide, or if it's even necessary on openSUSE or another distro. What's important to remember is to run this command:

```
sudo usermod -a -G video,render $LOGNAME
```

and then reboot.
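(A quick check of my own, not from the original notes: after the reboot, the user should show up in both groups.)

```bash
# Verify membership in the video and render groups after rebooting.
id -nG "$LOGNAME" | tr ' ' '\n' | grep -E '^(video|render)$'
```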
INSTALLING COMFYUI
Basically, I followed the steps from https://www.reddit.com/r/ROCm/comments/1no2apl/how_to_install_comfyui_comfyuimanager_on_windows/.
After choosing a folder to clone the GitHub repository into, follow these steps. First:

```
git clone https://github.com/comfyanonymous/ComfyUI.git
```

Then open the requirements.txt file in the ComfyUI folder. Delete the torch, torchaudio, and torchvision lines, but leave the torchsde line. Save and close the file. This step prevents pip from pulling in the default NVIDIA (CUDA) builds of PyTorch.
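(If you'd rather do that edit from the shell, a one-liner like this should work; it assumes the three packages appear one per line, bare or with a version pin, and the anchored pattern leaves torchsde alone.)

```bash
# Drop the torch, torchaudio, and torchvision lines; torchsde survives.
sed -i -E '/^(torch|torchaudio|torchvision)([<>=~ ].*)?$/d' requirements.txt
```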
Then continue:

```
cd ComfyUI
python3 -m venv .venv
source .venv/bin/activate
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ "rocm[libraries,devel]"
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ --pre torch torchaudio torchvision
pip install -r requirements.txt
cd custom_nodes
git clone https://github.com/Comfy-Org/ComfyUI-Manager.git
cd ..
python3 main.py --highvram
```

And I get the error you can see in the next image,
so I took some ideas from #9604 to solve it.
At this point, if I run this command
```
python -m pip freeze --all | grep torch
```

I get the following output:
The end of every line references a date in the format [year][month][day]. @alshdavid suggests changing torchaudio, and it makes sense because it has an earlier date. In my case, I chose to change torch and torchvision because they have earlier dates.
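(Not from the original notes, but a quick way to pull those dates out of the freeze output, assuming version suffixes of the form shown below, e.g. 2.9.1+rocm7.11.0a20251220.)

```bash
# Print each torch package next to the YYYYMMDD date embedded in its version,
# e.g. "torch==2.9.1+rocm7.11.0a20251220" becomes "torch 20251220".
python -m pip freeze --all | grep -E '^torch(audio|vision)?==' \
  | sed -E 's/==.*a([0-9]{8}).*/ \1/'
```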
The trick is to open https://rocm.nightlies.amd.com/v2/gfx1151/ in the browser and start the manual work.
In my case, I followed the link torch and looked for the highest Linux version with the date '20251220' (the same as torchaudio). Then, I did the same for torchvision. The important part to preserve goes from the number after the first hyphen until the end of the date. In my case, the command to run was formed like this:
```
python3 -m pip install torch==2.9.1+rocm7.11.0a20251220 torchvision==0.24.0+rocm7.11.0a20251220 --index-url https://rocm.nightlies.amd.com/v2/gfx1151/
```

Run it.
THE WORKFLOW SETUP
I followed this tutorial: https://www.nextdiffusion.ai/tutorials/creating-uncensored-videos-with-wan22-remix-in-comfyui-i2v. You are not forced to work with nudes, but you are not forced to avoid them either. Besides, the tutorials are really good and worked very well… on Windows.
Before starting, so we don't need to restart ComfyUI later, open the file ComfyUI/user/__manager/config.ini, change the option security_level from normal to weak, and save the file.
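(The same edit as a shell one-liner, using the path above and assuming the option is written as security_level = normal.)

```bash
# Switch ComfyUI-Manager's security level from normal to weak.
sed -i 's/^security_level *= *normal/security_level = weak/' ComfyUI/user/__manager/config.ini
```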
As stated by the tutorial, paste all the model files in their corresponding folders within ComfyUI/models.
Now you can run
```
python3 main.py --highvram
```

and open localhost:8188 in your browser. Drag the JSON workflow into ComfyUI, and you will be welcomed by the This workflow has missing nodes popup.
Close it and go to Manager. Check that the default option is selected in Channel. Then press the Install Missing Custom Nodes button. Another popup window appears. Close it, and install all the suggested nodes with the latest version. Press the Restart button, then Confirm, and after a few seconds, press Confirm again.
This time, the This workflow has missing nodes popup has only two missing nodes. Close it, and go back to Manager. From the Channel list, choose dev, and then press the Install Missing Custom Nodes button. Ignore and close the new error popup, and install the ComfyUI_crdong option. Ignore the installation error, press Restart, then Confirm, and finally Confirm again.
Now, only easy int is missing. Ignore it; we are good to go.
THE WORKFLOW MODIFICATION
That's how it works for me.
As you can see, Set Resolution is the problematic node.
Double-click on an empty space in ComfyUI. Type int, and select the Int node. This node will determine the final resolution of the video. I find 512 is a good value for the first field. The resolution is good enough, and the video is generated at a reasonable speed, but feel free to experiment with it.
Change the second field from randomize to fixed. Connect its output to the same places Set Resolution currently points to, and delete the Set Resolution node.
Select both nodes from the group Torch & BlockSwap Settings and disable them with CTRL+B.
Select both nodes from the group Optional - Lightning LORA's and enable them with CTRL+B.
Change the attention_mode values for both WanVideo Model Loader nodes within the Load Models group from sageattn to sdpa.
Within the group Steps, change the value of the node Steps from 8 to 4, and the value of the node Split_step from 4 to 2.
The num_frames value in the WanVideo ImageToVideo Encode node (Resolution group) determines the length of the video: seconds = (num_frames - 1) / 16, so, for example, num_frames = 81 yields a 5-second clip.
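(Inverting that formula gives the value to enter for a target length; a trivial helper of my own.)

```bash
# num_frames for a clip of a given length: num_frames = 16 * seconds + 1.
seconds=5
echo $(( 16 * seconds + 1 ))   # -> 81 frames for a 5-second clip
```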
THE WORKFLOW IN ACTION
You have to choose a picture to start with by dragging and dropping the file over the Load Image node (Upload Image group), or by navigating the filesystem after clicking the Choose file to upload button.
Then, you write the prompt in the CLIP Text Encode (Positive Prompt) node (Prompt group) and finally... wait for it... click the Run button.
AN EXAMPLE PROMPT
Prompt: "Smiling and clapping, showing an expression of surprise. From his head appears a dialog box with the sentence 'I should hire donlaiq!'"
And after 10 minutes, that was the result:
ComfyUI_00005_.mp4
A FINAL REQUEST
Hi Elon,
I chose a random image from the internet, and I didn't find your private email address to ask for your permission, so please don't sue me!
Best regards,
donlaiq.