Update ldm patched && DORA support #3454
base: develop
Conversation
currently issues with calculate_sigmas call
- Some parameters (default_image_only_indicator) now default to None to avoid exceptions
- Some deprecated methods unwrapped
- Schedulers - tested, seems to be OK: karras, euler, heun, restart, some others; samplers likewise
- uni_pc/uni_pc_bh2: updated from latest Comfy; the original code had a function parameter mismatch
- ControlNets - tested, seems to be OK
- Not tested: refiners (SDXL/SD15), inpainting
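The `default_image_only_indicator` change can be sketched roughly like this (a minimal illustration, not the actual Fooocus code; the function and its callers here are hypothetical):

```python
def apply_model(x, timestep, image_only_indicator=None, **extra):
    """Hypothetical model call: older callers never passed the
    video-specific indicator, so requiring it raised exceptions.
    Defaulting to None keeps image-only pipelines working."""
    if image_only_indicator is None:
        # Plain image model: skip the video-conditioning branch entirely.
        return x * 0.5
    # A video path would consume the indicator here.
    return x * 0.5 + sum(image_only_indicator)
```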
@IPv6 thanks a lot, much appreciated! There still seem to be unresolved merge conflicts, which you can solve by merging main into your branch and resolving them locally.
…ate-ldm-patched

Conflicts:
- args_manager.py
- ldm_patched/k_diffusion/sampling.py
- ldm_patched/modules/latent_formats.py
- ldm_patched/modules/model_sampling.py
- ldm_patched/modules/samplers.py
- requirements_versions.txt
@mashb1t Updated with conflict resolve. Uncertain about some changes:
Thanks for the updates, I can check this on the weekend!
fixes AttributeError: 'Namespace' object has no attribute 'favicon_path'
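The class of failure fixed here can be guarded against generically with `getattr` and a default (a sketch; the attribute set on the real args object is an assumption):

```python
from argparse import Namespace

# Hypothetical args object where favicon_path was never registered
# on the argument parser, mirroring the reported AttributeError.
args = Namespace(listen="127.0.0.1", port=7865)

# getattr with a default avoids the AttributeError when the flag
# is missing; args.favicon_path would raise instead.
favicon_path = getattr(args, "favicon_path", None)
```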
LGTM after fixing a bug with the favicon (missing args). There's quite a lot of spam in the logs because HTTPX logs every request, but it's manageable. Two concerns:
Sadly I don't have the time; I can neither implement new features nor provide support as extensively as last time with the 2.5.0 enhance upgrade until November, as there's a lot going on in my private and work life. During this period, nobody will be able to do code updates, as I'm sadly the only maintainer, and I can only assume lllyasviel has ditched the project despite mentioning he'd come back after the release of SD3 (see #2154 (comment)). I'm open to your suggestions, and I hope you understand the situation. Thank you once more for your support, much appreciated!!
Totally understandable. Fooocus's "simplicity aura" has this drawback of an endless flow of end-user problems. And such an update can break a lot, most probably in some hidden cases. Anyway, I think any fork that wishes to implement something ldm-fresh will be able to use this patch as a starting point. For me, Fooocus is already quite mature, and frankly there is nothing that really needs to be added, given Fooocus's perfect focus. It works, that's it :)
I have a question that I would appreciate you answering. If I switch to this branch and run Fooocus, is it going to run? If so, could DoRA be easily added, or would that require a lot of work? Thanks in advance!
@mohamed-benali this branch supports DoRA out of the box, you can switch to it and just select it as any other LoRA. |
That's amazing! Since there will be very few updates, unless @lllyasviel takes this back (wink wink), I will start using this branch. Hopefully I won't find any blocking bug. Thanks for your work!! It's very appreciated!
After poking around and trying this version, I have to say I am very impressed! It's sooo much faster, especially the initial setup, which is almost zero now. And the actual generations are faster too. Thank you for updating this @IPv6 @mashb1t, it's very appreciated!! (The only issue seems to be anime inpainting erroring after step 24-ish of 30 steps. But I can use the official version to inpaint and this one to generate faster.)
…s expected, with synthetic refiner switch (without exceptions)
I also hit this problem some days ago and fixed it. The problem was in the synthetic refiner for inpainting; this inpainting stuff blew my mind :) Anyway, I added a fix to this branch - IPv6@03fabf0. Theoretically you can apply it locally if needed. With the patch, inpainting works for me, with and without ControlNets, and presumably SDXL refiners in general should work. The new ldm indeed seems to make inference faster - at least with "--highvram --use-pytorch-cross-attention --force-fp16".
Thank you so much @IPv6. It's working for me too :) I just have one question: are DoRAs supposed to work the same as LoRAs? They don't seem to work for me, so I'm just asking to know if I am missing something or if they just don't work. (Thanks in advance!)
Hard to tell. With the new ldm they should work (as mashb1t noted), but ldm is just the foundation of all that machinery. There are also higher-level checks, and I suspect they need some upgrading to recognise DoRAs as valid LoRA models. There are no such specific upgrades in this patch, unfortunately. But it should be easy to add, since ldm does all the heavy lifting.
Do you know which module/files need to be touched? If it's that easy I might try to see if I can do it. |
Hard to tell (I'm not using them, so I never looked into it). Looking at the code, there is suspicious LoRA parsing in the "match_lora" method in core.py. Possibly DoRAs have some unexpected internal layers that are not recognized; this is not part of ldm_patched. It should be possible to add some logging in the relevant calls to find out whether the DoRA loads (as it should) or is rejected. In the latter case it needs to be updated by someone familiar with torch model internals.
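The suggested logging could look roughly like this (a diagnostic sketch, not the real `match_lora` code; the suffix list is an assumption modeled on common LoRA export formats):

```python
import logging

def split_lora_keys(lora_sd, known_suffixes=(".lora_up.weight", ".lora_down.weight", ".alpha")):
    """Hypothetical diagnostic: classify state-dict keys into ones a
    plain-LoRA parser would recognize and ones it would silently drop
    (DoRA's extra tensors would land in `rejected`)."""
    matched, rejected = [], []
    for key in lora_sd:
        if any(key.endswith(s) for s in known_suffixes):
            matched.append(key)
        else:
            rejected.append(key)
    for key in rejected:
        logging.warning("unmatched key (silently ignored?): %s", key)
    return matched, rejected
```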
Thanks for the insight! Just as a note, the DORA never errors when I select it or generate an image. It's just that it has no effect. |
I have done some testing, and it appears that DoRAs give a … After investigating more, I found that it's missing some keys on … I added the missing DoRA portion from here (all the code across the …). I added it locally and it worked for me!! DoRAs work properly now!! I would push it to this branch, but I can't since I don't have edit permissions.
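The "missing keys" part can be illustrated like this (a rough sketch under the assumption that DoRA checkpoints carry an extra per-layer scale tensor, commonly named `dora_scale`, which plain LoRA parsing ignores; key naming is an assumption, not taken from the actual lora.py):

```python
def collect_dora_scales(lora_sd):
    """Sketch: pull out the per-layer dora_scale tensors that a
    plain-LoRA key matcher would skip, so they can be mapped onto
    model weights alongside lora_up/lora_down."""
    scales = {}
    for key, value in lora_sd.items():
        if key.endswith(".dora_scale"):
            layer = key[: -len(".dora_scale")]
            scales[layer] = value
    return scales
```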
Whoa, pretty cool! So it was LoRA parsing after all. If you attach your updated lora.py, I can push it to the branch, just to keep all updates in one place.
Here it is: I changed the format to |
Thanks! Pushed it to the branch - IPv6@1157a1d |
Perfect!! With the fixes for Inpainting and DORAs this will be my main source for Fooocus. No need to switch between main and this one. Good job @IPv6!! @santiagosfx I saw in here #3192 you were asking for DORAs support. If you use this branch they will work correctly! |
Sorry for the newbie question, but how do I use a branch instead of the main repository? |
Don't know regarding Colab, but to use a branch you need to check out the project from the top commit of the branch - IPv6@1157a1d
I managed to get the branch running, but only halfway. |
This parameter was changed in Comfy, so the updated ldm requires the new setting instead of the old one. Try --highvram --use-pytorch-cross-attention --force-fp16
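For reference, the launch line with those flags would look something like this (the flags are taken from the comment above; the entry-point filename is an assumption based on the standard Fooocus setup and may differ in your install):

```shell
python entry_with_update.py --highvram --use-pytorch-cross-attention --force-fp16
```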
It keeps giving me the same error, the only difference now is that it does it in the middle of loading the models. |
Any idea how hard it would be to add support for SD3.5? Would it be as difficult as adding the DoRA support, or far harder?
SD3.5 has a different architecture compared to SDXL, so it is the same situation as with Flux - an almost full rewrite is needed.
Managed to clean up the errors, and now it is working with the updated ldm_patched:
calculate_sigmas - some type unification added
some parameters (default_image_only_indicator) now default to None to avoid exceptions
some deprecated methods unwrapped
schedulers - tested, seems to be OK: karras, euler, heun, restart, some others; samplers - no problems
uni_pc/uni_pc_bh2: updated from latest Comfy; the original code had a function parameter mismatch
ControlNets - tested, seems to be OK
not tested: refiners (SDXL/SD15), inpainting