Simple way to do an accurate FaceSwap #2644
Replies: 9 comments 6 replies
-
This method may not work for close-up or high-resolution images.
-
Do you set it to FaceSwap or ImagePrompt?
-
Given an image size of 896x1152 and the standard human-body proportion of about 7.5 head-heights, the head occupies only a square of roughly 150-200 pixels in the image. Doubled (after upscaling), that is about 300-400 pixels.
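The estimate above follows from quick arithmetic (assuming a full-body figure fills the frame height):

```python
# Estimate the head size in pixels for a 896x1152 portrait,
# assuming a standard body proportion of ~7.5 head-heights.
width, height = 896, 1152
heads_per_body = 7.5

head_px = height / heads_per_body   # head height if the full body fills the frame
print(round(head_px))               # ~154 px, consistent with the 150-200 range

# After roughly doubling via upscale, the face region becomes:
print(round(head_px * 2))           # ~307 px, consistent with the 300-400 range
```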
-
Literally none of these look like the intended person. @lllyasviel what would it take to get official support for InstantID? No face swap model out there beats it, and there's already a fork that has implemented it. Would you take a PR?
-
Disclaimer: I'm a new user. I can't find this setting in the Advanced sidebar. Is this the right place? Thanks.
-
I tried the method and it doesn't work for me: it generates a completely different person. I played with the resolutions and sizes of the source and target photos, turned off styles, and changed or switched everything that can be changed and switched.
-
@Alextest-art InstantID works. Fooocus should implement it.
-
@infinity0 With your method (1.5x upscaling with FaceSwap and the Vary (Subtle) mixing setting), can you still get high-quality results? I find in the backend settings that it completes the face swap in just 18 steps.
-
Nice one, I'll go try the face swap and get back here if anything is confusing.
-
There are a few threads complaining about FaceSwap quality, because the "obvious" way to do it (without going into the debugging settings) does in fact give bad results that don't resemble the target. A few other threads here give some good tips, but they rely on hard-to-remember debugging options and/or link to a long video.
Instead, my method here is simple, reliable, and easy-to-remember:
Note: this method (and many other methods) works best as a single Generate step that doesn't mix in other things you might want to be doing. For example, if you're generating a new scene, get that right with the wrong face first, then do the swap separately.
Examples:
Input image, with face to be replaced, generated by Fooocus
Face image of Elon Musk, found online. I claim fair use here.
Output with weight = 0.75
Output with weight = 1.0 - more aspects of the face image stand out, arguably 0.75 blends in better here. But your use-case may suit weight = 1.0 better.
Output with weight = 0.6 - blends in better than 0.75 but features of the original face also show through. Again, your use-case may suit this better.
Another output with weight = 0.6 - simply retrying a few times can give you results closer to what you want
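For intuition about why the weight changes the blend: in IP-Adapter-style image prompting (which, as I understand it, is what Fooocus's FaceSwap mode builds on), the face embedding's contribution to the conditioning is scaled by the weight before being combined with the prompt conditioning. A minimal conceptual sketch, not Fooocus's actual implementation (the function and arrays below are stand-ins):

```python
import numpy as np

def blend_conditioning(base_cond, face_cond, weight):
    """Conceptual sketch: scale the face contribution by `weight`
    before adding it to the base prompt conditioning."""
    return base_cond + weight * face_cond

base = np.array([1.0, 0.0])   # stand-in for the prompt conditioning
face = np.array([0.0, 1.0])   # stand-in for the face-embedding conditioning

for w in (0.6, 0.75, 1.0):
    mixed = blend_conditioning(base, face, w)
    print(w, mixed)  # larger w -> the face component dominates more
```

This matches what the examples show: at weight = 1.0 the face features stand out most, while 0.6-0.75 lets more of the original image's conditioning show through.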
Bad examples:
Output with Stop At = 0.9 - clearly this is not Elon Musk
Output with Vary (Subtle), Stop At = 1.0, Weight = 0.75 - this resembles Elon Musk but the eyes and certain other things are all weird. The results are even worse if your face image is bad quality.
Output with Inpainting (Method: Improve Detail), Stop At = 1.0, Weight = 0.75 - this vaguely resembles Elon Musk but is clearly inferior to Upscale.