Hi, I have been trying for a couple of hours to use a 3D render as a base for the AI output, without any real success. I can get a single frame rendered via strip-to-image, but strip-to-video produces results that look just like the input strip, as if no styling happens. I tried Zeroscope XL and img2img SDXL without any success. I rendered my 3D sequence (20 frames) and am trying to use it as the base for the styling via the prompts. Is there a tutorial on how to do this properly? Thanks
Replies: 1 comment
The open-source img2vid and vid2vid solutions are not very strong yet. If you select Zeroscope with a video or image as input and video as output, there is a Strip Power value which is now exposed again. If you lower this value to e.g. 0.18, you'll get some text-prompt impact, but not a lot.
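For anyone curious what this roughly corresponds to under the hood, here is a minimal sketch at the Diffusers level. It assumes the Strip Power value maps to the pipeline's `strength` parameter; that mapping, the model ID, the prompt and the frame paths are illustrative, not Pallaidium's actual code:

```python
# Rough vid2vid sketch with Zeroscope in Diffusers.
# Assumption: Pallaidium's "Strip Power" behaves like the pipeline's
# `strength` parameter -- low values stay close to the input frames,
# higher values give the text prompt more influence.
import torch
from PIL import Image
from diffusers import VideoToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Load the rendered 3D sequence (20 frames); path and naming are hypothetical.
# zeroscope_v2_XL was trained around 1024x576, so resize to that.
frames = [
    Image.open(f"render/frame_{i:04d}.png").convert("RGB").resize((1024, 576))
    for i in range(20)
]

result = pipe(
    prompt="oil painting, thick brush strokes",
    video=frames,
    strength=0.18,  # analogous to the low Strip Power value mentioned above
).frames

# Newer Diffusers versions return a list of lists (one list per prompt).
if isinstance(result[0], list):
    result = result[0]
export_to_video(result, "stylized.mp4")
```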
The img2img video option results in flicker; that's what I used for the paint videos. Running ControlNet frame by frame will also flicker.
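The flicker comes from every frame being denoised independently, so nothing ties one frame to the next. Below is a sketch of that per-frame loop, assuming SDXL img2img in Diffusers (model ID, paths and parameter values are illustrative); reusing the same seed for every frame reduces the flicker a bit but does not remove it:

```python
# Per-frame img2img: each frame is stylized on its own, which is exactly
# why the result flickers -- there is no temporal consistency between frames.
import os
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

os.makedirs("stylized", exist_ok=True)

for i in range(20):
    frame = Image.open(f"render/frame_{i:04d}.png").convert("RGB").resize((1024, 1024))
    # Same seed for every frame: reduces, but does not remove, the flicker.
    generator = torch.Generator("cuda").manual_seed(42)
    out = pipe(
        prompt="oil painting, thick brush strokes",
        image=frame,
        strength=0.4,  # how far to move away from the 3D render
        generator=generator,
    ).images[0]
    out.save(f"stylized/frame_{i:04d}.png")
```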
So we'll have to see if something like AnimateDiff or https://github.com/williamyang1991/Rerender_A_Video gets implemented in the Diffusers module, so that Pallaidium and all other open-source generative AI can get to the level of Pika or Runway's Gen-2, but we're not there yet.