about training 240 frames~ #97
Comments
It is possible. Actually, we are limited by GPU memory (80 GB A800), so we only train up to 60 frames.
Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?
And another question: did you ever consider using SVD to generate video? Is the quality of SVD-generated video not good enough?
We've done a lot to save GPU memory. You may check the details of our implementation.
Currently, you can refer to Vista, which is based on SVD but without fine-grained controllability. In our new work, we will discuss the related problem. The new paper will come out soon. Stay tuned.
Thanks a lot.
And can I ask what the difference is between video generation and image generation? Is it just increasing the batch size?
They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-res image generation models only support training with batch size = 1. The training/inference cost of video can easily explode, and the model needs to gain more capability.
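A quick back-of-the-envelope sketch of the point above (the shapes below are illustrative assumptions, not from this repo): adding a temporal dimension multiplies activation memory by the frame count before temporal attention, which itself scales quadratically in frames, is even counted.

```python
def n_elements(shape):
    """Number of tensor elements for a given shape."""
    n = 1
    for d in shape:
        n *= d
    return n

# Hypothetical latent shapes for illustration:
# image batch: (batch, channels, height, width)
# video batch: (batch, channels, frames, height, width)
image_batch = (1, 4, 64, 64)       # one image latent
video_batch = (1, 4, 16, 64, 64)   # same latent, but 16 frames

# The video activation is 16x larger than the image one,
# so batch size 1 for images can already exhaust memory for video.
ratio = n_elements(video_batch) / n_elements(image_batch)
print(ratio)  # 16.0
```

This is why techniques such as gradient checkpointing and small per-GPU batch sizes become mandatory for video models even on 80 GB cards.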
Thanks for your answer. I also found that you don't use any image to generate the latents (you only use BEV) to generate video. My question is: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for long video generation?
I think you want to ask about future frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Anyway, it relies on the ability of the video generation model. |
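One common flavor of such an inference trick (a sketch only; the function names `denoise_step` and `add_noise` are stand-ins for a diffusion model's reverse step and forward noising, not APIs from this repo) is to pin the latents of the known conditioning frames at every denoising step, so the model effectively "inpaints" only the unknown future frames:

```python
import random

def predict_future_frames(known_frames, total_frames,
                          denoise_step, add_noise, steps=50):
    """Inpainting-style future-frame prediction sketch.

    At each reverse-diffusion step, latents of the conditioning frames
    are overwritten with noised versions of the ground truth, keeping
    them consistent while the remaining frames are generated.
    """
    # Start from pure noise for every frame.
    latents = [random.random() for _ in range(total_frames)]
    for t in reversed(range(steps)):
        latents = denoise_step(latents, t)
        for i, frame in enumerate(known_frames):
            # Pin the known frames at the matching noise level.
            latents[i] = add_noise(frame, t)
    return latents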
Thank you very much.
Could you please offer me some help with those inference tricks, or share a link? I want to try them ~
This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon. |
Hi, thanks for your open source again. I just found that there is no difference between the 16-frame YAML and the 61-frame YAML except sc_attn_index, so I'm wondering if I can train 240 frames by just changing sc_attn_index? Looking forward to your reply ~
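For context, here is a sketch of what a per-frame attention index like this might encode (the semantics of `sc_attn_index` in the repo's YAML are an assumption here; the helper below only illustrates that such an index list has to be regenerated, not copied, when the frame count changes):

```python
def build_sc_attn_index(num_frames, window=2):
    """Hypothetical helper: for each frame, attend to the first frame
    (as a keyframe anchor) plus up to `window` preceding frames.
    This is NOT the repo's actual scheme, only an illustration."""
    index = []
    for i in range(num_frames):
        prev = list(range(max(0, i - window), i))
        index.append(sorted(set([0] + prev)) if i > 0 else [0])
    return index
```

If the real config follows a similar pattern, an index built for 61 frames would reference frame positions that don't line up for 240 frames, so it must be rebuilt for the new length (and memory would still be the binding constraint).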