Hello, and thank you for open-sourcing the SLA project. Here is my experience using it on wan2.2:
- Training performance improved by over 30%;
- In scenarios where FA2 did not run out of memory, SLA now hits OOM (out-of-memory) errors.
Finally, I have two questions:
- Why does this OOM occur?
- Are there any optimization techniques for SLA, particularly regarding compute efficiency and VRAM usage?