
when training in 256x256, the output is 256x512 pixels #14

Open
notnot opened this issue Mar 20, 2022 · 1 comment

Comments


notnot commented Mar 20, 2022

First, thanks for this great repository. It is very useful for studying the stylegan2 architecture!

When training at 256x256 resolution, the output images have a size of 256x512 (h x w). These are in fact two images stacked on top of each other. I can easily 'unstack' this output by reshaping the tensor, but I wonder why it happens? If my batch size is 2, I get 4 outputs. This will become problematic when I increase the resolution and need to generate just a single 512x512 image; I don't want the system to actually generate a 512x1024.

The two 'stacked' images are also quite similar.


notnot commented Mar 20, 2022

OK, after diving deeper into the code I have discovered that Trainer.gen_samples() is stacking two variants of outputs. It is a feature, not a bug; I will adapt the code to suit my needs.
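For anyone hitting the same thing, here is a minimal sketch of how such a stacked output could be split back into individual images. This assumes the generator returns a tensor of shape (B, C, 2*H, W) with the two variants stacked along the height axis, as described above; the shapes and variable names are illustrative, not the repository's actual API, and numpy stands in for whatever tensor library the code uses.

```python
import numpy as np

# Illustrative shapes only (the issue uses H = W = 256; small values here
# keep the example cheap). The assumed layout is (batch, channels, 2*H, W),
# with two generated variants stacked vertically per sample.
B, C, H, W = 2, 3, 8, 8
stacked = np.random.rand(B, C, 2 * H, W).astype(np.float32)

# Option 1: slice the height axis into the two stacked variants.
top = stacked[:, :, :H, :]      # first variant, shape (B, C, H, W)
bottom = stacked[:, :, H:, :]   # second variant, shape (B, C, H, W)

# Option 2: fold the variants into the batch dimension, giving 2*B
# ordinary (C, H, W) images. Splitting the 2*H axis into (2, H) works
# because the two halves are contiguous blocks along the height.
unstacked = (
    stacked.reshape(B, C, 2, H, W)   # (B, C, variant, H, W)
    .transpose(0, 2, 1, 3, 4)        # (B, variant, C, H, W)
    .reshape(2 * B, C, H, W)
)
```

With this layout, `unstacked[0]` and `unstacked[1]` are the top and bottom halves of the first sample, which matches the observation that a batch size of 2 yields 4 outputs.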

@notnot changed the title from "when training in 256x56, the output is 256x512 pixels" to "when training in 256x256, the output is 256x512 pixels" on Mar 20, 2022