
How were the ground truth normal maps computed for training the pix2pix network? #197

Open
janehwu opened this issue Jul 18, 2023 · 0 comments


janehwu commented Jul 18, 2023

I understand that the normal maps are computed in camera space, but could you please elaborate on the exact transformation from world-space normals to camera space? For example, how are the x, y, z axes defined in camera space?
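
For concreteness, this is the convention I've been assuming so far (a minimal sketch; `R` here is the rotation part of a world-to-camera view matrix, and the axis conventions are exactly the part I'm unsure about):

```python
import numpy as np

def world_to_camera_normals(normals_world, R):
    """Rotate world-space unit normals into camera space.

    normals_world: (N, 3) array of unit normals in world coordinates.
    R: (3, 3) rotation part of the world-to-camera (view) matrix.
       For a rigid transform, the inverse-transpose of R is R itself,
       so normals rotate by R directly.
    """
    normals_cam = normals_world @ R.T
    # Re-normalize to guard against numerical drift.
    return normals_cam / np.linalg.norm(normals_cam, axis=1, keepdims=True)
```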

I've been looking at the code below, but when I pass a mesh predicted by the demo code to `_render_normal`, I get a blank image, which suggests these are not the right transformations:

```python
def _render_normal(self, mesh, deg):
```
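
For what it's worth, before blaming the transform I've been ruling out a degenerate mesh with a quick sanity check (a minimal sketch using trimesh; the file name is a stand-in for the demo output):

```python
import numpy as np
import trimesh

# 'result_demo.obj' is a stand-in for the mesh produced by the demo code.
mesh = trimesh.load('result_demo.obj', process=False)

# A blank render often means the mesh sits outside the view frustum
# or the normals are degenerate, rather than a wrong rotation.
print('bounding box:', mesh.bounds)
norms = np.linalg.norm(mesh.vertex_normals, axis=1)
print('vertex normal lengths:', norms.min(), norms.max())
```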

In the demo code, it looks like the normals are predicted directly by the network, so I'm having trouble working out which coordinate system they are in:

```python
color = index(image_tensor[:1], uv).detach().cpu().numpy()[0].T
```
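
For reference, my reading of `index` is that it bilinearly samples the predicted normal image at the projected UV coordinates, roughly like the sketch below (a paraphrase, not the repository's exact code):

```python
import torch
import torch.nn.functional as F

def index(feat, uv):
    """Sample feat (B, C, H, W) at uv (B, 2, N), with uv in [-1, 1]."""
    uv = uv.transpose(1, 2).unsqueeze(2)                   # (B, N, 1, 2)
    samples = F.grid_sample(feat, uv, align_corners=True)  # (B, C, N, 1)
    return samples[..., 0]                                 # (B, C, N)
```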

In summary, I'm trying to use the normal maps predicted by your pretrained pix2pix network, but to do so I need to know how the ground truth normal maps used to train this network were computed.
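
My working assumption for decoding the output is the standard [-1, 1] encoding of the three normal components per pixel, i.e. something like the following, where the channel order and axis signs are precisely what I'd like confirmed:

```python
import torch

# Stand-in for the pix2pix output: (B, 3, H, W) with values in [-1, 1].
pred = torch.rand(1, 3, 256, 256) * 2 - 1

normal = pred.permute(0, 2, 3, 1)                                # (B, H, W, 3)
normal = normal / normal.norm(dim=-1, keepdim=True).clamp(min=1e-8)
nx, ny, nz = normal.unbind(dim=-1)  # which axes point right/up/toward the camera?
```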

Thank you!
