
Seems that the abs activation is redundant? #13

Open
yzslab opened this issue Dec 12, 2024 · 4 comments


yzslab commented Dec 12, 2024

Hi, I found that the opacities are no longer optimized after the activation function is replaced with abs here, because the replace_tensor_to_optimizer operation is missing.
Since the stored opacities come from the sigmoid, whose range is $(0, 1)$, and they are never updated afterwards, they can never fall below 0. Therefore the abs activation seems redundant: simply returning the tensor unchanged suffices.
I tested removing abs with the modification below, and it produced the same metrics:

```diff
diff --git a/scene/gaussian_model.py b/scene/gaussian_model.py
index 8c65c4c..0e46ba8 100644
--- a/scene/gaussian_model.py
+++ b/scene/gaussian_model.py
@@ -46,14 +46,14 @@ class GaussianModel:
             self.inverse_opacity_activation = inverse_sigmoid
         else:
             print("Absolute rendering mode")
-            self.opacity_activation = torch.abs
+            self.opacity_activation = identity_gate
             self.inverse_opacity_activation = identity_gate
 
         self.rotation_activation = torch.nn.functional.normalize
 
     def modify_functions(self):
         old_opacities = self.get_opacity.clone()
-        self.opacity_activation = torch.abs
+        self.opacity_activation = identity_gate
         self.inverse_opacity_activation = identity_gate
         self._opacity = self.opacity_activation(old_opacities)
```
Is this expected?

saswat0 (Collaborator) commented Dec 16, 2024

@yzslab You are correct. The abs activation is kept just as a failsafe, and we don't expect any performance improvement from it.

yzslab (Author) commented Dec 16, 2024

> @yzslab You are correct. The abs activation is kept just as a failsafe, and we don't expect any performance improvement from it.

Hi, does modify_functions correspond to Section 3.4 of your paper?

> Therefore, we convert the regular, capped Gaussian primitives to high-opacity Gaussians after reaching the midpoint of our training (15K iterations). This involves replacing the opacity activation with abs and clamping blending weights to 1 from above during rendering. As shown by our ablation, this change positively impacts quality metrics, particularly PSNR.

Could you please confirm if this conclusion is still valid?
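For concreteness, my reading of that passage is roughly the sketch below. The function and variable names are my own placeholders, not your implementation; it only illustrates combining an abs opacity activation with blending weights clamped to 1 from above.

```python
import torch

def blend_weight(raw_opacity, gaussian_falloff):
    """Toy illustration of the Section 3.4 conversion as I understand it."""
    opacity = torch.abs(raw_opacity)    # abs activation: opacity no longer capped by a sigmoid
    alpha = opacity * gaussian_falloff  # per-pixel contribution of one Gaussian
    return torch.clamp(alpha, max=1.0)  # clamp the blending weight to 1 from above
```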

saswat0 (Collaborator) commented Dec 20, 2024

@yzslab Apologies for the late response.

Thanks for pointing out this issue. We've patched the bug with d1fced9.

yzslab (Author) commented Dec 24, 2024

> @yzslab Apologies for the late response.
>
> Thanks for pointing out this issue. We've patched the bug with d1fced9.

Thanks, but have you tested it? It seems that such a simple solution cannot produce the expected results.
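For example, I would expect a complete fix to also re-register the converted opacities with the optimizer so that they keep receiving gradients, roughly along these lines. This is an untested sketch of my own, and it assumes the repository still carries the replace_tensor_to_optimizer helper from the original gaussian-splatting codebase:

```python
import torch

# Inside class GaussianModel (sketch, untested):
def modify_functions(self):
    # Opacities coming out of the sigmoid lie in (0, 1), so reading them back
    # through abs (or an identity) leaves their values unchanged.
    old_opacities = self.get_opacity.clone()
    self.opacity_activation = torch.abs
    self.inverse_opacity_activation = identity_gate
    # Push the converted tensor back into the optimizer instead of only
    # overwriting self._opacity, so it keeps being optimized after 15K iterations.
    optimizable_tensors = self.replace_tensor_to_optimizer(old_opacities, "opacity")
    self._opacity = optimizable_tensors["opacity"]
```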
