3D Gaussian Splatting (3DGS) vulnerabilities are underexplored. We highlight new potential threats to robotic learning for autonomous navigation and other safety-critical 3DGS applications.
CLOAK is a poisoning attack that replaces benign images in the 3DGS training dataset with adversarial images for targeted camera viewpoints, as sketched below.
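A minimal sketch of this poisoning idea, assuming a hypothetical training entry point `train_3dgs(images, cameras)` and an attacker-supplied `make_adversarial(image)` perturbation function; the function and variable names are illustrative, not the paper's implementation.

```python
# Sketch of a CLOAK-style dataset poisoning step (names are illustrative assumptions).
import copy

def poison_dataset(images, cameras, targeted_view_ids, make_adversarial):
    """Replace benign training images with adversarial ones for targeted viewpoints."""
    poisoned = copy.deepcopy(images)
    for view_id in targeted_view_ids:
        # Only images captured from attacker-chosen camera viewpoints are swapped;
        # all other training views stay benign, so the scene still reconstructs normally.
        poisoned[view_id] = make_adversarial(images[view_id])
    return poisoned, cameras

# The 3DGS scene is then trained as usual on the poisoned dataset, e.g.:
# splats = train_3dgs(*poison_dataset(images, cameras, targeted_view_ids, make_adversarial))
```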
DAGGER generalizes the projected gradient descent (PGD) attack by exploiting the differentiability of the 3DGS scene representation, manipulating splat color, scaling, translation, rotation, or opacity (alpha) attributes to fool object detectors.
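A PGD-style sketch of this kind of attack, assuming the splats are stored as a dict of tensors, a differentiable renderer `render(splats, camera)`, and a detector loss we can backpropagate through; `render`, `detector_loss`, and the attribute names are assumptions for illustration, not the paper's code.

```python
# Sketch of a DAGGER-like PGD attack on one splat attribute (illustrative names).
import torch

def dagger_pgd(splats, camera, render, detector_loss, target_label,
               attr="colors", step_size=0.01, eps=0.05, num_steps=40):
    """Perturb one splat attribute (e.g. color, scale, translation, rotation, alpha)
    with projected gradient descent so the rendered view fools the detector."""
    original = splats[attr].detach().clone()
    perturbed = original.clone().requires_grad_(True)

    for _ in range(num_steps):
        splats[attr] = perturbed
        image = render(splats, camera)                 # differentiable 3DGS rasterization
        loss = detector_loss(image, target_label)      # e.g. loss driving detections toward the target
        grad, = torch.autograd.grad(loss, perturbed)

        with torch.no_grad():
            perturbed -= step_size * grad.sign()       # signed gradient step on the attribute
            # Project back into an L-infinity ball of radius eps around the original values.
            perturbed.copy_(original + (perturbed - original).clamp(-eps, eps))

    splats[attr] = perturbed.detach()
    return splats
```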
Led by Matthew Hull, 3D Gaussian Splat Vulnerabilities is the result of a collaboration between the Polo Club of Data Science at Georgia Tech and the Technology Innovation Institute. 3D Gaussian Splat Vulnerabilities was created by Matthew Hull, Haoyang Yang, Pratham Mehta, Mansi Phute, Aeree Cho, Haoran Wang, Matthew Lau, Wenke Lee, Willian Lunardi, Martin Andreoni, and Duen Horng Chau.
To learn more about 3D Gaussian Splat Vulnerabilities, please read our paper, presented at the CVPR 2025 Workshop on Neural Fields Beyond Conventional Cameras:
@inproceedings{hull20253dgsvulnerabilities,
  title={3D Gaussian Splat Vulnerabilities},
  author={Matthew Hull and Haoyang Yang and Pratham Mehta and Mansi Phute and Aeree Cho and Haoran Wang and Matthew Lau and Wenke Lee and Willian Lunardi and Martin Andreoni and Duen Horng Chau},
  booktitle={CVPR Workshop on Neural Fields Beyond Conventional Cameras},
  year={2025}
}
