DGA: Direction-Guided Attack Against Optical Aerial Detection in Camera Shooting Direction-Agnostic Scenarios
TGRS, 2024. Yue Zhou, Shuqi Sun, Xue Jiang, Guozheng Xu, Fengyuan Hu, Ze Zhang, Xingzhao Liu.
Patch-based adversarial attacks have increasingly aroused concern because of their potential applications in military and civilian fields. In aerial imagery, numerous targets exhibit inherent directionality, such as vehicles and ships, giving rise to oriented object detection tasks; similarly, adversarial patches have an intrinsic orientation because they are not perfectly symmetric. Existing methods presuppose a static alignment between the adversarial patch's orientation and the camera's coordinate system, an assumption that is frequently violated in aerial images, so their effectiveness degrades in real-world scenarios. In this paper, we investigate the often-neglected role of patch orientation in adversarial attacks and its impact on camouflage effectiveness, particularly when the patch orientation is not aligned with the target. We propose a new Direction-Guided Attack (DGA) framework for deceiving real-world aerial detectors, which shows robust and adaptable attack performance in camera shooting direction-agnostic (CSDA) scenarios. The core idea of DGA is to use affine transformations to constrain the relative orientation of the patch to the target and to introduce three types of loss that reduce target detection confidence, keep the colors printable, and smooth the patch colors. We introduce a direction-guided evaluation methodology to bridge the gap between patch performance in the digital domain and its actual real-world efficacy. Moreover, we establish a drone-based vehicle detection dataset (SJTU-4K), which labels the orientation of each target, to assess the robustness of patches under various shooting altitudes and views. Extensive proportionally scaled and 1:1 experiments in physical scenarios demonstrate the superiority and potential of the proposed framework for real-world attacks.
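For readers who just want the gist, below is a minimal PyTorch sketch of the core idea, not the released implementation: the patch is rotated to follow each target's heading before being pasted, and the objective combines the three losses mentioned above. The function names, the masking trick, and the loss weights are illustrative assumptions.

```python
import torch
import torchvision.transforms.functional as TF


def paste_oriented_patch(image, patch, cx, cy, angle_deg):
    """Rotate `patch` to the target's heading and paste it at the target centre.

    image: (3, H, W) tensor in [0, 1]; patch: (3, ph, pw) learnable tensor;
    cx, cy: target centre in pixels; angle_deg: target orientation in degrees.
    Assumes the pasted region lies fully inside the image.
    """
    rotated = TF.rotate(patch, angle_deg, fill=0.0)            # orientation follows the target
    ph, pw = rotated.shape[-2:]
    y0, x0 = int(cy - ph / 2), int(cx - pw / 2)
    mask = (rotated.sum(dim=0, keepdim=True) > 0).float()      # crude mask of valid patch pixels
    out = image.clone()
    region = out[:, y0:y0 + ph, x0:x0 + pw]
    out[:, y0:y0 + ph, x0:x0 + pw] = mask * rotated + (1 - mask) * region
    return out


def smoothness_loss(patch):
    """Total-variation term: penalise abrupt colour changes between neighbouring pixels."""
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw


def printability_loss(patch, printable_colors):
    """Non-printability score: distance of each pixel to its nearest printable colour.

    printable_colors: (K, 3) tensor of RGB triplets a printer can reproduce.
    """
    pixels = patch.permute(1, 2, 0).reshape(-1, 1, 3)                  # (N, 1, 3)
    dists = ((pixels - printable_colors.view(1, -1, 3)) ** 2).sum(-1)  # (N, K)
    return dists.min(dim=1).values.mean()


# Overall objective (weights w_nps and w_tv are illustrative):
#   loss = detection_confidence(detector, patched_images)
#          + w_nps * printability_loss(patch, printable_colors)
#          + w_tv * smoothness_loss(patch)
# where detection_confidence stands in for the attacked detector's objectness/class score
# on the patched targets, and the patch tensor is updated by gradient descent on `loss`.
```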
mmengine == 0.6.0
mmcv == 2.0.0rc4
mmdet == 3.0.0rc6
mmyolo == 3.0.0rc6
mmrotate == 1.0.0rc1
# Create and activate the environment
conda create -n camors python=3.8 -y
conda activate camors
# Install PyTorch with CUDA 11.3
conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch
# Install the OpenMMLab toolchain
pip install -U openmim
pip install yapf==0.40.1
mim install mmengine==0.6.0
mim install "mmcv==2.0.0rc4"
# Install mmdetection, mmyolo, and mmrotate from source
cd mmdetection
pip install -v -e .
cd ../mmyolo
pip install -v -e .
pip install albumentations
cd ../mmrotate
pip install -v -e .
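As an optional sanity check (not part of the repo), the pinned versions can be verified from Python after installation:

```python
# Print the installed versions of the OpenMMLab packages and PyTorch so they can
# be compared against the version list at the top of this README.
import torch
import mmengine, mmcv, mmdet, mmyolo, mmrotate

for pkg in (mmengine, mmcv, mmdet, mmyolo, mmrotate):
    print(f"{pkg.__name__}: {pkg.__version__}")
print(f"torch: {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```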
Attacks on neural networks are often carried out with stickers or patches, whose colors do not need to resemble the background. However, the area covered by the sticker should not be too large: the perturbation is confined to a narrow region but can have a large magnitude.
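The contrast with globally bounded perturbations can be made concrete with a small, purely illustrative snippet (random tensors, not code from this repo):

```python
import torch

image = torch.rand(3, 640, 640)                 # toy aerial image
patch = torch.rand(3, 80, 80)                   # localised, large-magnitude perturbation

# Globally bounded attack: every pixel may move, but only within a small epsilon budget.
eps = 8 / 255
global_adv = (image + eps * torch.randn_like(image).sign()).clamp(0, 1)

# Patch attack: only about 1.6% of the pixels change, with no epsilon bound at all,
# just the valid pixel range [0, 1].
patched = image.clone()
patched[:, 100:180, 200:280] = patch.clamp(0, 1)
print(f"patched area ratio: {80 * 80 / (640 * 640):.2%}")
```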
Plan of Methods:
- ✔️ DPatch (AAAIW'2019)
- ✔️ OBJ (2019)
- ✔️ APPA (TGRS'2022)
- ✔️ DGA (TGRS'2024)
- 🕒 Patch-Noobj (RS'2021)
- 🕒 APA (RS'2022)
- ➕ APC (2020)
- ➕ AerialAttack (WACV'2022)
- ➕ AdvSticker (TPAMI'2022)
- ➕ SOPP (TPAMI'2022)
- ➕ Adversarial Defense in Aerial Detection
Training the detector model:
python tools/train.py projects/camors/configs/yolov5_s-v61_syncbn_1xb2-100e_sjtu-1024.py
Training the adversarial patch (ensure that you are in the mmrotate directory):
python tools/train.py projects/camors/configs/dga/dga_yolov5_s-v61_syncbn_1xb2-5e_sjtu-1024.py
Test the adversarial patch:
- Uncomment `patch_dir` in dga_yolov5_s-v61_syncbn_1xb2-5e_sjtu-1024.py, as it is used to specify the patch.
- Run:
python tools/test.py projects/camors/configs/dga/dga_yolov5_s-v61_syncbn_1xb2-5e_sjtu-1024.py \
work_dirs/yolov5_s-v61_syncbn_1xb2-100e_sjtu-1024/epoch_100.pth
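For a quick qualitative look at a trained patch outside of tools/test.py, something like the following can be used; the file names are placeholders and this centre paste ignores target orientation:

```python
# Paste a trained patch onto a sample image for a visual sanity check.
# 'patch.png' and 'sample_drone_image.jpg' are placeholder file names.
import torchvision.transforms.functional as TF
from PIL import Image

patch = TF.to_tensor(Image.open('patch.png').convert('RGB'))             # (3, ph, pw)
img = TF.to_tensor(Image.open('sample_drone_image.jpg').convert('RGB'))

ph, pw = patch.shape[-2:]
cy, cx = img.shape[-2] // 2, img.shape[-1] // 2                          # paste at the image centre
img[:, cy - ph // 2: cy - ph // 2 + ph, cx - pw // 2: cx - pw // 2 + pw] = patch

TF.to_pil_image(img).save('patched_preview.jpg')
```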


To collect the required data, we designed a reasonable capture scheme. First, we chose 20 scenes on our campus as experimental sites, including streets and car parks where many kinds of cars are commonly seen. We then used a DJI Mini 3 drone as the capture platform. In our scheme, the flight height ranges from 20 m to 120 m, with 9 flight heights in total. The resolution of the raw images is 4000
The dataset is available now: jbox (Code: jlfr).
If you use this project for attacks in your research, please consider citing:
@article{zhou2024dga,
  title={DGA: Direction-Guided Attack Against Optical Aerial Detection in Camera Shooting Direction-Agnostic Scenarios},
  author={Zhou, Yue and Sun, Shuqi and Jiang, Xue and Xu, Guozheng and Hu, Fengyuan and Zhang, Ze and Liu, Xingzhao},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={62},
  pages={1--22},
  year={2024}
}
@article{zhou2023camonet,
  title={CamoNet: A Target Camouflage Network for Remote Sensing Images Based on Adversarial Attack},
  author={Zhou, Yue and Jiang, Wanghan and Jiang, Xue and Chen, Lin and Liu, Xingzhao},
  journal={Remote Sensing},
  volume={15},
  number={21},
  year={2023}
}