The official implementation of the paper Cross-modulated Attention Transformer for RGBT Tracking.
We train and test our model on two NVIDIA RTX 2080 Ti GPUs.
Download the SOT-pretrained model from OSTrack, or download CAFormer_SOTPretrained.pth.tar directly.
```bash
cd your_proj_path
python utils/make_pretrained.py  # only needed if you downloaded the model weights from OSTrack
sh experiments/caformer/train.sh
```
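The `make_pretrained.py` step adapts the single-modality OSTrack weights before RGBT training. As a rough, hedged illustration of that kind of adaptation (the `backbone.` / `backbone_t.` key prefixes below are hypothetical and not the script's actual behavior — see `utils/make_pretrained.py` for the real conversion), duplicating backbone weights so a second modality branch starts from the same SOT-pretrained values might look like:

```python
# Hypothetical sketch only: copy single-modality backbone weights under a
# second prefix so a thermal branch can be initialized from the same
# SOT-pretrained values. The prefixes are assumptions, not this repo's keys.
def duplicate_backbone(state_dict, src_prefix="backbone.", dst_prefix="backbone_t."):
    """Return a new state dict in which every `src_prefix` entry is also
    copied under `dst_prefix`; all other entries are kept unchanged."""
    out = dict(state_dict)
    for key, value in state_dict.items():
        if key.startswith(src_prefix):
            out[dst_prefix + key[len(src_prefix):]] = value
    return out
```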
```bash
cd your_proj_path
# For RGBT234
sh experiments/caformer/test234.sh
# For LasHeR
sh experiments/caformer/test245.sh
```

You can use the files in eval_tracker to quickly evaluate the tracking results.
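The evaluation scripts report the precision/success pairs shown in the results table below. As a minimal, self-contained sketch of how these two standard RGBT tracking metrics are typically computed (the function names and the 20-pixel precision threshold are common conventions, not this repository's eval_tracker code):

```python
# Hedged sketch of the standard precision rate (PR) and success rate (SR)
# metrics used on RGBT benchmarks; not taken from eval_tracker itself.
import numpy as np

def center_error(pred, gt):
    """Euclidean distance between box centers; boxes are (x, y, w, h), shape (N, 4)."""
    pc = pred[:, :2] + pred[:, 2:] / 2
    gc = gt[:, :2] + gt[:, 2:] / 2
    return np.linalg.norm(pc - gc, axis=1)

def overlap(pred, gt):
    """Per-frame IoU between (x, y, w, h) boxes."""
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def precision_rate(pred, gt, threshold=20.0):
    """Fraction of frames whose center error is within `threshold` pixels."""
    return float((center_error(pred, gt) <= threshold).mean())

def success_rate(pred, gt):
    """Area under the success curve over IoU thresholds in [0, 1],
    which equals the mean IoU."""
    return float(overlap(pred, gt).mean())
```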
| Training Dataset | RGBT234 (PR/SR) | LasHeR (PR/SR) | VTUAV-ST (PR/SR) | model & result |
|---|---|---|---|---|
| LasHeR | 88.3/66.4 | 70.0/55.6 | -/- | download |
| VTUAV | -/- | -/- | 88.6/76.2 | download |
```bibtex
@inproceedings{CAFormer,
  title={Cross-modulated Attention Transformer for RGBT Tracking},
  author={Yun Xiao and Jiacong Zhao and Andong Lu and Chenglong Li and Bing Yin and Yin Lin and Cong Liu},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2025}
}
```
Thanks to OSTrack, which helped us quickly implement our ideas.