# SiamBAN Training Tutorial

This tutorial describes how to train SiamBAN.

## Add SiamBAN to your PYTHONPATH

```bash
export PYTHONPATH=/path/to/smalltrack:$PYTHONPATH
```
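
To confirm the repository is actually visible to Python, a quick sanity check (a minimal sketch; the `smalltrack` substring matches the placeholder path used above):

```bash
# Should print the entry added by the export above; an empty list
# means the export did not take effect in this shell.
python -c "import sys; print([p for p in sys.path if 'smalltrack' in p])"
```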

## Prepare training dataset

Detailed preparation instructions for each sub-dataset are listed in the training_dataset directory.
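
As a rough sketch of what the preparation usually looks like in the SiamBAN family of codebases (the sub-dataset and script names below are assumptions, not verified against this repository; follow the per-dataset instructions in training_dataset):

```bash
# Hypothetical example for one sub-dataset: crop training patches,
# then generate the json index that the data loader reads.
cd training_dataset/coco
python par_crop.py   # crop patches around each annotated instance
python gen_json.py   # build the train/val json annotation index
cd ../..
```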

## Download pretrained backbones

Download the pretrained backbones from here and put them in the pretrained_models directory.
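
For example (a sketch; the filename resnet50.model follows the SiamBAN convention and is an assumption here, so adjust it to whatever the download actually provides):

```bash
# Place the downloaded backbone weights where the configs expect them.
mkdir -p pretrained_models
mv ~/Downloads/resnet50.model pretrained_models/
```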

## Training

To train a model, run train.py with the desired configs:

```bash
cd experiments/smalltrack_r50_l234
```

### Multi-processing Distributed Data Parallel Training

Refer to the PyTorch distributed training documentation for a detailed description.

Single node, multiple GPUs (we use 3 GPUs):

```bash
CUDA_VISIBLE_DEVICES=0,1,2 \
python -m torch.distributed.launch \
    --nproc_per_node=3 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml
```
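
If only a single GPU is available, the same launcher works with one process (a minimal variation of the command above, not a separately documented mode):

```bash
# Single GPU: one worker process, same entry point and config.
CUDA_VISIBLE_DEVICES=0 \
python -m torch.distributed.launch \
    --nproc_per_node=1 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml
```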

## Testing

After training, you can test the snapshots on a benchmark dataset (the examples below use UAV20L). For example, to test the snapshots from epoch 10 to 20:

```bash
START=10
END=20
seq $START 1 $END | \
    xargs -I {} echo "snapshot/checkpoint_e{}.pth" | \
    xargs -I {} \
    python -u ../../tools/test.py \
        --snapshot {} \
        --config config.yaml \
        --dataset UAV20L 2>&1 | tee logs/test_dataset.log
```
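
Equivalently, a plain shell loop may be easier to read (the same test.py invocation as above, just without xargs):

```bash
# Run the tester once per epoch checkpoint; tee -a appends so the
# log keeps the output of every epoch.
for e in $(seq 10 20); do
    python -u ../../tools/test.py \
        --snapshot snapshot/checkpoint_e${e}.pth \
        --config config.yaml \
        --dataset UAV20L 2>&1 | tee -a logs/test_dataset.log
done
```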

Or, run the epochs in parallel with MPI:

```bash
mpiexec -n 3 python ../../tools/test_epochs.py \
    --start_epoch 10 \
    --end_epoch 20 \
    --gpu_nums 3 \
    --threads 3 \
    --dataset UAV20L
```

## Evaluation

```bash
# --tracker_path: result path
# --dataset: dataset name
# --num: number of threads used for evaluation
# --tracker_prefix: tracker name
python ../../tools/eval.py \
    --tracker_path ./results \
    --dataset UAV20L \
    --num 4 \
    --tracker_prefix 'ch*'
```
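
To evaluate a single checkpoint instead of everything matching 'ch*', narrow the prefix (this assumes test.py names each result directory after its snapshot file, as in the SiamBAN codebase):

```bash
# Evaluate only the epoch-20 results under ./results.
python ../../tools/eval.py \
    --tracker_path ./results \
    --dataset UAV20L \
    --num 4 \
    --tracker_prefix 'checkpoint_e20'
```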

## Hyper-parameter Search

The tuning toolkit runs indefinitely; stop it manually once you are satisfied with the searched parameters.

```bash
python ../../tools/tune.py \
    --dataset UAV20L \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0
```
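
Since the search never terminates on its own, it can be convenient to run it in the background and kill it later (plain shell job control, nothing specific to this repository):

```bash
# Launch the search detached from the terminal and record its PID.
nohup python ../../tools/tune.py \
    --dataset UAV20L \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0 > tune.log 2>&1 &
echo $! > tune.pid

# Later, once the searched parameters look good enough:
kill "$(cat tune.pid)"
```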