(Previously: DDPM-CD: Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models)
Wele Gedara Chaminda Bandara, Nithin Gopalakrishnan Nair, Vishal M. Patel
Official PyTorch implementation of DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection / Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models
- 🎉 DDPM-CD has been accepted at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025.
- ❗ Paper-v3: We have completely revised the writing of our paper. Please refer to v3 on arXiv.
Images sampled from the diffusion model pre-trained on off-the-shelf remote sensing images.
- The generated images exhibit common objects typically observed in real remote sensing imagery, including buildings, trees, roads, vegetation, water surfaces, etc.
- This showcases the remarkable capability of diffusion models to grasp essential semantics from the training dataset.
- Although our primary focus is not image synthesis, we explore the effectiveness of the DDPM as a feature extractor for change detection.
- We fine-tune a lightweight change classifier on the feature representations produced by the pre-trained DDPM, using the change labels for supervision.
Before using this repository, make sure you have the following prerequisites installed:
You can install PyTorch with the following command (on Linux):
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
To get started, clone this repository:
git clone https://github.com/wgcban/ddpm-cd.git
Next, create the conda environment named `ddpm-cd` by executing the following command:
conda env create -f environment.yml
Then activate the environment:
conda activate ddpm-cd
Download the datasets and place them in the `dataset` folder. See Section 5.1 for download links.
If you only wish to test, download the pre-trained DDPM and fine-tuned DDPM-CD models and place them in the `experiments` folder. See Section 7 for links.
All train/val/test statistics are automatically uploaded to wandb; please refer to the wandb-quick-start documentation if you are not familiar with using wandb.
Place all the remote sensing images sampled from Google Earth Engine, together with any other publicly available remote sensing images, in the `dataset` folder, or create a symlink to them.
We use `ddpm_train.json` to set up the configuration. Update the dataset `name` and `dataroot` in the JSON file, then run the following command to start training the diffusion model. The results and log files will be saved to the `experiments` folder, and all metrics are uploaded to wandb.
python ddpm_train.py --config config/ddpm_train.json -enable_wandb -log_eval
If you want to resume training from a previously saved checkpoint, provide the path to the saved model in `[path][resume_state]`; otherwise keep it as `null`.
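For reference, the fields mentioned above sit roughly as follows in `config/ddpm_train.json`. The exact nesting and the example values below are assumptions made for illustration only, so check the shipped config file for the authoritative structure.

```json
{
  "datasets": {
    "train": {
      "name": "rs-images",
      "dataroot": "dataset/rs_images"
    }
  },
  "path": {
    "resume_state": null
  }
}
```

Here `dataroot` points to the folder (or symlink) created in the previous step, and `resume_state` stays `null` for a fresh run.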
If you want to generate samples from the pre-trained DDPM, first update the path to the trained diffusion model in `[path][resume_state]`, then run the following command.
python ddpm_train.py --config config/ddpm_sampling.json --phase val
The generated images will be saved in `experiments`.
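As a hedged example, pointing `[path][resume_state]` in `config/ddpm_sampling.json` at a downloaded checkpoint could look like the sketch below; the checkpoint prefix is a placeholder (typically the common prefix of the `*_gen.pth` and `*_opt.pth` files, but verify against your downloaded files).

```json
{
  "path": {
    "resume_state": "experiments/pretrained_ddpm/I190000_E97"
  }
}
```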
Download the change detection datasets from the following links and place them inside your `datasets` folder.
Then, update the paths to those folders in `[datasets][train][dataroot]`, `[datasets][val][dataroot]`, and `[datasets][test][dataroot]` in `levir.json`, `whu.json`, `dsifn.json`, and `cdd.json`.
Update the path to the pre-trained diffusion model weights (`*_gen.pth` and `*_opt.pth`) in `[path][resume_state]` in `levir.json`, `whu.json`, `dsifn.json`, and `cdd.json`.
Specify the time steps used to extract feature representations in `[model_cd][t]`. As shown in the ablation section of the paper, our best model is obtained with time steps {50, 100, 400}; however, time steps {50, 100} also work well.
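Putting the above together, a minimal sketch of the edited fields in `levir.json` might look as follows. The dataroot paths and checkpoint prefix are placeholders, and only the keys discussed above are shown; the actual config contains additional entries.

```json
{
  "datasets": {
    "train": { "dataroot": "datasets/LEVIR-CD/train" },
    "val":   { "dataroot": "datasets/LEVIR-CD/val" },
    "test":  { "dataroot": "datasets/LEVIR-CD/test" }
  },
  "path": {
    "resume_state": "experiments/pretrained_ddpm/I190000_E97"
  },
  "model_cd": {
    "t": [50, 100, 400]
  }
}
```

The same edits apply to `whu.json`, `dsifn.json`, and `cdd.json` with their respective dataset paths.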
Run one of the following commands to start training:
- Training on LEVIR-CD:
python ddpm_cd.py --config config/levir.json -enable_wandb -log_eval
- Training on WHU-CD:
python ddpm_cd.py --config config/whu.json -enable_wandb -log_eval
- Training on DSIFN-CD:
python ddpm_cd.py --config config/dsifn.json -enable_wandb -log_eval
- Training on CDD:
python ddpm_cd.py --config config/cdd.json -enable_wandb -log_eval
The results will be saved in `experiments` and also uploaded to wandb.
To obtain the predictions and performance metrics (IoU, F1, and OA), first provide the path to the pre-trained diffusion model in `[path][resume_state]` and the path to the trained change detection model (the best model) in `[path_cd][resume_state]` in `levir_test.json`, `whu_test.json`, `dsifn_test.json`, and `cdd_test.json`. Also make sure you specify the time steps used during fine-tuning in `[model_cd][t]`.
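For example, the corresponding fields in `levir_test.json` might be filled in as in the sketch below. Both checkpoint paths are placeholders invented here for illustration, and `t` must match the value used during fine-tuning.

```json
{
  "path": {
    "resume_state": "experiments/pretrained_ddpm/I190000_E97"
  },
  "path_cd": {
    "resume_state": "experiments/ddpm-cd-levir/checkpoint/best_cd_model"
  },
  "model_cd": {
    "t": [50, 100, 400]
  }
}
```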
Run one of the following commands to start testing:
- Test on LEVIR-CD:
python ddpm_cd.py --config config/levir_test.json --phase test -enable_wandb -log_eval
- Test on WHU-CD:
python ddpm_cd.py --config config/whu_test.json --phase test -enable_wandb -log_eval
- Test on DSIFN-CD:
python ddpm_cd.py --config config/dsifn_test.json --phase test -enable_wandb -log_eval
- Test on CDD:
python ddpm_cd.py --config config/cdd_test.json --phase test -enable_wandb -log_eval
Predictions will be saved in `experiments`, and performance metrics will be uploaded to wandb.
The pre-trained diffusion model can be downloaded from: Dropbox
Fine-tuned change detection networks can be downloaded from the following links:
- `"t": [50, 100]`
- LEVIR-CD
Dropbox-cd-levir-50-100
- WHU-CD
Dropbox-cd-whu-50-100
- DSIFN-CD
Dropbox-cd-dsifn-50-100
- CDD-CD
Dropbox-cd-cdd-50-100
- `"t": [50, 100, 400]` (Best Model)
- LEVIR-CD
Dropbox-cd-levir-50-100-400
- WHU-CD
Dropbox-cd-whu-50-100-400
- DSIFN-CD
Dropbox-cd-dsifn-50-100-400
- CDD-CD
Dropbox-cd-cdd-50-100-400
- `"t": [50, 100, 400, 650]`
- LEVIR-CD
Dropbox-cd-levir-50-100-400-650
- WHU-CD
Dropbox-cd-whu-50-100-400-650
- DSIFN-CD
Dropbox-cd-dsifn-50-100-400-650
- CDD-CD
Dropbox-cd-cdd-50-100-400-650
If you face a problem when downloading from Dropbox, try one of the following options:
- [GoogleDrive] All pre-trained models on Google Drive: GoogleDrive-pretrained-models
- [GitHub] Pre-trained models on GitHub
LEVIR-CD-Train-Val-Reports-Wandb
WHU-CD-Train-Val-reports-Wandb
DSIFN-CD-Train-Val-Reports-Wandb
CDD-CD-Train-Val-Reports-Wandb
The average quantitative change detection results on the LEVIR-CD, WHU-CD, DSIFN-CD, and CDD test sets. "-" indicates not reported or not available to us. (IN1k) indicates that the pre-training process is initialized with ImageNet pre-trained weights. IN1k, IBSD, and GE refer to ImageNet1k, the Inria Building Segmentation Dataset, and Google Earth, respectively.
- LEVIR-CD
(a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are indicated in white, true negatives (no-change class) are indicated in black, and false positives plus false negatives are indicated in red.
- WHU-CD
(a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are indicated in white, true negatives (no-change class) are indicated in black, and false positives plus false negatives are indicated in red.
- DSIFN-CD
(a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are indicated in white, true negatives (no-change class) are indicated in black, and false positives plus false negatives are indicated in red.
- CDD
(a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are indicated in white, true negatives (no-change class) are indicated in black, and false positives plus false negatives are indicated in red.
@misc{bandara2024ddpmcdv2,
title = {Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models},
author = {Bandara, Wele Gedara Chaminda and Nair, Nithin Gopalakrishnan and Patel, Vishal M.},
year = {2022},
eprint={2206.11892},
archivePrefix={arXiv},
primaryClass={cs.CV},
doi = {10.48550/ARXIV.2206.11892},
}
@misc{bandara2024ddpmcdv3,
title={DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection},
author={Wele Gedara Chaminda Bandara and Nithin Gopalakrishnan Nair and Vishal M. Patel},
year={2024},
eprint={2206.11892},
archivePrefix={arXiv},
primaryClass={cs.CV},
doi = {10.48550/ARXIV.2206.11892},
}
- The code of the diffusion model is from here.