
A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation (BraTS 2020 Challenge; BrainLes2020 paper)


zhangshuang317/two-stage-VAE-Attention-gate-BraTS2020

 
 


PyTorch

The model architecture

[Figure: the two-stage model structure]
Source:

C. Lyu, H. Shu. (2021) A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (BrainLes 2020), LNCS 12658, pp. 435-447. https://doi.org/10.1007/978-3-030-72084-1_39 or https://arxiv.org/abs/2011.02881

Our algorithm ranked in the top 9 in the BraTS 2020 Challenge.

Installation

The implementation is based on Python 3.6.2 and PyTorch 1.0.0. You can use the following commands to set up the environment.

cd path_to_code
pip install -r requirements

Data Preparation

  1. Normalization: this step (a) gathers, for each patient, the multiple modalities (.nii.gz files) and the segmentation label into a single .npy file, and (b) normalizes the MRI images to zero mean and unit standard deviation. Note that each .npy file is roughly 200 MB, so please make sure your disk has enough space.
cd path_to_code
python normalization.py -y 202002

Use -y 2018 or -y 2020 to specify which training dataset (BraTS 2018 or BraTS 2020) to normalize; -y 202001 and -y 202002 refer to the BraTS 2020 validation and testing datasets, respectively.
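For illustration, below is a minimal sketch of the per-patient preprocessing described above: z-score normalization of each modality followed by packing everything into one .npy file. It is not the repository's normalization.py; the file names, the non-zero background masking, and the stacking order are assumptions.

# Minimal sketch of per-modality z-score normalization and .npy packing.
# NOT the repository's normalization.py: file names, background masking and
# stacking order are assumptions for illustration only.
import numpy as np
import nibabel as nib

def zscore(volume):
    """Normalize the non-zero (brain) voxels to zero mean and unit std."""
    mask = volume > 0                              # assume background voxels are zero
    out = volume.astype(np.float32)
    out[mask] = (out[mask] - out[mask].mean()) / (out[mask].std() + 1e-8)
    return out

case = "BraTS20_Training_001"                      # hypothetical case folder/prefix
modalities = ["flair", "t1", "t1ce", "t2"]
volumes = [zscore(nib.load(f"{case}/{case}_{m}.nii.gz").get_fdata()) for m in modalities]
label = nib.load(f"{case}/{case}_seg.nii.gz").get_fdata()

# One (5, H, W, D) array per patient: four normalized modalities plus the label.
np.save(f"{case}.npy", np.stack(volumes + [label]).astype(np.float32))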

  2. Train/test partition: use the script below to split the data. The partition result is stored in train_list.txt and valid_list.txt.
python train_test_split.py
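A hypothetical illustration of such a split is sketched below; the 80/20 ratio and the *.npy glob pattern are assumptions, and train_test_split.py may proceed differently.

# Hypothetical random train/validation split writing train_list.txt and
# valid_list.txt; the 80/20 ratio and the glob pattern are assumptions.
import glob
import random

files = sorted(glob.glob("path_to_npy_data/*.npy"))
random.seed(0)                                    # fixed seed for a reproducible split
random.shuffle(files)

split = int(0.8 * len(files))
with open("train_list.txt", "w") as f:
    f.write("\n".join(files[:split]) + "\n")
with open("valid_list.txt", "w") as f:
    f.write("\n".join(files[split:]) + "\n")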

Usage

The implementations of the two stages are separate.

Training

  • For training the first-stage model, use:
cd Stage1_VAE
python main.py -e num_epoch -l 128 -g num_gpus -f folder_for_models_saving
  • For training the second-stage model, use:
cd Stage2_AttVAE
python main_multi.py -e num_epoch -l 128 -g num_gpus -f folder_for_models_saving

where -l controls the patch size, which is set to 128 during our training. Please adjust the other arguments according to your preference.

Testing

  • For testing the first-stage model, use:
cd Stage1_VAE
python predict_tta.py -g num_gpus -s folder_to_pth -f first_stage_model_weights.pth
  • For testing the second-stage model, use the following command:
cd Stage2_AttVAE
python predict_tta.py -g num_gpus -m folder_for_first-stage_output -f second_stage_model_weights.pth -s folder_to_pth

where -m specifies the folder containing the first-stage segmentation results that will be used by the second-stage model to make predictions.
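The predict_tta.py scripts apply test-time augmentation (TTA). A common form of TTA, sketched below, averages the model's probability outputs over axis flips; the exact augmentations used in the repository may differ, and the model and sigmoid activation here are assumptions.

# Hypothetical sketch of flip-based test-time augmentation; the augmentations
# actually used by predict_tta.py may differ.
import torch

def predict_with_tta(model, x):
    """Average sigmoid outputs over flips of each spatial axis of a
    channels-first (bs, c, H, W, D) tensor."""
    model.eval()
    spatial_axes = (2, 3, 4)
    with torch.no_grad():
        pred = torch.sigmoid(model(x))
        for axis in spatial_axes:
            flipped = torch.flip(x, dims=[axis])
            pred = pred + torch.flip(torch.sigmoid(model(flipped)), dims=[axis])
    return pred / (len(spatial_axes) + 1)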

Ensemble

You can use our script to ensemble the predictions of multiple models.
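A common way to ensemble segmentation models, and a reasonable sketch of the idea, is to average per-voxel class probabilities across models before taking the argmax; the file layout below is an assumption, so see the repository's script for the actual procedure.

# Hypothetical ensembling by averaging saved per-class probability maps;
# paths and file layout are assumptions, the repository's script may differ.
import glob
import os
import numpy as np

case_id = "BraTS20_Validation_001"                               # hypothetical case
prob_files = sorted(glob.glob(f"model_*/probs/{case_id}.npy"))   # one map per model

# Each file is assumed to hold class probabilities of shape (num_classes, H, W, D).
avg_probs = np.mean([np.load(f) for f in prob_files], axis=0)
segmentation = np.argmax(avg_probs, axis=0).astype(np.uint8)

os.makedirs("ensemble", exist_ok=True)
np.save(os.path.join("ensemble", f"{case_id}_seg.npy"), segmentation)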

Details

Note that the input MRI images must have 5 dimensions, including the batch size, in channels-first format; i.e., the shape should look like (bs, c, H, W, D), where (a padding sketch follows the list below):

  • c, the number of channels, is divisible by 4.
  • H, W, and D are all divisible by 16.
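As an illustration of these constraints (not the repository's preprocessing), the sketch below zero-pads a 4-channel volume so that H, W, and D become divisible by 16 and adds the batch dimension:

# Hypothetical sketch of shaping an input to (bs, c, H, W, D) with spatial
# sizes divisible by 16; the repository may crop or pad differently.
import numpy as np
import torch

def pad_to_multiple(volume, multiple=16):
    """Zero-pad the spatial axes of a channels-first (c, H, W, D) array."""
    pads = [(0, 0)]                                # no padding on the channel axis
    for size in volume.shape[1:]:
        extra = (-size) % multiple
        pads.append((extra // 2, extra - extra // 2))
    return np.pad(volume, pads)

image = np.zeros((4, 155, 240, 240), dtype=np.float32)    # 4 BraTS modalities
padded = pad_to_multiple(image)                            # -> (4, 160, 240, 240)
batch = torch.from_numpy(padded).unsqueeze(0)              # -> (1, 4, 160, 240, 240)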

Please find more details in our paper and code.
