
Official repository for our paper titled "UnDIVE: Generalized Underwater Video Enhancement Using Generative Priors"

UnDIVE

Generalized Underwater Video Enhancement Using Generative Priors

Suhas Srinath¹, Aditya Chandrasekar², Hemang Jamadagni³, Rajiv Soundararajan¹, Prathosh A P¹

¹ Indian Institute of Science, ² Qualcomm, ³ National Institute of Technology Karnataka

WACV 2025 (conference paper) · arXiv preprint: arXiv:2411.05886

Abstract

With the rise of marine exploration, underwater imaging has gained significant attention as a research topic. Underwater video enhancement has become crucial for real-time computer vision tasks in marine exploration. However, most existing methods focus on enhancing individual frames and neglect video temporal dynamics, leading to visually poor enhancements. Furthermore, the lack of ground-truth references limits the use of abundant available underwater video data in many applications. To address these issues, we propose a two-stage framework for enhancing underwater videos. The first stage uses a denoising diffusion probabilistic model to learn a generative prior from unlabeled data, capturing robust and descriptive feature representations. In the second stage, this prior is incorporated into a physics-based image formulation for spatial enhancement, while also enforcing temporal consistency between video frames. Our method enables real-time and computationally efficient processing of high-resolution underwater videos at lower resolutions, and offers efficient enhancement in the presence of diverse water types. Extensive experiments on four datasets show that our approach generalizes well and outperforms existing enhancement methods.

Overview

Usage

For a help menu, run:
python <filename> -h

Setup Environment

conda env create -n undive -f environment.yml
conda activate undive

Download Pretrain Model Weights

Download the ZIP file and extract into ./PretrainedModels

DDPM_100.pth : DDPM trained on UIEB
UIEB_pretrain_150.pth : Pre-training on UIEB
UnDIVE_100.pth : Model fine-tuned with temporal consistency loss on UVE-38k.
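As a sketch of the extraction step, assuming the release ships as a single archive named `undive_weights.zip` (hypothetical name; substitute the actual ZIP from the release page):

```shell
# Hypothetical archive name; substitute the real ZIP filename.
WEIGHTS_ZIP=undive_weights.zip
mkdir -p PretrainedModels
if [ -f "$WEIGHTS_ZIP" ]; then
    unzip -o "$WEIGHTS_ZIP" -d PretrainedModels
fi
# Verify the three checkpoints listed above are in place.
for ckpt in DDPM_100.pth UIEB_pretrain_150.pth UnDIVE_100.pth; do
    [ -f "PretrainedModels/$ckpt" ] || echo "missing: PretrainedModels/$ckpt"
done
```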

Prepare Dataset

📂 Data/
│── 📂 Train/
│   ├── 📂 bsr_images/
│   │   ├── 📼 videos
│   │   │   ├── 🖼️ frames.png
│   ├── 📂 gt/
│   │   ├── 📼 videos
│   │   │   ├── 🖼️ frames.png
│   ├── 📂 backward_flow/
│   │   ├── 📼 videos
│   │   │   ├── 📂 high_flow/
│   │   │   │   ├── 🖼️ flow.npy
│   │   │   ├── 📂 low_flow/
│   │   │   │   ├── 🖼️ flow.npy
│   ├── 📂 forward_flow/
│   │   ├── 📼 videos
│   │   │   ├── 📂 high_flow/
│   │   │   │   ├── 🖼️ flow.npy
│   │   │   ├── 📂 low_flow/
│   │   │   │   ├── 🖼️ flow.npy
│── 📂 Image_Pretrain/
│   ├── 📂 bsr_images/
│   │   ├── 🖼️ frames.png
│   ├── 📂 gt/
│   │   ├── 🖼️ frames.png
│── 📂 Test/
│   ├── 📂 bsr_images/
│   │   ├── 📼 videos
│   │   │   ├── 🖼️ frames.png
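The top-level skeleton above can be created in one go (`videos`, `high_flow`, and `low_flow` are per-clip subdirectories populated later by the flow scripts, so only the fixed directories are created here):

```shell
# Create the fixed part of the Data/ skeleton from the tree above.
for split in bsr_images gt backward_flow forward_flow; do
    mkdir -p "Data/Train/$split"
done
mkdir -p Data/Image_Pretrain/bsr_images Data/Image_Pretrain/gt
mkdir -p Data/Test/bsr_images
```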

Backscatter Removal

python ./BackScatterRemoval/bsr.py --video-path <video_frames> --depthmap-path <depthmaps> --output-path <bsr_images>

Optical Flows

Use FastFlowNet to compute the optical flows.
Move ./OpticalFlows/run_forward.py and ./OpticalFlows/run_backward.py into FastFlowNet/, then run OpticalFlows/get_flows.py:

python OpticalFlows/get_flows.py --orig-root <Train Data> --ffn-root <FastFlowNet> --flow-root <Output Flows>
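Putting the two steps together, a minimal sketch assuming FastFlowNet has been cloned into `./FastFlowNet` and the training frames live under `Data/Train` (both paths are assumptions; adjust to your layout):

```shell
FFN_ROOT=./FastFlowNet   # assumed clone location of FastFlowNet
if [ -d "$FFN_ROOT" ]; then
    # Stage the two runner scripts inside the FastFlowNet checkout.
    cp ./OpticalFlows/run_forward.py ./OpticalFlows/run_backward.py "$FFN_ROOT"/
    # Compute forward and backward flows for the training videos.
    python OpticalFlows/get_flows.py --orig-root Data/Train --ffn-root "$FFN_ROOT" --flow-root Data/Train
else
    echo "Clone FastFlowNet into $FFN_ROOT before computing flows."
fi
```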

Generative Prior

Setup Environment

conda env create -n undive_ddpm -f ./GenerativePrior/environment.yml
conda activate undive_ddpm

DDPM Training

python ./GenerativePrior/DDPM.py

DDPM Inference

python ./GenerativePrior/inference.py 

Training

Image Pretraining

python UIEB_pretrain.py --image-data-path <image pretrain root> --test-video <test video root>

UnDIVE Training

python UnDIVE_train.py --video-data-path <video pretrain root> --test-video <test video root>

Inference

python inference.py --test-video <test video root>

Citation

If you find UnDIVE useful in your research or applications, please consider giving us a star 🌟 and citing it using the following:

@InProceedings{Srinath_2025_WACV,
    author    = {Srinath, Suhas and Chandrasekar, Aditya and Jamadagni, Hemang and Soundararajan, Rajiv and A P, Prathosh},
    title     = {UnDIVE: Generalized Underwater Video Enhancement using Generative Priors},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {8983-8994}
}
