LiSA: LiDAR Localization with Semantic Awareness

CVPR 2024 Highlight

⚙️ Environment

  • Spconv
conda env create -f lisa-spconv.yaml
conda activate lisa-spconv
cd LiSA-spconv/third_party
python setup.py install
  • MinkowskiEngine
conda env create -f lisa-mink.yaml
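
After setting up either environment, a quick sanity check (a minimal sketch; run it inside the environment you just activated) confirms that PyTorch and the corresponding sparse-convolution backend import correctly:

import torch

print("torch:", torch.__version__, "CUDA available:", torch.cuda.is_available())

try:
    import spconv  # present only in the lisa-spconv environment
    print("spconv:", spconv.__version__)
except ImportError:
    print("spconv not available in this environment")

try:
    import MinkowskiEngine as ME  # present only in the lisa-mink environment
    print("MinkowskiEngine:", ME.__version__)
except ImportError:
    print("MinkowskiEngine not available in this environment")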

🔨 Dataset

We currently support the Oxford Radar RobotCar and NCLT datasets.

We also use PQEE to improve the Oxford poses and provide the corrected poses as QEOxford.

The Oxford, QEOxford, and NCLT data should be organized as follows:

  • (QE)Oxford
data_root
├── 2019-01-11-14-02-26-radar-oxford-10k
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_calibrateFalse.h5
│   ├── velodyne_left_False.h5
│   ├── rot_tr.bin
│   ├── tr.bin
│   ├── tr_add_mean.bin
├── …
├── (QE)Oxford_pose_stats.txt
├── train_split.txt
├── valid_split.txt
  • NCLT
data_root
├── 2012-01-22
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_False.h5
├── …
├── NCLT_pose_stats.txt
├── train_split.txt
├── valid_split.txt

The auxiliary files listed above are provided in the dataset directory of this repository.
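
A quick way to confirm a sequence is laid out as expected, as a minimal sketch (the data_root path and sequence name below are placeholders for your own download):

from pathlib import Path

# Hypothetical paths; adjust data_root and the sequence name to your setup.
data_root = Path("/path/to/data_root")
sequence = "2019-01-11-14-02-26-radar-oxford-10k"

expected = [
    data_root / sequence / "velodyne_left",
    data_root / sequence / "sphere_velodyne_left_feature32",
    data_root / sequence / "velodyne_left_False.h5",
    data_root / "train_split.txt",
    data_root / "valid_split.txt",
]
for path in expected:
    print("ok     " if path.exists() else "MISSING", path)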

🎨 Data preparation

We use SphereFormer for data preprocessing (needed only for training) to generate the corresponding semantic features. Download the SphereFormer code, put dataset.py into util/, and put get_seg_fearure.py into the SphereFormer root directory.
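
Once preprocessing has run, each point cloud gets a companion feature file. A hedged sketch for inspecting one (the 32-channel width is inferred from the folder name sphere_velodyne_left_feature32; the float32 dtype and row-major layout are assumptions, not confirmed by the repository):

import numpy as np

# Load one generated semantic-feature file (the path is illustrative).
feats = np.fromfile("sphere_velodyne_left_feature32/xxx.bin", dtype=np.float32)
feats = feats.reshape(-1, 32)  # assumed: one 32-dim semantic feature per point
print(feats.shape)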

🌟 Visualization

QEOxford

[visualization image]

NCLT

[visualization image]

💃 Run

train

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_addr 127.0.0.34 --master_port 29503 train_ddp.py
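
Note: on recent PyTorch releases torch.distributed.launch is deprecated in favor of torchrun. An equivalent invocation, assuming train_ddp.py reads its local rank from the LOCAL_RANK environment variable (scripts written for torch.distributed.launch may instead expect a --local_rank argument), would be:

CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_addr 127.0.0.34 --master_port 29503 train_ddp.py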

test

python test.py

🤗 Model zoo

Pretrained LiSA models for Oxford, QEOxford, and NCLT can be downloaded here.

🙏 Acknowledgements

We thank the authors of SGLoc, SphereFormer, and DiffKD for sharing their code.

🎓 Citation

If you find this codebase useful for your research, please cite it with the following BibTeX entry.

@inproceedings{yang2024lisa,
  title={LiSA: LiDAR Localization with Semantic Awareness},
  author={Yang, Bochun and Li, Zijun and Li, Wen and Cai, Zhipeng and Wen, Chenglu and Zang, Yu and Muller, Matthias and Wang, Cheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15271--15280},
  year={2024}
}
