Code for the paper PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation, CVPR 2020 (Oral).
Authors: Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, Jiaya Jia
Instance segmentation is an important task for scene understanding. Compared with its well-developed 2D counterpart, 3D instance segmentation for point clouds still has much room to improve. In this paper, we present PointGroup, a new end-to-end bottom-up architecture, specifically focused on better grouping the points by exploring the void space between objects. We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid. A clustering component follows to utilize both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength. Further, we formulate ScoreNet to evaluate the candidate instances, followed by Non-Maximum Suppression (NMS) to remove duplicates.
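To make the pipeline concrete, here is a minimal, hypothetical sketch of the dual-set grouping step: every point is clustered twice, once at its original coordinates and once after being shifted by its predicted offset, and the union of candidates is later scored by ScoreNet and filtered by NMS. Note that the paper's clustering only links nearby points that share a predicted semantic label and runs as a custom GPU kernel; DBSCAN is used here purely as a stand-in, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dual_set_grouping(coords, offsets, eps=0.03):
    """Cluster points twice: on original coords and on offset-shifted coords.

    coords:  (N, 3) original point coordinates
    offsets: (N, 3) predicted per-point offsets toward instance centroids
    Returns a list of candidate instances (index arrays), to be scored
    by ScoreNet and de-duplicated by NMS downstream.
    """
    candidates = []
    for pts in (coords, coords + offsets):          # the two coordinate sets
        labels = DBSCAN(eps=eps, min_samples=50).fit_predict(pts)
        for k in np.unique(labels):
            if k == -1:                             # skip noise points
                continue
            candidates.append(np.where(labels == k)[0])
    return candidates
```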
- Python 3.7.0
- PyTorch 1.1.0
- CUDA 9.0
conda create -n pointgroup python==3.7
source activate pointgroup
(1) Clone the PointGroup repository.
git clone https://github.com/llijiang/PointGroup.git --recursive
cd PointGroup
(2) Install the dependent libraries.
pip install -r requirements.txt
conda install -c bioconda google-sparsehash
(3) For the SparseConv, we apply the implementation of spconv. The repository is recursively downloaded at step (1). We use version 1.0 of spconv.
Note: We further modify spconv/spconv/functional.py to make grad_output contiguous. Make sure you use our modified spconv.
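For context, this modification addresses a standard pitfall in custom autograd functions: the grad_output tensor handed to backward may be non-contiguous, while CUDA kernels usually assume contiguous memory. The following is a generic illustration of that pattern, not the actual spconv diff:

```python
import torch

class ExampleSparseConvFunction(torch.autograd.Function):
    # Illustrative only: the real spconv functions take more arguments.
    @staticmethod
    def forward(ctx, features, weight):
        ctx.save_for_backward(features, weight)
        return features @ weight

    @staticmethod
    def backward(ctx, grad_output):
        features, weight = ctx.saved_tensors
        # The fix the note refers to: downstream kernels assume contiguous
        # memory, but autograd may hand back a non-contiguous tensor.
        grad_output = grad_output.contiguous()
        return grad_output @ weight.t(), features.t() @ grad_output
```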
- To compile spconv, first install the dependent libraries.
conda install libboost
conda install -c daleydeng gcc-5 # need gcc-5.4 for sparseconv
Add the $INCLUDE_PATH$ that contains boost in lib/spconv/CMakeLists.txt. (Not necessary if it can be found automatically.)
include_directories($INCLUDE_PATH$)
- Compile the spconv library.
cd lib/spconv
python setup.py bdist_wheel
- Run cd dist and use pip to install the generated .whl file.
(4) Compile the pointgroup_ops library.
cd lib/pointgroup_ops
python setup.py develop
If any header files cannot be found, run the following commands.
python setup.py build_ext --include-dirs=$INCLUDE_PATH$
python setup.py develop
$INCLUDE_PATH$ is the path to the folder containing the header files that could not be found.
(1) Download the ScanNet v2 dataset.
(2) Put the data in the corresponding folders.
- Copy the files [scene_id]_vh_clean_2.ply, [scene_id]_vh_clean_2.labels.ply, [scene_id]_vh_clean_2.0.010000.segs.json and [scene_id].aggregation.json into the dataset/scannetv2/train and dataset/scannetv2/val folders according to the ScanNet v2 train/val split.
- Copy the files [scene_id]_vh_clean_2.ply into the dataset/scannetv2/test folder according to the ScanNet v2 test split.
- Put the file scannetv2-labels.combined.tsv in the dataset/scannetv2 folder.
The dataset files are organized as follows.
PointGroup
├── dataset
│ ├── scannetv2
│ │ ├── train
│ │ │ ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│ │ ├── val
│ │ │ ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│ │ ├── test
│ │ │ ├── [scene_id]_vh_clean_2.ply
│ │ ├── scannetv2-labels.combined.tsv
(3) Generate input files [scene_id]_inst_nostuff.pth for instance segmentation.
cd dataset/scannetv2
python prepare_data_inst.py --data_split train
python prepare_data_inst.py --data_split val
python prepare_data_inst.py --data_split test
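As a quick sanity check, you can load one of the generated files with torch.load. For train/val scenes the saved tuple should hold coordinates, colors, semantic labels and instance labels (test scenes carry no labels); the exact layout below is an assumption based on prepare_data_inst.py, so adjust if yours differs.

```python
import torch

# Load one preprocessed scene (the path is an example).
coords, colors, sem_labels, inst_labels = torch.load(
    'dataset/scannetv2/train/scene0000_00_inst_nostuff.pth')
print(coords.shape, colors.shape)          # expected: (N, 3) each
print(sem_labels.min(), sem_labels.max())  # unannotated points are often -100
```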
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml
You can start a tensorboard session by
tensorboard --logdir=./exp --port=6666
(1) If you want to evaluate on the validation set, prepare the .txt instance ground-truth files as follows.
cd dataset/scannetv2
python prepare_data_inst_gttxt.py
Make sure that you have prepared the [scene_id]_inst_nostuff.pth files beforehand.
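For reference, these ground-truth files follow the ScanNet benchmark encoding: one integer per point that packs the semantic label and the instance id as semantic_label * 1000 + instance_id. A small decoding sketch (the val_gt output folder name is an assumption; adjust to wherever the script writes):

```python
import numpy as np

# Decode a ScanNet-style instance ground-truth file (one id per point).
ids = np.loadtxt('dataset/scannetv2/val_gt/scene0011_00.txt', dtype=np.int64)
sem_labels = ids // 1000   # semantic class per point
inst_ids = ids % 1000      # instance index within the scene
print(np.unique(sem_labels), np.unique(inst_ids))
```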
(2) Test and evaluate.
a. To evaluate on the validation set, set split and eval in the config file to val and True. Then run
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml
An alternative evaluation method is to set save_instance to True and evaluate with the official ScanNet evaluation script; a sketch of the expected prediction format follows below.
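For orientation, the ScanNet benchmark documentation expects one summary file per scene, where each line references a per-point 0/1 mask file together with a label id and a confidence. A hedged writer sketch (function and argument names are ours, not part of this repo):

```python
import os
import numpy as np

def save_scannet_predictions(out_dir, scene_id, masks, label_ids, scores):
    """Write predictions in the ScanNet benchmark format:
    <scene_id>.txt lines: <relative mask path> <label_id> <confidence>,
    plus one 0/1 per-point mask file per predicted instance."""
    mask_dir = os.path.join(out_dir, 'predicted_masks')
    os.makedirs(mask_dir, exist_ok=True)
    with open(os.path.join(out_dir, scene_id + '.txt'), 'w') as f:
        for i, (mask, label, score) in enumerate(zip(masks, label_ids, scores)):
            rel = 'predicted_masks/{}_{:03d}.txt'.format(scene_id, i)
            np.savetxt(os.path.join(out_dir, rel), mask.astype(np.int32), fmt='%d')
            f.write('{} {} {:.4f}\n'.format(rel, label, score))
```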
b. To run on the test set, set (split, eval, save_instance) to (test, False, True). Then run
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml
c. To test with a pretrained model, run
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_default_scannet.yaml --pretrain $PATH_TO_PRETRAIN_MODEL$
We provide a pretrained model trained on the ScanNet v2 dataset. Download it here. Its performance on the ScanNet v2 validation set is 35.2/57.1/71.4 in terms of mAP/mAP50/mAP25.
To visualize the point cloud, first install mayavi. Then run
cd util
python visualize.py --data_root $DATA_ROOT$ --result_root $RESULT_ROOT$ --room_name $ROOM_NAME$ --room_split $ROOM_SPLIT$ --task $TASK$
The visualization task can be one of input, instance_gt, instance_pred, semantic_pred and semantic_gt.
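If you only need a quick look at a point cloud without the full script, a minimal mayavi example looks like the following; the random data is a placeholder for coordinates and labels loaded from a [scene_id]_inst_nostuff.pth file.

```python
import numpy as np
from mayavi import mlab

# Placeholder point cloud: replace with coords/labels from a real scene.
xyz = np.random.rand(1000, 3)
labels = np.random.randint(0, 20, 1000)

fig = mlab.figure(bgcolor=(1, 1, 1))
# Color points by their label via a scalar lookup table.
mlab.points3d(xyz[:, 0], xyz[:, 1], xyz[:, 2], labels,
              mode='point', colormap='jet', figure=fig)
mlab.show()
```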
Quantitative results on the ScanNet test set at submission time.
- TODO: Distributed multi-GPU training
If you find this work useful in your research, please cite:
@inproceedings{jiang2020pointgroup,
title={PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation},
author={Jiang, Li and Zhao, Hengshuang and Shi, Shaoshuai and Liu, Shu and Fu, Chi-Wing and Jia, Jiaya},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
This repo is built upon several repos, e.g., SparseConvNet, spconv and ScanNet.
If you have any questions or suggestions about this repo, please feel free to contact me ([email protected]).