Code for the paper SphereRPN: Learning Spheres for High-Quality Region Proposals on 3D Point Clouds Object Detection, ICIP 2021.
Authors: Thang Vu, Kookhoi Kim, Haeyong Kang, Xuan Thanh Nguyen, Tung M. Luu, Chang D. Yoo
This work was partly supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT): (2021-0-01381, Development of Causal AI through Video Understanding and Reinforcement Learning, and Its Applications to Real Environments) and (No. 2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data).
Requirements:
- Python 3.7.0
- PyTorch 1.1.0
- CUDA 9.0
Create and activate a conda environment:
```
conda create -n sphere_rpn python==3.7
source activate sphere_rpn
```
(1) Clone the repository.
```
git clone https://github.com/thangvubk/SphereRPN.git --recursive
cd SphereRPN
```
(2) Install the dependent libraries.
```
pip install -r requirements.txt
conda install -c bioconda google-sparsehash
```
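As a quick optional sanity check (assuming the dependencies above installed successfully), you can confirm the interpreter sees the versions listed under Requirements:
```python
# Optional sanity check: confirm the environment matches the tested
# versions (Python 3.7, PyTorch 1.1.0, CUDA 9.0).
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```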
(3) For sparse convolution, we use the spconv implementation (version 1.0), which is cloned recursively in step (1).
Note: we further modify `spconv/spconv/functional.py` to make `grad_output` contiguous. Make sure you use our modified spconv.
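For context, here is a minimal sketch of why that modification matters (this is not the actual spconv source; the class and variable names are illustrative): custom CUDA kernels typically assume contiguous memory, while autograd may hand `backward()` a non-contiguous `grad_output`.
```python
import torch

class SparseConvFunction(torch.autograd.Function):
    """Illustrative stand-in for a custom op with a CUDA backward kernel."""

    @staticmethod
    def forward(ctx, features, weight):
        ctx.save_for_backward(features, weight)
        return features @ weight  # stand-in for the real sparse conv kernel

    @staticmethod
    def backward(ctx, grad_output):
        features, weight = ctx.saved_tensors
        # The modification in question: force a contiguous layout before
        # the gradient reaches a kernel that assumes contiguous memory.
        grad_output = grad_output.contiguous()
        return grad_output @ weight.t(), features.t() @ grad_output
```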
- To compile `spconv`, first install the dependent libraries.
```
conda install libboost
conda install -c daleydeng gcc-5 # need gcc-5.4 for sparseconv
```
Add the `$INCLUDE_PATH$` that contains `boost` to `lib/spconv/CMakeLists.txt` (not necessary if it can already be found):
```
include_directories($INCLUDE_PATH$)
```
- Compile the `spconv` library.
```
cd lib/spconv
python setup.py bdist_wheel
```
- Run `cd dist` and use pip to install the generated `.whl` file.
(4) Compile the `pointgroup_ops` library.
```
cd lib/pointgroup_ops
python setup.py develop
```
If any header files cannot be found, run the following commands instead, where `$INCLUDE_PATH$` is the path to the folder containing the missing header files.
```
python setup.py build_ext --include-dirs=$INCLUDE_PATH$
python setup.py develop
```
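After both libraries are built, a quick import from the repository root confirms they load. The `pointgroup_ops` import path below follows the PointGroup-style layout this repo builds on and is an assumption; adjust it if your tree differs.
```python
# Run from the repository root after compiling both extensions.
# The pointgroup_ops import path is an assumption based on the
# PointGroup-style layout; adjust it if your build differs.
import spconv
from lib.pointgroup_ops.functions import pointgroup_ops

print("spconv and pointgroup_ops imported successfully")
```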
To prepare the data:
(1) Download the ScanNet v2 dataset.
(2) Put the data in the corresponding folders.
- Copy the files `[scene_id]_vh_clean_2.ply`, `[scene_id]_vh_clean_2.labels.ply`, `[scene_id]_vh_clean_2.0.010000.segs.json` and `[scene_id].aggregation.json` into the `dataset/scannetv2/train` and `dataset/scannetv2/val` folders according to the ScanNet v2 train/val split.
- Copy the files `[scene_id]_vh_clean_2.ply` into the `dataset/scannetv2/test` folder according to the ScanNet v2 test split.
- Put the file `scannetv2-labels.combined.tsv` in the `dataset/scannetv2` folder.
The dataset files are organized as follows.
```
SphereRPN
├── dataset
│   ├── scannetv2
│   │   ├── train
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── val
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── test
│   │   │   ├── [scene_id]_vh_clean_2.ply
│   │   ├── scannetv2-labels.combined.tsv
```
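Before preprocessing, a short script (purely illustrative; it only uses the file names from the layout above) can flag scenes with missing files:
```python
# Illustrative check: each train/val scene should have all four files
# listed in the layout above. Run from the repository root.
import glob
import os

for split in ("train", "val"):
    root = os.path.join("dataset", "scannetv2", split)
    scene_ids = {os.path.basename(p).replace("_vh_clean_2.ply", "")
                 for p in glob.glob(os.path.join(root, "*_vh_clean_2.ply"))}
    for scene_id in sorted(scene_ids):
        for suffix in ("_vh_clean_2.labels.ply",
                       "_vh_clean_2.0.010000.segs.json",
                       ".aggregation.json"):
            if not os.path.exists(os.path.join(root, scene_id + suffix)):
                print("missing:", os.path.join(root, scene_id + suffix))
```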
(3) Generate input files `[scene_id]_inst_nostuff.pth` for instance segmentation.
```
cd dataset/scannetv2
python prepare_data_inst.py --data_split train
python prepare_data_inst.py --data_split val
python prepare_data_inst.py --data_split test
```
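To spot-check a generated file, load it with `torch.load`. The 4-tuple layout below (coords, colors, semantic labels, instance labels) is an assumption carried over from PointGroup-style preprocessing; print the raw object first if unsure.
```python
# Spot-check one preprocessed train file from the repository root.
# The assumed 4-tuple layout (coords, colors, semantic_labels,
# instance_labels) follows PointGroup-style preprocessing.
import glob
import torch

path = sorted(glob.glob("dataset/scannetv2/train/*_inst_nostuff.pth"))[0]
data = torch.load(path)
for name, arr in zip(("coords", "colors", "sem_labels", "inst_labels"), data):
    print(name, getattr(arr, "shape", type(arr)))
```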
To train the model, run
```
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_default_scannet.yaml
```
You can monitor training by starting a TensorBoard session:
```
tensorboard --logdir=./exp --port=6666
```
To test the trained model, run
```
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_default_scannet.yaml
```