
AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions

Overview

We propose a novel framework, named AdaAfford, that performs very few test-time interactions to quickly adapt affordance priors into more accurate, instance-specific posteriors.

About the paper

arXiv version: https://arxiv.org/abs/2112.00246

Project Page: https://hyperplane-lab.github.io/AdaAfford

Dependencies

This code has been tested on Ubuntu 18.04 with CUDA 10.1, Python 3.6, and PyTorch 1.7.0.

First, install SAPIEN:

pip install http://download.cs.stanford.edu/orion/where2act/where2act_sapien_wheels/sapien-0.8.0.dev0-cp36-cp36m-manylinux2014_x86_64.whl

For other Python versions, use the matching wheel below:

pip install http://download.cs.stanford.edu/orion/where2act/where2act_sapien_wheels/sapien-0.8.0.dev0-cp35-cp35m-manylinux2014_x86_64.whl
pip install http://download.cs.stanford.edu/orion/where2act/where2act_sapien_wheels/sapien-0.8.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl
pip install http://download.cs.stanford.edu/orion/where2act/where2act_sapien_wheels/sapien-0.8.0.dev0-cp38-cp38-manylinux2014_x86_64.whl

Please do not use the default pip install sapien as SAPIEN is still being actively developed and updated.
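If you are unsure which wheel matches your interpreter, you can print the ABI tag directly (a small helper, not part of this repository):

```shell
# Print the cpXY tag for the current interpreter, e.g. cp36 for Python 3.6.
# Pick the SAPIEN wheel above whose filename contains this tag.
python3 -c "import sys; print('cp%d%d' % (sys.version_info.major, sys.version_info.minor))"
```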

Then, install PointNet++, which is needed to process point clouds:

git clone --recursive https://github.com/erikwijmans/Pointnet2_PyTorch
cd Pointnet2_PyTorch
# [IMPORTANT] comment these two lines of code:
#   https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu#L100-L101
pip install -r requirements.txt
pip install -e .

The other requirements are included in env.yaml.
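After installing everything, a quick sanity check is to confirm that the key packages resolve. The module names below are assumptions based on the install steps above (sapien and pointnet2_ops from the wheels, torch from the tested dependency list); this helper is illustrative, not part of the repository:

```python
import importlib.util

def installed(module: str) -> bool:
    """Return True if `module` can be located without importing it."""
    return importlib.util.find_spec(module) is not None

# sapien and pointnet2_ops come from the install steps above;
# torch is from the tested dependency list (PyTorch 1.7.0).
for name in ["sapien", "torch", "pointnet2_ops"]:
    status = "ok" if installed(name) else "MISSING"
    print(f"{name}: {status}")
```

Any package reported as MISSING should be reinstalled before training.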

For visualization, please install Blender v2.79 and put the executable in your environment path. The prediction results can also be visualized using MeshLab or the RenderShape tool in Thea.

Citations

Please cite our work if you find it useful:

@inproceedings{wang2022adaafford,
    title={{AdaAfford}: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions},
    author={Yian Wang and Ruihai Wu and Kaichun Mo and Jiaqi Ke and Qingnan Fan and Leonidas Guibas and Hao Dong},
    booktitle={European Conference on Computer Vision},
    year={2022}
}

License

MIT License
