This repository provides the official implementation of the paper:
**Collision-Free Humanoid Traversal in Cluttered Indoor Scenes**
Han Xue et al.
arXiv preprint: arXiv:2601.16035
Project page: https://axian12138.github.io/CAT/
The project addresses the problem of enabling humanoid robots to safely traverse cluttered indoor scenes, which we define as environments that simultaneously exhibit:
- Full-spatial constraints: obstacles jointly present at the ground, lateral, and overhead levels, restricting the humanoid’s motion in all spatial dimensions.
- Intricate geometries: obstacles with complex, irregular shapes that go beyond simple primitives such as rectangular blocks or regular polyhedra.
In this repository, we present:
- Humanoid Potential Field (HumanoidPF): a structured representation encoding spatial relationships between the humanoid body and surrounding obstacles (a toy sketch follows this list);
- Hybrid scene generation: realistic 3D indoor scene crops combined with procedurally synthesized obstacles;
- Reinforcement learning for traversal skills: specialist policies are trained on individual scenes and then distilled into a single generalist policy.
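The actual HumanoidPF construction lives in procedural_obstacle_generation/pf_modular.py. As a rough illustration of the idea only (not the paper's implementation), the toy function below encodes body-to-obstacle spatial relationships as, for each body keypoint, the unit direction and distance to the nearest obstacle point; the function name, keypoint set, and feature layout are all our own assumptions.

```python
# Illustrative sketch only -- NOT the paper's HumanoidPF implementation
# (see procedural_obstacle_generation/pf_modular.py for the real one).
import numpy as np

def toy_humanoid_pf(keypoints: np.ndarray, obstacle_pts: np.ndarray) -> np.ndarray:
    """keypoints: (K, 3) body points; obstacle_pts: (N, 3) scene points.
    Returns (K, 4) features: unit direction to nearest obstacle + distance."""
    # Pairwise offsets (K, N, 3) and distances (K, N).
    diff = obstacle_pts[None, :, :] - keypoints[:, None, :]
    dist = np.linalg.norm(diff, axis=-1)
    nearest = dist.argmin(axis=1)                         # (K,)
    d = dist[np.arange(len(keypoints)), nearest]          # (K,)
    direction = diff[np.arange(len(keypoints)), nearest] / (d[:, None] + 1e-8)
    return np.concatenate([direction, d[:, None]], axis=-1)
```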
## Table of Contents

- Project Status
- Installation
- Repository Structure
- Hybrid Obstacle Generation & HumanoidPF
- Traversal Skill Learning
- Related Projects
- Citation
- License
- Acknowledgement
- Contact Us
## Project Status

- 🧩 Procedural obstacle generation and HumanoidPF construction
- 🧩 Specialist policy training code
- 🗂️ Pre-trained specialist models and scene data
- 🧩 Specialist-to-generalist policy distillation code
- 🗂️ Pre-trained generalist models
- 🗂️ Expanded scene datasets
- 🚀 Sim-to-real deployment utilities
## Installation

```bash
git clone https://github.com/Axian12138/Click-and-Traverse.git
cd Click-and-Traverse
```

CUDA 12.5 is recommended.

```bash
export PATH=/usr/local/cuda-12.5/bin:$PATH  # adjust if needed
uv sync -i https://pypi.org/simple
```

Create and customize the .env file in the repository root. This file defines runtime configurations such as:
- working directory paths
- logging (e.g., WandB account)
- experiment identifiers
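For reference, a minimal .env might look like the sketch below; the variable names are illustrative placeholders, not keys the code is known to read, so adapt them to your setup.

```bash
# Illustrative .env sketch -- variable names are placeholders; adapt them
# to whatever cat_ppo actually reads in your configuration.
export WORK_DIR=/path/to/Click-and-Traverse   # working directory
export WANDB_API_KEY=your_wandb_key           # WandB logging account
export WANDB_ENTITY=your_wandb_entity
export EXP_NAME=my_experiment                 # experiment identifier
```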
```bash
source .venv/bin/activate
source .env
python -m cat_ppo.utils.mj_playground_init
```

Pre-trained checkpoints and scene assets can be downloaded from:
Place downloaded data under the data/ directory.
## Repository Structure

```
Click-and-Traverse/
├── LICENSE
├── README.md
├── pyproject.toml
├── train_batch.py
├── train_ppo.py
├── .env
├── cat_ppo/                            # Core RL framework
│   ├── envs/
│   ├── learning/
│   ├── eval/
│   └── utils/
├── data/                               # Assets, logs (checkpoints)
│   ├── assets/
│   │   ├── mujoco_menagerie/           # after mj_playground_init
│   │   ├── RandObs/                    # random obstacles
│   │   ├── TypiObs/                    # typical obstacles
│   │   └── unitree_g1/                 # humanoid assets
│   └── logs/
│       └── G1_mj_axis/                 # downloaded checkpoints
└── procedural_obstacle_generation/     # Obstacle generation
    ├── main.py
    ├── pf_modular.py                   # HumanoidPF construction
    ├── random_obstacle.py
    ├── typical_obstacle.py
    └── utils.py
```
## Hybrid Obstacle Generation & HumanoidPF

Two categories of obstacle scenes are supported:
- Typical obstacles: manually designed, semantically meaningful scenes
- Random obstacles: procedurally generated scenes with controllable difficulty
HumanoidPF representations are generated together with each scene.
Outputs are saved to:
- data/assets/TypiObs/
- data/assets/RandObs/
```bash
export PATH=/usr/local/cuda-12.5/bin:$PATH
source .env
source .venv/bin/activate
cd procedural_obstacle_generation
```

### Typical obstacles

Edit main.py and call:

```python
generate_typical_obstacle(obs_name)
```

Parameters:
- obs_name: the obstacle configuration (see comments in main.py).
### Random obstacles

Call in main.py:

```python
generate_random_obstacle(difficulty, seed, dL, dG, dO)
```

Parameters:
- difficulty: global difficulty level
- seed: random seed
- dL: lateral obstacle difficulty
- dG: ground obstacle difficulty
- dO: overhead obstacle difficulty
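Putting the two together, a minimal main.py driver could look like the sketch below. The import paths, keyword usage, and parameter values are assumptions for illustration; check main.py itself for the real call sites and valid ranges.

```python
# Illustrative driver sketch -- import paths and argument values are
# assumptions; see main.py for the actual call sites and valid ranges.
from typical_obstacle import generate_typical_obstacle
from random_obstacle import generate_random_obstacle

# A manually designed, semantically meaningful scene ("bend" is one of
# the obs_name values used in the evaluation commands below).
generate_typical_obstacle("bend")

# A procedurally generated scene: global difficulty and RNG seed, then
# per-channel lateral / ground / overhead difficulties.
generate_random_obstacle(difficulty=0.5, seed=0, dL=0.3, dG=0.4, dO=0.5)
```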
## Traversal Skill Learning

```bash
export PATH=/usr/local/cuda-12.5/bin:$PATH
source .env
source .venv/bin/activate
python train_batch.py
```

Supported tasks:
- G1Cat: the default task (can be used directly for sim-to-real deployment)
- G1CatPri: the privileged task (privileged observations are more informative for distilling generalist policies)

Refer to train_batch.py for details on the available arguments.
train_batch.py will automatically convert checkpoints to ONNX format. If you customize the policy architecture, you may need to convert checkpoints to ONNX manually:
```bash
python -m cat_ppo.eval.brax2onnx \
    --task G1Cat \
    --exp_name exp_name
```
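As a quick sanity check that an exported model loads and runs, you can query it directly with onnxruntime. This generic snippet assumes nothing about the policy beyond a single observation input; the checkpoint path is a placeholder.

```python
# Generic ONNX sanity check -- the checkpoint path and any dynamic input
# dimensions are placeholders; inspect your exported model for the real ones.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("data/logs/G1_mj_axis/exp_name/policy.onnx")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Feed a zero observation, treating any dynamic dimension as size 1.
obs = np.zeros([d if isinstance(d, int) else 1 for d in inp.shape], dtype=np.float32)
actions = sess.run(None, {inp.name: obs})[0]
print("action shape:", actions.shape)
```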
To evaluate the model without privileged observation, run:

```bash
python -m cat_ppo.eval.mj_onnx_play --task G1Cat --exp_name 12221455_G1LocoPFR10_SlowV2OdonoiseV2narrow1_xP3xMxK00xchest --obs_name bend
```

To evaluate the model with privileged observation, run:

```bash
python -m cat_ppo.eval.mj_onnx_play --task G1CatPri --pri --exp_name G1CatPri_narrow1 --obs_name narrow1
```

## Related Projects

- R2S2: Whole-body control with various real-world-ready motor skills.
- Any2Track: Foundational motion tracking that tracks arbitrary motions under arbitrary disturbances.
## Citation

If you find this work useful, please cite:

```bibtex
@misc{xue2026collisionfreehumanoidtraversalcluttered,
  title         = {Collision-Free Humanoid Traversal in Cluttered Indoor Scenes},
  author        = {Xue, Han and Liang, Sikai and Zhang, Zhikai and Zeng, Zicheng and Liu, Yun and Lian, Yunrui and Wang, Jilong and Liu, Qingtao and Shi, Xuesong and Li, Yi},
  year          = {2026},
  eprint        = {2601.16035},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2601.16035}
}
```

## License

This project is released under the terms of the LICENSE file included in this repository.
## Acknowledgement

We thank MuJoCo Playground for providing a convenient simulation framework.
## Contact Us

If you'd like to discuss anything, feel free to send an email to xue-h21@mails.tsinghua.edu.cn or add WeChat: xh15158435129.
Contributions are welcome. Please open an issue to discuss major changes or submit a pull request directly.


