This is the implementation of LEADER, introduced in the paper "LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty". Please take a look at more driving videos here: https://sites.google.com/view/leader-paper/home
If you find it useful in your research, please cite it using:
@inproceedings{daneshleader,
title={LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty},
author={Danesh, Mohamad H and Cai, Panpan and Hsu, David},
booktitle={6th Annual Conference on Robot Learning},
year={2022}
}
We use SUMMIT to simulate real-world driving. SUMMIT captures the full complexity of real-world, unregulated, densely crowded urban environments, such as complex road structures and traffic behaviors, which most existing simulators lack and are thus insufficient for testing or training robust driving algorithms. SUMMIT is a high-fidelity simulator that facilitates the development and testing of crowd-driving algorithms by extending CARLA with the following additional features:
- Real-World Maps: generates real-world maps from online open sources (e.g., OpenStreetMap) to provide a virtually unlimited source of complex environments.
- Unregulated Behaviors: agents may demonstrate variable behavioral types (e.g., aggressive or distracted driving) and violate simple rule-based traffic assumptions (e.g., stopping at a stop sign).
- Dense Traffic: provides a controllable parameter for the density of heterogeneous agents such as pedestrians, buses, bicycles, and motorcycles.
- Realistic Visuals and Sensors: extending CARLA, it supports a rich set of sensors such as cameras, Lidar, depth cameras, and semantic segmentation.
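To give a feel for how a density parameter and heterogeneous agent types interact, here is a toy Python sketch. This is not SUMMIT's actual API; all names below are hypothetical illustrations:

```python
import random

# Hypothetical agent and behavior types; SUMMIT's real API differs.
AGENT_TYPES = ["pedestrian", "bus", "bicycle", "motorcycle", "car"]
BEHAVIOR_TYPES = ["attentive", "aggressive", "distracted"]

def spawn_crowd(num_agents, seed=0):
    """Sample a heterogeneous crowd; num_agents acts as the density knob."""
    rng = random.Random(seed)
    return [
        {"kind": rng.choice(AGENT_TYPES),
         "behavior": rng.choice(BEHAVIOR_TYPES)}
        for _ in range(num_agents)
    ]

crowd = spawn_crowd(num_agents=100)
print(len(crowd))  # prints 100: the density parameter controls crowd size
```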
We used an expert planner in SUMMIT that explicitly reasons about interactions among traffic agents and the uncertainty over human driver intentions and types. The core is a POMDP model conditioned on human hidden states and urban road contexts. The model is solved using HyP-DESPOT, an efficient parallel planner. A detailed description of the model can be found in this paper.
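The belief-tracking step can be sketched as a simple Bayes filter over a hidden driver type. This is only a schematic illustration of the idea, not the HyP-DESPOT solver; the hidden types and likelihoods below are hypothetical:

```python
# Schematic Bayes filter over a hypothetical discrete hidden state
# (a driver's behavior type). The real planner tracks richer
# intention/type variables conditioned on road context.

HIDDEN_TYPES = ["attentive", "distracted"]

# Hypothetical P(yield | hidden type): probability that the human
# driver yields to the ego vehicle, per behavior type.
YIELD_LIKELIHOOD = {"attentive": 0.9, "distracted": 0.2}

def update_belief(belief, yielded):
    """One Bayes update: reweight each hypothesis by the observation likelihood."""
    posterior = {}
    for t, prior in belief.items():
        like = YIELD_LIKELIHOOD[t] if yielded else 1.0 - YIELD_LIKELIHOOD[t]
        posterior[t] = prior * like
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"attentive": 0.5, "distracted": 0.5}
belief = update_belief(belief, yielded=True)
print(belief["attentive"] > belief["distracted"])  # prints True: yielding suggests attentiveness
```

The planner then chooses actions against this belief rather than a single point estimate of the driver's type.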
The repository structure has the following conceptual architecture:
- summit_connector: A Python package for communicating with SUMMIT. It publishes ROS topics on state and context information.
- crowd_pomdp_planner: The POMDP planner package. It receives ROS topics from the summit_connector package and executes the belief tracking and POMDP planning loop.
- car_hyp_despot: A static library package that implements the context-based POMDP model and the HyP-DESPOT solver. It exposes planning and belief tracking functions that are called in crowd_pomdp_planner.
- py_scripts: Contains model definition and initialization, model training and testing, and communication with the compiled simulator and planner.
Prerequisites:
- Ubuntu 18.04
- CUDA 10.0
- Python 3.6
- ROS Melodic
- catkin_tools: used to build this package instead of the default catkin_make that comes with ROS. It can be installed with `sudo apt-get install python-catkin-tools`.
Set up the catkin workspace:

```bash
cd && mkdir -p catkin_ws/src
cd catkin_ws
catkin config --merge-devel
catkin build
```
Clone the repository and move its contents into the workspace source folder:

```bash
cd ~/catkin_ws/src
git clone https://github.com/modanesh/LEADER.git
mv LEADER/* .
mv LEADER/.git .
mv LEADER/.gitignore .
rm -r LEADER
```
All ROS packages should now be in ~/catkin_ws/src.
```bash
cd ~/catkin_ws
catkin config --merge-devel
catkin build --cmake-args -DCMAKE_BUILD_TYPE=Release
```

Then run: `source ~/catkin_ws/devel/setup.bash`
Download the SUMMIT simulator: compile it from source or download a stable release, and install it to ~/summit.
Launch the planner and start training the agent using the following commands:
```bash
cd ~/catkin_ws/src/py_scripts
./run_training.sh
```
Once training is done, the models are saved in ~/catkin_ws/src/py_scripts/models. Then, test the agent with:

```bash
./run_testing.sh
```
To compute performance statistics over the recorded driving data, run:

```bash
cd ~/catkin_ws/src/py_scripts
python statistics.py --folder /path/to/driving_data/joint_pomdp_baseline/
```
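The exact metrics reported by statistics.py depend on the format of the recorded driving data; as a rough illustration of this kind of aggregation, here is a toy sketch over entirely hypothetical episode records:

```python
# Toy aggregation over hypothetical episode records; the real
# statistics.py reads the driving data produced by run_testing.sh.
episodes = [
    {"collided": False, "reached_goal": True,  "avg_speed": 4.2},
    {"collided": True,  "reached_goal": False, "avg_speed": 5.1},
    {"collided": False, "reached_goal": True,  "avg_speed": 3.8},
]

n = len(episodes)
collision_rate = sum(e["collided"] for e in episodes) / n
success_rate = sum(e["reached_goal"] for e in episodes) / n
mean_speed = sum(e["avg_speed"] for e in episodes) / n

print(f"collision rate: {collision_rate:.2f}")  # prints: collision rate: 0.33
```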
This code base is inspired by the Context-POMDP implementation, accessible here: https://github.com/AdaCompNUS/context-pomdp