
Risk-Aware Scene Sampling for Dynamic Assurance of Autonomous Systems

High-risk simulation scene generation has recently gained significant interest in the autonomous vehicle (AV) domain. Traditional sampling-based approaches have been widely used for this purpose. Passive samplers like random search and grid search have been used in industry; these samplers do not use the feedback of previous simulation results in the sampling process. To perform active sampling, we propose two new samplers in this work: Random Neighborhood Sampler (RNS) and Guided Bayesian Optimization (GBO). These samplers build on the conventional random and Bayesian Optimization search techniques, and we add capabilities for active sampling, constraint-based sampling, and balancing exploration vs. exploitation to guide them towards sampling clusters of high-risk scenes. We applied these samplers to an AV case study in the CARLA simulator. This repo has the steps to run the simulation scene generation with the samplers discussed in the paper. For this, we leverage the CARLA Autonomous Driving Challenge (https://leaderboard.carla.org/). Some examples of the scenes generated in this work are shown in the gif below.
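For intuition, here is a minimal, self-contained sketch of the active-sampling feedback loop described above. The names (ActiveSampler, simulate) are hypothetical placeholders, not this repo's API; a real sampler such as RNS or GBO would use the accumulated (scene, risk) history to steer proposals toward clusters of high-risk scenes.

```python
# Conceptual sketch of an active-sampling loop (hypothetical names, not this repo's API).
import random

def simulate(scene):
    """Stand-in for a CARLA simulation run; returns a risk score for the scene."""
    return random.random()

class ActiveSampler:
    """Toy sampler that records (scene, risk) feedback from previous runs."""
    def __init__(self):
        self.history = []

    def propose(self):
        # A real sampler (RNS/GBO) would use self.history to balance
        # exploration vs. exploitation; this toy version samples uniformly.
        return {"cloudiness": random.uniform(0, 100),
                "precipitation": random.uniform(0, 100)}

    def update(self, scene, risk):
        self.history.append((scene, risk))

sampler = ActiveSampler()
for _ in range(5):
    scene = sampler.propose()
    risk = simulate(scene)       # feedback from the simulation run...
    sampler.update(scene, risk)  # ...guides the next proposals
```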

(Left) Cloudy scene at an intersection. The AV also has a camera fault (camera occlusion), which makes its driving slightly susceptible to errors. (Center) Sunset scene on a curvy road, with another vehicle right in front of the AV. (Right) Cloudy scene with slight rain.

Technical Appendix

The technical appendix for the paper can be found here.

Downloads

Manual Downloads

  1. You will also need to install CARLA 0.9.9, along with the additional maps. Download CARLA 0.9.9 from here and follow the installation instructions. (Our setup works with CARLA 0.9.9; using another version of the simulator will result in a version and API mismatch error.)

  2. Download the LEC weights from here. The LEC model architecture was taken from Learning By Cheating.

Save the model.ckpt file to the carla-challange/carla_project folder.

  3. Download the trained B-VAE assurance monitor weights from here.

Unzip and save the weights to carla-challange/leaderboard/team_code/detector_code/ood_detector_weights

Automated Downloads (Preferred)

From the root of this repo, execute the script ./downloads.sh to download these three requirements automatically into the required folders.

Setup Virtual Environment

To run the scene generation workflow with CARLA, clone this repo.

git clone https://github.com/Shreyasramakrishna90/Risk-Aware-Scene-Generation.git

Then, create a conda virtual environment with Python 3.7 to run the experiments.

conda create -n carla-sampling python=3.7
conda activate carla-sampling
python3 -m pip install -r requirements.txt

Running the CARLA Setup

Create Folders for Docker Volumes

Create three folders named routes, simulation-data, and images inside this directory. These folders serve as the data volumes for the CARLA client Docker container. Run the following:

mkdir routes               # stores the scene information
mkdir simulation-data      # stores the sensor information
mkdir images               # stores images if chosen by the user

Alternatively, from the root of this repo, execute the script ./make_volume_folders.sh to set up these empty folders.

Launch Simulation & Sampler

Next, navigate into this repo and execute the following script inside the carla-challange folder. This script has a few variables that need to be set before execution:

  1. PROJECT_PATH: Set this to the location of this repo.
  2. CARLA_ROOT: Set this to the location of the CARLA_0.9.9 simulator folder.
  3. PORT: The simulator port (default: 3000).
  4. HAS_DISPLAY: 1 = display the simulation run, 2 = headless mode (no display).
cd carla-challange
./run_agent.sh n    # where n is the number of scenes to be generated. If not specified, 2 scenes will be generated by default.

This script launches both the simulator and the CARLA client. The simulation data is stored in the routes, simulation-data, and images folders.

This should start running the CARLA setup with the default random sampler. Different samplers and scene variables can be selected in scene_description.yml.

Scene Generation & Samplers

We use a scenario description DSML (domain-specific modeling language) written in textX to generate different temporal scene parameters (weather, time-of-day, traffic density), spatial scene parameters (road segments), and agent sensor faults. These variables and samplers can be selected using the scene specification file carla-challange/sdl/scene/scene_description.yml.
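For illustration only, the snippet below is a minimal sketch of how a sampler could read scene variables from a YAML specification and draw one scene's parameter values. The keys shown (sampler, variables, min/max ranges) are hypothetical and do not reflect the actual schema of carla-challange/sdl/scene/scene_description.yml.

```python
# Minimal sketch (hypothetical keys, not the repo's actual schema) of reading a
# YAML scene specification and drawing one scene's parameter values.
import random
import yaml  # requires PyYAML

spec = yaml.safe_load("""
sampler: random
variables:
  cloudiness:      {min: 0,   max: 100}
  precipitation:   {min: 0,   max: 100}
  sun_altitude:    {min: -10, max: 90}
  traffic_density: {min: 0,   max: 20}
""")

def sample_scene(spec):
    """Draw one scene by sampling each variable uniformly from its range."""
    return {name: random.uniform(v["min"], v["max"])
            for name, v in spec["variables"].items()}

print("sampler:", spec["sampler"])
print("scene:", sample_scene(spec))
```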

Samplers

The goal of this work is to test different samplers for sequential scene generation. We have integrated the following samplers, from which the user can select:

  1. Manual Sampler - A sampler in which the user can manually specify the values for the scene variables.
  2. Random Sampler - A sampler in which the scene variables are sampled uniformly at random from their respective distributions.
  3. Grid Sampler - A sampler that exhaustively samples all the combinations of the scene variables in a given grid.
  4. Halton Sampler - A pseudo-random sampler that samples the search space using co-prime numbers as its bases (see the sketch after this list).
  5. Random Neighborhood Search - This sampler executes the sequential-search strategy discussed in the paper.
  6. Guided Bayesian Optimization - This sampler extends the conventional Bayesian Optimization sampler with sampling rules and uses them to sample high-risk scenes.
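As a side note, the Halton sampler (item 4 above) can be illustrated with a generic, textbook implementation: each dimension uses a different co-prime base, which yields a low-discrepancy covering of the search space. This sketch is not the repo's implementation.

```python
# Generic Halton sequence (textbook version, not the repo's implementation).
def halton(index, base):
    """Return the index-th element of the 1-D Halton sequence for a given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Bases 2 and 3 are co-prime, so the 2-D points cover the unit square evenly
# (low discrepancy) without the clumping of purely random sampling.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 11)]
print(points)
```

Higher-dimensional search spaces use the next primes (5, 7, 11, ...) as additional co-prime bases, and the resulting values in [0, 1) are scaled to each scene variable's range.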

References

The experiments in this work build on the following two works.

  1. ReSonAte: A Runtime Risk Assessment Framework for Autonomous Systems paper - This is our previous work which introduced the ReSonAte risk estimation framework. We use this setup for computing the ReSonAte score in this work. GitHub

  2. Learning By Cheating paper - The Learning-Enabled Component (LEC) controller for the AV is borrowed from this work. We also used their autopilot controller to generate scenes in the training and calibration phases. GitHub