
# How to benchmark Beluga

## Run the rosbag benchmark with a parameterized number of particles

This script runs the benchmark with the specified number of particles and records:

- The `timem` output: CPU usage, RSS memory, virtual memory.
- A rosbag with the reference and estimated trajectories.
- Perf events (optional).

To run, use:

```bash
ros2 run beluga_benchmark parameterized_run <NUMBER_OF_PARTICLES_EXPERIMENT_1> <NUMBER_OF_PARTICLES_EXPERIMENT_2>
```

The results of the different runs will be stored in folders named `benchmark_${N_PARTICLES}_particles_output`, where `N_PARTICLES` is each of the numbers specified in the above command.
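As an illustrative sketch of that naming pattern (the particle counts below are arbitrary examples, not project defaults):

```python
# Sketch of the output-directory naming pattern described above.
# The particle counts are illustrative, not defaults.
particle_counts = [250, 1000, 4000]
output_dirs = [f"benchmark_{n}_particles_output" for n in particle_counts]
print(output_dirs)
```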

To run the same experiment using another AMCL node, e.g. `nav2_amcl`, use:

```bash
ros2 run beluga_benchmark parameterized_run <NUMBER_OF_PARTICLES_EXPERIMENT_1> <NUMBER_OF_PARTICLES_EXPERIMENT_2> --package nav2_amcl --executable amcl
```

For other options, e.g. using a different rosbag, see:

```bash
ros2 run beluga_benchmark parameterized_run --help
```

## Visualizing `timem` results of one run

Use the following command:

```bash
ros2 run beluga_benchmark timem_results <PATH_TO_OUTPUT_DIR_OF_RUN>
```

The specified directory must contain a `timem-output.json` file, e.g. an output directory generated by `parameterized_run`. The script will print CPU usage, peak RSS, and elapsed time, and will also plot virtual and RSS memory over time.
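A minimal sketch of pulling one metric out of such a JSON report; note that the key layout below is an assumption made for illustration only, so inspect your actual `timem-output.json` for the real structure:

```python
import json

# Hedged sketch: read a timem-style JSON report and print peak RSS.
# The keys below ("peak_rss", "value", "unit") are ASSUMED for this
# example; check the real timem-output.json for its actual layout.
report = {"peak_rss": {"value": 123.4, "unit": "MB"},
          "cpu_util": {"value": 87.5, "unit": "%"}}
with open("timem-output.json", "w") as f:
    json.dump(report, f)

with open("timem-output.json") as f:
    data = json.load(f)
peak_rss = data["peak_rss"]
print(f"peak RSS: {peak_rss['value']} {peak_rss['unit']}")
```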

## Visualizing estimated trajectory APE of one run

Use the following command:

```bash
evo_ape bag2 <PATH_TO_BAGFILE> /odometry/ground_truth /pose -p
```

For `nav2_amcl`, replace `/pose` with `/amcl_pose`. The bagfiles generated by `parameterized_run` can be found in the output directory of each run. This will print APE metrics (mean, median, max, std, rmse, etc.) and also plot APE over time.
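The summary statistics reported above can be sketched from a toy list of absolute pose errors (the values are made up, and evo's exact std convention may differ):

```python
import math
import statistics

# Toy absolute pose errors (meters); illustrative values only.
errors = [0.05, 0.08, 0.12, 0.07, 0.20]

summary = {
    "mean": statistics.mean(errors),
    "median": statistics.median(errors),
    "max": max(errors),
    "std": statistics.pstdev(errors),  # population std; evo may differ
    "rmse": math.sqrt(sum(e * e for e in errors) / len(errors)),
}
print(summary)
```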

## Comparing parameterized runs

The following command compares the results of different benchmarking runs, producing a single plot for each measured metric.

This can be used to compare different `beluga_amcl` and/or `nav2_amcl` runs, or to compare the same node with different base configuration settings.

The command is:

```bash
ros2 run beluga_benchmark compare_results -s <PATH1> -l <LABEL1> -s <PATH2> -l <LABEL2> ...
```

where `PATH1` and `PATH2` are the paths to the output directories of the benchmarking runs to compare, and `LABEL1` and `LABEL2` are the labels used for each of them in the plot. Any number of runs can be added to the same plot by providing additional `-s <PATH> -l <LABEL>` pairs.
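Assembling the repeated `-s`/`-l` pairs can be sketched programmatically (the run paths and labels below are hypothetical):

```python
# Sketch: build the compare_results argument list from (path, label)
# pairs. The paths and labels here are hypothetical examples.
runs = [("run_a_output", "beluga"), ("run_b_output", "nav2")]

cmd = ["ros2", "run", "beluga_benchmark", "compare_results"]
for path, label in runs:
    cmd += ["-s", path, "-l", label]
print(" ".join(cmd))
```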

Additionally, the Y axis of the plot can be switched to log scale using the `--use-logy` flag. If not specified, the plot defaults to linear scale.

Note that the result paths passed to `compare_results` must be the directories where `parameterized_run` was executed, i.e. the ones containing the `benchmark_*_particles_output` directories.

The script will plot the following metrics vs the number of particles:

  • RSS memory
  • CPU usage
  • APE mean
  • APE median
  • APE max
  • APE rmse

## Saving benchmarking results to a CSV file

The following command saves the results of different benchmarking runs to a CSV file. The examples below assume that beam-model data has been stored in the `beam_beluga_seq` and `beam_nav2_amcl` directories, and likelihood-model data in the `likelihood_beluga_seq` and `likelihood_nav2_amcl` directories.

```bash
ros2 run beluga_benchmark compare_results \
    -s beam_beluga_seq -l beluga          \
    -s beam_nav2_amcl  -l nav2            \
    --save-csv beam_model_data.csv
```

```bash
ros2 run beluga_benchmark compare_results \
    -s likelihood_beluga_seq -l beluga    \
    -s likelihood_nav2_amcl  -l nav2      \
    --save-csv likelihood_model_data.csv
```

The files `beam_model_data.csv` and `likelihood_model_data.csv` will be created in the current directory, containing all of the data captured in the benchmark runs. Columns are prefixed with the corresponding label passed to `compare_results` (e.g. `beluga_rss`, `nav2_rss`).
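As a hedged sketch of consuming such a CSV, where the column names are assumptions modeled on the `beluga_rss` / `nav2_rss` examples above:

```python
import csv

# Write a toy CSV mimicking the label-prefixed layout described above;
# the column names and values are assumptions for illustration.
with open("beam_model_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["particles", "beluga_rss", "nav2_rss"])
    writer.writeheader()
    writer.writerow({"particles": 250, "beluga_rss": 120.0, "nav2_rss": 150.0})
    writer.writerow({"particles": 1000, "beluga_rss": 310.0, "nav2_rss": 365.0})

# Read it back; csv.DictReader yields one dict per benchmark run.
with open("beam_model_data.csv") as f:
    rows = list(csv.DictReader(f))
print(rows[0]["beluga_rss"], rows[1]["nav2_rss"])
```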

## References