
### Enabling camera-lidar fusion

To enable camera-lidar fusion, you need to first calibrate both your camera and lidar.
Following that, you will need the `image_info`
and `rectified_image` topics in order to run the `tensorrt_yolo` node.
Once these ROS 2 topics are prepared,
you can enable camera-lidar fusion in the corresponding launch file:

```
...
```

Also, you need to update the `roi_sync.param.yaml` parameter file according to the number of cameras you use.
Firstly,
please refer to the `roi_cluster_fusion` documentation for more information about this package.
Then, you will update your camera offsets.
Please be careful with the offset array size; it must be equal to your camera count.

```diff
+     input_offset_ms: [0.0, 0.0, 0.0, 0.0] # 4 cameras
```
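For reference, a four-camera entry might look like the sketch below; only `input_offset_ms` comes from the change above, and the `/**:` wrapper is the standard ROS 2 parameter-file layout rather than a copy of the upstream file.

```yaml
# Sketch of a roi_sync.param.yaml entry for a four-camera setup. Only
# input_offset_ms is taken from the change above; the wrapper is the
# standard ROS 2 parameter-file layout.
/**:
  ros__parameters:
    input_offset_ms: [0.0, 0.0, 0.0, 0.0] # one entry per camera, in camera index order
```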

If you have used different namespaces for your camera and ROI topics,
you will need to add the input topics for the `camera_info`,
`image_raw`,
and `rois` messages in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) file.

### Tuning ground segmentation

The ground segmentation package removes the ground points from the input point cloud for the perception pipeline.
In our campus environment, there are many high slopes and rough roads,
which make it difficult to accurately segment ground and non-ground points.

For example, when we pass over speed bumps,
there are a lot of false positives (ghost points) that appear as non-ground points,
as shown in the image below.

<figure markdown>
  <figcaption>
    There are a lot of false positive non-ground points on the high-slope roads with default configurations.
  </figcaption>
</figure>

These ghost points affect the motion planner of Autoware,
causing the vehicle to stop during autonomous driving even though there is no obstacle on the road.
We will reduce the number of false positive non-ground points
by fine-tuning the ground segmentation in Autoware.

There are three different ground segmentation algorithms included in Autoware:
`ray_ground_filter`, `scan_ground_filter`, and `ransac_ground_filter`.
The default method is the `scan_ground_filter`.
Please refer to the [`ground_segmentation` package documentation](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/)
for more information about these methods and their parameter definitions.

Firstly,
we will change the `global_slope_max_angle_deg` value from 10 to 30 degrees in the [`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file.
This change will reduce our false positive non-ground points.
However, be cautious when increasing the threshold,
as it may lead to an increase in the number of false negatives.

```diff
-     global_slope_max_angle_deg: 10.0
+     global_slope_max_angle_deg: 30.0
```

Then, we will update the `split_height_distance` parameter from 0.2 to 0.35 meters.
This adjustment will help in reducing false positive non-ground points,
especially on step-like road surfaces or in cases of misaligned multiple lidar configurations.

```diff
-     split_height_distance: 0.2
+     split_height_distance: 0.35
```

Now, we will change the `non_ground_height_threshold` value from 0.2 to 0.3 meters.
This will help us in reducing false positive non-ground points,
but it may also decrease the number of true positive non-ground points
that are below this threshold value.
```diff
-     non_ground_height_threshold: 0.2
+     non_ground_height_threshold: 0.3
```
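Taken together, the tuned `scan_ground_filter` values look like the sketch below; the three values are the ones applied above, while the `/**:` wrapper is just the standard ROS 2 parameter-file layout.

```yaml
# Summary of the scan_ground_filter values tuned in this section; the three
# values come from the diffs above, the wrapper is the standard ROS 2
# parameter-file layout.
/**:
  ros__parameters:
    global_slope_max_angle_deg: 30.0 # was 10.0; tolerate steeper roads
    split_height_distance: 0.35      # was 0.2; merge step-like ground surfaces
    non_ground_height_threshold: 0.3 # was 0.2; drop low ghost points
```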

The following image illustrates the results of the ground segmentation package after these fine-tunings.

<figure markdown>
![after-ground-segmentation](images/after-tuning-ground-segmentation.png){ align=center }
  <figcaption>
    After these fine-tunings, the false positive points will disappear from the same location.
  </figcaption>
</figure>

You need to update the ground segmentation parameters according to your environment.
These examples are provided for high slopes and rough road conditions.
If you have better conditions,
you can adjust your parameters
by referring to the [`ground_segmentation` package documentation page](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/).

### Tuning euclidean clustering

The `euclidean_clustering` package applies Euclidean clustering methods
to cluster points into smaller parts for classifying objects.
Please refer to [`euclidean_clustering` package documentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster) for more information.
This package is used in the detection pipeline of the Autoware architecture.
There are two different Euclidean clustering methods included in this package:
`euclidean_cluster` and `voxel_grid_based_euclidean_cluster`.
In the default design of Autoware,
the `voxel_grid_based_euclidean_cluster` method serves as the default Euclidean clustering method.

In the YTU campus environment, there are many small objects like birds,
dogs, cats, balls, cones, etc. To detect, track,
and predict these small objects, we aim to assign them clusters that are as small as possible.

Firstly, we will change our object filter method from `lanelet_filter` to `position_filter`
to detect objects that are outside the lanelet boundaries in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) file.

```diff
- <arg name="detected_objects_filter_method" default="lanelet_filter" description="options: lanelet_filter, position_filter"/>
+ <arg name="detected_objects_filter_method" default="position_filter" description="options: lanelet_filter, position_filter"/>
```

After changing the filter method for objects,
the output of our perception pipeline looks like the image below:

<figure markdown>
  <figcaption>
    The output of the perception pipeline after changing the object filter method.
  </figcaption>
</figure>

Now, we can detect unknown objects that are outside the lanelet map,
but we still need to update the filter range
or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.

```diff
...
+     lower_bound_y: -100.0
```
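For orientation, the bounds section of this file might look like the sketch below, assuming the bounds are expressed in meters around the ego vehicle; only `lower_bound_y` comes from the change above, and every other value is an illustrative placeholder rather than an upstream default.

```yaml
# Illustrative object_position_filter bounds sketch (meters around the ego
# vehicle). Only lower_bound_y is taken from the change above; all other
# values are placeholders, not upstream defaults.
/**:
  ros__parameters:
    upper_bound_x: 100.0  # placeholder
    lower_bound_x: -100.0 # placeholder
    upper_bound_y: 100.0  # placeholder
    lower_bound_y: -100.0 # from the change above
```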

Also, you can simply disable the filter for unknown labeled objects.

```diff
- UNKNOWN : true
+ UNKNOWN : false
```
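The `UNKNOWN` entry above belongs to a per-label filter map. A sketch of such a map follows; the `filter_target_label` key name matches similar Autoware filter configs but should be treated as an assumption here, and every label other than `UNKNOWN` is illustrative.

```yaml
# Sketch of a per-label filter map (key name assumed from similar Autoware
# filter configs). Labels set to true are subject to the position filter;
# setting UNKNOWN to false exempts unknown-labeled objects from it.
filter_target_label:
  UNKNOWN: false   # from the change above
  CAR: true        # illustrative
  TRUCK: true      # illustrative
  BICYCLE: true    # illustrative
  PEDESTRIAN: true # illustrative
```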

After that,
we can update our clustering parameters
since we can now detect all objects without the lanelet2 map filtering them out.
As we mentioned earlier, we want to detect small objects.
Therefore, we will decrease the minimum cluster size to 1 in the `voxel_grid_based_euclidean_cluster.param.yaml` parameter file.

```diff
+     min_cluster_size: 1
```
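For context, the surrounding clustering parameters might look like the sketch below; only `min_cluster_size` comes from the change above, and the remaining keys and values are illustrative placeholders rather than the upstream defaults.

```yaml
# Illustrative voxel_grid_based_euclidean_cluster sketch. Only min_cluster_size
# is taken from the change above; all other keys/values are placeholders.
/**:
  ros__parameters:
    min_cluster_size: 1    # from the change above: keep even tiny clusters
    max_cluster_size: 3000 # placeholder
    tolerance: 0.7         # placeholder clustering distance [m]
    voxel_leaf_size: 0.3   # placeholder voxel size [m]
```

Keep in mind that a minimum cluster size of 1 accepts every point as a potential object, so this trades extra false positive detections for the ability to see small obstacles.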

After making these changes, our perception output is shown in the following image:

<figure markdown>
![update-clustering](images/after-tuning-clustering.png){ align=center }
</figure>