
Commit 2efb9d3

improve readability
Signed-off-by: ismetatabay <[email protected]>
1 parent 60b6511 commit 2efb9d3

1 file changed: +77 −77 lines changed

docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/index.md

@@ -15,11 +15,11 @@ in this specific environment.
### Enabling camera-lidar fusion

- To enable camera-lidar fusion, you must first calibrate both your camera and lidar.
  Following that, you will need to provide the `camera_info`
  and `rectified_image` topics in order to run the `tensorrt_yolo` node.
  Once these ROS 2 topics are available,
  we can proceed with enabling camera-lidar fusion as our chosen perception method:

!!! note "Enabling camera lidar fusion on [`autoware.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/2255356e0164430ed5bc7dd325e3b61e983567a3/autoware_launch/launch/autoware.launch.xml#L42)"

@@ -45,25 +45,25 @@ file:
...
```
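
Before wiring everything up, it can save time to confirm that the camera topics the fusion pipeline needs are actually being published. Below is a minimal sketch (not part of Autoware) that checks the ROS 2 graph; the topic names are assumptions based on a typical sensor-kit layout, so replace them with your own.

```python
import rclpy
from rclpy.node import Node

# Assumed topic names for one camera; adjust to your sensor kit and namespaces.
EXPECTED_TOPICS = [
    '/sensing/camera/camera0/camera_info',
    '/sensing/camera/camera0/image_rect_color',
]


def main():
    rclpy.init()
    node = Node('fusion_topic_check')
    # Spin briefly so ROS 2 graph discovery can complete.
    rclpy.spin_once(node, timeout_sec=2.0)
    available = {name for name, _ in node.get_topic_names_and_types()}
    for topic in EXPECTED_TOPICS:
        status = 'OK' if topic in available else 'MISSING'
        node.get_logger().info(f'{status}: {topic}')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```
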
- Also, you need to update the `roi_sync.param.yaml` parameter file to match your camera count.
  First,
  please refer to the `roi_cluster_fusion` documentation for more information about this package.
  Then, update your camera offsets.
  For example,
  if you have four cameras in the perception detection pipeline
  and you haven't measured their timestamps,
  you can set these camera offsets to "0" as the initial value.
  Please be careful with the offset array size; it must equal your camera count.

```diff
- input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] # 6 cameras
+ input_offset_ms: [0.0, 0.0, 0.0, 0.0] # 4 cameras
```

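If you later want a measured starting point instead of zeros, a small probe node can compare each camera's header stamp against the lidar's. This is a rough sketch, not an Autoware tool; the lidar topic, the camera topics, and the four-camera count are assumptions to adapt.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo, PointCloud2


def to_sec(stamp):
    return stamp.sec + stamp.nanosec * 1e-9


class OffsetProbe(Node):
    """Logs each camera's timestamp offset relative to the latest lidar scan."""

    def __init__(self):
        super().__init__('camera_offset_probe')
        self.last_lidar = None
        # Assumed concatenated lidar topic; verify with `ros2 topic list`.
        self.create_subscription(
            PointCloud2, '/sensing/lidar/concatenated/pointcloud',
            self.lidar_cb, 1)
        for i in range(4):  # assuming four cameras, as in the example above
            self.create_subscription(
                CameraInfo, f'/sensing/camera/camera{i}/camera_info',
                lambda msg, idx=i: self.camera_cb(idx, msg), 1)

    def lidar_cb(self, msg):
        self.last_lidar = to_sec(msg.header.stamp)

    def camera_cb(self, idx, msg):
        if self.last_lidar is None:
            return
        offset_ms = (to_sec(msg.header.stamp) - self.last_lidar) * 1e3
        self.get_logger().info(f'camera{idx} offset: {offset_ms:.2f} ms')


def main():
    rclpy.init()
    rclpy.spin(OffsetProbe())


if __name__ == '__main__':
    main()
```

Averaging the logged values over a stretch of driving gives a first estimate for `input_offset_ms`.
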
- If you have used different namespaces for your camera and ROI topics,
  you will need to add the input topics for the camera_info,
  image_raw,
  and rois messages in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) file.

```diff
- <arg name="image_raw0" default="/sensing/camera/camera0/image_rect_color" description="image raw topic name"/>
...
```
@@ -76,13 +76,13 @@ and rois messages in the [`tier4_perception_component.launch.xml`](https://githu
### Tuning ground segmentation

- The ground segmentation package removes the ground points from the input point cloud for the perception pipeline.
  Our campus environment has many steep slopes and rough roads,
  which makes it difficult to accurately segment ground and non-ground points.

- For example, when the vehicle passes over speed bumps,
  many false positives (ghost points) appear as non-ground points,
  as shown in the image below.

<figure markdown>
![default-ground-segmentation](images/ground-remover-ghost-points.png){ align=center }
@@ -92,48 +92,48 @@ points on the high-slope roads with default configurations.
</figcaption>
</figure>

- These ghost points affect the motion planner of Autoware,
  causing the vehicle to stop during autonomous driving even though there is no obstacle on the road.
  We will reduce the number of false positive non-ground points
  by fine-tuning the ground segmentation in Autoware.

- Autoware includes three different ground segmentation algorithms:
  `ray_ground_filter`, `scan_ground_filter`, and `ransac_ground_filter`.
  The default method is `scan_ground_filter`.
  Please refer to the [`ground_segmentation` package documentation](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/)
  for more information about these methods and their parameter definitions.

- First,
  we will change the `global_slope_max_angle_deg` value from 10 to 30 degrees in the [`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file.
  This change will reduce our false positive non-ground points.
  However, be cautious when increasing the threshold,
  as it may lead to an increase in the number of false negatives.

```diff
- global_slope_max_angle_deg: 10.0
+ global_slope_max_angle_deg: 30.0
```

- Then, we will update the `split_height_distance` parameter from 0.2 to 0.35 meters.
  This adjustment will help reduce false positive non-ground points,
  especially on step-like road surfaces and in misaligned multi-lidar configurations.

```diff
- split_height_distance: 0.2
+ split_height_distance: 0.35
```

- Now, we will change the `non_ground_height_threshold` value from 0.2 to 0.3 meters.
  This will help reduce false positive non-ground points,
  but it may also decrease the number of true positive non-ground points
  that are below this threshold value.

```diff
- non_ground_height_threshold: 0.2
+ non_ground_height_threshold: 0.3
```

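To judge whether these changes actually reduce ghost points, you can log the size of the non-ground cloud while replaying the same drive before and after tuning. The following is a minimal sketch, assuming the segmentation output is published on `/perception/obstacle_segmentation/pointcloud` (verify the topic name in your setup):

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class NonGroundCounter(Node):
    """Logs the number of non-ground points per scan."""

    def __init__(self):
        super().__init__('non_ground_counter')
        # Assumed output topic of the ground segmentation stage.
        self.create_subscription(
            PointCloud2,
            '/perception/obstacle_segmentation/pointcloud',
            self.callback, 1)

    def callback(self, msg):
        # width * height gives the point count for both organized and
        # unorganized clouds.
        self.get_logger().info(f'non-ground points: {msg.width * msg.height}')


def main():
    rclpy.init()
    rclpy.spin(NonGroundCounter())


if __name__ == '__main__':
    main()
```

Replaying the same rosbag with the old and new parameters makes the reduction in ghost points directly comparable.
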
- The following image illustrates the results after these fine-tunings of the ground segmentation package.

<figure markdown>
![after-ground-segmentation](images/after-tuning-ground-segmentation.png){ align=center }
@@ -143,37 +143,37 @@ the false positive points will disappear from the same location.
</figcaption>
</figure>

- You need to tune the ground segmentation according to your environment.
  These examples are provided for high-slope and rough road conditions.
  If your conditions are more favorable,
  you can adjust your parameters
  by referring to the [`ground_segmentation` package documentation page](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/).
### Tuning euclidean clustering

- The `euclidean_clustering` package applies Euclidean clustering methods
  to group points into smaller clusters for classifying objects.
  Please refer to the [`euclidean_clustering` package documentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster) for more information.
  This package is used in the detection pipeline of the Autoware architecture.
  There are two different Euclidean clustering methods included in this package:
  `euclidean_cluster` and `voxel_grid_based_euclidean_cluster`.
  In the default design of Autoware,
  `voxel_grid_based_euclidean_cluster` serves as the default clustering method.

- In the YTU campus environment, there are many small objects such as birds,
  dogs, cats, balls, and cones. To detect, track,
  and predict these small objects, we aim to assign them clusters that are as small as possible.

- First, we will change the object filter method from `lanelet_filter` to `position_filter`
  to detect objects outside the lanelet boundaries in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) file.

```diff
- <arg name="detected_objects_filter_method" default="lanelet_filter" description="options: lanelet_filter, position_filter"/>
+ <arg name="detected_objects_filter_method" default="position_filter" description="options: lanelet_filter, position_filter"/>
```

- After changing the object filter method,
  the output of our perception pipeline looks like the image below:

<figure markdown>
![default-clustering](images/initial-clusters.png){ align=center }
@@ -182,9 +182,9 @@ the output of our perception pipeline looks like the image below:
</figcaption>
</figure>

- Now, we can detect unknown objects that are outside the lanelet map,
  but we still need to update the filter range
  or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.

```diff
upper_bound_x: 100.0
@@ -196,26 +196,26 @@ or disable the filter for unknown objects in the [`object_position_filter.param.
+ lower_bound_y: -100.0
```

- Alternatively, you can simply disable the filter for unknown-labeled objects.

```diff
- UNKNOWN : true
+ UNKNOWN : false
```

- After that,
  we can update our clustering parameters,
  since objects are no longer filtered out by the lanelet2 map.
  As we mentioned earlier, we want to detect small objects.
  Therefore,
  we will decrease the minimum cluster size to 1 in the [`voxel_grid_based_euclidean_cluster.param.yaml` file](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml).

```diff
- min_cluster_size: 10
+ min_cluster_size: 1
```

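To verify that small objects now survive clustering and filtering, you can watch the detection output while testing. The sketch below assumes the `autoware_auto_perception_msgs` interfaces and the detection topic used by this Autoware version; adjust both if your release differs.

```python
import rclpy
from rclpy.node import Node
# Assumed message package for this Autoware release.
from autoware_auto_perception_msgs.msg import DetectedObjects, ObjectClassification


class UnknownObjectLogger(Node):
    """Counts detected objects, highlighting UNKNOWN ones (small clusters)."""

    def __init__(self):
        super().__init__('unknown_object_logger')
        # Assumed detection output topic; verify with `ros2 topic list`.
        self.create_subscription(
            DetectedObjects,
            '/perception/object_recognition/detection/objects',
            self.callback, 1)

    def callback(self, msg):
        unknown = sum(
            1 for obj in msg.objects
            if obj.classification
            and obj.classification[0].label == ObjectClassification.UNKNOWN)
        self.get_logger().info(
            f'{len(msg.objects)} objects detected, {unknown} UNKNOWN')


def main():
    rclpy.init()
    rclpy.spin(UnknownObjectLogger())


if __name__ == '__main__':
    main()
```
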
- After making these changes, our perception output is shown in the following image:

<figure markdown>
![update-clustering](images/after-tuning-clustering.png){ align=center }
