`docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/index.md`

### Enabling camera-lidar fusion

To enable camera-lidar fusion, you first need to calibrate both your camera and lidar.
Following that, you will need the `image_info` and `rectified_image` topics in order to run the `tensorrt_yolo` node.
Once these ROS 2 topics are prepared, we can proceed with enabling camera-lidar fusion as our chosen perception method:

!!! note "Enabling camera lidar fusion on [`autoware.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/2255356e0164430ed5bc7dd325e3b61e983567a3/autoware_launch/launch/autoware.launch.xml#L42)"

Also, you need to update the `roi_sync.param.yaml` parameter file according to your number of cameras.
Firstly, please refer to the `roi_cluster_fusion` documentation for more information about this package.
Then, update your camera offsets.
For example, if you have four cameras in the perception detection pipeline and you have not measured their timestamps, you can set these camera offsets to "0" as the initial value.
Please be careful with the offset array size; it must be equal to your camera count.
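
For example, for a four-camera setup with unmeasured delays, the offset-related part of `roi_sync.param.yaml` could look roughly like the sketch below. The key names (`input_offset_ms`, `timeout_ms`, `match_threshold_ms`) and the nesting are assumptions based on the `roi_cluster_fusion` package and should be verified against the file shipped with your Autoware version:

```yaml
# Hypothetical excerpt of roi_sync.param.yaml for a four-camera setup.
/**:
  ros__parameters:
    input_offset_ms: [0.0, 0.0, 0.0, 0.0] # one entry per camera; the array size must equal the camera count
    timeout_ms: 70.0
    match_threshold_ms: 50.0
```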

If you have used different namespaces for your camera and ROI topics, you will need to add the input topics for the camera_info, image_raw, and rois messages in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) file.

```diff
- <arg name="image_raw0" default="/sensing/camera/camera0/image_rect_color" description="image raw topic name"/>
  ...
```

### Tuning ground segmentation

The ground segmentation package removes the ground points from the input point cloud for the perception pipeline.
In our campus environment, there are a lot of high slopes and rough roads.
These conditions make it difficult to accurately segment ground and non-ground points.

For example, when we pass over speed bumps, a lot of false positives (ghost points) appear as non-ground points, as shown in the image below.

<figure>
  <figcaption>False positive non-ground points on the high-slope roads with the default configuration.</figcaption>
</figure>

These ghost points affect the motion planner of Autoware, causing the vehicle to stop even though there is no obstacle on the road during autonomous driving.
We will reduce the number of false positive non-ground points by fine-tuning the ground segmentation in Autoware.

There are three different ground segmentation algorithms included in Autoware: `ray_ground_filter`, `scan_ground_filter`, and `ransac_ground_filter`.
The default method is `scan_ground_filter`.
Please refer to the [`ground_segmentation` package documentation](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/) for more information about these methods and their parameter definitions.
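
The active method is selected through a plugin entry in the ground segmentation configuration. A minimal sketch of that selection, assuming the default `scan_ground_filter`, is shown below; the exact plugin string and nesting are assumptions and may differ between Autoware versions:

```yaml
# Hypothetical excerpt: selecting scan_ground_filter as the ground segmentation plugin.
/**:
  ros__parameters:
    common_ground_filter:
      plugin: "ground_segmentation::ScanGroundFilterComponent" # assumed plugin class name
      parameters:
        global_slope_max_angle_deg: 10.0 # default value, tuned below
```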

Firstly, we will change the `global_slope_max_angle_deg` value from 10 to 30 degrees in the [`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file.
This change will reduce our false positive non-ground points.
However, be cautious when increasing the threshold, as it may lead to an increase in the number of false negatives.

```diff
- global_slope_max_angle_deg: 10.0
+ global_slope_max_angle_deg: 30.0
```

Then we will update the `split_height_distance` parameter from 0.2 to 0.35 meters.
This adjustment will help in reducing false positive non-ground points, especially on step-like road surfaces or in cases of misaligned multiple lidar configurations.

```diff
- split_height_distance: 0.2
+ split_height_distance: 0.35
```

Now, we will change the `non_ground_height_threshold` value from 0.2 to 0.3 meters.
This will help us in reducing false positive non-ground points, but it may also decrease the number of true positive non-ground points that are below this threshold value.

```diff
- non_ground_height_threshold: 0.2
+ non_ground_height_threshold: 0.3
```
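
Taken together, the tuned section of `ground_segmentation.param.yaml` ends up looking roughly like the sketch below; the surrounding keys and exact nesting are assumptions, so check them against the file in your Autoware version:

```yaml
# Hypothetical excerpt of ground_segmentation.param.yaml after the three changes above.
# Parameters not listed here keep their default values.
/**:
  ros__parameters:
    common_ground_filter:
      parameters:
        global_slope_max_angle_deg: 30.0
        split_height_distance: 0.35
        non_ground_height_threshold: 0.3
```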
135
135
136
-
The following image illustrates the results after these fine-tunings with the ground remover package.
136
+
-The following image illustrates the results after these fine-tunings with the ground remover package.

<figure>
  <figcaption>After tuning, the false positive points disappear from the same location.</figcaption>
</figure>

You need to update the ground segmentation according to your environment.
These examples are provided for high slopes and rough road conditions.
If you have better conditions, you can adjust your parameters by referring to the [`ground_segmentation` package documentation page](https://autowarefoundation.github.io/autoware.universe/main/perception/ground_segmentation/).

### Tuning euclidean clustering

The `euclidean_clustering` package applies Euclidean clustering methods to cluster points into smaller parts for classifying objects.
Please refer to the [`euclidean_clustering` package documentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster) for more information.
This package is used in the detection pipeline of the Autoware architecture.
There are two different Euclidean clustering methods included in this package: `euclidean_cluster` and `voxel_grid_based_euclidean_cluster`.
In the default design of Autoware, the `voxel_grid_based_euclidean_cluster` method serves as the default Euclidean clustering method.
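
For reference, the parameters this method exposes look roughly like the sketch below; the values are assumed defaults, and the key names should be checked against the `voxel_grid_based_euclidean_cluster.param.yaml` shipped with your Autoware version:

```yaml
# Hypothetical excerpt of voxel_grid_based_euclidean_cluster.param.yaml (assumed defaults).
/**:
  ros__parameters:
    tolerance: 0.7 # clustering distance tolerance [m]
    voxel_leaf_size: 0.3 # voxel grid resolution [m]
    min_points_number_per_voxel: 1
    min_cluster_size: 10 # lowered to 1 later in this guide
    max_cluster_size: 3000
    use_height: false
```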

In the YTU campus environment, there are many small objects like birds, dogs, cats, balls, cones, etc.
To detect, track, and predict these small objects, we aim to assign clusters to them that are as small as possible.

Firstly, we will change our object filter method from `lanelet_filter` to `position_filter` to detect objects that are outside the lanelet boundaries in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml).

After this change, the output of our perception pipeline looks like the image below:

<figure>
  <figcaption>Perception pipeline output after switching to the position filter.</figcaption>
</figure>

Now, we can detect unknown objects that are outside the lanelet map, but we still need to update the filter range or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.

```diff
  upper_bound_x: 100.0
  ...
+ lower_bound_y: -100.0
```

Also, you can simply disable the filter for unknown labeled objects.

```diff
- UNKNOWN : true
+ UNKNOWN : false
```
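
With both edits applied, the relevant parts of `object_position_filter.param.yaml` look roughly like the sketch below; keys not shown keep their defaults, and the exact structure is an assumption to verify against your file:

```yaml
# Hypothetical excerpt of object_position_filter.param.yaml after the changes above.
/**:
  ros__parameters:
    filter_target_label:
      UNKNOWN : false # unknown objects are no longer filtered out
    upper_bound_x: 100.0 # position filter range [m]
    lower_bound_y: -100.0
    # the remaining bounds keep their default values
```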

After that, we can update our clustering parameters, since we can now detect all objects regardless of the lanelet2 map filtering.
As we mentioned earlier, we want to detect small objects.
Therefore, we will decrease the minimum cluster size to 1 in the [`voxel_grid_based_euclidean_cluster.param.yaml` file](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml).

```diff
- min_cluster_size: 10
+ min_cluster_size: 1
```
217
217
218
-
After making these changes, our perception output is shown in the following image:
218
+
-After making these changes, our perception output is shown in the following image: