
Increase Point Cloud Density in SLAM Mapping with unitree GO2 EDU Bot #492

Open
Bharannn opened this issue Apr 30, 2024 · 6 comments

@Bharannn

Hi there,

I hope this message finds you well. I've been using the unitree GO2 EDU bot equipped with the HESAI XT-16 LIDAR for SLAM mapping purposes. While performing SLAM and saving the map, I've observed that the point cloud density is lower than expected, even when utilizing the LIDAR at its highest RPM (1200 rpm) as per the documentation here.

I'm reaching out to inquire if there's a way to adjust the point cloud density directly through the SLAM process. I've reviewed the documentation provided but haven't found any specific instructions on how to address this issue.

Would it be possible for you to provide guidance on any configuration or parameter files that might allow me to adjust the point cloud density during SLAM mapping? Alternatively, if there are any other recommended solutions or approaches to increase the density of the point cloud output, I would greatly appreciate your insights.

Thank you for your attention to this matter, and I look forward to your assistance in resolving this issue.

@XiaomingQu0913

LIO-SAM only saves the edge and planar feature points of each keyframe. If you want to save the full point cloud, you need to implement it yourself:

1. Read the implementation in ImageProjection::projectPointCloud() and save the undistorted points together with their timestamps.
2. Stitch the point clouds from step 1 along the trajectory. LIO-SAM does not save every frame, so you may need to interpolate poses.
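A minimal sketch of step 2, assuming you have already dumped per-frame undistorted points with timestamps plus a timestamped trajectory (all names here are hypothetical, and the yaw-only rotation keeps the sketch short; real code should interpolate a full quaternion with slerp):

```python
import bisect
import math

def interp_pose(traj, t):
    """Linearly interpolate (t, x, y, z, yaw) trajectory samples at time t.
    traj is a list of tuples sorted by time."""
    times = [p[0] for p in traj]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return traj[0][1:]
    if i == len(traj):
        return traj[-1][1:]
    t0, *p0 = traj[i - 1]
    t1, *p1 = traj[i]
    a = (t - t0) / (t1 - t0)
    return tuple(v0 + a * (v1 - v0) for v0, v1 in zip(p0, p1))

def stitch(frames, traj):
    """frames: list of (timestamp, [(x, y, z), ...]) in the sensor frame.
    Returns all points transformed into the map frame."""
    out = []
    for t, points in frames:
        x, y, z, yaw = interp_pose(traj, t)
        c, s = math.cos(yaw), math.sin(yaw)
        for px, py, pz in points:
            out.append((c * px - s * py + x, s * px + c * py + y, pz + z))
    return out
```

Because keyframes are sparser than raw lidar frames, most frame timestamps will fall between two trajectory samples, which is exactly the interpolation case above.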

@XiaomingQu0913

You'd be better off giving the file-writing operations their own dedicated thread, so they don't affect real-time performance.
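One common shape for that dedicated writer thread is a producer/consumer queue: the SLAM loop only enqueues data and never blocks on disk I/O. A minimal sketch (the class and `sink` callback are hypothetical, not part of LIO-SAM):

```python
import queue
import threading

class AsyncWriter:
    """Background writer thread. `sink` is any callable that persists one
    chunk, e.g. appends a frame to a PCD file."""
    _STOP = object()  # sentinel telling the worker to exit

    def __init__(self, sink):
        self._q = queue.Queue()
        self._sink = sink
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            item = self._q.get()
            if item is self._STOP:
                break
            self._sink(item)

    def write(self, chunk):
        # Called from the real-time loop; only enqueues, never touches disk.
        self._q.put(chunk)

    def close(self):
        # Flush everything already queued, then stop the worker.
        self._q.put(self._STOP)
        self._thread.join()
```

In C++ (LIO-SAM's language) the equivalent is a `std::thread` draining a mutex-protected queue, but the structure is the same.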

@arpitpara

arpitpara commented May 24, 2024

I've been checking the same, but it seems there are no filters in image_projection.cpp that downsample the data, other than the downsampling rate, if present.
I feel it should be map_optimisation.cpp, where all the registration of the cloud happens; the points are filtered by leaf size. I reduced the leaf sizes to 0.001, and it gives me a warning that an index overflow might happen:

[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.

@XiaomingQu0913

XiaomingQu0913 commented May 25, 2024


The map_optimization downsampling is only used to build the local map for ICP and has no effect on the density of the map you need. The warning appears because the voxel index exceeds the maximum value of the int32 type. Setting leaf_size that small is not recommended anyway: it may prevent LOAM from obtaining good enough lines or planes when performing point-to-line or point-to-plane matching.
The map you get from the source code is actually just the feature points of the keyframes, so there are two likely reasons the map points are sparse:

  • keyframes make up at most half of the lidar frames, and may be sparser still at slow speeds
  • in feature extraction, points that are neither edge nor planar points are discarded

If you want a dense point cloud, it is not recommended to obtain it from the source code; you may need to stitch it yourself based on the trajectory.
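For intuition on the int32 warning: PCL's VoxelGrid addresses voxels with a single int32 index, so the product of voxel counts along the three axes must stay below 2^31. Quick arithmetic (the map extents below are made up for illustration, and this mimics rather than calls PCL's internal check):

```python
def voxelgrid_overflows(extent_xyz, leaf_size, index_bits=31):
    """Return True if the voxel count would overflow a signed 32-bit index,
    mimicking the check behind pcl::VoxelGrid's warning."""
    nx, ny, nz = (int(e / leaf_size) + 1 for e in extent_xyz)
    return nx * ny * nz > 2 ** index_bits - 1

# A hypothetical 100 m x 100 m x 20 m map:
print(voxelgrid_overflows((100, 100, 20), 0.001))  # -> True (0.001 m leaf)
print(voxelgrid_overflows((100, 100, 20), 0.4)) # -> False (default 0.4 m leaf)
```

So at leaf_size 0.001 the overflow is by many orders of magnitude, while the default leaf is comfortably within range.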

@arpitpara

Yeah, understood. Thanks for the explanation.
I am thinking of recording a PCAP file in parallel, over the same time window as the mapping. I get a trajectory .txt or .pcd file that I can then use to stitch all the raw frames together via the trajectory.
How can I record a PCAP file from within the code itself? Or, if I record through tcpdump, will that work?
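tcpdump on the LiDAR's network interface should work, since the XT-16 streams its data packets over UDP (check the data port in your sensor's network settings). If you'd rather record from inside your own code, a pcap file is just a small global header followed by per-packet record headers; a minimal sketch, with all function names hypothetical:

```python
import struct
import time

# Link type 147 (LINKTYPE_USER0): each record holds a raw UDP payload only,
# since a plain UDP socket never sees the Ethernet/IP headers anyway.
PCAP_GLOBAL_HEADER = struct.pack(
    "<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 147)

def write_pcap_packet(f, payload, ts=None):
    """Append one packet record (16-byte header + payload) to an open file."""
    ts = time.time() if ts is None else ts
    sec, usec = int(ts), int((ts % 1) * 1_000_000)
    f.write(struct.pack("<IIII", sec, usec, len(payload), len(payload)))
    f.write(payload)

def record_udp(path, port, max_packets):
    """Capture UDP datagrams (e.g. LiDAR data packets) into `path`.
    The data port is sensor-specific -- verify it against your XT-16 config."""
    import socket
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock, \
            open(path, "wb") as f:
        sock.bind(("0.0.0.0", port))
        f.write(PCAP_GLOBAL_HEADER)
        for _ in range(max_packets):
            payload, _addr = sock.recvfrom(65535)
            write_pcap_packet(f, payload)
```

Running this capture in its own thread (as suggested above for file writing) keeps it off the SLAM node's critical path.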

@gertrnn

gertrnn commented Oct 9, 2024


Hello, I have the following question: were you able to achieve mapping solely by following Unitree's official documentation, or did you use the LIO-SAM repository?
I have tried to map using the official LIO-SAM repository with the Hesai XT-16, but I get an error saying that I'm missing an IMU.
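On the missing-IMU error: stock LIO-SAM requires a 9-axis IMU stream, and its preintegration node will complain if nothing is published on the configured topic. It's worth checking that the topic names in LIO-SAM's config/params.yaml match what your robot actually publishes; a fragment along these lines (the topic values below are illustrative, not the GO2's actual topics):

```yaml
lio_sam:
  # Must match the topics your driver actually publishes
  pointCloudTopic: "hesai/pandar"
  imuTopic: "imu/data"
```

If the GO2 exposes its onboard IMU on a different topic, remapping it here (or in the launch file) is usually enough to get past that error.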
