Firstly, congrats on this amazing planner and thanks for this contribution.

I am trying to replicate this setup using a Jetson Orin Nano, a flight controller running PX4 1.14, a D435i, and a T265. I get odometry from the T265 and depth (and the point cloud) from the D435.

This is my launch file (an edit of `single_run_in_exp.launch`):
```xml
<launch>
  <!-- number of moving objects -->
  <arg name="obj_num" value="1"/>
  <arg name="drone_id" value="0"/>

  <arg name="map_size_x" value="100"/>
  <arg name="map_size_y" value="50"/>
  <arg name="map_size_z" value="3.0"/>
  <arg name="odom_topic" value="/camera/odom/sample"/>

  <!-- main algorithm params -->
  <include file="$(find ego_planner)/launch/advanced_param_exp.xml">
    <arg name="drone_id" value="$(arg drone_id)"/>
    <arg name="map_size_x_" value="$(arg map_size_x)"/>
    <arg name="map_size_y_" value="$(arg map_size_y)"/>
    <arg name="map_size_z_" value="$(arg map_size_z)"/>
    <arg name="odometry_topic" value="$(arg odom_topic)"/>
    <arg name="obj_num_set" value="$(arg obj_num)"/>

    <!-- camera pose: transform of camera frame in the world frame -->
    <!-- depth topic: depth image, 640x480 by default -->
    <!-- don't set cloud_topic if you already set these ones! -->
    <arg name="camera_pose_topic" value="/pcl_render_node/camera_pose"/>
    <arg name="depth_topic" value="/pcl_render_node/depth"/>

    <!-- topic of point cloud measurement, such as from LIDAR -->
    <!-- don't set camera pose and depth, if you already set this one! -->
    <arg name="cloud_topic" value="/camera_d435/depth/color/points"/>

    <!-- intrinsic params of the depth camera -->
    <arg name="cx" value="321.735473632812"/>
    <arg name="cy" value="243.194213867188"/>
    <arg name="fx" value="384.127471923828"/>
    <arg name="fy" value="384.127471923828"/>

    <!-- maximum velocity and acceleration the drone will reach -->
    <arg name="max_vel" value="1.0"/>
    <arg name="max_acc" value="4.0"/>

    <!-- always set to 1.5 times greater than the sensing horizon -->
    <arg name="planning_horizon" value="6"/>

    <arg name="use_distinctive_trajs" value="false"/>

    <!-- 1: use 2D Nav Goal to select goal -->
    <!-- 2: use global waypoints below -->
    <arg name="flight_type" value="1"/>

    <!-- global waypoints -->
    <!-- It generates a piecewise min-snap traj passing all waypoints -->
    <arg name="point_num" value="1"/>
    <arg name="point0_x" value="15"/>
    <arg name="point0_y" value="0"/>
    <arg name="point0_z" value="1"/>
    <arg name="point1_x" value="0.0"/>
    <arg name="point1_y" value="0.0"/>
    <arg name="point1_z" value="1.0"/>
    <arg name="point2_x" value="15.0"/>
    <arg name="point2_y" value="0.0"/>
    <arg name="point2_z" value="1.0"/>
    <arg name="point3_x" value="0.0"/>
    <arg name="point3_y" value="0.0"/>
    <arg name="point3_z" value="1.0"/>
    <arg name="point4_x" value="15.0"/>
    <arg name="point4_y" value="0.0"/>
    <arg name="point4_z" value="1.0"/>
  </include>

  <!-- trajectory server -->
  <node pkg="ego_planner" name="drone_$(arg drone_id)_traj_server" type="traj_server" output="screen">
    <!-- <remap from="position_cmd" to="/setpoints_cmd"/> -->
    <remap from="/position_cmd" to="planning/pos_cmd"/>
    <remap from="~planning/bspline" to="drone_$(arg drone_id)_planning/bspline"/>
    <param name="traj_server/time_forward" value="1.0" type="double"/>
  </node>

  <!-- Static transform between camera_link and camera_d435_link -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="t265_to_d435" args="0 0 0 0 0 0 camera_odom_frame camera_d435_link"/>
</launch>
```
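As a side note, the comments inside the included `advanced_param_exp.xml` say not to set `camera_pose_topic`/`depth_topic` together with `cloud_topic`, yet the file above sets all three. A minimal, untested sketch of the sensing block using only the point-cloud input (topic name taken from the launch file above; whether the unset args fall back to safe defaults depends on `advanced_param_exp.xml`) would be:

```xml
<!-- Untested sketch: keep only the point-cloud input, per the
     "don't set camera pose and depth" comment in advanced_param_exp.xml.
     camera_pose_topic and depth_topic are deliberately left unset. -->
<arg name="cloud_topic" value="/camera_d435/depth/color/points"/>
```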
There are a few issues; I don't know whether they are interrelated.

1. There is no frame called `world`. The absence of a world frame prevents me from doing any transforms or displaying the point cloud.
2. When the planner starts, the body's visualization moves much faster than the actual system. It also follows strange trajectories, repeatedly heading to random points in random directions before reaching the target node (no obstacles, open environment).
3. I read that the actual experiments were carried out with a D435i only (odometry was estimated using VINS on the IR cameras). I am using two different cameras for pose and depth. Do I need to add some code to get this working, or is changing the topics and static transforms in the launch file enough?
4. I echoed `/planning/pos_cmd` (to check it before passing it to MAVROS). The velocity and acceleration values don't go back to 0 once the point is reached (in ego-planner, they do). How can I change that?

Notes:
- I have tried enabling CUDA and using depth instead of cloud; it makes no difference.
- When giving depth, I don't have a topic that publishes the camera pose, so I decided to give the point cloud instead.
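For the missing `world` frame, one minimal sketch (untested; it assumes the planner's fixed frame is literally named `world` and that `camera_odom_frame` is the T265 odometry root, as in the launch file above) is to publish an identity static transform so RViz and the mapping pipeline can resolve everything against `world`:

```xml
<!-- Untested sketch: create a "world" frame coincident with the T265
     odometry frame. Adjust the child frame name if your tf tree differs. -->
<node pkg="tf2_ros" type="static_transform_publisher" name="world_to_odom"
      args="0 0 0 0 0 0 world camera_odom_frame"/>
```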
I am actively testing this system. Any suggestions would be helpful.
Thank you
Excuse me, how do you give the number of "seats" to make the plane fly at once? After I modified the flight mode according to the code-explanation part of the video, I still could not achieve this task.
I don't know what you mean by "seats". As for flight mode: when I give a point, I switch to Offboard mode (PX4 1.14).
Hey @ZJU-jxlin, @bigsuperZZZX,