Proposal

In commit 59e6c68, a viewer config was added so that the default viewer can track the world origin, the environment origin, or the asset_root during training. I would like to be able to specify a particular body of an articulation or rigid object whose origin the viewer tracks, for example the end effector in a manipulation task. I am happy to work on this as a PR if there is no concurrent work being done to this effect.
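A hypothetical sketch of how the viewer could resolve its tracking origin once an "asset_body" mode exists. The names resolve_viewer_origin, find_bodies, and body_pos_w mirror common Isaac Lab articulation patterns, but everything here is illustrative, not the actual implementation:

```python
def resolve_viewer_origin(scene, viewer_cfg):
    """Return the world-frame position the viewer camera should track (sketch)."""
    asset = scene[viewer_cfg.asset_name]
    if viewer_cfg.origin_type == "asset_body":
        # look up the index of the requested body within the articulation
        body_ids, _ = asset.find_bodies(viewer_cfg.body_name)
        # world-frame position of that body in the tracked environment
        return asset.data.body_pos_w[viewer_cfg.env_index, body_ids[0]]
    # existing behavior: track the articulation / rigid-object root
    return asset.data.root_pos_w[viewer_cfg.env_index]
```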
Motivation
The currently documented setup for wheeled navigation tasks suggests using a virtual arm to control the planar (SE(2)) pose of the robot instead of simulating the wheel dynamics directly, since the GPU pipeline currently struggles with smooth cylinders. The same setup is an easy way to implement quadcopter dynamics, and it has the same limitation: because the vehicle is not the asset_root, there is currently no way to track the robot the way quadrupeds are tracked with asset_root. Additionally, it would be helpful to directly track the end effector in manipulation tasks to observe grasps in high detail, no matter where the object is located prior to the grasp.
Alternatives
Cameras can be configured, e.g. in a play script, to save videos from any origin, but this is only useful for analyzing policies after the fact, not during training.
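A sketch of that workaround, recording videos in a play script with gymnasium's RecordVideo wrapper; the environment id is illustrative:

```python
import gymnasium as gym

env = gym.make("Isaac-Lift-Cube-Franka-v0", render_mode="rgb_array")
env = gym.wrappers.RecordVideo(
    env,
    video_folder="videos",
    step_trigger=lambda step: step == 0,  # start recording at the first step
    video_length=200,                     # number of frames per clip
)
```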
Checklist
I have checked that there is no similar issue in the repo
Acceptance Criteria
Support "asset_body" as an origin mode in the viewer cfg with an optional parameter "body_name" to specify which body to place the origin at for the camera.