
Adds a first draft of a generic IK functionality #4

Open

wants to merge 15 commits into base: revisions
Conversation

JonathanKuelz
The idea behind this PR is to introduce an API (or interface) that allows using a pretrained model for downstream tasks without the effort of transforming problems and data manually.

Eventually, the goal is to provide an ik function that takes:

  • 1 to N different robots (for the moment revolute only), described by nothing but their joint offsets (which seems to me the most generic format for defining a robot)
  • 1 to M different goals per robot
  • hyperparameters

and returns either the best of S sampled IK solutions or all S samples.

This PR provides the code for this ik function, as well as utilities to load a pretrained model easily and to help with data transformations between torch and generative_graphik without requiring deep familiarity with the original code.
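The interface described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the function name, argument names, and array shapes are assumptions, and the sampling and scoring bodies are placeholders for the pretrained generative model and a forward-kinematics error metric.

```python
import numpy as np


def ik(joint_offsets, goals, num_samples=32, return_all=False):
    """Hypothetical sketch of the proposed interface (names assumed).

    joint_offsets: (N, dof, 3) array, one set of joint offsets per robot
    goals:         (N, M, 3) array, M goal positions per robot
    Returns an (N, M, dof) array with the best sample per goal, or an
    (N, M, S, dof) array of all samples if return_all is True.
    """
    n_robots, dof, _ = joint_offsets.shape
    n_goals = goals.shape[1]
    # Placeholder sampler: a real implementation would query the
    # pretrained generative model here.
    samples = np.random.uniform(
        -np.pi, np.pi, size=(n_robots, n_goals, num_samples, dof))
    if return_all:
        return samples
    # Placeholder scoring: a real implementation would rank samples by
    # forward-kinematics error against each goal and keep the best.
    errors = np.random.rand(n_robots, n_goals, num_samples)
    best = errors.argmin(axis=-1)
    i, j = np.indices(best.shape)
    return samples[i, j, best]
```

The batch dimensions (N robots, M goals, S samples) mirror the three bullet points of the proposal; a torch-based version would carry the same shapes as tensors.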

Olimoyo and others added 2 commits March 4, 2024 22:11
…ned model on goals and robot kinematics solely defined via torch Tensors
@JonathanKuelz
Author

Closes #3

@JonathanKuelz JonathanKuelz marked this pull request as ready for review March 13, 2024 10:11
@JonathanKuelz
Author

I implemented everything I planned to. I added some tests that indicate everything is working properly.

However, I am not too happy with the current runtime: on my desktop PC, I average ~600 ms per robot for an IK with 32 samples. I tried to speed it up by batching the requests for multiple IK problems, but that did not help much. I see the following problems:

  • I was not able to use model.forward_eval with num_samples > 1: maybe my data formatting is incorrect, but whenever I did, P_all contained NaN values. I did not find a working example in this repo, but if you can point me towards one, I'll try to adapt my code.
  • Most of the runtime is spent in networkx-related methods such as graph_from_pos. At the moment, I have to detach every sample from the GPU (still faster than CPU only), create a graph from the resulting numpy array, and call the joint_variables method on it. While drawing multiple samples scales well in the model's forward function, it does not in this part of the pipeline. Do you have any suggestions on how to improve this part?
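The per-sample bottleneck described in the second point can be sketched as below. The graph utilities here are hypothetical stand-ins (the real graph_from_pos and joint_variables live in generative_graphik, and this is not their implementation); the point of the sketch is only the loop structure: the GPU-to-host transfer can at least be hoisted out of the loop into one batched copy, while the per-sample graph construction remains inherently serial.

```python
import numpy as np


# Hypothetical stand-ins for generative_graphik's graph utilities; the
# real functions build a networkx graph from node positions and solve
# for the joint angles.
def graph_from_pos(p):
    return {"pos": p}


def joint_variables(graph):
    return np.zeros(graph["pos"].shape[0] - 1)


def postprocess(samples_np):
    # samples_np: (S, n_nodes, 3), moved off the GPU in ONE transfer
    # beforehand (e.g. samples_np = P_all.detach().cpu().numpy())
    # instead of S separate .detach().cpu().numpy() calls inside the
    # loop. The remaining per-sample graph build is the serial part.
    return [joint_variables(graph_from_pos(p)) for p in samples_np]


samples_np = np.random.rand(32, 7, 3)  # 32 samples, 7 graph nodes each
configs = postprocess(samples_np)      # one joint-angle array per sample
```

Hoisting the transfer only amortizes the device copy; parallelizing the graph construction itself (e.g. across worker processes) would be a separate change.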

@Olimoyo Olimoyo self-assigned this Mar 14, 2024