Collecting human operated datasets in simulation #477

Open
mmurray opened this issue Oct 17, 2024 · 2 comments

mmurray commented Oct 17, 2024

Hello,

Can you provide info on how human supervision was provided for the simulated datasets (e.g. lerobot/aloha_sim_transfer_cube_human)? I am starting to set up a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect and train on some human teleop data, but the current control_robot.py script and data collection examples seem to be set up only for physical robots. Is there a branch somewhere with the code used to collect lerobot/aloha_sim_transfer_cube_human that I can reference?
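
For concreteness, the kind of collection loop I have in mind looks roughly like this (a sketch assuming a gymnasium-style env; the env id, the gym_stretch import name, and read_human_action are placeholders, not real APIs):

import gymnasium as gym
import gym_stretch  # placeholder import; assumes the repo registers its envs on import

def read_human_action(env):
    """Placeholder for a human input device (keyboard, leader arm, spacemouse, ...).
    Sampling randomly here just so the sketch runs end to end."""
    return env.action_space.sample()

# "StretchReach-v0" is a placeholder id; use whichever id gym-stretch registers.
env = gym.make("StretchReach-v0", render_mode="human")

episodes = []
for _ in range(10):
    obs, info = env.reset()
    frames = []
    done = False
    while not done:
        action = read_human_action(env)
        next_obs, reward, terminated, truncated, info = env.step(action)
        # One frame of supervision: what the sim showed and what the human commanded.
        frames.append({"observation": obs, "action": action, "reward": reward})
        obs = next_obs
        done = terminated or truncated
    episodes.append(frames)
env.close()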

Thanks!

Cadene (Collaborator) commented Oct 18, 2024

Hello @mmurray, we are currently working on adding control_sim_robot.py ;)
I'll let @michel-aractingi comment if he has time.

For lerobot/aloha_sim_transfer_cube_human you might find more info in the original ALOHA paper: https://arxiv.org/abs/2304.13705
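
In the meantime you can inspect how the human demonstrations are stored directly from the hub. A quick sketch with LeRobotDataset (attribute and key names may shift as we refactor, so treat the exact names as an assumption):

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Each frame pairs the simulated observation with the action the human teleoperator
# commanded at that timestep; the policy is trained to imitate those actions.
dataset = LeRobotDataset("lerobot/aloha_sim_transfer_cube_human")
print(dataset.fps, dataset.num_episodes, len(dataset))

frame = dataset[0]
print(frame["observation.state"].shape)  # sim joint positions
print(frame["action"].shape)             # human-commanded joint targets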

Best

michel-aractingi (Collaborator) commented Oct 18, 2024

Hello @mmurray

You can find the control_sim_robot.py script in this branch. Just keep in mind that this is not the final version and plenty of things will change, especially with the new refactoring of control_robot.py. I have also only tested it with the MuJoCo environment in gym_lowcostrobot.

The usage is exactly the same as control_robot.py. You only need to define a sim config YAML file in lerobot/configs/env.
Here's an example of the one I am using with gym_lowcostrobot.

# @package _global_

fps: 50

env:
  name: lowcostrobot
  fps: ${fps}
  handle: PushCubeLoop-v0

  gym:
    render_mode: human
    max_episode_steps: 100000

calibration:
  axis_directions: [-1, -1, 1, -1, -1, -1]
  offsets: [0, -0.5, -0.5, 0, -0.5, 0] # factor of pi

eval:
  use_async_envs: false
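
For reference, the handle and gym entries above end up in a standard gym.make call, so you can sanity-check the env on its own before running the script (a rough equivalent; how the env factory actually consumes these fields may differ slightly):

import gymnasium as gym
import gym_lowcostrobot  # registers PushCubeLoop-v0

env = gym.make(
    "PushCubeLoop-v0",          # env.handle in the YAML above
    render_mode="human",        # env.gym.render_mode
    max_episode_steps=100_000,  # env.gym.max_episode_steps
)
obs, info = env.reset()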
