Integrate fixes, improve documentation, and create simple example for collecting 3d data for mapping (#58)

* add first-pass 3d mapping script

update dependencies and make sure it gets set up properly

add detectron2 as a dependency

add home-robot branch of contact graspnet to the repo

add a simple ROS data collector

remove torchaudio

capture data + update debugging instructions

do not use segmentation algo

update build script to see what's going on

build 3d map fixes - integrate point clouds over time via voxelization

fix controller problems

update setup for assets - newest version of stretch urdf

fix issues with the run controller teleop script

fix issues + add voxel grid

* aggregate from one point cloud

* update script - change sample freq

* fix conversions into and out of open3d point clouds

* fix point cloud math

* update goto controller node - make sure it starts

* fix issues with slow start for cameras

* update readme + install guide to add some more details

* add a note about project contents

* add some notes on setup

* update some things that cause failures on older cpus - removing open3d
from various places

* update bullet config

* update visualization code

* update stretch controller - make sure that it's setting position mode

* switch to position mode if we need to control the base via the
controller - switching back and forth from nav/position

* update point cloud tools - remove from data_tools

* update code - fixes to open3d point cloud paths

* remove color fixes in open3d code - this really should not happen

* resolve issue #69

* update basic environment to resolve #68

* update to resolve #70

* clean up some debug print statements

* update header for function

* resolve a couple of comments from the PR

* remove bad import
cpaxton authored Feb 23, 2023
1 parent 30f7dd4 commit 78dba5a
Showing 25 changed files with 271 additions and 1,323 deletions.
6 changes: 6 additions & 0 deletions .gitmodules
@@ -10,3 +10,9 @@
[submodule "src/third_party/habitat-lab"]
path = src/third_party/habitat-lab
url = https://github.com/facebookresearch/habitat-lab
[submodule "src/third_party/detectron2"]
path = src/third_party/detectron2
url = [email protected]:facebookresearch/detectron2.git
[submodule "src/third_party/contact_graspnet"]
path = src/third_party/contact_graspnet
url = [email protected]:cpaxton/contact_graspnet.git
35 changes: 25 additions & 10 deletions README.md
@@ -9,6 +9,18 @@

Mostly Hello Stretch infrastructure

## Core Concepts

This package assumes you have a low-cost mobile robot with limited on-board compute (initially a [Hello Robot Stretch](https://hello-robot.com/)) and a "server" workstation with more GPU compute. Both are assumed to be running on the same network.

In general, this is the recommended workflow (a typical session is sketched below):
- Turn on your robot; for the Stretch, run `stretch_robot_home.py` to get it ready to use.
- From your server, connect to the robot and start a [ROS launch file](http://wiki.ros.org/roslaunch) which brings up necessary low-level control and hardware drivers.
- If desired, run [rviz](http://wiki.ros.org/rviz) on the server to see what the robot is seeing.
- Start running your AI code on the server!
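
For example, a typical session might look like the following sketch (hostnames and environments will depend on your setup; the launch file here is the one used in the install steps below):

```sh
# On the robot: home the Stretch so it is calibrated and ready to use.
stretch_robot_home.py

# On the robot (e.g. over ssh): bring up hardware drivers and low-level control.
roslaunch home_robot startup_stretch_hector_slam.launch

# On the server (optional): visualize what the robot is seeing.
rviz
```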

We provide a couple of connections to useful perception libraries like [Detic](https://github.com/facebookresearch/Detic) and [Contact Graspnet](https://github.com/NVlabs/contact_graspnet), which you can then use as part of your methods.

## Installation & Usage

This project contains numerous packages. See individual package docs for corresponding details & instructions.
@@ -28,19 +40,13 @@ git submodule update --recursive --init
```

1. SSH into the onboard computer on the Hello Stretch.
1. Install [home_robot_hw](src/home_robot_hw/install.md).
1. Install [home_robot](src/home_robot).
1. Install the core [home_robot](src/home_robot) python package.
1. Install [home_robot_hw](src/home_robot_hw/install.md) and complete the setup.
1. Launch the ROS hardware stack:
```sh
conda deactivate
conda deactivate # If you are using conda - not required on robot!
roslaunch home_robot startup_stretch_hector_slam.launch
```
1. In a separate shell, launch home-robot helper nodes:
```sh
conda activate home_robot
python -m home_robot.nodes.state_estimator &
python -m home_robot.nodes.goto_controller &
```
1. Launch interactive client: `python -m home_robot.client.local_hello_robot`

You should then be able to command the robot using the following commands:
@@ -59,7 +65,7 @@ robot.set_velocity(v: float, w: float) # directly sets the linear and angular v
# Navigation mode
robot.navigate_to(xyt: list, relative: bool = False, position_only: bool = False)
# Manipulation mode
# Manipulation mode (outdated)
robot.set_arm_joint_positions(joint_positions: list) # joint positions: [BASE_TRANSLATION, ARM_LIFT, ARM_EXTENTION, WRIST_YAW, WRIST_PITCH, WRIST_ROLL]
robot.set_ee_pose(pos: list, quat: list, relative: bool = False)
```
@@ -76,6 +82,15 @@ robot.switch_to_manipulation_mode()
robot.set_ee_pose([0.5, 0.6, 0.5], [0, 0, 0, 1])
```

### Development

To develop in `home-robot`, install the git pre-commit hooks:
```
python -m pip install pre-commit
cd $HOME_ROBOT_ROOT
pre-commit install
```
### Launching Grasping Demo (outdated)
You need to create a catkin workspace on your server in order to run this demo, as this is where we will run [Contact Graspnet](https://github.com/cpaxton/contact_graspnet/tree/cpaxton/devel).
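
If you have not set one up before, a minimal catkin workspace can be created with the standard ROS tooling (a sketch; `~/catkin_ws` is just an assumed location):

```sh
mkdir -p ~/catkin_ws/src    # workspace root with a src/ folder for packages
cd ~/catkin_ws
catkin_make                 # initial build; generates devel/setup.bash
source devel/setup.bash     # overlay the workspace onto your ROS environment
```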
2 changes: 1 addition & 1 deletion assets/hab_stretch
108 changes: 108 additions & 0 deletions example/build_3d_map.py
@@ -0,0 +1,108 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import sys
import timeit
from typing import Tuple

import click
import numpy as np
import open3d
import rospy

from home_robot.agent.motion.stretch import STRETCH_PREGRASP_Q, HelloStretchIdx
from home_robot.utils.point_cloud import (
numpy_to_pcd,
pcd_to_numpy,
show_point_cloud,
)
from home_robot.utils.pose import to_pos_quat
from home_robot_hw.env.stretch_grasping_env import StretchGraspingEnv


def combine_point_clouds(
    pc_xyz: np.ndarray, pc_rgb: np.ndarray, xyz: np.ndarray, rgb: np.ndarray
) -> Tuple[np.ndarray, np.ndarray]:
    """Tool to combine point clouds without duplicates. Concatenate, voxelize, and then return
    the finished results."""
    if pc_rgb is None:
        pc_rgb, pc_xyz = rgb, xyz
    else:
        pc_rgb = np.concatenate([pc_rgb, rgb], axis=0)
        pc_xyz = np.concatenate([pc_xyz, xyz], axis=0)
    # Voxelize the aggregated cloud to merge duplicate points from overlapping frames
    pcd = numpy_to_pcd(pc_xyz, pc_rgb).voxel_down_sample(voxel_size=0.05)
    return pcd_to_numpy(pcd)
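
# Example usage (a sketch; xyz0/rgb0 and xyz1/rgb1 are hypothetical (N, 3)
# arrays from two consecutive frames):
#   pc_xyz, pc_rgb = combine_point_clouds(None, None, xyz0, rgb0)
#   pc_xyz, pc_rgb = combine_point_clouds(pc_xyz, pc_rgb, xyz1, rgb1)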


class RosMapDataCollector(object):
"""Simple class to collect RGB, Depth, and Pose information for building 3d spatial-semantic
maps for the robot. Needs to subscribe to:
- color images
- depth images
- camera info
- joint states/head camera pose
- base pose (relative to world frame)
This is an example collecting the data; not necessarily the way you should do it.
"""

def __init__(self, env):
self.env = env # Get the connection to the ROS environment via agent
self.observations = []
self.started = False

def step(self):
"""Step the collector. Get a single observation of the world. Remove bad points, such as
those from too far or too near the camera."""
rgb, depth, xyz = self.env.get_images(compute_xyz=True)
q, dq = self.env.update()

# apply depth filter
depth = depth.reshape(-1)
rgb = rgb.reshape(-1, 3)
xyz = xyz.reshape(-1, 3)
valid_depth = np.bitwise_and(depth > 0.1, depth < 4.)
rgb = rgb[valid_depth, :]
xyz = xyz[valid_depth, :]
self.observations.append((rgb, xyz, q, dq))

def show(self):
"""Display the aggregated point cloud."""

# Create a combined point cloud
# Do the other stuff we need
pc_xyz, pc_rgb = None, None
for obs in self.observations:
rgb = obs[0]
xyz = obs[1]
pc_xyz, pc_rgb = combine_point_clouds(pc_xyz, pc_rgb, xyz, rgb)

show_point_cloud(pc_xyz, pc_rgb / 255)


@click.command()
@click.option("--rate", default=1, type=int)
@click.option("--max-frames", default=5, type=int)
def main(rate=1, max_frames=5):
rospy.init_node("build_3d_map")
env = StretchGraspingEnv(segmentation_method=None)
collector = RosMapDataCollector(env)

rate = rospy.Rate(rate)
print("Press ctrl+c to finish...")
frames = 0
while not rospy.is_shutdown():
# Run until we control+C this script
collector.step() # Append latest observations
rate.sleep()

frames += 1
if max_frames > 0 and frames >= max_frames:
break

print("Done collecting data.")
collector.show()


if __name__ == "__main__":
    # Run the example script.
    main()
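
For reference, the `numpy_to_pcd` / `pcd_to_numpy` helpers used above convert between numpy arrays and Open3D point clouds. A minimal sketch of what such helpers might look like, assuming the standard Open3D API and 0-255 RGB colors (the real implementations live in `home_robot.utils.point_cloud`):

```python
import numpy as np
import open3d


def numpy_to_pcd(xyz: np.ndarray, rgb: np.ndarray) -> open3d.geometry.PointCloud:
    """Pack (N, 3) xyz points and (N, 3) 0-255 RGB colors into an Open3D point cloud."""
    pcd = open3d.geometry.PointCloud()
    pcd.points = open3d.utility.Vector3dVector(xyz)
    pcd.colors = open3d.utility.Vector3dVector(rgb / 255.0)  # Open3D wants colors in [0, 1]
    return pcd


def pcd_to_numpy(pcd: open3d.geometry.PointCloud):
    """Unpack an Open3D point cloud back into (xyz, rgb) numpy arrays with 0-255 colors."""
    return np.asarray(pcd.points), np.asarray(pcd.colors) * 255.0


# voxel_down_sample() then merges all points that fall into the same 5cm cell,
# which is what lets combine_point_clouds() deduplicate overlapping frames:
# pcd = numpy_to_pcd(xyz, rgb).voxel_down_sample(voxel_size=0.05)
```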
2 changes: 1 addition & 1 deletion projects/collect_data_real_robot.py
@@ -12,8 +12,8 @@

from home_robot.agent.motion.robot import STRETCH_PREGRASP_Q, HelloStretchIdx
from home_robot.agent.perception.constants import coco_categories
from home_robot.hw.ros.stretch_ros import HelloStretchROSInterface
from home_robot.utils.pose import to_pos_quat
from home_robot_hw.ros.stretch_ros import HelloStretchROSInterface


def parse_args():
12 changes: 9 additions & 3 deletions projects/stretch_grasping/README.md
@@ -1,15 +1,21 @@
# Stand-alone grasping setup

## Environment setup on server

These are notes for setting up grasping and are subject to change; you should follow the instructions in the appropriate readmes to set up your environment.
```
# General installation
conda create -n home_robot python=3.8 && conda activate home_robot
# Optionally install mamba in your base environment
conda install -c conda-forge mamba
# Create the home_robot conda environment for grasping
conda create -n home_robot python=3.8 && conda activate home_robot
# Install the home_robot repo
cd $HOME_ROBOT_ROOT/src/home_robot
pip install -e .
# Specific to stand-alone grasping script
pip install trimesh pybullet matplotlib open3d opencv-python rospkg numpy==1.21
git clone https://github.com/mjd3/tracikpy.git && pip install tracikpy
# Hint: using mamba instead of conda is strongly recommended for faster installation
# mamba install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```
5 changes: 5 additions & 0 deletions projects/stretch_objectnav/eval_episode.py
@@ -17,6 +17,11 @@
rospy.init_node("eval_episode_stretch_objectnav")
agent = ObjectNavAgent(config=config)
env = StretchObjectNavEnv(config=config)

print()
print("==============")
env.switch_to_navigation_mode()
print("==============")

agent.reset()
env.reset()
8 changes: 8 additions & 0 deletions src/home_robot/environment.yml
@@ -5,6 +5,7 @@ dependencies:
- cmake
- pybind11
- pytorch
- torchvision
- pip:
- numpy <1.24 # certain deprecated operations were used in other deps
- scipy
@@ -17,6 +18,13 @@ dependencies:
- scikit-image
- scikit-fmm
- numpy-quaternion
# Neural networks
- openai-clip
- timm
# visualizations and debugging
- matplotlib
# Command line tools
- click
# data tools
- yacs
- h5py
1 change: 0 additions & 1 deletion src/home_robot/example_environment.yml
@@ -232,7 +232,6 @@ dependencies:
- threadpoolctl==3.1.0
- tomli==2.0.1
- tornado==6.2
- tracikpy==0.2.0
- traitlets==5.8.1
- trimesh==3.18.0
- wcwidth==0.2.5
1 change: 0 additions & 1 deletion src/home_robot/home_robot/agent/motion/stretch.py
@@ -10,7 +10,6 @@

import home_robot.utils.bullet as hrb

# from tracikpy import TracIKSolver
from home_robot.agent.motion.robot import Robot
from home_robot.utils.bullet import PybulletIKSolver
from home_robot.utils.pose import to_matrix
16 changes: 8 additions & 8 deletions src/home_robot/home_robot/utils/bullet.py
@@ -16,7 +16,6 @@
T_CORRECTION,
Camera,
opengl_depth_to_xyz,
show_point_cloud,
z_from_opengl_depth,
)

@@ -306,7 +305,9 @@ def capture_pc(self):
xyz = trimesh.transform_points(xyz, self.pose_matrix)
return rgb, xyz, seg

def show(self, images=False, show_pc=True, test_id=2):
def show(self, images=False, show_pc=True, test_id=2) -> np.ndarray:
"""Display what we can see in the pybullet scene and return xyz points if you want to
visualize them in open3d."""
rgb, depth, seg = self.capture()
# rgb = np.flip(rgb, axis=0)
# depth = np.flip(depth, axis=0)
@@ -339,9 +340,7 @@ def show(self, images=False, show_pc=True, test_id=2):
print("red size -", maxs - mins)

xyz = trimesh.transform_points(xyz, self.pose_matrix)
# SHow pc
if show_pc:
show_point_cloud(xyz, rgb / 255.0, orig=np.zeros(3))
return xyz

def get_pose(self):
# return T_CORRECTION @ self.pose_matrix.copy()
@@ -440,9 +439,10 @@ def set_joint_positions(self, q_init):
def get_dof(self):
return len(self.controlled_joints)

def compute_ik(self, pos_desired, quat_desired, q_init):
# This version assumes that q_init is NOT in the right format yet
self.set_joint_positions(q_init)
def compute_ik(self, pos_desired, quat_desired, q_init=None):
if q_init is not None:
# This version assumes that q_init is NOT in the right format yet
self.set_joint_positions(q_init)
if self.visualize:
self.debug_block.set_pose(pos_desired, quat_desired)
input("--- Press enter to solve ---")
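
The `compute_ik` change above makes `q_init` optional, so a caller can solve from whatever joint state the solver currently holds. A hypothetical call, assuming an already-constructed `PybulletIKSolver` named `solver` and an xyzw quaternion convention (neither is confirmed by this diff):

```python
# Solve for a pose 0.5m forward and 0.8m up with identity orientation,
# seeding from the solver's current joint state (no q_init passed).
q = solver.compute_ik(pos_desired=[0.5, 0.0, 0.8], quat_desired=[0.0, 0.0, 0.0, 1.0])
```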