
Add docs for new HelloNode methods

pull/115/head
Binit Shah 1 year ago
commit 56fc1bb1b2
1 changed file with 51 additions and 3 deletions

hello_helpers/README.md

@@ -8,7 +8,7 @@
*fit_plane.py* : Fits planes to 3D data.
*hello_misc.py* : Various functions, including a helpful Python object with which to create ROS nodes.
*hello_ros_viz.py* : Various helper functions for visualizations using RViz.
@@ -59,6 +59,23 @@ temp.move_to_pose({'joint_lift': 0.4})
#### Attributes
##### `joint_states`
This attribute gives you the entire joint state of the robot as a [JointState](https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/JointState.html) message. The utility method [`get_joint_state()`](#get_joint_statejoint_name-moving_threshold0001) is an easier alternative to parsing the JointState message.
##### `point_cloud`
This attribute is a [PointCloud2](https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/PointCloud2.html) message as seen by the head camera. The utility method [`get_point_cloud()`](#get_point_cloud) is an easier alternative to parsing the PointCloud2 message.
##### `tool`
This attribute is the name of the end-effector as a string. You can use this attribute to flag an error for other Stretch users if their robot isn't configured with the correct tool. Most commonly, this attribute will be either `'tool_stretch_dex_wrist'` or `'tool_stretch_gripper'`. To learn about the other end-effectors available for Stretch, or how to create your own and plug it into Stretch's ROS driver, check out the [documentation on tools](https://docs.hello-robot.com/0.2/stretch-tutorials/stretch_body/tutorial_tool_change/).
##### `mode`
This attribute is the mode the robot's driver is in, as a string. See the driver's API to learn more about [driver modes](../stretch_core/README.md#mode-std_msgsstring).
##### `dryrun`
This attribute allows you to control whether the robot actually moves when calling `move_to_pose()`, `home_the_robot()`, `stow_the_robot()`, or other motion methods in this class. When `dryrun` is set to True, these motion methods return immediately. This attribute is helpful when you want to run just the perception/planning part of your node without actually moving the robot. For example, you could replace the following verbose snippet:
@@ -127,13 +144,44 @@ Use this method to get the transform ([geometry_msgs/TransformStamped](https://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/TransformStamped.html))
```python
# roslaunch the stretch launch file beforehand
import rospy
import hello_helpers.hello_misc as hm
temp = hm.HelloNode.quick_create('temp')
t = temp.get_tf('base_link', 'link_grasp_center')
print(t.transform.translation)
```
##### `get_joint_state(joint_name, moving_threshold=0.001)`
Use this method to retrieve the joint state for a single joint. It returns a tuple of the joint's position, velocity, effort, and an `is_moving` boolean (the joint's velocity checked against the `moving_threshold` argument). For example:
```python
# roslaunch the stretch launch file beforehand
import hello_helpers.hello_misc as hm
temp = hm.HelloNode.quick_create('temp')
pos, vel, eff, is_moving = temp.get_joint_state('joint_head_pan')
print(f"The head pan is {'moving' if is_moving else 'not moving'}")
```
##### `get_point_cloud()`
Use this method to retrieve the point cloud seen by the head camera as a Numpy array. It returns a tuple of a named array (with per-point fields such as position and color), an Nx3 array of 3D points, the timestamp at which the cloud was captured, and the TF frame in which it was captured. For example:
```python
# roslaunch the stretch launch file beforehand
import hello_helpers.hello_misc as hm
temp = hm.HelloNode.quick_create('temp')
cloud, cloud_xyz, capture_time, capture_frame = temp.get_point_cloud()
print(f"Head camera saw a cloud of size {cloud_xyz.shape} in frame {capture_frame} at {capture_time}")
# Head camera saw a cloud of size (275925, 3) in frame camera_color_optical_frame at 1695973195045439959
import numpy as np
i = np.argmax(cloud['z'])
print(f"If the capture frame is camera_color_optical_frame (i.e. z axis points out from camera), the point furthest from the camera is {np.sqrt(cloud['x'][i]**2 + cloud['y'][i]**2 + cloud['z'][i]**2):.2f}m away and has the color {(cloud['r'][i], cloud['g'][i], cloud['b'][i])}")
# If the capture frame is camera_color_optical_frame (i.e. z axis points out from camera), the point furthest from the camera is 1.81m away and has the color (118, 121, 103)
```
##### `get_robot_floor_pose_xya(floor_frame='odom')`
Returns the current estimated x, y position and angle of the robot on the floor. This is typically called with respect to the odom frame or the map frame. x and y are in meters and the angle is in radians.
@@ -187,4 +235,4 @@ temp.stop_the_robot_service(TriggerRequest())
## License
For license information, please see the LICENSE files.
