# Object Grasping | |||
FUNMAP is a hardware-targeted perception, planning, and navigation framework developed by Hello Robot for ROS developers and researchers. Some of the key features provided by FUNMAP include cliff detection, closed-loop navigation, and mapping. In this tutorial, we will explore an object-grasping demo using FUNMAP. | |||
## Motivation | |||
Through this demo, we demonstrate grasp candidate computation, grasp planning, and navigation using FUNMAP. An object is placed in front of the Stretch at a reasonable distance and height. The arm of the robot is fully retracted to get a good view of the object. We have observed reliable depth inference and object detection whenever the target object is set against a dark background.
## Workspace Setup | |||
Ideally, this demo requires the object to be placed at half the height of a Stretch. The end-effector of the robot is aligned with the target object. The arm is fully retracted to get a good view of the scene and reliable depth computation. The lift position is slightly below the height of the target surface. The demo works best in dim lighting conditions. Finally, ensure that there is no flat vertical surface, such as a wall, close behind the object of interest. | |||
## How-to-run | |||
After building and sourcing the workspace, home the robot: | |||
```bash | |||
stretch_robot_home.py | |||
``` | |||
This ensures that the underlying stretch_body package knows the exact joint limits and provides the user with a good starting joint configuration. | |||
After homing, launch the object grasping demo: | |||
```bash | |||
ros2 launch stretch_demos grasp_object.launch.py | |||
``` | |||
This command will launch stretch_driver, stretch_funmap, and the grasp_object nodes. | |||
In a new terminal window, start keyboard teleoperation: | |||
```bash | |||
ros2 run stretch_core keyboard_teleop --ros-args -p grasp_object_on:=true | |||
``` | |||
You will be presented with a keyboard teleoperation menu in a new terminal window. Use key commands to get the Stretch configured as per the above workspace setup guidelines. Once the robot is ready, press ‘\’ to trigger object grasping. | |||
## Code Explained | |||
The grasp_object node uses the joint_trajectory_server inside stretch_core to send out target joint positions. | |||
```python | |||
self.trajectory_client = ActionClient(self, | |||
FollowJointTrajectory, | |||
'/stretch_controller/follow_joint_trajectory', | |||
callback_group=self.callback_group | |||
) | |||
server_reached = self.trajectory_client.wait_for_server( | |||
timeout_sec=60.0 | |||
) | |||
if not server_reached: | |||
self.get_logger().error('Unable to connect to joint_trajectory_server. Timeout exceeded.') | |||
sys.exit() | |||
``` | |||
The node also subscribes to the RealSense depth cloud, tool type, and Stretch’s joint state topics.
```python | |||
self.joint_states_subscriber = self.create_subscription(JointState, | |||
'/stretch/joint_states', | |||
callback=self.joint_states_callback, | |||
qos_profile=1, | |||
callback_group=self.callback_group | |||
) | |||
self.point_cloud_subscriber = self.create_subscription(PointCloud2, | |||
'/camera/depth/color/points', | |||
callback=self.point_cloud_callback, | |||
qos_profile=1, | |||
callback_group=self.callback_group | |||
) | |||
self.tool_subscriber = self.create_subscription(String, | |||
'/tool', | |||
callback=self.tool_callback, | |||
qos_profile=1, | |||
callback_group=self.callback_group | |||
) | |||
``` | |||
Whenever the user triggers the grasp object service, Stretch scans the field of view for potential grasp candidates. | |||
```python | |||
self.logger.info('Stow the arm.') | |||
self.stow_the_robot() | |||
# 1. Scan surface and find grasp target | |||
self.look_at_surface(scan_time_s = 4.0) | |||
grasp_target = self.manipulation_view.get_grasp_target(self.tf2_buffer) | |||
``` | |||
After determining a suitable grasp candidate, the node generates a pregrasp base and end-effector pose. | |||
```python | |||
# 2. Move to pregrasp pose | |||
pregrasp_lift_m = self.manipulation_view.get_pregrasp_lift(grasp_target, | |||
self.tf2_buffer | |||
) | |||
if self.tool == "tool_stretch_dex_wrist": | |||
pregrasp_lift_m += 0.02 | |||
if (self.lift_position is None): | |||
return Trigger.Response( | |||
success=False, | |||
message='lift position unavailable' | |||
) | |||
self.logger.info('Raise tool to pregrasp height.') | |||
lift_to_pregrasp_m = max(self.lift_position + pregrasp_lift_m, 0.1) | |||
lift_to_pregrasp_m = min(lift_to_pregrasp_m, max_lift_m) | |||
pose = {'joint_lift': lift_to_pregrasp_m} | |||
self.move_to_pose(pose) | |||
if self.tool == "tool_stretch_dex_wrist": | |||
self.logger.info('Rotate pitch/roll for grasping.') | |||
pose = {'joint_wrist_pitch': -0.3, 'joint_wrist_roll': 0.0} | |||
self.move_to_pose(pose) | |||
pregrasp_yaw = self.manipulation_view.get_pregrasp_yaw(grasp_target, | |||
self.tf2_buffer | |||
) | |||
self.logger.info('pregrasp_yaw = {0:.2f} rad'.format(pregrasp_yaw)) | |||
self.logger.info('pregrasp_yaw = {0:.2f} deg'.format(pregrasp_yaw * (180.0/np.pi))) | |||
self.logger.info('Rotate the gripper for grasping.') | |||
pose = {'joint_wrist_yaw': pregrasp_yaw} | |||
self.move_to_pose(pose) | |||
self.logger.info('Open the gripper.') | |||
pose = {'gripper_aperture': 0.125} | |||
self.move_to_pose(pose) | |||
pregrasp_mobile_base_m, pregrasp_wrist_extension_m = self.manipulation_view.get_pregrasp_planar_translation(grasp_target, | |||
self.tf2_buffer | |||
) | |||
self.logger.info('pregrasp_mobile_base_m = {0:.3f} m'.format(pregrasp_mobile_base_m)) | |||
self.logger.info('pregrasp_wrist_extension_m = {0:.3f} m'.format(pregrasp_wrist_extension_m)) | |||
self.logger.info('Drive to pregrasp location.') | |||
self.drive(pregrasp_mobile_base_m) | |||
if pregrasp_wrist_extension_m > 0.0: | |||
extension_m = max(self.wrist_position + pregrasp_wrist_extension_m, min_extension_m) | |||
extension_m = min(extension_m, max_extension_m) | |||
self.logger.info('Extend tool above surface.') | |||
pose = {'wrist_extension': extension_m} | |||
self.move_to_pose(pose) | |||
else: | |||
self.logger.info('negative wrist extension for pregrasp, so not extending or retracting.') | |||
``` | |||
A plan to grasp the object is generated using FUNMAP’s ManipulationView class. This class requires the pregrasp pose parameters to generate the final grasp plan. | |||
```python | |||
# 3. Grasp the object and lift it | |||
grasp_mobile_base_m, grasp_lift_m, grasp_wrist_extension_m = self.manipulation_view.get_grasp_from_pregrasp(grasp_target, self.tf2_buffer) | |||
self.logger.info('grasp_mobile_base_m = {0:.3f} m, grasp_lift_m = {1:.3f} m, grasp_wrist_extension_m = {2:.3f} m'.format(grasp_mobile_base_m, grasp_lift_m, grasp_wrist_extension_m))
self.logger.info('Move the grasp pose from the pregrasp pose.') | |||
lift_m = max(self.lift_position + grasp_lift_m, 0.1) | |||
lift_m = min(lift_m, max_lift_m) | |||
extension_m = max(self.wrist_position + grasp_wrist_extension_m, min_extension_m) | |||
extension_m = min(extension_m, max_extension_m) | |||
pose = {'translate_mobile_base': grasp_mobile_base_m, | |||
'joint_lift': lift_m, | |||
'wrist_extension': extension_m} | |||
``` | |||
The node then proceeds to grasp the object by sending out these goal joint positions.
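The remaining grasp execution is not shown above; a minimal sketch of it, reusing the node's `move_to_pose` helper (the gripper aperture and lift offset values below are illustrative assumptions, not the demo's exact numbers), might look like:
```python
# Send the combined base translation, lift, and extension goal computed above.
self.move_to_pose(pose)

# Close the gripper on the object (aperture value is illustrative).
self.logger.info('Close the gripper to grasp the object.')
self.move_to_pose({'gripper_aperture': -0.05})

# Raise the object clear of the surface (offset value is illustrative).
self.logger.info('Lift the object away from the surface.')
object_lift_height_m = 0.1
lift_m = min(self.lift_position + object_lift_height_m, max_lift_m)
self.move_to_pose({'joint_lift': lift_m})
```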
# Object Handover Demo | |||
FUNMAP is a hardware-targeted perception, planning, and navigation framework developed by Hello Robot for ROS developers and researchers. Some of the key features provided by FUNMAP include cliff detection, closed-loop navigation, and mapping. In this tutorial, we will explore an object-handover demo using FUNMAP. | |||
## Motivation | |||
Through this demo, we demonstrate human mouth detection using the stretch_deep_perception package and object delivery with navigation using FUNMAP. The robot is teleoperated so that a person is in the view of its camera. The person requesting the object must face the robot, since the underlying OpenVINO models detect the face and locate the mouth.
## Workspace Setup | |||
Ideally, this demo requires the person requesting the object to be facing the robot’s camera. Use keyboard teleop to place the object in the robot’s gripper. | |||
## How-to-run | |||
After building and sourcing the workspace, home the robot: | |||
```bash | |||
stretch_robot_home.py | |||
``` | |||
This ensures that the underlying stretch_body package knows the exact joint limits and provides the user with a good starting joint configuration. | |||
After homing, launch the object handover demo: | |||
```bash | |||
ros2 launch stretch_demos handover_object.launch.py | |||
``` | |||
This command will launch stretch_driver, stretch_funmap, and the handover_object nodes. | |||
In a new terminal, launch keyboard teleoperation: | |||
```bash | |||
ros2 run stretch_core keyboard_teleop --ros-args -p handover_object_on:=true | |||
``` | |||
You will be presented with a keyboard teleoperation menu in a new terminal window. Use key commands to get the Stretch configured as per the above workspace setup guidelines. Once the robot is ready, press ‘y’ or ‘Y’ to trigger object handover. | |||
## Code Explained | |||
The handover_object node uses the joint_trajectory_server inside stretch_core to send out target joint positions.
```python | |||
self.trajectory_client = ActionClient(self, FollowJointTrajectory, '/stretch_controller/follow_joint_trajectory', callback_group=self.callback_group) | |||
server_reached = self.trajectory_client.wait_for_server(timeout_sec=60.0) | |||
if not server_reached: | |||
self.get_logger().error('Unable to connect to joint_trajectory_server. Timeout exceeded.') | |||
sys.exit() | |||
``` | |||
The node also subscribes to the mouth positions detected by stretch_deep_perception and to Stretch’s joint state topic.
```python | |||
self.joint_states_subscriber = self.create_subscription(JointState, '/stretch/joint_states', qos_profile=1, callback=self.joint_states_callback, callback_group=self.callback_group) | |||
self.mouth_position_subscriber = self.create_subscription(MarkerArray, '/nearest_mouth/marker_array', qos_profile=1, callback=self.mouth_position_callback, callback_group=self.callback_group) | |||
``` | |||
Whenever the node receives a mouth position message, it computes a handoff XYZ coordinate depending upon the current wrist and mouth positions: | |||
```python | |||
def mouth_position_callback(self, marker_array): | |||
with self.move_lock: | |||
for marker in marker_array.markers: | |||
if marker.type == self.mouth_marker_type: | |||
mouth_position = marker.pose.position | |||
self.mouth_point = PointStamped() | |||
self.mouth_point.point = mouth_position | |||
header = self.mouth_point.header | |||
header.stamp = marker.header.stamp | |||
header.frame_id = marker.header.frame_id | |||
# header.seq = marker.header.seq | |||
self.logger.info('******* new mouth point received *******') | |||
lookup_time = Time(seconds=0) # return most recent transform | |||
timeout_ros = Duration(seconds=0.1) | |||
old_frame_id = self.mouth_point.header.frame_id | |||
new_frame_id = 'base_link' | |||
stamped_transform = self.tf2_buffer.lookup_transform(new_frame_id, old_frame_id, lookup_time, timeout_ros) | |||
points_in_old_frame_to_new_frame_mat = rn.numpify(stamped_transform.transform) | |||
camera_to_base_mat = points_in_old_frame_to_new_frame_mat | |||
grasp_center_frame_id = 'link_grasp_center' | |||
stamped_transform = self.tf2_buffer.lookup_transform(new_frame_id, | |||
grasp_center_frame_id, | |||
lookup_time, | |||
timeout_ros | |||
) | |||
grasp_center_to_base_mat = rn.numpify(stamped_transform.transform) | |||
mouth_camera_xyz = np.array([0.0, 0.0, 0.0, 1.0]) | |||
mouth_camera_xyz[:3] = rn.numpify(self.mouth_point.point)[:3] | |||
mouth_xyz = np.matmul(camera_to_base_mat, mouth_camera_xyz)[:3] | |||
fingers_xyz = grasp_center_to_base_mat[:,3][:3] | |||
handoff_object = True | |||
if handoff_object: | |||
# attempt to handoff the object at a location below | |||
# the mouth with respect to the world frame (i.e., | |||
# gravity) | |||
target_offset_xyz = np.array([0.0, 0.0, -0.2]) | |||
else: | |||
object_height_m = 0.1 | |||
target_offset_xyz = np.array([0.0, 0.0, -object_height_m]) | |||
target_xyz = mouth_xyz + target_offset_xyz | |||
fingers_error = target_xyz - fingers_xyz | |||
self.logger.info(f'fingers_error = {str(fingers_error)}') | |||
delta_forward_m = fingers_error[0] | |||
delta_extension_m = -fingers_error[1] | |||
delta_lift_m = fingers_error[2] | |||
max_lift_m = 1.0 | |||
lift_goal_m = self.lift_position + delta_lift_m | |||
lift_goal_m = min(max_lift_m, lift_goal_m) | |||
self.lift_goal_m = lift_goal_m | |||
self.mobile_base_forward_m = delta_forward_m | |||
max_wrist_extension_m = 0.5 | |||
wrist_goal_m = self.wrist_position + delta_extension_m | |||
if handoff_object: | |||
                    # attempt to handoff the object by keeping a fixed
                    # distance between the object and the mouth
                    wrist_goal_m = wrist_goal_m - 0.25  # stop 25 cm short of the mouth
wrist_goal_m = max(0.0, wrist_goal_m) | |||
self.wrist_goal_m = min(max_wrist_extension_m, wrist_goal_m) | |||
self.handover_goal_ready = True | |||
``` | |||
The delta between the wrist XYZ and mouth XYZ is used to calculate the lift position, base forward translation, and wrist extension. | |||
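As a concrete illustration of that mapping, here is a worked example with made-up numbers; it follows the conventions visible in the callback above (base_link x points forward, z points up, and the telescoping arm extends along -y):
```python
import numpy as np

# Illustrative positions in the base_link frame (meters).
mouth_xyz   = np.array([0.60, -0.30, 1.20])
fingers_xyz = np.array([0.20, -0.10, 0.80])

target_xyz = mouth_xyz + np.array([0.0, 0.0, -0.2])  # hand off 20 cm below the mouth
fingers_error = target_xyz - fingers_xyz             # -> [0.40, -0.20, 0.20]

delta_forward_m   = fingers_error[0]    # +x error: drive the base forward 0.40 m
delta_extension_m = -fingers_error[1]   # arm extends along -y: extend the wrist 0.20 m
delta_lift_m      = fingers_error[2]    # +z error: raise the lift 0.20 m
```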
Once the user triggers the handover object service, the node sends out joint goal positions for the base, lift, and the wrist, to deliver the object near the person’s mouth: | |||
```python | |||
self.logger.info("Starting object handover!") | |||
with self.move_lock: | |||
# First, retract the wrist in preparation for handing out an object. | |||
pose = {'wrist_extension': 0.005} | |||
self.move_to_pose(pose) | |||
if self.handover_goal_ready: | |||
pose = {'joint_lift': self.lift_goal_m} | |||
self.move_to_pose(pose) | |||
tolerance_distance_m = 0.01 | |||
at_goal = self.move_base.forward(self.mobile_base_forward_m, | |||
detect_obstacles=False, | |||
tolerance_distance_m=tolerance_distance_m | |||
) | |||
pose = {'wrist_extension': self.wrist_goal_m} | |||
self.move_to_pose(pose) | |||
self.handover_goal_ready = False | |||
``` |
# Hello World Demo | |||
FUNMAP is a hardware-targeted perception, planning, and navigation framework developed by Hello Robot for ROS developers and researchers. Some of the key features provided by FUNMAP include cliff detection, closed-loop navigation, and mapping. In this tutorial, we will explore a hello world writing demo using FUNMAP. | |||
## Motivation | |||
Stretch is a contact-sensitive robot, making it compliant in human spaces. This contact sensitivity can be leveraged for surface and obstacle detection. Through this demo, we demonstrate one specific application of contact detection: surface writing. Stretch writes out the letters 'H', 'E', 'L', 'L', and 'O' in sequence.
## Workspace Setup | |||
Things that you will need:
- Whiteboard | |||
- Marker | |||
Ideally, this demo requires a whiteboard that is placed at the same height as the Stretch. The writing surface must be flat and reachable by the robot's arm. | |||
## How-to-run | |||
After building and sourcing the workspace, home the robot: | |||
```bash | |||
stretch_robot_home.py | |||
``` | |||
This ensures that the underlying stretch_body package knows the exact joint limits and provides the user with a good starting joint configuration. | |||
After homing, launch the hello world demo: | |||
```bash | |||
ros2 launch stretch_demos hello_world.launch.py | |||
``` | |||
This command will launch stretch_driver, stretch_funmap, and the hello_world nodes. | |||
Next, in a separate terminal, run: | |||
```bash | |||
ros2 run stretch_core keyboard_teleop --ros-args -p hello_world_on:=true | |||
``` | |||
You will be presented with a keyboard teleoperation menu. Teleoperate the robot so that the arm is perpendicular to the writing surface, about 30 cm away. Place a marker pointing outward in the robot's gripper. Once the robot is ready, press ‘`’ or '~’ to trigger hello world writing. | |||
## Code Explained | |||
The hello_world node uses the joint_trajectory_server inside stretch_core to send out target joint positions.
```python | |||
self.trajectory_client = ActionClient(self, | |||
FollowJointTrajectory, | |||
'/stretch_controller/follow_joint_trajectory', | |||
callback_group=self.callback_group | |||
) | |||
server_reached = self.trajectory_client.wait_for_server(timeout_sec=60.0) | |||
if not server_reached: | |||
self.get_logger().error('Unable to connect to joint_trajectory_server. Timeout exceeded.') | |||
sys.exit() | |||
``` | |||
Additionally, the node also subscribes to Stretch’s joint states topic. | |||
```python | |||
self.joint_states_subscriber = self.create_subscription(JointState, | |||
'/stretch/joint_states', | |||
callback=self.joint_states_callback, | |||
qos_profile=10, | |||
callback_group=self.callback_group | |||
) | |||
``` | |||
Whenever the user triggers the hello world write service, the robot moves to an initial joint configuration: | |||
```python | |||
def move_to_initial_configuration(self): | |||
initial_pose = {'wrist_extension': 0.01, | |||
'joint_lift': self.letter_top_lift_m, | |||
'joint_wrist_yaw': 0.0} | |||
self.logger.info('Move to initial arm pose for writing.') | |||
self.move_to_pose(initial_pose) | |||
``` | |||
Thereafter, the node uses FUNMAP's `/funmap/trigger_align_with_nearest_cliff` service to align the Stretch with the whiteboard.
```python | |||
def align_to_surface(self): | |||
self.logger.info('align_to_surface') | |||
trigger_request = Trigger.Request() | |||
trigger_result = self.trigger_align_with_nearest_cliff_service.call_async(trigger_request) | |||
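    # Note: call_async() returns a Future immediately, so the log line below
    # prints the pending future object rather than the eventual service response.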
self.logger.info('trigger_result = {0}'.format(trigger_result)) | |||
``` | |||
After aligning with the whiteboard, the node then proceeds to execute joint position goals to draw out each letter of the word "hello": | |||
```python | |||
self.letter_h() | |||
self.space() | |||
self.letter_e() | |||
self.space() | |||
self.letter_l() | |||
self.space() | |||
self.letter_l() | |||
self.space() | |||
self.letter_o() | |||
self.space() | |||
return Trigger.Response( | |||
success=True, | |||
message='Completed writing hello!' | |||
) | |||
``` |
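The individual letter methods are not reproduced here; each one is a short sequence of `move_to_pose` goals that traces the letter's strokes while the marker presses against the board. Below is a hypothetical sketch of `letter_l`, assuming illustrative stroke sizes (only `self.letter_top_lift_m` appears in the real node above; everything else is made up for illustration):
```python
def letter_l(self):
    self.logger.info('Draw the letter L.')
    letter_height_m = 0.1   # illustrative stroke sizes
    letter_width_m = 0.05

    # Start at the top of the letter with the marker on the board.
    self.move_to_pose({'joint_lift': self.letter_top_lift_m})
    # Vertical stroke: lower the lift while maintaining contact.
    self.move_to_pose({'joint_lift': self.letter_top_lift_m - letter_height_m})
    # Horizontal stroke: translate the mobile base along the board.
    self.move_to_pose({'translate_mobile_base': letter_width_m})
```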
# Drawer Opening Demo | |||
FUNMAP is a hardware-targeted perception, planning, and navigation framework developed by Hello Robot for ROS developers and researchers. Some of the key features provided by FUNMAP include cliff detection, closed-loop navigation, and mapping. In this tutorial, we will explore a drawer-opening demo using FUNMAP.
## Motivation | |||
Through this demo, we demonstrate opening common drawers that have handles, combining contact sensing with navigation using FUNMAP. FUNMAP provides APIs that allow users to extend the arm or adjust the lift until a contact is detected. We leverage this contact detection API in the demo. Users can trigger the drawer-opening demo with an upward or downward motion of the hook attached to the end-effector, as explained below.
## Workspace Setup | |||
The robot is teleoperated so as to keep its arm perpendicular to the drawer’s frontal surface, slightly above or below the handle. The workspace must be clear of any obstacles between the end-effector and the drawer. Finally, ensure that the wrist can reach the drawer through extension. | |||
## How-to-run | |||
After building and sourcing the workspace, home the robot: | |||
```bash | |||
stretch_robot_home.py | |||
``` | |||
This ensures that the underlying stretch_body package knows the exact joint limits and provides the user with a good starting joint configuration. | |||
After homing, launch the drawer opening demo: | |||
```bash | |||
ros2 launch stretch_demos open_drawer.launch.py | |||
``` | |||
This command will launch stretch_driver, stretch_funmap, and the open_drawer nodes. | |||
In a new terminal, launch keyboard teleoperation: | |||
```bash | |||
ros2 run stretch_core keyboard_teleop --ros-args -p open_drawer_on:=true | |||
``` | |||
You will be presented with a keyboard teleoperation menu in a new terminal window. Use key commands to get the Stretch configured as per the above workspace setup guidelines. Once the robot is ready, press ‘.’ or ‘>’ to trigger the drawer opening with an upward motion. If you want to run the demo with a downward motion, press ‘z’ or ‘Z’. | |||
## Code Explained | |||
The open_drawer node uses the joint_trajectory_server inside stretch_core to send out target joint positions. | |||
```python | |||
self.trajectory_client = ActionClient(self, FollowJointTrajectory, '/stretch_controller/follow_joint_trajectory', callback_group=self.callback_group) | |||
server_reached = self.trajectory_client.wait_for_server(timeout_sec=60.0) | |||
if not server_reached: | |||
self.get_logger().error('Unable to connect to joint_trajectory_server. Timeout exceeded.') | |||
sys.exit() | |||
``` | |||
The node then creates its own trigger services and connects to the alignment and contact detection services provided by FUNMAP:
```python | |||
self.trigger_open_drawer_service = self.create_service(Trigger, '/open_drawer/trigger_open_drawer_down', self.trigger_open_drawer_down_callback, callback_group=self.callback_group) | |||
self.trigger_open_drawer_service = self.create_service(Trigger, '/open_drawer/trigger_open_drawer_up', self.trigger_open_drawer_up_callback, callback_group=self.callback_group) | |||
self.trigger_align_with_nearest_cliff_service = self.create_client(Trigger, '/funmap/trigger_align_with_nearest_cliff', callback_group=self.callback_group) | |||
self.trigger_align_with_nearest_cliff_service.wait_for_service() | |||
self.logger.info('Node ' + self.get_name() + ' connected to /funmap/trigger_align_with_nearest_cliff.') | |||
self.trigger_reach_until_contact_service = self.create_client(Trigger, '/funmap/trigger_reach_until_contact', callback_group=self.callback_group) | |||
self.trigger_reach_until_contact_service.wait_for_service() | |||
self.logger.info('Node ' + self.get_name() + ' connected to /funmap/trigger_reach_until_contact.') | |||
self.trigger_lower_until_contact_service = self.create_client(Trigger, '/funmap/trigger_lower_until_contact', callback_group=self.callback_group) | |||
self.trigger_lower_until_contact_service.wait_for_service() | |||
self.logger.info('Node ' + self.get_name() + ' connected to /funmap/trigger_lower_until_contact.') | |||
``` | |||
Once the user triggers the drawer opening demo, the robot is commanded to move to an initial configuration: | |||
```python | |||
initial_pose = {'wrist_extension': 0.01, | |||
'joint_wrist_yaw': 1.570796327, | |||
'gripper_aperture': 0.0} | |||
self.logger.info('Move to the initial configuration for drawer opening.') | |||
self.move_to_pose(initial_pose) | |||
``` | |||
The robot then proceeds to extend its wrist until it detects a contact, hopefully caused by a drawer surface. | |||
```python | |||
self.extend_hook_until_contact() | |||
``` | |||
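`extend_hook_until_contact` is a thin wrapper around FUNMAP's contact detection. Since the node already connected to `/funmap/trigger_reach_until_contact`, a plausible sketch of this helper (the actual implementation may differ slightly) is simply a call to that service:
```python
def extend_hook_until_contact(self):
    self.logger.info('extend_hook_until_contact')
    trigger_request = Trigger.Request()
    # Ask FUNMAP to extend the arm until a contact is detected.
    trigger_result = self.trigger_reach_until_contact_service.call_async(trigger_request)
    self.logger.info('trigger_result = {0}'.format(trigger_result))
```
The `lower_hook_until_contact` and `raise_hook_until_contact` helpers used below follow the same pattern with the corresponding lift services.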
It then backs off from the surface by about 5 cm so as to not touch it. | |||
```python | |||
success = self.backoff_from_surface() | |||
if not success: | |||
return Trigger.Response( | |||
success=False, | |||
message='Failed to backoff from the surface.' | |||
) | |||
``` | |||
Depending upon the user’s choice, the robot will either raise or lower its lift so as to hook the handle. | |||
```python | |||
if direction == 'down': | |||
self.lower_hook_until_contact() | |||
elif direction == 'up': | |||
self.raise_hook_until_contact() | |||
``` | |||
Finally, the robot will pull open the drawer. | |||
```python | |||
success = self.pull_open() | |||
if not success: | |||
return Trigger.Response( | |||
success=False, | |||
message='Failed to pull open the drawer.' | |||
) | |||
push_drawer_closed = False | |||
if push_drawer_closed: | |||
time.sleep(3.0) | |||
self.push_closed() | |||
return Trigger.Response( | |||
success=True, | |||
message='Completed opening the drawer!' | |||
) | |||
``` |
# Surface Cleaning Demo | |||
FUNMAP is a hardware-targeted perception, planning, and navigation framework developed by Hello Robot for ROS developers and researchers. Some of the key features provided by FUNMAP include cliff detection, closed-loop navigation, and mapping. In this tutorial, we will explore a surface cleaning demo using FUNMAP. | |||
## Motivation | |||
Through this demo, we demonstrate flat surface detection using the head camera and surface cleaning with navigation using FUNMAP. The robot is teleoperated so that a flat surface at a reasonable height is in the view of the robot’s camera. The arm of the robot is fully retracted to get a good, complete view of the surface of interest. We have observed reliable depth inference and surface detection whenever the surface to be cleaned is set against a dark background.
## Workspace Setup | |||
Ideally, this demo requires the surface to be at half the height of a Stretch. Teleoperate your robot to get a good view of it. The end-effector of the robot is aligned with the target surface. The arm is fully retracted to get a good view of the scene and reliable depth computation. The lift position is slightly below the height of the target surface. The demo works best in dim lighting conditions. Finally, ensure that there is no flat vertical surface, such as a wall, close behind the surface to be cleaned. You can use RViz to visualize the point cloud data being captured by the camera.
## How-to-run | |||
After building and sourcing the workspace, home the robot: | |||
```bash | |||
stretch_robot_home.py | |||
``` | |||
This ensures that the underlying stretch_body package knows the exact joint limits and provides the user with a good starting joint configuration. | |||
After homing, launch the surface cleaning demo: | |||
```bash | |||
ros2 launch stretch_demos clean_surface.launch.py | |||
``` | |||
This command will launch stretch_driver, stretch_funmap, keyboard_teleop, and the clean_surface nodes. | |||
In a new terminal, launch keyboard teleoperation: | |||
```bash | |||
ros2 run stretch_core keyboard_teleop --ros-args -p clean_surface_on:=true | |||
``` | |||
You will be presented with a keyboard teleoperation menu in a new terminal window. Use key commands to get the Stretch configured as per the above workspace setup guidelines. Once the robot is ready, press ‘/’ or ‘?’ to trigger surface cleaning. | |||
## Code Explained | |||
The clean_surface node uses the joint_trajectory_server inside stretch_core to send out target joint positions. | |||
```python | |||
self.trajectory_client = ActionClient(self, | |||
FollowJointTrajectory, | |||
'/stretch_controller/follow_joint_trajectory', | |||
callback_group=self.callback_group | |||
) | |||
server_reached = self.trajectory_client.wait_for_server(timeout_sec=60.0) | |||
if not server_reached: | |||
self.get_logger().error('Unable to connect to joint_trajectory_server. Timeout exceeded.') | |||
sys.exit() | |||
``` | |||
The node also subscribes to the RealSense depth cloud and Stretch’s joint state topics.
```python | |||
self.joint_states_subscriber = self.create_subscription(JointState, | |||
'/stretch/joint_states', | |||
callback=self.joint_states_callback, | |||
qos_profile=10, | |||
callback_group=self.callback_group | |||
) | |||
self.point_cloud_subscriber = self.create_subscription(PointCloud2, | |||
'/camera/depth/color/points', | |||
callback=self.point_cloud_callback, | |||
qos_profile=10, | |||
callback_group=self.callback_group | |||
) | |||
``` | |||
Whenever the user triggers the clean surface service, Stretch scans the field of view for candidate flat surfaces and, using the ManipulationView class, generates a surface-wiping plan represented as a list of joint position goals:
```python | |||
self.log.info("Cleaning initiating!") | |||
tool_width_m = 0.08 | |||
tool_length_m = 0.08 | |||
step_size_m = 0.04 | |||
min_extension_m = 0.01 | |||
max_extension_m = 0.5 | |||
self.look_at_surface() | |||
strokes, simple_plan, lift_to_surface_m = self.manipulation_view.get_surface_wiping_plan(self.tf2_buffer, | |||
tool_width_m, | |||
tool_length_m, | |||
step_size_m | |||
) | |||
self.log.info("Plan:" + str(simple_plan)) | |||
self.log.info('********* lift_to_surface_m = {0} **************'.format(lift_to_surface_m)) | |||
``` | |||
The ManipulationView class holds a max-height image as an internal representation of the surroundings. It creates a segmentation mask to generate a set of linear forward-backward strokes that cover the entire detected surface. See `stretch_funmap/segment_max_height_image.py` to understand how a flat surface is detected. Obstacles, if present on the surface, are inflated using dilation and erosion to create a margin of safety. These strokes, in the image frame, are then transformed into the base_link frame using tf2. This is how we obtain the wiping plan. | |||
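As a rough sketch of that final transformation step, a single stroke endpoint in the image frame can be mapped into base_link with the 4x4 homogeneous transform obtained from tf2, in the same style as the numpify-based code shown in the other demos (the helper name and signature here are hypothetical):
```python
import numpy as np
import ros2_numpy as rn

def stroke_point_to_base(point_xyz_image, image_frame_id, tf2_buffer, lookup_time, timeout_ros):
    # Look up the transform from the max-height image frame to base_link.
    stamped_transform = tf2_buffer.lookup_transform('base_link',
                                                    image_frame_id,
                                                    lookup_time,
                                                    timeout_ros)
    image_to_base_mat = rn.numpify(stamped_transform.transform)  # 4x4 matrix
    # Apply the transform to the homogeneous stroke endpoint.
    point_h = np.ones(4)
    point_h[:3] = point_xyz_image
    return np.matmul(image_to_base_mat, point_h)[:3]
```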
After generating the wiping plan, the node validates it by checking that the plan contains at least one stroke and that the surface is reachable by adjusting the lift and wrist positions:
```python | |||
if (strokes is not None) and (len(strokes) > 0):
if (self.lift_position is not None) and (self.wrist_position is not None): | |||
above_surface_m = 0.1 | |||
lift_above_surface_m = self.lift_position + lift_to_surface_m + above_surface_m | |||
``` | |||
After validation, the node executes the wiping plan sequentially.
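The execution loop itself is not shown in the snippet above; a minimal sketch of it, assuming each entry of `simple_plan` carries a base translation and start/end wrist extensions (the field names here are illustrative, not the exact keys used by FUNMAP), might look like:
```python
# Raise the tool just above the detected surface before wiping.
self.move_to_pose({'joint_lift': lift_above_surface_m})

# Execute each stroke of the plan in order (field names are illustrative).
for move in simple_plan:
    # Reposition the base along the surface for the next stroke.
    self.move_to_pose({'translate_mobile_base': move['mobile_base_forward_m']})
    # Wipe out and back across the surface within the extension limits.
    extend_m = min(max(move['end_wrist_extension_m'], min_extension_m), max_extension_m)
    self.move_to_pose({'wrist_extension': extend_m})
    retract_m = min(max(move['start_wrist_extension_m'], min_extension_m), max_extension_m)
    self.move_to_pose({'wrist_extension': retract_m})
```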