FUNMAP obstacle detection failed because the frustum of the
RGB colored 3D points did not overlap with the region in front
of the robot that is required to detect the floor when moving
forward. Since the IR camera's 3D point frustum still overlaps
with this region, it seems that an upstream RealSense D435i
change may be causing only the RGB colored 3D points to be used
now.
For now, this is a quick fix that makes the robot look down lower
when moving forward, so that the color camera frustum overlaps
with the obstacle detection region.
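A minimal sketch of this kind of workaround, assuming the HelloNode
helper class and its move_to_pose method from hello_misc.py behave as
in this repository (the pan and tilt values are illustrative, not the
values FUNMAP actually uses):

    #!/usr/bin/env python3
    import hello_helpers.hello_misc as hm

    class LookDownBeforeDriving(hm.HelloNode):
        # Illustrative node: tilt the head farther down so the color
        # camera's frustum covers the floor region used for obstacle
        # detection when driving forward.
        def main(self):
            hm.HelloNode.main(self, 'look_down_before_driving',
                              'look_down_before_driving',
                              wait_for_first_pointcloud=False)
            # A more negative tilt points the camera farther down.
            self.move_to_pose({'joint_head_pan': 0.1,
                               'joint_head_tilt': -1.2})

    if __name__ == '__main__':
        LookDownBeforeDriving().main()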
The FUNMAP mapping launch file started IMU nodes that were
unused, produced warnings, and created a problematic tf tree.
With this commit, mapping.launch no longer starts these nodes.
Mapping and navigation continued to function in a brief test,
and the IMU-related warnings were no longer produced.
I used the following command for testing.
$ roslaunch stretch_funmap mapping.launch
After these changes, FUNMAP mapping capabilities worked for me
in a small room. I tested the following capabilities:
+ head scans
+ merging head scans
+ navigating to a clicked pose
+ navigating and reaching to a clicked 3D point
+ autonomously navigating to a good place to scan
+ changing pose to a clicked pose
+ estimating nearby pose on a map
+ estimating global pose on a map
I tested these using the following launch command:
$ roslaunch stretch_funmap mapping.launch
I made a variety of changes, including the following:
+ / => // for divisions that appeared to need an integer output
  (see the example after this list)
+ changed height bounds for floor segments (-0.05 m to 0.05 m)
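For context on the / => // change (illustrative values, not code from
funmap): in Python 3, / always returns a float, so divisions that feed
array indices need floor division.

    import numpy as np

    image = np.zeros((480, 640))
    center_row = image.shape[0] // 2   # floor division: int, valid index
    center_col = image.shape[1] / 2    # true division: float in Python 3
    image[center_row, 320]             # fine
    # image[center_row, center_col]    # TypeError: indices must be integers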
Two notable remaining issues follow:
+ Transforms and URDF visualization sometimes fail after launching,
  with warnings such as the following:
  [ WARN] [1622754350.710109968]: Could not obtain transform from imu_mobile_base to base_link. Error was Could not find a connection between 'base_link' and 'imu_mobile_base' because they are not part of the same tree.Tf has two or more unconnected trees.
+ It can take multiple attempts for FUNMAP mapping to work. There is
  significant nondeterminism when launching with mapping.launch.
After these changes, object grasping with the following command
worked for me. Stretch grasped a cup from a small nightstand.
$ roslaunch stretch_demos grasp_object.launch
Changes include the following:
+ / => // for some places where an integer output is required
+ changes for scikit-image (import skimage) 0.18.1 (see the example
  after this list)
+ skimage.measure.label argument change
+ skimage.measure.regionprops argument change
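For illustration, my understanding of the scikit-image 0.18 argument
changes involved (the old keywords shown are assumptions about the
previous calls; 0.18 removed the neighbors= keyword from
skimage.measure.label and the coordinates= keyword from
skimage.measure.regionprops):

    import numpy as np
    import skimage.measure

    mask = np.zeros((6, 6), dtype=np.uint8)
    mask[1:3, 1:3] = 1

    # Old call (removed): skimage.measure.label(mask, neighbors=8)
    # connectivity=2 in 2D corresponds to the former 8-connectivity.
    labels = skimage.measure.label(mask, connectivity=2)

    # Old call (removed): skimage.measure.regionprops(labels, coordinates='rc')
    # The coordinates keyword is gone; row-column ordering is the default.
    regions = skimage.measure.regionprops(labels)
    print(regions[0].centroid)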
With these updates, the handover_object.launch demo succeeded.
The robot successfully moved its hand to me three times in a
row when I moved myself to different positions. The handoff
poses seemed good.
A variety of changes were required to make this work, including
the following:
+ tab vs. spaces cleanup for funmap using "autopep8 -i funmap"
+ updates to Cython compilation for Python 3 and a recent Cython version
+ changed the import statement pattern to "stretch_funmap.*" (see the
  example after this list)
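For illustration, the import pattern change looked like this (navigate
is one example module; the same pattern applies across the package):

    # Before (Python 2 implicit relative import within the package):
    #     import navigate
    # After (explicit package-qualified import that works in Python 3):
    import stretch_funmap.navigate as navigate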
Prior to my successful run of this demo, I encountered an error
related to get_robot_floor_pose_xya in HelloNode in hello_misc.py
(details below). It seems like it may have been a failed transform
lookup by get_p1_to_p2_matrix, which queries TF2 to obtain the
current estimated transformation from the robot's base_link frame
to the odom frame.
I did not make changes that I would expect to resolve this issue.
I only added a few print statements to hello_misc.py that I
intended to use for debugging. I have left the print statements
in place.
[ERROR] [1622656837.765018]: Error processing request: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1)
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 632, in _handle_request
    response = convert_return_to_response(self.handler(request), self.response_class)
  File "/home/ck/catkin_ws/src/stretch_ros/stretch_demos/nodes/handover_object", line 178, in trigger_handover_object_callback
    at_goal = self.move_base.forward(self.mobile_base_forward_m, detect_obstacles=False, tolerance_distance_m=tolerance_distance_m)
  File "/home/ck/catkin_ws/src/stretch_ros/stretch_funmap/src/stretch_funmap/navigate.py", line 245, in forward
    xya, timestamp = self.node.get_robot_floor_pose_xya()
  File "/home/ck/catkin_ws/src/stretch_ros/hello_helpers/src/hello_helpers/hello_misc.py", line 127, in get_robot_floor_pose_xya
    r0 = np.matmul(robot_to_odom_mat, r0)[:2]
ValueError: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1)
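The numpy error itself can be reproduced when the first operand to
matmul is 0-dimensional, which would be consistent with
robot_to_odom_mat not being a valid 4x4 matrix after a failed lookup
(this is my guess at the failure mode, not a confirmed diagnosis):

    import numpy as np

    r0 = np.array([0.0, 0.0, 0.0, 1.0])
    robot_to_odom_mat = np.float64(1.0)  # 0-d stand-in for a bad transform
    np.matmul(robot_to_odom_mat, r0)
    # ValueError: matmul: Input operand 0 does not have enough dimensions
    # (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1)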
I used grep to find the from_dcm and as_dcm methods in the stretch_ros
code. scipy.spatial.transform.Rotation changed these method names, so
I attempted to correct them throughout the code with the following
conversions:
as_dcm => as_matrix
from_dcm => from_matrix
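For reference, the renamed methods on scipy.spatial.transform.Rotation
(SciPy deprecated the dcm names in favor of the matrix names):

    from scipy.spatial.transform import Rotation

    r = Rotation.from_quat([0.0, 0.0, 0.0, 1.0])
    mat = r.as_matrix()                 # formerly r.as_dcm()
    r2 = Rotation.from_matrix(mat)      # formerly Rotation.from_dcm(mat)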
In the process of trying to get keyboard_teleop to work,
I encountered a syntax error. It turned out to be due to
"async" being used as an argument name, and "async" is a
reserved keyword in Python 3. I've replaced "async" with
"return_before_done". This is the main change in this
commit.
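A hypothetical before/after illustration of the rename (the real
function lives in the driver code and is not reproduced here):

    # Before: a SyntaxError in Python 3, where "async" is reserved:
    #     def move_to(position, async=False): ...

    # After: the same behavior under a legal parameter name.
    def move_to(position, return_before_done=False):
        print('moving to', position, 'return_before_done =', return_before_done)

    move_to(0.1, return_before_done=True)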
Changes include the following:
+ fixed an issue with the gripper not opening due to the commanded
  opening aperture violating the gripper joint bounds (see the sketch
  after this list)
+ added documentation
+ discards object candidate points based on the height of the object
  and the height of the volume of interest used for perception (the
  height of the robot's camera)
+ appears to correct an issue when the surface is against a wall,
  which seemed to result in the wall being detected as a large object
  on the surface
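A hypothetical sketch of the gripper fix from the first item above
(the function name and bound values are illustrative, not taken from
the Stretch URDF or driver):

    def clamp_gripper_aperture(commanded, lower_bound=-0.35, upper_bound=0.165):
        # Clamp the commanded opening aperture to the gripper joint
        # bounds so that an out-of-range command no longer prevents
        # the gripper from opening.
        return max(lower_bound, min(upper_bound, commanded))

    print(clamp_gripper_aperture(0.5))   # clamped to 0.165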