# Offloading Heavy Robot Compute to Remote Workstation

In this tutorial, we will explore how to offload computationally intensive processes, such as running computer vision models, to a remote workstation computer. This approach offers several advantages:

- Saving the robot's processing power.
- Increasing the robot's efficiency by offloading power-hungry processes.
- Utilizing the GPU hardware available on powerful workstations to run large deep learning models.

We will delve into the process of **offloading [Stretch Deep Perception](https://github.com/hello-robot/stretch_ros2/tree/humble/stretch_deep_perception) ROS 2 nodes**. These nodes are known for their demanding computational requirements and are frequently used in [Stretch Demos](https://github.com/hello-robot/stretch_ros2/tree/humble/stretch_demos).

*NOTE: All Stretch ROS 2 packages are developed for the Humble distro.*

## 1. Setting a ROS_DOMAIN_ID

ROS 2 utilizes [DDS](https://design.ros2.org/articles/ros_on_dds.html) as the default middleware for communication. **DDS enables nodes within the same physical network to seamlessly discover one another and establish communication, provided they share the same [ROS_DOMAIN_ID](https://docs.ros.org/en/humble/Concepts/Intermediate/About-Domain-ID.html)**. This mechanism keeps message passing scoped to the remote nodes that are intended to communicate.

By default, all ROS 2 nodes are configured with domain ID 0. To avoid conflicts, select a domain ID from the range 0 to 101, then set it as the value of the `ROS_DOMAIN_ID` environment variable on both the workstation and the robot:

```{.bash .shell-prompt}
export ROS_DOMAIN_ID=<ID>
```
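
The export above only applies to the current shell session. If you would rather not repeat it in every new terminal, one option is to persist the variable in your shell startup file. A minimal sketch, assuming bash and an example ID of 42 (substitute your own chosen ID):

```{.bash .shell-prompt}
# Persist the domain ID for future shells; 42 is just an example value.
echo "export ROS_DOMAIN_ID=42" >> ~/.bashrc
source ~/.bashrc
```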

## 2. Set Up the Workstation to Work with Stretch

The workstation needs the Stretch-related ROS 2 packages installed so that it has access to the robot meshes for visualization in RViz, the custom interface dependencies, and the essential perception packages.

This section assumes the workstation already has an active ROS 2 distro and the colcon dependencies pre-installed. You can find the [ROS 2 installation steps for Ubuntu here](https://docs.ros.org/en/humble/Installation/Alternatives/Ubuntu-Install-Binary.html#).

#### Setup Essential stretch_ros2 Packages

Make sure the ROS 2 distro is sourced:

```{.bash .shell-prompt}
source /opt/ros/humble/setup.bash
```

Create a workspace directory and clone the stretch_ros2 packages, along with their dependency packages, into the `src` folder:

```{.bash .shell-prompt}
mkdir -p ~/ament_ws/src
cd ~/ament_ws/src/
git clone https://github.com/hello-robot/stretch_ros2
git clone https://github.com/hello-binit/ros2_numpy -b humble
git clone https://github.com/IntelRealSense/realsense-ros.git -b ros2-development
git clone https://github.com/Slamtec/sllidar_ros2.git -b main
git clone https://github.com/hello-binit/respeaker_ros2.git -b humble
git clone https://github.com/hello-binit/audio_common.git -b humble
```

Build and install all the packages present in the `src` folder:

```{.bash .shell-prompt}
cd ~/ament_ws
rosdep install --rosdistro=humble -iyr --skip-keys="librealsense2" --from-paths src
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
```

Make sure to source the workspace so that the packages in it can be discovered:

```{.bash .shell-prompt}
source ~/ament_ws/install/setup.bash
```
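
As an optional sanity check, you can confirm that the Stretch packages are now discoverable from the sourced workspace:

```{.bash .shell-prompt}
# Should list packages such as stretch_core and stretch_deep_perception.
ros2 pkg list | grep stretch
```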

#### Setup Robot URDF and Meshes

Every robot ships with a calibrated URDF and pre-configured mesh files in its `stretch_description` package directory, and these are specific to your actual robot. We therefore recommend that you **copy the `stretch_description` directory from your robot and replace the one on the workstation with it**. The directory is located at `~/ament_ws/src/stretch_ros2/stretch_description`.
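
One possible way to pull the calibrated directory over the network is with `scp`. The sketch below is only an illustration: it assumes the robot is reachable at `<robot-ip>`, that you log in as the default `hello-robot` user, and that the robot uses the same workspace layout; adjust all three for your setup.

```{.bash .shell-prompt}
# Remove the workstation's uncalibrated copy, then pull the robot's calibrated one.
rm -rf ~/ament_ws/src/stretch_ros2/stretch_description
scp -r hello-robot@<robot-ip>:~/ament_ws/src/stretch_ros2/stretch_description ~/ament_ws/src/stretch_ros2/
```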

If you don't want to use the URDF from the robot, you can manually generate an uncalibrated URDF matching your robot's configuration using the following commands:

```{.bash .shell-prompt}
cd ~/ament_ws/src/stretch_ros2/stretch_description/urdf/

# If the Dex Wrist is installed:
cp stretch_description_dex.xacro stretch_description.xacro

# If using the standard gripper:
cp stretch_description_standard.xacro stretch_description.xacro

ros2 run stretch_calibration update_uncalibrated_urdf
cp stretch_uncalibrated.urdf stretch.urdf
```

After setting up the `stretch_description` folder, re-build the workspace to update the package with the latest changes:

```{.bash .shell-prompt}
cd ~/ament_ws
colcon build
```

#### Download Stretch Deep Perception Models

[stretch_deep_perception_models](https://github.com/hello-robot/stretch_deep_perception_models) provides open deep learning models from third parties. Clone this repository to the home folder on the workstation:

```{.bash .shell-prompt}
cd ~/
git clone https://github.com/hello-robot/stretch_deep_perception_models
```

## 3. Start Core Nodes on the Robot's Onboard Computer

Start the core driver nodes for controlling the robot and streaming the lidar and RealSense depth camera data, using the [stretch_core](https://github.com/hello-robot/stretch_ros2/tree/humble/stretch_core) package:

```{.bash .shell-prompt}
# Terminal 1: Start the Stretch driver node.
ros2 launch stretch_core stretch_driver.launch.py

# Terminal 2: Start the RealSense D435i stream.
ros2 launch stretch_core d435i_high_resolution.launch.py

# Terminal 3: Start the lidar.
ros2 launch stretch_core rplidar.launch.py
```
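
Before moving to the workstation, you can optionally confirm on the robot itself that everything came up, for example by listing the running nodes:

```{.bash .shell-prompt}
# Run on the robot; the driver, camera, and lidar nodes should appear in the list.
ros2 node list
```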

## 4. Verify the Remote Workstation Can Discover Stretch Nodes

After launching the above core nodes, all of the robot's control interfaces and sensor data streams should be visible to every other node on the same physical network that has the common ROS_DOMAIN_ID set.

From the remote workstation, try the following test commands:

```{.bash .shell-prompt}
# Check that all robot topics are visible.
ros2 topic list

# Check that you can receive sensor data by echoing the joint states topic.
ros2 topic echo /joint_states

# Check that you can send commands to the robot by triggering the stow_the_robot service.
ros2 service call /stow_the_robot std_srvs/srv/Trigger
```
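
If the topics are visible but the data feels sluggish, you can also gauge the network link by measuring how fast messages arrive at the workstation:

```{.bash .shell-prompt}
# Reports the average rate at which /joint_states messages arrive over the network.
ros2 topic hz /joint_states
```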

## 5. Offload Object Detection Node to Remote Workstation

From the workstation, run the [object detection node](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_deep_perception/stretch_deep_perception/detect_objects.py), which runs a YOLOv5 model:

```{.bash .shell-prompt}
ros2 run stretch_deep_perception detect_objects
```

The node will start printing the detected objects:

```
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape...
[INFO] [1698379209.925727618] [DetectObjectsNode]: DetectObjectsNode started
tv detected
keyboard detected
chair detected
mouse detected
mouse detected
tv detected
keyboard detected
chair detected
mouse detected
mouse detected
bowl detected
tv detected
keyboard detected
chair detected
mouse detected
mouse detected
```

##### Visualize in Rviz

```{.bash .shell-prompt}
rviz2 -d ~/ament_ws/install/stretch_deep_perception/share/stretch_deep_perception/rviz/object_detection.rviz
```

![Object detection rviz](./images/remote_compute_object_detection_viz.png)

## 6. Offload Face Detection Node to Remote Workstation

From the workstation, run the [face detection node](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_deep_perception/stretch_deep_perception/detect_faces.py). The face detection node uses model parameters loaded from the stretch_deep_perception_models directory, whose [path is pulled](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_deep_perception/stretch_deep_perception/deep_learning_model_options.py#L5) from the HELLO_FLEET_PATH environment variable. In our case, we set HELLO_FLEET_PATH to point to the home folder where the stretch_deep_perception_models directory was cloned:

```{.bash .shell-prompt}
export HELLO_FLEET_PATH=~/
ros2 run stretch_deep_perception detect_faces
```

The node will load the face detection model network and start publishing the detections:

```
head_detection_model.getUnconnectedOutLayers() = [112]
head_detection_model output layer names = ['detection_out']
head_detection_model output layer names = ('detection_out',)
head_detection_model input layer = <dnn_Layer 0x7f7d1e695cd0>
.
.
.
landmarks_model input layer name = align_fc3
landmarks_model out_layer = <dnn_Layer 0x7f7d1e695d30>
[INFO] [1698383830.671699923] [DetectFacesNode]: DetectFacesNode started
```

##### Visualize in Rviz

```{.bash .shell-prompt}
rviz2 -d ~/ament_ws/install/stretch_deep_perception/share/stretch_deep_perception/rviz/face_detection.rviz
```

![Face detection rviz](./images/remote_compute_face_Detect.png)

### Troubleshooting Notes

- Using a dedicated Wi-Fi router can significantly increase data transmission speeds.
- Real-time point cloud visualization in RViz commonly lags because it subscribes to a very large message stream. We recommend turning off the point cloud visualization on the remote workstation when possible to decrease network overhead.
- If nodes on the remote workstation are unable to discover the nodes running on the robot, here are two debug steps (see the example after this list):
    - Check that you can ping between the robot and the remote workstation computer.
    - Use the `ifconfig` command and compare the network-assigned IP addresses of the robot and the workstation. The subnet portion (normally the first parts of the IP address) should match for the two computers to discover each other on the network.
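
For example, a quick connectivity check from the workstation might look like the following sketch, where `192.168.1.10` stands in for your robot's actual IP address:

```{.bash .shell-prompt}
# Replace 192.168.1.10 with your robot's IP address.
ping -c 4 192.168.1.10

# Run on each machine and compare the assigned addresses.
ifconfig
```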