
Merge pull request #10 from hello-robot/features/ros2_examples

Ported ROS2 tutorials for the new documentation
pull/11/head
Chintan Desai 1 year ago
committed by GitHub
parent commit 4d2c5bed59
24 changed files with 2360 additions and 108 deletions
  1. mkdocs.yml (+14, -12)
  2. ros2/README.md (+30, -25)
  3. ros2/align_to_aruco.md (+3, -3)
  4. ros2/aruco_marker_detection.md (+120, -0)
  5. ros2/avoiding_deadlocks_race_conditions.md (+9, -3)
  6. ros2/deep_perception.md (+3, -3)
  7. ros2/example_1.md (+1, -1)
  8. ros2/example_10.md (+1, -1)
  9. ros2/example_12.md (+458, -0)
  10. ros2/example_2.md (+2, -2)
  11. ros2/example_3.md (+2, -2)
  12. ros2/example_4.md (+1, -1)
  13. ros2/example_5.md (+208, -0)
  14. ros2/example_6.md (+355, -0)
  15. ros2/example_7.md (+277, -0)
  16. ros2/example_8.md (+150, -0)
  17. ros2/example_9.md (+470, -0)
  18. ros2/follow_joint_trajectory.md (+2, -2)
  19. ros2/intro_to_ros2.md (+13, -13)
  20. ros2/navigation_simple_commander.md (+4, -4)
  21. ros2/obstacle_avoider.md (+2, -2)
  22. ros2/respeaker_mic_array.md (+61, -0)
  23. ros2/respeaker_topics.md (+168, -0)
  24. ros2/teleoperating_stretch.md (+6, -34)

mkdocs.yml (+14, -12)

@ -147,24 +147,26 @@ nav:
- Overview: ./ros2/README.md
- Basics:
- Getting Started: ./ros2/getting_started.md
# - Teleoperating Stretch: ./ros2/teleoperating_stretch.md
- Introduction to ROS 2: ./ros2/intro_to_ros2.md
- Introduction to HelloNode: ./ros2/intro_to_hellonode.md
- Follow Joint Trajectory Commands: ./ros2/follow_joint_trajectory.md
- Teleoperating Stretch: ./ros2/teleoperating_stretch.md
- Internal State of Stretch: ./ros2/internal_state_of_stretch.md
- RViz Basics: ./ros2/rviz_basics.md
- Nav2 Stack:
- Overview: ./ros2/navigation_overview.md
- Nav2 Basics: ./ros2/navigation_stack.md
- Nav2 Simple Commander: ./ros2/navigation_simple_commander.md
- Follow Joint Trajectory Commands: ./ros2/follow_joint_trajectory.md
- Perception: ./ros2/perception.md
- ArUco Marker Detection: ./ros2/aruco_marker_detection.md
- ReSpeaker Microphone Array:
- Respeaker Mic Array: ./ros2/respeaker_mic_array.md
- Respeaker Topics: ./ros2/respeaker_topics.md
- FUNMAP: https://github.com/hello-robot/stretch_ros2/tree/humble/stretch_funmap
# - MoveIt 2:
# - MoveIt Basics: ./ros2/moveit_basics.md
# - MoveIt with RViz: ./ros2/moveit_rviz_demo.md
# - MoveGroup C++ API: ./ros2/moveit_movegroup_demo.md
# - Perception: ./ros2/perception.md
# - ArUco Marker Detection: ./ros2/aruco_marker_detection.md
# - ReSpeaker Microphone Array: ./ros2/respeaker_microphone_array.md
# - FUNMAP: https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap
# - ROS testing: ./ros2/ros_testing.md
# - Other Nav Stack Features: ./ros2/other_nav_features.md
# - Gazebo Basics: ./ros2/gazebo_basics.md
@ -173,18 +175,18 @@ nav:
- Filter Laser Scans: ./ros2/example_2.md
- Mobile Base Collision Avoidance: ./ros2/example_3.md
- Give Stretch a Balloon: ./ros2/example_4.md
# - Print Joint States: ./ros2/example_5.md
# - Store Effort Values: ./ros2/example_6.md
# - Capture Image: ./ros2/example_7.md
# - Voice to Text: ./ros2/example_8.md
# - Voice Teleoperation of Base: ./ros2/example_9.md
- Print Joint States: ./ros2/example_5.md
- Store Effort Values: ./ros2/example_6.md
- Capture Image: ./ros2/example_7.md
- Voice to Text: ./ros2/example_8.md
- Voice Teleoperation of Base: ./ros2/example_9.md
- Tf2 Broadcaster and Listener: ./ros2/example_10.md
- ArUco Tag Locator: ./ros2/example_12.md
- Obstacle Avoider: ./ros2/obstacle_avoider.md
- Align to ArUco: ./ros2/align_to_aruco.md
- Deep Perception: ./ros2/deep_perception.md
- Avoiding Race Conditions and Deadlocks: ./ros2/avoiding_deadlocks_race_conditions.md
# - PointCloud Transformation: ./ros2/example_11.md
# - ArUco Tag Locator: ./ros2/example_12.md
- Stretch Tool Share:
- Overview: ./stretch_tool_share/README.md

ros2/README.md (+30, -25)

@ -14,17 +14,21 @@ This tutorial track is for users looking to get familiar with programming Stretc
| | Tutorial | Description |
|--|---------------------------------------------------------------------------------|----------------------------------------------------|
| 1 | [Getting Started](getting_started.md) | Setup instructions for ROS 2 on Stretch|
| 2 | [Follow Joint Trajectory Commands](follow_joint_trajectory.md) | Control joints using joint trajectory server. |
| 3 | [Internal State of Stretch](internal_state_of_stretch.md) | Monitor the joint states of Stretch. |
| 4 | [RViz Basics](rviz_basics.md) | Visualize topics in Stretch. |
| 1 | [Getting Started](getting_started.md) | Setup instructions for ROS 2 on Stretch|
| 2 | [Introduction to ROS 2](intro_to_ros2.md) | Explore the client library used in ROS 2 |
| 3 | [Introduction to HelloNode](intro_to_hellonode.md) | Explore the Hello Node class to create a ROS2 node for Stretch |
| 4 | [Teleoperating Stretch](teleoperating_stretch.md) | Control Stretch with a Keyboard or a Gamepad controller. |
| 5 | [Internal State of Stretch](internal_state_of_stretch.md) | Monitor the joint states of Stretch. |
| 6 | [RViz Basics](rviz_basics.md) | Visualize topics in Stretch. |
| 7 | [Nav2 Stack](navigation_overview.md) | Motion planning and control for mobile base. |
| 8 | [Follow Joint Trajectory Commands](follow_joint_trajectory.md) | Control joints using joint trajectory server. |
| 9 | [Perception](perception.md) | Use the Realsense D435i camera to visualize the environment. |
| 10 | [ArUco Marker Detection](aruco_marker_detection.md) | Localize objects using ArUco markers. |
| 11 | [ReSpeaker Microphone Array](respeaker_mic_array.md) | Learn to use the ReSpeaker Microphone Array. |
| 12 | [FUNMAP](https://github.com/hello-robot/stretch_ros2/tree/humble/stretch_funmap) | Fast Unified Navigation, Manipulation and Planning. |
<!--| 5 | [MoveIt2 Basics](moveit_basics.md) | Motion planning and control for the arm using MoveIt. |
| 6 | [MoveIt2 with Rviz](moveit_rviz_demo.md) | Motion planning and control for the arm using MoveIt. |
| 7 | [MoveIt2 MoveGroup C++ API](moveit_movegroup_demo.md) | Motion planning and control for the arm using MoveIt. |
| 9 | [Perception](coming_soon.md) | Use the Realsense D435i camera to visualize the environment. |
| 10 | [ArUco Marker Detection](coming_soon.md) | Localize objects using ArUco markers. |
| 11 | [ReSpeaker Microphone Array](coming_soon.md) | Learn to use the ReSpeaker Microphone Array. |
| 12 | [FUNMAP](https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap) | Fast Unified Navigation, Manipulation and Planning. |
| 7 | [MoveIt2 MoveGroup C++ API](moveit_movegroup_demo.md) | Motion planning and control for the arm using MoveIt. |
| 13 | [ROS testing](coming_soon.md) | Write ROS system tests for introspection. |
| 14 | [Other Nav Stack Features](coming_soon.md) | Advanced features for Nav 2. | -->
@ -34,19 +38,20 @@ To help get you started on your software development, here are examples of nodes
| | Tutorial | Description |
|---|-------------------------------------------------|----------------------------------------------------|
| 1 | [Teleoperate Stretch with a Node](example_1.md) | Use a python script that sends velocity commands. |
| 2 | [Filter Laser Scans](example_2.md) | Publish new scan ranges that are directly in front of Stretch.|
| 3 | [Mobile Base Collision Avoidance](example_3.md) | Stop Stretch from running into a wall.|
| 4 | [Give Stretch a Balloon](example_4.md) | Create a "balloon" marker that goes where ever Stretch goes.|
| 5 | [Tf2 Broadcaster and Listener](example_10.md) | Create a tf2 broadcaster and listener.|
| 6 | [Obstacle Avoider](obstacle_avoider.md) | Avoid obstacles using the planar lidar. |
| 7 | [Align to ArUco](align_to_aruco.md) | Detect ArUco fiducials using OpenCV and align to them.|
| 8 | [Deep Perception](deep_perception.md) | Use YOLOv5 to detect 3D objects in a point cloud.|
<!-- | 5 | [Print Joint States](example_5.md) | Print the joint states of Stretch.|
| 6 | [Store Effort Values](example_6.md) | Print, store, and plot the effort values of the Stretch robot.|
| 7 | [Capture Image](example_7.md) | Capture images from the RealSense camera data.|
| 8 | [Voice to Text](example_8.md) | Interpret speech and save transcript to a text file.|
| 9 | [Voice Teleoperation of Base](example_9.md) | Use speech to teleoperate the mobile base.|
| 10 | [Tf2 Broadcaster and Listener](example_10.md) | Create a tf2 broadcaster and listener.|
| 11 | [PointCloud Transformation](example_11.md) | Convert PointCloud2 data to a PointCloud and transform to a different frame.|
| 12 | [ArUco Tag Locator](example_12.md) | Actuate the head to locate a requested ArUco marker tag and return a transform.| -->
| 1 | [Mobile Base Velocity Control](example_1.md) | Use a python script that sends velocity commands. |
| 2 | [Filter Laser Scans](example_2.md) | Publish new scan ranges that are directly in front of Stretch.|
| 3 | [Mobile Base Collision Avoidance](example_3.md) | Stop Stretch from running into a wall.|
| 4 | [Give Stretch a Balloon](example_4.md) | Create a "balloon" marker that goes wherever Stretch goes.|
| 5 | [Print Joint States](example_5.md) | Print the joint states of Stretch.|
| 6 | [Store Effort Values](example_6.md) | Print, store, and plot the effort values of the Stretch robot.|
| 7 | [Capture Image](example_7.md) | Capture images from the RealSense camera data.|
| 8 | [Voice to Text](example_8.md) | Interpret speech and save transcript to a text file.|
| 9 | [Voice Teleoperation of Base](example_9.md) | Use speech to teleoperate the mobile base.|
| 10 | [Tf2 Broadcaster and Listener](example_10.md) | Create a tf2 broadcaster and listener.|
| 11 | [ArUco Tag Locator](example_12.md) | Actuate the head to locate a requested ArUco marker tag and return a transform.|
| 12 | [Obstacle Avoider](obstacle_avoider.md) | Avoid obstacles using the planar lidar. |
| 13 | [Align to ArUco](align_to_aruco.md) | Detect ArUco fiducials using OpenCV and align to them.|
| 14 | [Deep Perception](deep_perception.md) | Use YOLOv5 to detect 3D objects in a point cloud.|
| 15 | [Avoiding Race Conditions and Deadlocks](avoiding_deadlocks_race_conditions.md) | Learn how to avoid Race Conditions and Deadlocks |
<!--| 11 | [PointCloud Transformation](example_11.md) | Convert PointCloud2 data to a PointCloud and transform to a different frame.|

ros2/align_to_aruco.md (+3, -3)

@ -4,7 +4,7 @@ ArUco markers are a type of fiducials that are used extensively in robotics for
## ArUco Detection
Stretch uses the OpenCV ArUco detection library and is configured to identify a specific set of ArUco markers belonging to the 6x6, 250 dictionary. To understand why this is important, please refer to [this](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html) handy guide provided by OpenCV.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/detect_aruco_markers.py) in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers [node](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
## Computing Transformations
If you have not already done so, now might be a good time to review the [tf listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial. Go on, we can wait…
@ -50,7 +50,7 @@ ros2 launch stretch_core align_to_aruco.launch.py
</p>
## Code Breakdown
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py) to have a look at the entire script.
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/align_to_aruco.py) to have a look at the entire script.
We make use of two separate Python classes for this demo. The FrameListener class is derived from the Node class and is the place where we compute the TF transformations. For an explanation of this class, you can refer to the [TF listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial.
```python
@ -118,4 +118,4 @@ The align_to_marker() method is where we command Stretch to the pose goal in thr
def align_to_marker(self):
```
If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the [code](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py#L44) to the one you wish to detect. Also, don't forget to add the marker in the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/stretch_marker_dict.yaml) ArUco marker dictionary.
If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the [code](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/align_to_aruco.py#L44) to the one you wish to detect. Also, don't forget to add the marker in the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) ArUco marker dictionary.

ros2/aruco_marker_detection.md (+120, -0)

@ -0,0 +1,120 @@
## ArUco Marker Detector
For this tutorial, we will go over how to detect Stretch's ArUco markers and review the files that hold the information for the tags.
### Visualize ArUco Markers in RViz
Begin by running the stretch driver launch file.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
To activate the RealSense camera and publish topics to be visualized, run the following launch file in a new terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core d435i_high_resolution.launch.py
```
Next, in a new terminal, run the stretch ArUco launch file which will bring up the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_aruco.launch.py
```
Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/humble/rviz/aruco_detector_example.rviz) with the topics for the transform frames in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal.
```{.bash .shell-prompt}
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/aruco_detector_example.rviz
```
You are going to need to teleoperate Stretch's head to detect the ArUco marker tags. Run the following command in a new terminal and control the head to point the camera toward the markers.
```{.bash .shell-prompt}
ros2 run stretch_core keyboard_teleop
```
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/aruco_detector.gif"/>
</p>
### The ArUco Marker Dictionary
When defining the ArUco markers on Stretch, hello robot utilizes a YAML file, [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml), that holds the information about the markers.
If the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node doesn’t find an entry in [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) for a particular ArUco marker ID number, it uses the default entry. For example, most robots have shipped with the following default entry:
```yaml
'default':
  'length_mm': 24
  'use_rgb_only': False
  'name': 'unknown'
  'link': None
```
and the following entry for the ArUco marker on the top of the wrist
```yaml
'133':
  'length_mm': 23.5
  'use_rgb_only': False
  'name': 'wrist_top'
  'link': 'link_aruco_top_wrist'
```
**Dictionary Breakdown**
```yaml
'133':
```
The dictionary key for each entry is the ArUco marker’s ID number or `default`. For example, the entry shown above for the ArUco marker on the top of the wrist assumes that the marker’s ID number is `133`.
```yaml
'length_mm': 23.5
```
The `length_mm` value used by [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) is important for estimating the pose of an ArUco marker.
!!! note
    If the actual width and height of the marker do not match this value, then pose estimation will be poor. Thus, carefully measure custom ArUco markers.
```yaml
'use_rgb_only': False
```
If `use_rgb_only` is `True`, [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) will ignore depth images from the [Intel RealSense D435i depth camera](https://www.intelrealsense.com/depth-camera-d435i/) when estimating the pose of the marker and will instead only use RGB images from the D435i.
```yaml
'name': 'wrist_top'
```
`name` is used for the text string of the ArUco marker’s [ROS Marker](http://docs.ros.org/en/melodic/api/visualization_msgs/html/msg/Marker.html) in the [ROS MarkerArray](http://docs.ros.org/en/melodic/api/visualization_msgs/html/msg/MarkerArray.html) Message published by the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) ROS node.
```yaml
'link': 'link_aruco_top_wrist'
```
`link` is currently used by [stretch_calibration](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_calibration/stretch_calibration/collect_head_calibration_data.py). It is the name of the link associated with a body-mounted ArUco marker in the [robot’s URDF](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_description/urdf/stretch_aruco.xacro).
It’s good practice to add an entry to [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) for each ArUco marker you use.
### Create a New ArUco Marker
At Hello Robot, we’ve used the following guide when generating new ArUco markers.
We generate ArUco markers using a 6x6-bit grid (36 bits) with 250 unique codes. This corresponds with [DICT_6X6_250 defined in OpenCV](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html). We generate markers using this [online ArUco marker generator](https://chev.me/arucogen/) by setting the Dictionary entry to 6x6 and then setting the Marker ID and Marker size (mm) as appropriate for the specific application. We strongly recommend measuring the actual marker by hand before adding an entry for it to [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml).
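If you would rather generate a marker programmatically than through the web generator, the same dictionary is exposed by OpenCV's `cv2.aruco` module. The sketch below is illustrative only: the marker ID, pixel size, and file name are arbitrary choices, and the drawing function name depends on your OpenCV version.
```python
import cv2

# Use the same 6x6, 250 dictionary that Stretch's detector is configured for
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

marker_id = 10     # hypothetical ID; see the reserved ID ranges below
side_pixels = 400  # rendered image size in pixels

# OpenCV < 4.7 uses drawMarker(); OpenCV >= 4.7 renamed it to generateImageMarker()
if hasattr(cv2.aruco, 'generateImageMarker'):
    image = cv2.aruco.generateImageMarker(dictionary, marker_id, side_pixels)
else:
    image = cv2.aruco.drawMarker(dictionary, marker_id, side_pixels)

cv2.imwrite('aruco_marker_{}.png'.format(marker_id), image)
```
After printing the marker, measure its actual side length by hand and record that value in the `length_mm` field of its dictionary entry.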
We select marker ID numbers using the following ranges.
* 0 - 99: reserved for users
* 100 - 249: reserved for official use by Hello Robot Inc.
    * 100 - 199: reserved for robots with distinct sets of body-mounted markers
        * Allows different robots near each other to use distinct sets of body-mounted markers to avoid confusion. This could be valuable for various uses of body-mounted markers, including calibration, visual servoing, visual motion capture, and multi-robot tasks.
        * 5 markers per robot = 2 on the mobile base + 2 on the wrist + 1 on the shoulder
        * 20 distinct sets = 100 available ID numbers / 5 ID numbers per robot
    * 200 - 249: reserved for official accessories
        * 245 for the prototype docking station
        * 246-249 for large floor markers
When coming up with this guide, we expected the following:
* Body-mounted accessories with the same ID numbers mounted to different robots could be disambiguated using the expected range of 3D locations of the ArUco markers on the calibrated body.
* Accessories in the environment with the same ID numbers could be disambiguated using a map or nearby observable features of the environment.

ros2/avoiding_deadlocks_race_conditions.md (+9, -3)

@ -19,10 +19,12 @@ With a MultiThreadedExecutor, callbacks can be serviced in parallel. However, th
There are two different kinds of callback groups available. A MutuallyExclusiveCallbackGroup, the default, ensures that the callbacks belonging to this group never execute in parallel. You would use this when two callbacks access and write to the same shared memory space and having them execute together would result in a race condition. A ReentrantCallbackGroup ensures that callbacks belonging to this group are able to execute in parallel. You would use this when a long-running callback occupies the bulk of the executor's time and you want shorter, fast-running callbacks to run in parallel.
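To make these options concrete, here is a minimal, hedged sketch of how callback groups and a MultiThreadedExecutor are typically wired together in rclpy; the node, topic, and timer names are invented for illustration:
```python
import rclpy
from rclpy.node import Node
from rclpy.executors import MultiThreadedExecutor
from rclpy.callback_groups import MutuallyExclusiveCallbackGroup, ReentrantCallbackGroup
from std_msgs.msg import String


class CallbackGroupDemo(Node):
    def __init__(self):
        super().__init__('callback_group_demo')

        # Callbacks in this group never run at the same time, so they can safely
        # share state without extra locking
        self.exclusive_group = MutuallyExclusiveCallbackGroup()

        # Callbacks in this group may run in parallel on different executor threads
        self.reentrant_group = ReentrantCallbackGroup()

        self.sub_a = self.create_subscription(
            String, 'topic_a', self.callback_a, 10, callback_group=self.exclusive_group)
        self.sub_b = self.create_subscription(
            String, 'topic_b', self.callback_b, 10, callback_group=self.exclusive_group)

        # A longer-running timer placed in the reentrant group so it does not
        # block the two subscribers above
        self.timer = self.create_timer(1.0, self.long_callback, callback_group=self.reentrant_group)

    def callback_a(self, msg):
        self.get_logger().info('callback_a: ' + msg.data)

    def callback_b(self, msg):
        self.get_logger().info('callback_b: ' + msg.data)

    def long_callback(self):
        self.get_logger().info('doing long-running work')


def main():
    rclpy.init()
    node = CallbackGroupDemo()

    # Service callbacks with a pool of threads so the groups above actually matter
    executor = MultiThreadedExecutor(num_threads=4)
    executor.add_node(node)
    try:
        executor.spin()
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```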
Now, let’s explore what we have learned so far in the form of a real example.
Now, let’s explore what we have learned so far in the form of a real example. You can try the examples by saving each one to a Python file and running it in your terminal with `python3 FILE_NAME`.
## Race Condition Example ## Race Condition Example
It is instructive to see example code that generates a race condition. The code below simulates a race condition by defining two subscriber callbacks that write and read from shared memory simultaneously.
!!! note
    Before executing the race condition example, you first need to launch the stretch driver with `ros2 launch stretch_core stretch_driver.launch.py`.
```python
import rclpy
@ -97,7 +99,7 @@ Executing the above script, the expected behavior is to see the logged statement
Executing the above code, you are presented with two prompts, first to select the executor, either a SingleThreadedExecutor or a MultiThreadedExecutor; and then to select a callback group type, either a MutuallyExclusiveCallbackGroup or a ReentrantCallbackGroup.
Selecting a SingleThreadedExecutor, irrespective of which callback group is selected, results in callbacks being executed sequentially. This is because the executor is spun using a single thread that can only service one callback at a time. In this case, we see that there is no memory curroption and the observed behavior is the same as the expected behavior.
Selecting a SingleThreadedExecutor, irrespective of which callback group is selected, results in callbacks being executed sequentially. This is because the executor is spun using a single thread that can only service one callback at a time. In this case, we see that there is no memory corruption and the observed behavior is the same as the expected behavior.
Things get interesting when we choose the MultiThreadedExecutor along with a ReentrantCallbackGroup. Multiple threads are used by the executor to service callbacks, while callbacks are allowed to execute in parallel. This allows multiple threads to access the same memory space and execute read/write operations. The observed behavior is that sometimes you see the callbacks print statements like "Switching from True to True" or "Switching from False to False" which go against the conditions set in the callbacks. This is a race condition.
@ -105,7 +107,7 @@ Selecting a MultiThreadedExecutor along with a MutuallyExclusiveCallbackGroup al
## Deadlock Example
A great example of a deadlock is provided in the official ROS 2 documentation on [sync deadlock](https://docs.ros.org/en/humble/How-To-Guides/Sync-Vs-Async.html?highlight=timer#sync-deadlock), so this example will directly build off of the same code. The server side defines a callback method add_two_ints_callback() which returns the sum of two requested numbers. Notice the call to spin in the main() method which persistently executes the callback method as a service request arrives.
A great example of a deadlock is provided in the official ROS 2 documentation on [sync deadlock](https://docs.ros.org/en/iron/How-To-Guides/Sync-Vs-Async.html#sync-deadlock), so this example will directly build off of the same code. The server side defines a callback method add_two_ints_callback() which returns the sum of two requested numbers. Notice the call to spin in the main() method which persistently executes the callback method as a service request arrives. You will need to enter the requested numbers manually in the terminal.
```python
from example_interfaces.srv import AddTwoInts
@ -196,6 +198,10 @@ def main():
An alternative to this, a feature that's new to ROS 2, is to use an asynchronous service call. This allows one to monitor the response through what's called a future object in ROS 2. The future holds the status of whether a service call was accepted by the server and also the returned response. Since the future is returned immediately on making an async service call, the execution is not blocked and the executor can be invoked in the main execution thread. Here's how to do it:
```python
class MinimalClientAsync(Node):
    ...
    def send_request(self, a, b):
        self.req.a = a
        self.req.b = b
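        # A hedged continuation of this sketch (assuming `self.cli` is the AddTwoInts
        # client created in __init__, as in the ROS 2 minimal client example):
        # call_async() returns a future immediately instead of blocking, so the
        # executor spinning in the main thread stays free to service other callbacks.
        self.future = self.cli.call_async(self.req)
        return self.future

# In main(), spin until the future resolves and then read the result, for example:
#   minimal_client = MinimalClientAsync()
#   future = minimal_client.send_request(41, 1)
#   rclpy.spin_until_future_complete(minimal_client, future)
#   response = future.result()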

ros2/deep_perception.md (+3, -3)

@ -28,7 +28,7 @@ Voila! You just executed your first deep-learning model on Stretch!
## Code Breakdown
Luckily, the stretch_deep_perception package is extremely modular and is designed to work with a wide array of detectors. Although most of the heavy lifting in this tutorial is being done by the neural network, let's attempt to break down the code into functional blocks to understand the detection pipeline.
The control flow begins with executing the [detect_objects.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_objects.py) script. In the main() function, we create an instance of the ObjectDetector class from the [object_detect_pytorch.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/object_detect_pytorch.py) script where we configure the YOLOv5s model. Next, we pass this detector to an instance of the DetectionNode class from the [detection_node.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detection_node.py) script and call the main function.
The control flow begins with executing the [detect_objects.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detect_objects.py) script. In the main() function, we create an instance of the ObjectDetector class from the [object_detect_pytorch.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/object_detect_pytorch.py) script where we configure the YOLOv5s model. Next, we pass this detector to an instance of the DetectionNode class from the [detection_node.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detection_node.py) script and call the main function.
```python
def main():
    confidence_threshold = 0.0
@ -134,9 +134,9 @@ ros2 launch stretch_deep_perception stretch_detect_faces.launch.py
![detect_faces](https://user-images.githubusercontent.com/97639181/196327737-7091cd61-f79a-4ff0-a291-039ab3f7127a.gif)
## Code Breakdown
Ain't that something! If you followed the breakdown in object detection, you'll find that the only change if you are looking to detect faces, facial landmarks or estimate head pose instead of detecting objects is in using a different deep learning model that does just that. For this, we will explore how to use the OpenVINO toolkit. Let's head to the detect_faces.py [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_faces.py) to begin.
Ain't that something! If you followed the breakdown in object detection, you'll find that the only change if you are looking to detect faces, facial landmarks or estimate head pose instead of detecting objects is in using a different deep learning model that does just that. For this, we will explore how to use the OpenVINO toolkit. Let's head to the detect_faces.py [node](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detect_faces.py) to begin.
In the main() method, we see a similar structure as with the object detction node. We first create an instance of the detector using the HeadPoseEstimator class from the [head_estimator.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/head_estimator.py) script to configure the deep learning models. Next, we pass this to an instance of the DetectionNode class from the detection_node.py script and call the main function.
In the main() method, we see a similar structure as with the object detection node. We first create an instance of the detector using the HeadPoseEstimator class from the [head_estimator.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/head_estimator.py) script to configure the deep learning models. Next, we pass this to an instance of the DetectionNode class from the detection_node.py script and call the main function.
```python
...

ros2/example_1.md (+1, -1)

@ -62,7 +62,7 @@ Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python

ros2/example_10.md (+1, -1)

@ -136,7 +136,7 @@ Now let's break the code down.
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python
import rclpy

ros2/example_12.md (+458, -0)

@ -0,0 +1,458 @@
# Example 12
For this example, we will send follow joint trajectory commands for the head camera to search and locate an ArUco tag. In this instance, a Stretch robot will try to locate the docking station's ArUco tag.
## Modifying Stretch Marker Dictionary YAML File
When defining the ArUco markers on Stretch, hello robot utilizes a YAML file, [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml), that holds the information about the markers. A further breakdown of the YAML file can be found in our [Aruco Marker Detection](aruco_marker_detection.md) tutorial.
Below is what needs to be included in the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) file so the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node can find the docking station's ArUco tag.
```yaml
'245':
  'length_mm': 88.0
  'use_rgb_only': False
  'name': 'docking_station'
  'link': None
```
## Getting Started
Begin by running the stretch driver launch file.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
To activate the RealSense camera and publish topics to be visualized, run the following launch file in a new terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core d435i_high_resolution.launch.py
```
Next, run the stretch ArUco launch file which will bring up the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node. In a new terminal, execute:
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_aruco.launch.py
```
Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/humble/rviz/aruco_detector_example.rviz) with the topics for the transform frames in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal.
```{.bash .shell-prompt}
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/aruco_detector_example.rviz
```
Then run the [aruco_tag_locator.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/aruco_tag_locator.py) node. In a new terminal, execute:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 aruco_tag_locator.py
```
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/aruco_locator.gif"/>
</p>
### The Code
```python
#!/usr/bin/env python3
# Import modules
import rclpy
import time
import tf2_ros
from tf2_ros import TransformException
from rclpy.time import Time
from math import pi

# Import hello_misc script for handling trajectory goals with an action client
import hello_helpers.hello_misc as hm

# We're going to subscribe to a JointState message type, so we need to import
# the definition for it
from sensor_msgs.msg import JointState

# Import the FollowJointTrajectory from the control_msgs.action package to
# control the Stretch robot
from control_msgs.action import FollowJointTrajectory

# Import JointTrajectoryPoint from the trajectory_msgs package to define
# robot trajectories
from trajectory_msgs.msg import JointTrajectoryPoint

# Import TransformStamped from the geometry_msgs package for the publisher
from geometry_msgs.msg import TransformStamped


class LocateArUcoTag(hm.HelloNode):
    """
    A class that actuates the RealSense camera to find the docking station's
    ArUco tag and returns a Transform between the `base_link` and the requested tag.
    """
    def __init__(self):
        """
        A function that initializes the subscriber and other needed variables.
        :param self: The self reference.
        """
        # Initialize the inherited hm.HelloNode class
        hm.HelloNode.__init__(self)
        hm.HelloNode.main(self, 'aruco_tag_locator', 'aruco_tag_locator', wait_for_first_pointcloud=False)

        # Initialize subscriber
        self.joint_states_sub = self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)

        # Initialize publisher
        self.transform_pub = self.create_publisher(TransformStamped, 'ArUco_transform', 10)

        # Initialize the variable that will store the joint state positions
        self.joint_state = None

        # Provide the min and max joint positions for the head pan. These values
        # are needed for sweeping the head to search for the ArUco tag
        self.min_pan_position = -3.8
        self.max_pan_position = 1.50

        # Define the number of steps for the sweep, then create the step size for
        # the head pan joint
        self.pan_num_steps = 10
        self.pan_step_size = abs(self.min_pan_position - self.max_pan_position)/self.pan_num_steps

        # Define the min tilt position, number of steps, and step size
        self.min_tilt_position = -0.75
        self.tilt_num_steps = 3
        self.tilt_step_size = pi/16

        # Define the head actuation rotational velocity
        self.rot_vel = 0.5  # radians per sec

    def joint_states_callback(self, msg):
        """
        A callback function that stores Stretch's joint states.
        :param self: The self reference.
        :param msg: The JointState message type.
        """
        self.joint_state = msg

    def send_command(self, command):
        '''
        Handles single joint control commands by constructing a FollowJointTrajectoryGoal
        message and sending it to the trajectory_client created in hello_misc.
        :param self: The self reference.
        :param command: A dictionary message type.
        '''
        if (self.joint_state is not None) and (command is not None):
            # Extract the string value from the `joint` key
            joint_name = command['joint']

            # Set trajectory_goal as a FollowJointTrajectory.Goal and define
            # the joint name
            trajectory_goal = FollowJointTrajectory.Goal()
            trajectory_goal.trajectory.joint_names = [joint_name]

            # Create a JointTrajectoryPoint message type
            point = JointTrajectoryPoint()

            # Check to see if `delta` is a key in the command dictionary
            if 'delta' in command:
                # Get the current position of the joint and add the delta as a
                # new position value
                joint_index = self.joint_state.name.index(joint_name)
                joint_value = self.joint_state.position[joint_index]
                delta = command['delta']
                new_value = joint_value + delta
                point.positions = [new_value]

            # Check to see if `position` is a key in the command dictionary
            elif 'position' in command:
                # extract the head position value from the `position` key
                point.positions = [command['position']]

            # Set the rotational velocity
            point.velocities = [self.rot_vel]

            # Assign goal position with updated point variable
            trajectory_goal.trajectory.points = [point]

            # Specify the coordinate frame that we want (base_link) and set the time to be now.
            trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg()
            trajectory_goal.trajectory.header.frame_id = 'base_link'

            # Make the action call and send the goal. The last line of code waits
            # for the result
            self.trajectory_client.send_goal(trajectory_goal)

    def find_tag(self, tag_name='docking_station'):
        """
        A function that actuates the camera to search for a defined ArUco tag
        marker. Then the function returns the pose.
        :param self: The self reference.
        :param tag_name: A string value of the ArUco marker name.
        :returns transform: The docking station's TransformStamped message.
        """
        # Create dictionaries to get the head in its initial position
        pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position}
        self.send_command(pan_command)
        tilt_command = {'joint': 'joint_head_tilt', 'position': self.min_tilt_position}
        self.send_command(tilt_command)

        # Nested for loop to sweep the joint_head_pan and joint_head_tilt in increments
        for i in range(self.tilt_num_steps):
            for j in range(self.pan_num_steps):
                # Update the joint_head_pan position by the pan_step_size
                pan_command = {'joint': 'joint_head_pan', 'delta': self.pan_step_size}
                self.send_command(pan_command)

                # Give time for system to do a Transform lookup before next step
                time.sleep(0.2)

                # Use a try-except block
                try:
                    now = Time()
                    # Look up transform between the base_link and requested ArUco tag
                    transform = self.tf_buffer.lookup_transform('base_link',
                                                                tag_name,
                                                                now)
                    self.get_logger().info(f"Found Requested Tag: \n{transform}")

                    # Publish the transform
                    self.transform_pub.publish(transform)

                    # Return the transform
                    return transform
                except TransformException as ex:
                    continue

            # Begin sweep with new tilt angle
            pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position}
            self.send_command(pan_command)
            tilt_command = {'joint': 'joint_head_tilt', 'delta': self.tilt_step_size}
            self.send_command(tilt_command)
            time.sleep(0.25)

        # Notify that the requested tag was not found
        self.get_logger().info(f"The requested tag '{tag_name}' was not found")

    def main(self):
        """
        Function that initiates the issue_command function.
        :param self: The self reference.
        """
        # Create a StaticTransformBroadcaster Node. Also, start a Tf buffer that
        # will store the tf information for a few seconds. Then set up a tf listener, which
        # will subscribe to all of the relevant tf topics, and keep track of the information
        self.static_broadcaster = tf2_ros.StaticTransformBroadcaster(self)
        self.tf_buffer = tf2_ros.Buffer()
        self.listener = tf2_ros.TransformListener(self.tf_buffer, self)

        # Give the listener some time to accumulate transforms
        time.sleep(1.0)

        # Notify Stretch is searching for the ArUco tag with `get_logger().info()`
        self.get_logger().info('Searching for docking ArUco tag')

        # Search for the ArUco marker for the docking station
        pose = self.find_tag("docking_station")


def main():
    try:
        # Instantiate the `LocateArUcoTag()` object
        node = LocateArUcoTag()

        # Run the `main()` method
        node.main()
        node.new_thread.join()
    except:
        node.get_logger().info('Interrupt received, so shutting down')
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```
### The Code Explained
Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import time
import tf2_ros
from tf2_ros import TransformException
from rclpy.time import Time
from math import pi
import hello_helpers.hello_misc as hm
from sensor_msgs.msg import JointState
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
from geometry_msgs.msg import TransformStamped
```
You need to import `rclpy` if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). Import other python modules needed for this node. Import the `FollowJointTrajectory` from the [control_msgs.action](http://wiki.ros.org/control_msgs) package to control the Stretch robot. Import `JointTrajectoryPoint` from the [trajectory_msgs](https://github.com/ros2/common_interfaces/tree/humble/trajectory_msgs) package to define robot trajectories. The [hello_helpers](https://github.com/hello-robot/stretch_ros2/tree/humble/hello_helpers) package consists of a module that provides various Python scripts used across [stretch_ros](https://github.com/hello-robot/stretch_ros2). In this instance, we are importing the `hello_misc` script.
```python
def __init__(self):
# Initialize the inherited hm.HelloNode class
hm.HelloNode.__init__(self)
hm.HelloNode.main(self, 'aruco_tag_locator', 'aruco_tag_locator', wait_for_first_pointcloud=False)
# Initialize subscriber
self.joint_states_sub = self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
# Initialize publisher
self.transform_pub = self.create_publisher(TransformStamped, 'ArUco_transform', 10)
# Initialize the variable that will store the joint state positions
self.joint_state = None
```
The `LocateArUcoTag` class inherits the `HelloNode` class from `hm` and is instantiated.
Set up a subscriber with `self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)`. We're going to subscribe to the topic `stretch/joint_states`, looking for `JointState` messages. When a message comes in, ROS is going to pass it to the function `joint_states_callback()` automatically.
`self.create_publisher(TransformStamped, 'ArUco_transform', 10)` declares that your node is publishing to the `ArUco_transform` topic using the message type `TransformStamped`. The `10` argument limits the amount of queued messages if any subscriber is not receiving them fast enough.
```python
self.min_pan_position = -3.8
self.max_pan_position = 1.50
self.pan_num_steps = 10
self.pan_step_size = abs(self.min_pan_position - self.max_pan_position)/self.pan_num_steps
```
Provide the minimum and maximum joint positions for the head pan. These values are needed for sweeping the head to search for the ArUco tag. We also define the number of steps for the sweep, then create the step size for the head pan joint.
```python
self.min_tilt_position = -0.75
self.tilt_num_steps = 3
self.tilt_step_size = pi/16
```
Set the minimum position of the tilt joint, the number of steps, and the size of each step.
```python
self.rot_vel = 0.5 # radians per sec
```
Define the head actuation rotational velocity.
```python
def joint_states_callback(self, msg):
self.joint_state = msg
```
The `joint_states_callback()` function stores Stretch's joint states.
```python
def send_command(self, command):
if (self.joint_state is not None) and (command is not None):
joint_name = command['joint']
trajectory_goal = FollowJointTrajectory.Goal()
trajectory_goal.trajectory.joint_names = [joint_name]
point = JointTrajectoryPoint()
```
Assign `trajectory_goal` as a `FollowJointTrajectory.Goal` message type. Then extract the string value from the `joint` key. Also, assign `point` as a `JointTrajectoryPoint` message type.
```python
if 'delta' in command:
joint_index = self.joint_state.name.index(joint_name)
joint_value = self.joint_state.position[joint_index]
delta = command['delta']
new_value = joint_value + delta
point.positions = [new_value]
```
Check to see if `delta` is a key in the command dictionary. Then get the current position of the joint and add the delta as a new position value.
```python
elif 'position' in command:
point.positions = [command['position']]
```
Check to see if `position` is a key in the command dictionary. Then extract the position value.
```python
point.velocities = [self.rot_vel]
trajectory_goal.trajectory.points = [point]
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg()
trajectory_goal.trajectory.header.frame_id = 'base_link'
self.trajectory_client.send_goal(trajectory_goal)
```
Then `trajectory_goal.trajectory.points` is defined by the positions set in `point`. Specify the coordinate frame that we want (*base_link*) and set the time to be now. Make the action call and send the goal.
```python
def find_tag(self, tag_name='docking_station'):
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position}
self.send_command(pan_command)
tilt_command = {'joint': 'joint_head_tilt', 'position': self.min_tilt_position}
self.send_command(tilt_command)
```
Create a dictionary to get the head in its initial position for its search and send the commands with the `send_command()` function.
```python
for i in range(self.tilt_num_steps):
for j in range(self.pan_num_steps):
pan_command = {'joint': 'joint_head_pan', 'delta': self.pan_step_size}
self.send_command(pan_command)
time.sleep(0.2)
```
Utilize a nested for loop to sweep the pan and tilt in increments. Then update the `joint_head_pan` position by the `pan_step_size`. Use `time.sleep()` function to give time to the system to do a Transform lookup before the next step.
```python
try:
now = Time()
transform = self.tf_buffer.lookup_transform('base_link',
tag_name,
now)
self.get_logger().info("Found Requested Tag: \n%s", transform)
self.transform_pub.publish(transform)
return transform
except TransformException as ex:
continue
```
Use a try-except block to look up the transform between the *base_link* and the requested ArUco tag. Then publish and return the `TransformStamped` message.
```python
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position}
self.send_command(pan_command)
tilt_command = {'joint': 'joint_head_tilt', 'delta': self.tilt_step_size}
self.send_command(tilt_command)
time.sleep(.25)
```
Begin sweep with new tilt angle.
```python
def main(self):
self.static_broadcaster = tf2_ros.StaticTransformBroadcaster(self)
self.tf_buffer = tf2_ros.Buffer()
self.listener = tf2_ros.TransformListener(self.tf_buffer, self)
time.sleep(1.0)
```
Create a StaticTransformBroadcaster Node. Also, start a tf buffer that will store the tf information for a few seconds. Then set up a tf listener, which will subscribe to all of the relevant tf topics, and keep track of the information. Include `time.sleep(1.0)` to give the listener some time to accumulate transforms.
```python
self.get_logger().info('Searching for docking ArUco tag')
pose = self.find_tag("docking_station")
```
Notify that Stretch is searching for the ArUco tag with a `self.get_logger().info()` call. Then search for the ArUco marker for the docking station.
```python
def main():
try:
node = LocateArUcoTag()
node.main()
node.new_thread.join()
except:
node.get_logger().info('Interrupt received, so shutting down')
node.destroy_node()
rclpy.shutdown()
```
Instantiate the `LocateArUcoTag()` object and run the `main()` method.

ros2/example_2.md (+2, -2)

@ -5,7 +5,7 @@
This example aims to provide instructions on how to filter scan messages.
For robots with laser scanners, ROS provides a special Message type in the [sensor_msgs](https://github.com/ros2/common_interfaces/tree/galactic/sensor_msgs) package called [LaserScan](https://github.com/ros2/common_interfaces/blob/galactic/sensor_msgs/msg/LaserScan.msg) to hold information about a given scan. Let's take a look at the message specification itself:
For robots with laser scanners, ROS provides a special Message type in the [sensor_msgs](https://github.com/ros2/common_interfaces/tree/humble/sensor_msgs) package called [LaserScan](https://github.com/ros2/common_interfaces/blob/humble/sensor_msgs/msg/LaserScan.msg) to hold information about a given scan. Let's take a look at the message specification itself:
```{.bash .no-copy}
# Laser scans angles are measured counter clockwise,
@ -111,7 +111,7 @@ Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python
import rclpy

ros2/example_3.md (+2, -2)

@ -6,7 +6,7 @@
The aim of example 3 is to combine the two previous examples and have Stretch utilize its laser scan data to avoid collision with objects as it drives forward.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
ros2 launch stretch_core stretch_driver.launch.py mode:=navigation
```
Then in a new terminal type the following to activate the LiDAR sensor.
@ -77,7 +77,7 @@ Now let's break the code down.
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python

+ 1
- 1
ros2/example_4.md View File

@ -71,7 +71,7 @@ Now let's break the code down.
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python
import rclpy

+ 208
- 0
ros2/example_5.md View File

@ -0,0 +1,208 @@
## Example 5
In this example, we will review a Python script that prints out the positions of a selected group of Stretch joints. This script is helpful if you need the joint positions after you have teleoperated Stretch with the Xbox controller or physically moved the robot to a desired configuration after hitting the run-stop button.
If you are looking for a continuous print of the joint states while Stretch is in action, then you can use the [ros2 topic command-line tool](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html) shown in the [Internal State of Stretch Tutorial](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/internal_state_of_stretch.md).
Begin by starting up the stretch driver launch file.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
You can then hit the run-stop button (you should hear a beep and the LED light in the button blink) and move the robot's joints to a desired configuration. Once you are satisfied with the configuration, hold the run-stop button until you hear a beep. Then run the following command to execute the [joint_state_printer.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) which will print the joint positions of the lift, arm, and wrist. In a new terminal, execute:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 joint_state_printer.py
```
Your terminal will output the `position` information of the previously mentioned joints shown below.
```{.bash .no-copy}
name: ['joint_lift', 'wrist_extension', 'joint_wrist_yaw']
position: [0.6043133175850597, 0.19873586673129257, 0.017257283863713464]
```
!!! note
Stretch's arm has four prismatic joints and the sum of their positions gives the *wrist_extension* distance. The *wrist_extension* is needed when sending [joint trajectory commands](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/follow_joint_trajectory.md) to the robot. Further, you can not actuate an individual arm joint. Here is an image of the arm joints for reference:
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/joints.png"/>
</p>
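As a short sketch of that arithmetic (assuming the driver reports the four telescoping segments as `joint_arm_l0` through `joint_arm_l3` on the `joint_states` topic), the *wrist_extension* value can be reconstructed like this:
```python
# Sketch: wrist_extension is the sum of the four telescoping arm joint positions.
# The joint names are assumptions based on the segment naming used in this tutorial.
from sensor_msgs.msg import JointState

def wrist_extension_from_joint_states(msg: JointState) -> float:
    arm_joints = ['joint_arm_l0', 'joint_arm_l1', 'joint_arm_l2', 'joint_arm_l3']
    return sum(msg.position[msg.name.index(name)] for name in arm_joints)
```
The script below takes the shortcut of multiplying `joint_arm_l0` by four, since the four segments extend by the same amount.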
### The Code
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
import sys
import time
# We're going to subscribe to a JointState message type, so we need to import
# the definition for it
from sensor_msgs.msg import JointState
class JointStatePublisher(Node):
"""
A class that prints the positions of desired joints in Stretch.
"""
def __init__(self):
"""
Function that initializes the subscriber.
:param self: The self reference
"""
super().__init__('stretch_joint_state')
# Set up a subscriber. We're going to subscribe to the topic "joint_states"
self.sub = self.create_subscription(JointState, 'joint_states', self.callback, 1)
def callback(self, msg):
"""
Callback function to deal with the incoming JointState messages.
:param self: The self reference.
:param msg: The JointState message.
"""
# Store the joint messages for later use
self.get_logger().info('Receiving JointState messages')
self.joint_states = msg
def print_states(self, joints):
"""
print_states function to deal with the incoming JointState messages.
:param self: The self reference.
:param joints: A list of string values of joint names.
"""
# Create an empty list that will store the positions of the requested joints
joint_positions = []
        # Use a for loop to parse the names of the requested joints list.
# The index() function returns the index at the first occurrence of
# the name of the requested joint in the self.joint_states.name list
for joint in joints:
if joint == "wrist_extension":
index = self.joint_states.name.index('joint_arm_l0')
joint_positions.append(4*self.joint_states.position[index])
continue
index = self.joint_states.name.index(joint)
joint_positions.append(self.joint_states.position[index])
# Print the joint position values to the terminal
print("name: " + str(joints))
print("position: " + str(joint_positions))
# Sends a signal to rclpy to shutdown the ROS interfaces
rclpy.shutdown()
# Exit the Python interpreter
sys.exit(0)
def main(args=None):
# Initialize the node
rclpy.init(args=args)
joint_publisher = JointStatePublisher()
time.sleep(1)
rclpy.spin_once(joint_publisher)
# Create a list of the joints and name them joints. These will be an argument
# for the print_states() function
joints = ["joint_lift", "wrist_extension", "joint_wrist_yaw"]
joint_publisher.print_states(joints)
# Give control to ROS. This will allow the callback to be called whenever new
# messages come in. If we don't put this line in, then the node will not work,
# and ROS will not process any messages
rclpy.spin(joint_publisher)
if __name__ == '__main__':
main()
```
### The Code Explained
Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import sys
import time
from rclpy.node import Node
from sensor_msgs.msg import JointState
```
You need to import rclpy if you are writing a ROS 2 Node. Import `sensor_msgs.msg` so that we can subscribe to `JointState` messages.
```python
self.sub = self.create_subscription(JointState, 'joint_states', self.callback, 1)
```
Set up a subscriber. We're going to subscribe to the topic `joint_states`, looking for `JointState` messages. When a message comes in, ROS is going to pass it to the function "callback" automatically.
```python
def callback(self, msg):
self.joint_states = msg
```
This is the callback function where the `JointState` messages are stored as `self.joint_states`. Further information about this message type can be found here: [JointState Message](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/JointState.html)
```python
def print_states(self, joints):
joint_positions = []
```
This is the `print_states()` function, which takes in a list of joints of interest as its argument. It also creates an empty list, `joint_positions`, where the positions of the requested joints will be appended.
```python
for joint in joints:
if joint == "wrist_extension":
index = self.joint_states.name.index('joint_arm_l0')
joint_positions.append(4*self.joint_states.position[index])
continue
index = self.joint_states.name.index(joint)
joint_positions.append(self.joint_states.position[index])
```
In this section of the code, a for loop iterates over the names of the requested joints. The `index()` function returns the position of each name in the `self.joint_states.name` list, and the corresponding position value is appended to the `joint_positions` list. Because `wrist_extension` is not an individual joint in the joint states message, its value is computed as four times the position of `joint_arm_l0`, since the four telescoping arm segments extend equally.
```python
rclpy.shutdown()
sys.exit(0)
```
The first line of code initiates a clean shutdown of ROS. The second line of code exits the Python interpreter.
```python
rclpy.init(args=args)
joint_publisher = JointStatePublisher()
time.sleep(1)
```
The `rclpy.init()` call initializes ROS 2 communications for the process. **NOTE:** a node name must be a base name, i.e. it cannot contain any slashes "/".
Declare the *joint_publisher* object from the `JointStatePublisher` class; this creates the node, which takes on the name 'stretch_joint_state'.
The `time.sleep()` call gives the *joint_publisher* node a moment to finish initializing before we request the joint positions of the desired joints (by running the `print_states()` method).
```python
joints = ["joint_lift", "wrist_extension", "joint_wrist_yaw"]
#joints = ["joint_head_pan","joint_head_tilt", "joint_gripper_finger_left", "joint_gripper_finger_right"]
joint_publisher.print_states(joints)
```
Create a list of the desired joints whose positions you want printed. Then use that list as an argument for the `print_states()` method.
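To print a different set of joints, for example the head and gripper joints hinted at in the commented-out line above, you only need to change the list passed to `print_states()`:
```python
# Variation: request head and gripper joint positions instead.
joints = ["joint_head_pan", "joint_head_tilt",
          "joint_gripper_finger_left", "joint_gripper_finger_right"]
joint_publisher.print_states(joints)
```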
```python
rclpy.spin(joint_publisher)
```
Give control to ROS. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages.

+ 355
- 0
ros2/example_6.md View File

@ -0,0 +1,355 @@
## Example 6
In this example, we will review a Python script that prints and stores the effort values from a specified joint. If you are looking for a continuous print of the joint state efforts while Stretch is in action, then you can use the [ros2 topic command-line tool](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html) shown in the [Internal State of Stretch Tutorial](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/internal_state_of_stretch.md).
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/effort_sensing.gif"/>
</p>
Begin by running the following command in a terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
Unlike in ROS 1, there is no need to switch to position mode because the launch file starts the driver in position mode by default. Then run the [effort_sensing.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/effort_sensing.py) node. In a new terminal, execute:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 effort_sensing.py
```
This will send a `FollowJointTrajectory` command to move Stretch's lift while also recording the effort of the lift joint.
### The Code
```python
#!/usr/bin/env python3
import rclpy
import hello_helpers.hello_misc as hm
import os
import csv
import time
import pandas as pd
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt
from matplotlib import animation
from datetime import datetime
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
class JointActuatorEffortSensor(hm.HelloNode):
def __init__(self, export_data=True, animate=True):
hm.HelloNode.__init__(self)
hm.HelloNode.main(self, 'issue_command', 'issue_command', wait_for_first_pointcloud=False)
self.joints = ['joint_lift']
self.joint_effort = []
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
self.export_data = export_data
self.result = False
self.file_name = datetime.now().strftime("effort_sensing_tutorial_%Y%m%d%I")
def issue_command(self):
trajectory_goal = FollowJointTrajectory.Goal()
trajectory_goal.trajectory.joint_names = self.joints
point0 = JointTrajectoryPoint()
point0.positions = [0.9]
trajectory_goal.trajectory.points = [point0]
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg()
trajectory_goal.trajectory.header.frame_id = 'base_link'
while self.joint_state is None:
time.sleep(0.1)
self._send_goal_future = self.trajectory_client.send_goal_async(trajectory_goal, self.feedback_callback)
self.get_logger().info('Sent position goal = {0}'.format(trajectory_goal))
self._send_goal_future.add_done_callback(self.goal_response_callback)
def goal_response_callback(self, future):
goal_handle = future.result()
if not goal_handle.accepted:
self.get_logger().info('Failed')
return
        self.get_logger().info('Succeeded')
self._get_result_future = goal_handle.get_result_async()
self._get_result_future.add_done_callback(self.get_result_callback)
def get_result_callback(self, future):
self.result = future.result().result
self.get_logger().info('Sent position goal = {0}'.format(self.result))
def feedback_callback(self, feedback_msg):
if 'wrist_extension' in self.joints:
self.joints.remove('wrist_extension')
self.joints.append('joint_arm_l0')
current_effort = []
for joint in self.joints:
index = self.joint_state.name.index(joint)
current_effort.append(self.joint_state.effort[index])
if not self.export_data:
print("name: " + str(self.joints))
print("effort: " + str(current_effort))
else:
self.joint_effort.append(current_effort)
if self.export_data:
file_name = self.file_name
completeName = os.path.join(self.save_path, file_name)
with open(completeName, "w") as f:
writer = csv.writer(f)
writer.writerow(self.joints)
writer.writerows(self.joint_effort)
def plot_data(self, animate = True):
while not self.result:
time.sleep(0.1)
file_name = self.file_name
self.completeName = os.path.join(self.save_path, file_name)
self.data = pd.read_csv(self.completeName)
self.y_anim = []
self.animate = animate
for joint in self.data.columns:
# Create figure, labels, and title
fig = plt.figure()
plt.title(joint + ' Effort Sensing')
plt.ylabel('Effort')
plt.xlabel('Data Points')
# Conditional statement for animation plotting
if self.animate:
self.effort = self.data[joint]
frames = len(self.effort)-1
anim = animation.FuncAnimation(fig=fig,func=self.plot_animate, repeat=False,blit=False,frames=frames, interval =75)
plt.show()
## If you want to save a video, make sure to comment out plt.show(),
## right before this comment.
# save_path = str(self.completeName + '.mp4')
# anim.save(save_path, writer=animation.FFMpegWriter(fps=10))
# Reset y_anim for the next joint effort animation
del self.y_anim[:]
# Conditional statement for regular plotting (No animation)
else:
self.data[joint].plot(kind='line')
# save_path = str(self.completeName + '.png')
# plt.savefig(save_path, bbox_inches='tight')
plt.show()
def plot_animate(self,i):
# Append self.effort values for given joint
self.y_anim.append(self.effort.values[i])
plt.plot(self.y_anim, color='blue')
def main(self):
self.get_logger().info('issuing command')
self.issue_command()
def main():
try:
node = JointActuatorEffortSensor(export_data=True, animate=True)
node.main()
node.plot_data()
node.new_thread.join()
except KeyboardInterrupt:
node.get_logger().info('interrupt received, so shutting down')
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
```
### The Code Explained
This code is similar to that of the [multipoint_command](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/multipoint_command.py) and [joint_state_printer](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) node. Therefore, this example will highlight sections that are different from those tutorials. Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import hello_helpers.hello_misc as hm
import os
import csv
import time
import pandas as pd
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt
from matplotlib import animation
from datetime import datetime
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
```
You need to import rclpy if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). Import the `FollowJointTrajectory` from the `control_msgs.action` package to control the Stretch robot. Import `JointTrajectoryPoint` from the `trajectory_msgs` package to define robot trajectories. The `hello_helpers` package consists of a module that provides various Python scripts used across [stretch_ros](https://github.com/hello-robot/stretch_ros2). In this instance, we are importing the `hello_misc` script.
```Python
class JointActuatorEffortSensor(hm.HelloNode):
    def __init__(self, export_data=True, animate=True):
hm.HelloNode.__init__(self)
hm.HelloNode.main(self, 'issue_command', 'issue_command', wait_for_first_pointcloud=False)
```
The `JointActuatorEffortSensor` class inherits from the `HelloNode` class in `hm` and is initialized through it. Because `HelloNode` already subscribes to the `joint_states` topic, we don't need to create our own subscriber.
```python
self.joints = ['joint_lift']
```
Create a list of the desired joints you want to print.
```Python
self.joint_effort = []
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
self.export_data = export_data
self.result = False
self.file_name = datetime.now().strftime("effort_sensing_tutorial_%Y%m%d%I")
```
Create an empty list to store the joint effort values. The `self.save_path` is the directory path where the .txt file of the effort values will be stored. You can change this path to a preferred directory. The `self.export_data` boolean defaults to `True`. If set to `False`, the joint effort values are printed in the terminal; otherwise they are stored in a .txt file, which is what we need in order to plot the data afterwards. We also name the text file up front and keep it in `self.file_name` so it can be accessed later.
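If you only want the efforts echoed to the terminal rather than written to a file, a small variation on the `main()` function shown above constructs the node with the flag turned off and skips the plotting step (a sketch, not part of the original script):
```python
# Sketch: print efforts to the terminal instead of exporting a file.
# plot_data() is skipped because no file is written in this mode.
node = JointActuatorEffortSensor(export_data=False, animate=False)
node.main()
node.new_thread.join()
```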
```python
self._send_goal_future = self.trajectory_client.send_goal_async(trajectory_goal, self.feedback_callback)
self._send_goal_future.add_done_callback(self.goal_response_callback)
```
The [ActionClient.send_goal_async()](https://docs.ros2.org/latest/api/rclpy/api/actions.html#rclpy.action.client.ActionClient.send_goal_async) method returns a future to a goal handle. We pass the trajectory goal and the `feedback_callback` function to it, and we also register a `goal_response_callback` for when the future completes.
```python
def goal_response_callback(self,future):
goal_handle = future.result()
if not goal_handle.accepted:
self.get_logger().info('Failed')
return
    self.get_logger().info('Succeeded')
```
Looking at the `goal_response_callback` in more detail, we check whether the goal was accepted and log a message either way.
```python
self._get_result_future = goal_handle.get_result_async()
self._get_result_future.add_done_callback(self.get_result_callback)
```
We use the `goal_handle` to request the result with the `get_result_async()` method. This returns a future that completes when the result is ready, so we register a callback for it.
```python
def get_result_callback(self, future):
self.result = future.result().result
    self.get_logger().info('Sent position goal = {0}'.format(self.result))
```
In the result callback, we log the result of our position goal.
```python
def feedback_callback(self,feedback_msg):
```
The feedback callback function takes in the `FollowJointTrajectory` action's feedback message as its argument.
```python
if 'wrist_extension' in self.joints:
self.joints.remove('wrist_extension')
self.joints.append('joint_arm_l0')
```
Use a conditional statement to replace `wrist_extension` with `joint_arm_l0`. This is because `joint_arm_l0` reports the effort values that the `wrist_extension` is experiencing.
```python
current_effort = []
for joint in self.joints:
    index = self.joint_state.name.index(joint)
    current_effort.append(self.joint_state.effort[index])
```
Create an empty list to store the current effort values. Then use a for loop to parse the joint names and effort values.
```python
if not self.export_data:
print("name: " + str(self.joints))
print("effort: " + str(current_effort))
else:
self.joint_effort.append(current_effort)
```
Use a conditional statement to print effort values in the terminal or store values into a list that will be used for exporting the data in a .txt file.
```python
if self.export_data:
file_name = self.file_name
completeName = os.path.join(self.save_path, file_name)
with open(completeName, "w") as f:
writer = csv.writer(f)
writer.writerow(self.joints)
writer.writerows(self.joint_effort)
```
A conditional statement is used to export the data to a .txt file. The file's name is set to the one we created at the beginning.
```python
def plot_data(self, animate = True):
while not self.result:
time.sleep(0.1)
file_name = self.file_name
self.completeName = os.path.join(self.save_path, file_name)
self.data = pd.read_csv(self.completeName)
self.y_anim = []
self.animate = animate
```
This function initializes the values needed to plot our data. We wait until the result arrives before we start plotting, because the file could still be accumulating values and we want to plot every point. We also create an empty list, `self.y_anim`, for the animation.
```python
for joint in self.data.columns:
# Create figure, labels, and title
fig = plt.figure()
plt.title(joint + ' Effort Sensing')
plt.ylabel('Effort')
plt.xlabel('Data Points')
```
A for loop iterates over each joint column in the data, creating a figure with the joint's title and the correct x and y labels.
```python
if self.animate:
self.effort = self.data[joint]
frames = len(self.effort)-1
anim = animation.FuncAnimation(fig=fig,func=self.plot_animate, repeat=False,blit=False,frames=frames, interval =75)
plt.show()
del self.y_anim[:]
else:
self.data[joint].plot(kind='line')
# save_path = str(self.completeName + '.png')
# plt.savefig(save_path, bbox_inches='tight')
plt.show()
```
This conditional statement chooses between an animated plot and a regular line plot, depending on the `animate` flag.
```python
def plot_animate(self,i):
self.y_anim.append(self.effort.values[i])
plt.plot(self.y_anim, color='blue')
```
The `plot_animate()` function plots one additional data point from the data frame on every animation frame.
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/stored_data/2022-06-30_11:26:20-AM.png"/>
</p>
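The stored file can also be inspected offline with pandas, without re-running the robot. Here is a minimal sketch; the file name is an assumption, so substitute one of the files written to the `stored_data` folder by the node above:
```python
# Sketch: load a stored effort file and plot a single joint column.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data/'
                   'effort_sensing_tutorial_2022063011')  # assumed file name
data['joint_lift'].plot(kind='line')
plt.title('joint_lift Effort Sensing')
plt.xlabel('Data Points')
plt.ylabel('Effort')
plt.show()
```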

+ 277
- 0
ros2/example_7.md View File

@ -0,0 +1,277 @@
# Capture Image
## Example 7
In this example, we will review a Python script that captures an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/).
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image.jpeg"/>
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image_edge_detection.jpeg"/>
</p>
Begin by running the stretch driver launch file.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
To activate the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/) and publish topics to be visualized, run the following launch file in a new terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core d435i_low_resolution.launch.py
```
Within this tutorial package, there is an RViz config file with the topics for perception already in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal.
```{.bash .shell-prompt}
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/perception_example.rviz
```
## Capture Image with Python Script
In this section, we will use a Python node to capture an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/). Execute the [capture_image.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/capture_image.py) node to save a .jpeg image of the image topic `/camera/color/image_raw`. In a terminal, execute:
```{.bash .shell-prompt}
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials
python3 capture_image.py
```
An image named **camera_image.jpeg** is saved in the **stored_data** folder of this package. If you don't have this folder, you can create it yourself.
### The Code
```python
#!/usr/bin/env python3
import rclpy
import sys
import os
import cv2
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
class CaptureImage(Node):
"""
A class that converts a subscribed ROS image to a OpenCV image and saves
the captured image to a predefined directory.
"""
def __init__(self):
"""
A function that initializes a CvBridge class, subscriber, and save path.
:param self: The self reference.
"""
super().__init__('stretch_capture_image')
self.bridge = CvBridge()
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.image_callback, 10)
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
self.br = CvBridge()
def image_callback(self, msg):
"""
A callback function that converts the ROS image to a CV2 image and stores the
image.
:param self: The self reference.
:param msg: The ROS image message type.
"""
try:
image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
except CvBridgeError as e:
self.get_logger().warn('CV Bridge error: {0}'.format(e))
file_name = 'camera_image.jpeg'
completeName = os.path.join(self.save_path, file_name)
cv2.imwrite(completeName, image)
rclpy.shutdown()
sys.exit(0)
def main(args=None):
rclpy.init(args=args)
capture_image = CaptureImage()
rclpy.spin(capture_image)
if __name__ == '__main__':
main()
```
### The Code Explained
Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import sys
import os
import cv2
```
You need to import `rclpy` if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). There are functions from `sys`, `os`, and `cv2` that are required within this code. `cv2` is a library of Python functions that implements computer vision algorithms. Further information about cv2 can be found here: [OpenCV Python](https://www.geeksforgeeks.org/opencv-python-tutorial/).
```python
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
```
The `sensor_msgs.msg` module is imported so that we can subscribe to ROS `Image` messages. Import [CvBridge](http://wiki.ros.org/cv_bridge) to convert between ROS `Image` messages and OpenCV images; the `Node` class is necessary to create a node in ROS 2.
```python
def __init__(self):
"""
A function that initializes a CvBridge class, subscriber, and save path.
:param self: The self reference.
"""
super().__init__('stretch_capture_image')
self.bridge = CvBridge()
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.image_callback, 10)
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
self.br = CvBridge()
```
Initialize the node, CvBridge class, the subscriber, and the directory where the captured image will be stored.
```python
def image_callback(self, msg):
"""
A callback function that converts the ROS image to a cv2 image and stores the
image.
:param self: The self reference.
:param msg: The ROS image message type.
"""
try:
image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
except CvBridgeError as e:
    self.get_logger().warn('CV Bridge error: {0}'.format(e))
```
Try to convert the ROS Image message to a cv2 Image message using the `imgmsg_to_cv2()` function.
```python
file_name = 'camera_image.jpeg'
completeName = os.path.join(self.save_path, file_name)
cv2.imwrite(completeName, image)
```
Join the directory and file name using the `path.join()` function. Then use the `imwrite()` function to save the image.
```python
rclpy.shutdown()
sys.exit(0)
```
The first line of code initiates a clean shutdown of ROS. The second line of code exits the Python interpreter.
```python
rclpy.init(args=args)
capture_image = CaptureImage()
```
The `rclpy.init()` call initializes ROS 2 communications. The node is then created with `capture_image = CaptureImage()` and takes on the name 'stretch_capture_image'.
!!! note
The name must be a base name, i.e. it cannot contain any slashes "/".
```python
rclpy.spin(capture_image)
```
Give control to ROS. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages.
## Edge Detection
In this section, we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges from an image and convert it back as a ROS image to be visualized in RViz. In a terminal, execute:
```{.bash .shell-prompt}
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials
python3 edge_detection.py
```
The node will publish a new Image topic named `/image_edge_detection`. This can be visualized in RViz and a gif is provided below for reference.
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image_edge_detection.gif"/>
</p>
### The Code
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
class EdgeDetection(Node):
"""
A class that converts a subscribed ROS image to a OpenCV image and saves
the captured image to a predefined directory.
"""
def __init__(self):
"""
A function that initializes a CvBridge class, subscriber, and other
parameter values.
:param self: The self reference.
"""
super().__init__('stretch_edge_detection')
self.bridge = CvBridge()
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.callback, 1)
self.pub = self.create_publisher(Image, '/image_edge_detection', 1)
        self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
self.lower_thres = 100
self.upper_thres = 200
self.get_logger().info("Publishing the CV2 Image. Use RViz to visualize.")
def callback(self, msg):
"""
A callback function that converts the ROS image to a CV2 image and goes
through the Canny Edge filter in OpenCV for edge detection. Then publishes
that filtered image to be visualized in RViz.
:param self: The self reference.
:param msg: The ROS image message type.
"""
try:
image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
except CvBridgeError as e:
self.get_logger().warn('CV Bridge error: {0}'.format(e))
image = cv2.Canny(image, self.lower_thres, self.upper_thres)
image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough')
image_msg.header = msg.header
self.pub.publish(image_msg)
def main(args=None):
rclpy.init(args=args)
edge_detection = EdgeDetection()
rclpy.spin(edge_detection)
if __name__ == '__main__':
main()
```
### The Code Explained
Since the edge detection node shares most of its structure with the capture image node, we will only break down the components that are different.
Define the lower and upper bounds of the hysteresis thresholds used by the Canny Edge filter.
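These bounds are set in the node's `__init__` in the listing above:
```python
self.lower_thres = 100
self.upper_thres = 200
```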
```python
image = cv2.Canny(image, self.lower_thres, self.upper_thres)
```
Run the Canny Edge function to detect edges from the cv2 image.
```python
image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough')
```
Convert the cv2 image back to a ROS image so it can be published.
```python
image_msg.header = msg.header
self.pub.publish(image_msg)
```
Publish the ROS image with the same header as the subscribed ROS message, so that the timestamp and frame ID stay consistent for visualization in RViz.

+ 150
- 0
ros2/example_8.md View File

@ -0,0 +1,150 @@
# Example 8
This example will showcase how to save the interpreted speech from Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/) to a text file.
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
Then run the [speech_text.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/speech_text.py) node. In a new terminal, execute:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 speech_text.py
```
The ReSpeaker will be listening and will start to interpret speech and save the transcript to a text file. To shut down the node, type `Ctrl` + `c` in the terminal.
### The Code
```python
#!/usr/bin/env python3
# Import modules
import rclpy
import os
from rclpy.node import Node
# Import SpeechRecognitionCandidates from the speech_recognition_msgs package
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
class SpeechText(Node):
def __init__(self):
super().__init__('stretch_speech_text')
# Initialize subscriber
self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1)
        # Create path to save the text file to the stored data folder
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
# Create log message
self.get_logger().info("Listening to speech")
def callback(self,msg):
# Take all items in the iterable list and join them into a single string
transcript = ' '.join(map(str,msg.transcript))
# Define the file name and create a complete path name
file_name = 'speech.txt'
completeName = os.path.join(self.save_path, file_name)
        # Append the transcript at the end of the file
with open(completeName, "a+") as file_object:
file_object.write("\n")
file_object.write(transcript)
def main(args=None):
# Initialize the node and name it speech_text
rclpy.init(args=args)
# Instantiate the SpeechText class
speech_txt = SpeechText()
# Give control to ROS. This will allow the callback to be called whenever new
# messages come in. If we don't put this line in, then the node will not work,
# and ROS will not process any messages
rclpy.spin(speech_txt)
if __name__ == '__main__':
main()
```
### The Code Explained
Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import os
from rclpy.node import Node
```
You need to import rclpy if you are writing a ROS Node.
```python
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
```
Import `SpeechRecognitionCandidates` from `speech_recognition_msgs.msg` so that we can receive the interpreted speech.
```python
def __init__(self):
super().__init__('stretch_speech_text')
self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1)
```
Set up a subscriber. We're going to subscribe to the topic `speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS is going to pass it to the function "callback" automatically.
```python
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
```
Define the directory to save the text file.
```python
transcript = ' '.join(map(str,msg.transcript))
```
Take all items in the iterable list and join them into a single string named transcript.
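For instance, if the recognizer reported the candidate words separately, the join would produce a single line (illustrative values only):
```python
# Illustrative only: candidate words as they might appear in msg.transcript.
words = ['hello', 'robot']
transcript = ' '.join(map(str, words))
print(transcript)  # -> hello robot
```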
```python
file_name = 'speech.txt'
completeName = os.path.join(self.save_path, file_name)
```
Define the file name and create a complete path directory.
```python
with open(completeName, "a+") as file_object:
file_object.write("\n")
file_object.write(transcript)
```
Append the transcript to the text file.
```python
def main(args=None):
rclpy.init(args=args)
speech_txt = SpeechText()
```
The `rclpy.init()` call initializes ROS 2 communications. The node is created and takes on the name 'stretch_speech_text' when the `SpeechText()` class is instantiated.
!!! note
The name must be a base name, i.e. it cannot contain any slashes "/".
```python
rclpy.spin(speech_txt)
```
Give control to ROS with `rclpy.spin()`. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages.
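To check what has been transcribed so far, you can read the file back with a few lines of Python (the path matches `self.save_path` above):
```python
# Quick check: print everything transcribed so far.
with open('/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data/speech.txt') as f:
    print(f.read())
```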

+ 470
- 0
ros2/example_9.md View File

@ -0,0 +1,470 @@
## Example 9
This example aims to combine the [ReSpeaker Microphone Array](respeaker_mic_array.md) and [Follow Joint Trajectory](follow_joint_trajectory.md) tutorials to voice teleoperate the mobile base of the Stretch robot.
Begin by running the following command in a new terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
Then run the `respeaker.launch.py` file. In a new terminal, execute:
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
Then run the [voice_teleoperation_base.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/voice_teleoperation_base.py) node in a new terminal.
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 voice_teleoperation_base.py
```
In terminal 3, a menu of voice commands is printed. You can reference this menu layout below.
```{.bash .no-copy}
------------ VOICE TELEOP MENU ------------
VOICE COMMANDS
"forward": BASE FORWARD
"back" : BASE BACK
"left" : BASE ROTATE LEFT
"right" : BASE ROTATE RIGHT
"stretch": BASE ROTATES TOWARDS SOUND
STEP SIZE
"big" : BIG
"medium" : MEDIUM
"small" : SMALL
"quit" : QUIT AND CLOSE NODE
-------------------------------------------
```
To stop the node from sending twist messages, type `Ctrl` + `c` or say "**quit**".
### The Code
```python
#!/usr/bin/env python3
# Import modules
import math
import rclpy
import sys
from rclpy.duration import Duration
# We're going to subscribe to JointState messages, so we need to import
# the definition for them
from sensor_msgs.msg import JointState
# Import the Int32 message type from the std_msgs package
from std_msgs.msg import Int32
# Import the FollowJointTrajectory from the control_msgs.msg package to
# control the Stretch robot
from control_msgs.action import FollowJointTrajectory
# Import JointTrajectoryPoint from the trajectory_msgs package to define
# robot trajectories
from trajectory_msgs.msg import JointTrajectoryPoint
# Import hello_misc script for handling trajectory goals with an action client
import hello_helpers.hello_misc as hm
# Import SpeechRecognitionCandidates from the speech_recognition_msgs package
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
class GetVoiceCommands:
def __init__(self, node):
self.node = node
# Set step size as medium by default
self.step_size = 'medium'
self.rad_per_deg = math.pi/180.0
# Small step size parameters
self.small_deg = 5.0
self.small_rad = self.rad_per_deg * self.small_deg
self.small_translate = 0.025
# Medium step size parameters
self.medium_deg = 10.0
self.medium_rad = self.rad_per_deg * self.medium_deg
self.medium_translate = 0.05
# Big step size parameters
self.big_deg = 20.0
self.big_rad = self.rad_per_deg * self.big_deg
self.big_translate = 0.1
# Initialize the voice command
self.voice_command = None
# Initialize the sound direction
self.sound_direction = 0
# Initialize subscribers
self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1)
self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1)
def callback_direction(self, msg):
self.sound_direction = msg.data * -self.rad_per_deg
def callback_speech(self,msg):
self.voice_command = ' '.join(map(str,msg.transcript))
def get_inc(self):
if self.step_size == 'small':
inc = {'rad': self.small_rad, 'translate': self.small_translate}
if self.step_size == 'medium':
inc = {'rad': self.medium_rad, 'translate': self.medium_translate}
if self.step_size == 'big':
inc = {'rad': self.big_rad, 'translate': self.big_translate}
return inc
def print_commands(self):
"""
A function that prints the voice teleoperation menu.
:param self: The self reference.
"""
print(' ')
print('------------ VOICE TELEOP MENU ------------')
print(' ')
print(' VOICE COMMANDS ')
print(' "forward": BASE FORWARD ')
print(' "back" : BASE BACK ')
print(' "left" : BASE ROTATE LEFT ')
print(' "right" : BASE ROTATE RIGHT ')
print(' "stretch": BASE ROTATES TOWARDS SOUND ')
print(' ')
print(' STEP SIZE ')
print(' "big" : BIG ')
print(' "medium" : MEDIUM ')
print(' "small" : SMALL ')
print(' ')
print(' ')
print(' "quit" : QUIT AND CLOSE NODE ')
print(' ')
print('-------------------------------------------')
def get_command(self):
command = None
# Move base forward command
if self.voice_command == 'forward':
command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']}
# Move base back command
if self.voice_command == 'back':
command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']}
# Rotate base left command
if self.voice_command == 'left':
command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']}
# Rotate base right command
if self.voice_command == 'right':
command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']}
# Move base to sound direction command
if self.voice_command == 'stretch':
command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction}
# Set the step size of translational and rotational base motions
if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"):
self.step_size = self.voice_command
self.node.get_logger().info('Step size = {0}'.format(self.step_size))
if self.voice_command == 'quit':
# Sends a signal to ros to shutdown the ROS interfaces
self.node.get_logger().info("done")
# Exit the Python interpreter
sys.exit(0)
# Reset voice command to None
self.voice_command = None
# return the updated command
return command
class VoiceTeleopNode(hm.HelloNode):
def __init__(self):
hm.HelloNode.__init__(self)
self.rate = 10.0
self.joint_state = None
hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False)
self.speech = GetVoiceCommands(self)
def joint_states_callback(self, msg):
self.joint_state = msg
def send_command(self, command):
joint_state = self.joint_state
# Conditional statement to send joint trajectory goals
if (joint_state is not None) and (command is not None):
# Assign point as a JointTrajectoryPoint message type
point = JointTrajectoryPoint()
point.time_from_start = Duration(seconds=0).to_msg()
# Assign trajectory_goal as a FollowJointTrajectoryGoal message type
trajectory_goal = FollowJointTrajectory.Goal()
trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg()
# Extract the joint name from the command dictionary
joint_name = command['joint']
trajectory_goal.trajectory.joint_names = [joint_name]
# Extract the increment type from the command dictionary
inc = command['inc']
self.get_logger().info('inc = {0}'.format(inc))
new_value = inc
# Assign the new_value position to the trajectory goal message type
point.positions = [new_value]
trajectory_goal.trajectory.points = [point]
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg()
self.get_logger().info('joint_name = {0}, trajectory_goal = {1}'.format(joint_name, trajectory_goal))
# Make the action call and send goal of the new joint position
self.trajectory_client.send_goal(trajectory_goal)
self.get_logger().info('Done sending command.')
# Reprint the voice command menu
self.speech.print_commands()
def timer_get_command(self):
# Get voice command
command = self.speech.get_command()
# Send voice command for joint trajectory goals
self.send_command(command)
def main(self):
self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
rate = self.create_rate(self.rate)
self.speech.print_commands()
self.sleep = self.create_timer(1/self.rate, self.timer_get_command)
def main(args=None):
try:
#rclpy.init(args=args)
node = VoiceTeleopNode()
node.main()
node.new_thread.join()
except KeyboardInterrupt:
node.get_logger().info('interrupt received, so shutting down')
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
```
### The Code Explained
This code is similar to that of the [multipoint_command](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/multipoint_command.py) and [joint_state_printer](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) node. Therefore, this example will highlight sections that are different from those tutorials. Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python
import math
import rclpy
import sys
from rclpy.duration import Duration
from sensor_msgs.msg import JointState
from std_msgs.msg import Int32
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
import hello_helpers.hello_misc as hm
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
```
You need to import `rclpy` if you are writing a ROS Node. Import the `FollowJointTrajectory` from the `control_msgs.action` package to control the Stretch robot. Import `JointTrajectoryPoint` from the `trajectory_msgs` package to define robot trajectories. The `hello_helpers` package consists of a module that provides various Python scripts used across stretch_ros. In this instance, we are importing the `hello_misc` script. Import `sensor_msgs.msg` so that we can subscribe to JointState messages.
```python
class GetVoiceCommands:
```
Create a class that subscribes to the `speech-to-text` recognition messages, prints a voice command menu, and defines step size for translational and rotational mobile base motion.
```python
self.node = node
self.step_size = 'medium'
self.rad_per_deg = math.pi/180.0
```
Set the default step size as medium and create a float value, `self.rad_per_deg`, to convert degrees to radians.
```python
self.small_deg = 5.0
self.small_rad = self.rad_per_deg * self.small_deg
self.small_translate = 0.025
self.medium_deg = 10.0
self.medium_rad = self.rad_per_deg * self.medium_deg
self.medium_translate = 0.05
self.big_deg = 20.0
self.big_rad = self.rad_per_deg * self.big_deg
self.big_translate = 0.1
```
Define the three rotation and translation step sizes.
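For reference, the medium rotation step defined above works out to roughly 0.17 radians:
```python
# Worked example: the medium step of 10 degrees expressed in radians.
import math

rad_per_deg = math.pi / 180.0
medium_rad = rad_per_deg * 10.0
print(medium_rad)  # ~0.1745
```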
```python
self.voice_command = None
self.sound_direction = 0
self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1)
self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1)
```
Initialize the voice command and sound direction to values that will not result in moving the base.
Set up two subscribers. The first one subscribes to the topic `/speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS is going to pass it to the function `callback_speech` automatically. The second subscribes to `/sound_direction` message and passes it to the `callback_direction` function.
```python
def callback_direction(self, msg):
self.sound_direction = msg.data * -self.rad_per_deg
```
The `callback_direction` function converts the value from the `sound_direction` topic from degrees to radians and flips its sign, so that the resulting `rotate_mobile_base` increment turns the base toward the detected sound.
```python
if self.step_size == 'small':
inc = {'rad': self.small_rad, 'translate': self.small_translate}
if self.step_size == 'medium':
inc = {'rad': self.medium_rad, 'translate': self.medium_translate}
if self.step_size == 'big':
inc = {'rad': self.big_rad, 'translate': self.big_translate}
return inc
```
The `get_inc()` function returns the increment sizes for translational and rotational base motion in the `inc` dictionary. The increment size is contingent on the `self.step_size` string value.
```python
command = None
if self.voice_command == 'forward':
command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']}
if self.voice_command == 'back':
command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']}
if self.voice_command == 'left':
command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']}
if self.voice_command == 'right':
command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']}
if self.voice_command == 'stretch':
command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction}
```
In the `get_command()` function, the `command` is initialized as `None`, or set as a dictionary where the `joint` and `inc` values are stored. The `command` message type is dependent on the `self.voice_command` string value.
```python
if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"):
self.step_size = self.voice_command
self.node.get_logger().info('Step size = {0}'.format(self.step_size))
```
Based on the `self.voice_command` value, set the step size for the increments.
```python
if self.voice_command == 'quit':
self.node.get_logger().info("done")
sys.exit(0)
```
If the `self.voice_command` is equal to `quit`, then initiate a clean shutdown of ROS and exit the Python interpreter.
```python
class VoiceTeleopNode(hm.HelloNode):
def __init__(self):
hm.HelloNode.__init__(self)
self.rate = 10.0
self.joint_state = None
hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False)
self.speech = GetVoiceCommands(self)
```
The `VoiceTeleopNode` class inherits from the `HelloNode` class in `hm`, declares an object of the `GetVoiceCommands` class, and sends joint trajectory commands. The call to `hm.HelloNode.main()` in the constructor initializes the node.
```python
def send_command(self, command):
joint_state = self.joint_state
if (joint_state is not None) and (command is not None):
point = JointTrajectoryPoint()
point.time_from_start = Duration(seconds=0).to_msg()
```
The `send_command` function stores the joint state message and uses a conditional statement to send joint trajectory goals. Also, assign `point` as a `JointTrajectoryPoint` message type.
```python
trajectory_goal = FollowJointTrajectory.Goal()
trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg()
```
Assign `trajectory_goal` as a `FollowJointTrajectory.Goal` message type.
```python
joint_name = command['joint']
trajectory_goal.trajectory.joint_names = [joint_name]
```
Extract the joint name from the command dictionary.
```python
inc = command['inc']
self.get_logger().info('inc = {0}'.format(inc))
new_value = inc
```
Extract the increment type from the command dictionary.
```python
point.positions = [new_value]
trajectory_goal.trajectory.points = [point]
```
Assign the new value position to the trajectory goal message type.
```python
self.trajectory_client.send_goal(trajectory_goal)
self.get_logger().info('Done sending command.')
```
Make the action call and send the goal of the new joint position.
```python
self.speech.print_commands()
```
Reprint the voice command menu after the trajectory goal is sent.
```python
def main(self):
self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
rate = self.create_rate(self.rate)
self.speech.print_commands()
```
The main function creates the subscriber to the joint states topic, creates a rate object from the rate we set earlier, and prints the voice command menu.
```python
try:
#rclpy.init(args=args)
node = VoiceTeleopNode()
node.main()
node.new_thread.join()
except KeyboardInterrupt:
node.get_logger().info('interrupt received, so shutting down')
node.destroy_node()
rclpy.shutdown()
```
Declare a `VoiceTeleopNode` object. Then execute the `main()` method.

+ 2
- 2
ros2/follow_joint_trajectory.md View File

@ -87,7 +87,7 @@ Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://wiki.ros.org/Nodes) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
Every Python ROS [Node](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python script.
```python
@ -99,7 +99,7 @@ from hello_helpers.hello_misc import HelloNode
import time
```
You need to import rclpy if you are writing a ROS 2 Node. Import the FollowJointTrajectory from the [control_msgs.action](https://github.com/ros-controls/control_msgs/tree/master/control_msgs) package to control the Stretch robot. Import JointTrajectoryPoint from the [trajectory_msgs](https://github.com/ros2/common_interfaces/tree/rolling/trajectory_msgs/msg) package to define robot trajectories.
You need to import rclpy if you are writing a ROS 2 Node. Import the FollowJointTrajectory from the [control_msgs.action](https://github.com/ros-controls/control_msgs/tree/master/control_msgs) package to control the Stretch robot. Import JointTrajectoryPoint from the [trajectory_msgs](https://github.com/ros2/common_interfaces/tree/humble/trajectory_msgs/msg) package to define robot trajectories.
```python
class StowCommand(HelloNode):

+ 13
- 13
ros2/intro_to_ros2.md View File

@ -14,7 +14,7 @@ Try out the following snippets for a ROS 2 quickstart:
## Initialization and Shutdown
### rclpy.init()
All rclpy functionality can be exposed after initialization:
```{.bash .shell-prompt}
```python
import rclpy
rclpy.init()
@ -22,13 +22,13 @@ rclpy.init()
### rclpy.create_node()
To create a new ROS 2 node, one can use the create_node method with the node name as the argument:
```{.bash .shell-prompt}
```python
node = rclpy.create_node('temp')
```
### rclpy.logging.get_logger()
The rclpy library also provides a logger to print messages with different severity levels to stdout. Here’s how you can use it:
```{.bash .shell-prompt}
```python
import rclpy.logging
logger = rclpy.logging.get_logger('temp')
logger.info("Hello")
@ -38,7 +38,7 @@ logger.error("Stretch")
### rclpy.ok()
If you want to check whether rclpy has been initialized, you can run the following snippet. This is especially useful to simulate an infinite loop based on whether rclpy has been shutdown.
```{.bash .shell-prompt}
```python
import time
while rclpy.ok():
@ -51,7 +51,7 @@ Press ctrl+c to get out of the infinite loop.
### rclpy.shutdown()
Finally, to destroy a node safely and shutdown the instance of rclpy you can run:
```{.bash .shell-prompt}
```python
node.destroy_node()
rclpy.shutdown()
```
@ -59,7 +59,7 @@ rclpy.shutdown()
## Publishing and Subscribing
### create_publisher()
ROS 2 is a distributed communication system and one way to send data is through a publisher. It takes the following arguments: msg_type, msg_topic and a history depth (formerly queue_size):
```{.bash .shell-prompt}
```python
from std_msgs.msg import String
import rclpy
@ -70,7 +70,7 @@ pub = node.create_publisher(String, 'hello', 10)
### create_subscription()
To receive a message, we need to create a subscriber with a callback function that listens to the arriving messages. Let's create a subscriber and define a callback called hello_callback() that logs the message as soon as one is received:
```{.bash .shell-prompt}
```python
def hello_callback(msg): def hello_callback(msg):
print("Received message: {}".format(msg.data)) print("Received message: {}".format(msg.data))
@ -79,7 +79,7 @@ sub = node.create_subscription(String, 'hello', hello_callback, 10)
### publish() ### publish()
Now that you have defined a publisher and a subscriber, let’s send a message and see if it gets printed to the console: Now that you have defined a publisher and a subscriber, let’s send a message and see if it gets printed to the console:
```{.bash .shell-prompt}
```python
msg = String() msg = String()
msg.data = "Hello" msg.data = "Hello"
pub.publish(msg) pub.publish(msg)
@ -87,7 +87,7 @@ pub.publish(msg)
### rclpy.spin_once()
That didn’t do it! Although the message was sent, it didn't get printed to the console. Why? Because the hello_callback() method was never called to print the message. In ROS, we don’t call this method manually, but rather leave it to what’s called the executor. The executor can be invoked by calling the spin_once() method. We pass the node object and a timeout of 2 seconds as the arguments. The timeout is important because the spin_once() method is blocking and it will wait for a message to arrive indefinitely if a timeout is not defined. It returns immediately once a message is received.
```{.bash .shell-prompt}
```python
rclpy.spin_once(node, timeout_sec=2.0)
```
@ -95,7 +95,7 @@ rclpy.spin_once(node, timeout_sec=2.0)
The spin_once() method only does work equivalent to a single message callback. What if you want the executor to process callbacks continuously? This can be achieved using the spin() method. While retaining the current interpreter instance, let’s open a new terminal window with a new instance of IPython and execute the following:
Terminal 2:
```{.bash .shell-prompt}
```python
import rclpy
from std_msgs.msg import String
rclpy.init()
@ -109,7 +109,7 @@ rclpy.spin(node)
Now, from the first IPython instance, send a series of messages and see what happens:
Terminal 1:
```{.bash .shell-prompt}
```python
for i in range(10):
    msg.data = "Hello {}".format(i)
    pub.publish(msg)
@ -120,7 +120,7 @@ Voila! Finally, close both the terminals to end the session.
## Service Server and Client
### create_service()
Let’s explore another common way of using ROS 2. Imagine a case where you need to request some information from a node and you expect to receive a response. This can be achieved using the service client paradigm in ROS 2. Let’s fire up IPython again and create a quick service:
```{.bash .shell-prompt}
```python
import rclpy
from example_interfaces.srv import AddTwoInts
rclpy.init()
@ -143,7 +143,7 @@ The add_ints() method is the callback method for the service server. Once a serv
### create_client()
Now, while retaining the current IPython session, open another session of the IPython interpreter in another terminal to write the service client:
```{.bash .shell-prompt}
```python
import rclpy
from example_interfaces.srv import AddTwoInts
rclpy.init()
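The hunk above only shows the opening lines of the client. For reference, a minimal client continuing from that snippet could look roughly like this (a sketch, not necessarily the tutorial's exact code; the service name `add_two_ints` is an assumption about how the server above was registered):
```python
# Continues from the snippet above (rclpy and AddTwoInts already imported, rclpy.init() called).
node = rclpy.create_node('add_ints_client')
client = node.create_client(AddTwoInts, 'add_two_ints')  # service name assumed
client.wait_for_service(timeout_sec=5.0)

request = AddTwoInts.Request()
request.a = 2
request.b = 3

# Send the request and block until the executor has processed the response.
future = client.call_async(request)
rclpy.spin_until_future_complete(node, future)
print("Sum: {}".format(future.result().sum))
```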

+ 4
- 4
ros2/navigation_simple_commander.md View File

@ -2,7 +2,7 @@
In this tutorial, we will work with Stretch to explore the Simple Commander Python API to enable autonomous navigation programmatically. We will also demonstrate a security patrol routine for Stretch developed using this API. If you just landed here, it might be a good idea to first review the previous tutorial which covered mapping and navigation using RViz as an interface.
## The Simple Commander Python API
To develop complex behaviors with Stretch where navigation is just one aspect of the autonomy stack, we need to be able to plan and execute navigation routines as part of a bigger program. Luckily, the Nav2 stack exposes a Python API that abstracts the ROS layer and the Behavior Tree framework (more on that later!) from the user through a pre-configured library called the [robot navigator](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/robot_navigator.py). This library defines a class called BasicNavigator which wraps the planner, controller and recovery action servers and exposes methods such as `goToPose()`, `goToPoses()` and `followWaypoints()` to execute navigation behaviors.
To develop complex behaviors with Stretch where navigation is just one aspect of the autonomy stack, we need to be able to plan and execute navigation routines as part of a bigger program. Luckily, the Nav2 stack exposes a Python API that abstracts the ROS layer and the Behavior Tree framework (more on that later!) from the user through a pre-configured library called the [robot navigator](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_nav2/stretch_nav2/robot_navigator.py). This library defines a class called BasicNavigator which wraps the planner, controller and recovery action servers and exposes methods such as `goToPose()`, `goToPoses()` and `followWaypoints()` to execute navigation behaviors.
Let's first see the demo in action and then explore the code to understand how this works!
@ -16,7 +16,7 @@ stretch_robot_stow.py
```
## Setup
Let's set the patrol route up before you can execute this demo in your map. This requires reading the position of the robot at various locations in the map and entering the co-ordinates in the array called `security_route` in the [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30) file.
Let's set the patrol route up before you can execute this demo in your map. This requires reading the position of the robot at various locations in the map and entering the co-ordinates in the array called `security_route` in the [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30) file.
First, execute the following command while passing the correct map YAML. Then, press the 'Startup' button:
@ -24,7 +24,7 @@ First, execute the following command while passing the correct map YAML. Then, p
ros2 launch stretch_nav2 navigation.launch.py map:=${HELLO_ROBOT_FLEET}/maps/<map_name>.yaml
```
Since we expect the first point in the patrol route to be at the origin of the map, the first coordinates should be (0.0, 0.0). Next, to define the route, the easiest way to define the waypoints in the `security_route` array is by setting the robot at random locations in the map using the '2D Pose Estimate' button in RViz as shown below. For each location, note the x, and y coordinates in the position field of the base_footprint frame and add it to the `security_route` array in [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30).
Since we expect the first point in the patrol route to be at the origin of the map, the first coordinates should be (0.0, 0.0). Next, to define the route, the easiest way to define the waypoints in the `security_route` array is by setting the robot at random locations in the map using the '2D Pose Estimate' button in RViz as shown below. For each location, note the x, and y coordinates in the position field of the base_footprint frame and add it to the `security_route` array in [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30).
<p align="center"> <p align="center">
<img height=500 src="https://user-images.githubusercontent.com/97639181/206782270-e84b33c4-e155-468d-8a46-d926b88ba428.gif"/> <img height=500 src="https://user-images.githubusercontent.com/97639181/206782270-e84b33c4-e155-468d-8a46-d926b88ba428.gif"/>
@ -50,7 +50,7 @@ ros2 launch stretch_nav2 demo_security.launch.py map:=${HELLO_ROBOT_FLEET}/maps/
</p>
## Code Breakdown
Now, let's jump into the code to see how things work under the hood. Follow along in the [code](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py) to have a look at the entire script.
Now, let's jump into the code to see how things work under the hood. Follow along in the [code](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_nav2/stretch_nav2/simple_commander_demo.py) to have a look at the entire script.
First, we import the `BasicNavigator` class from the robot_navigator library which comes standard with the Nav2 stack. This class wraps around the planner, controller and recovery action servers.
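As a rough sketch of how this API is typically driven (simplified, and not the demo's exact code; the import path follows the stretch_nav2 package referenced above and the `TaskResult` name follows the Humble simple commander API, so treat both as assumptions):
```python
import rclpy
from geometry_msgs.msg import PoseStamped
from stretch_nav2.robot_navigator import BasicNavigator, TaskResult  # assumed import path

rclpy.init()
navigator = BasicNavigator()
# The demo also sets an initial pose before this; omitted here for brevity.
navigator.waitUntilNav2Active()

# Build a goal pose in the map frame and send it.
goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 1.0
goal.pose.position.y = 0.5
goal.pose.orientation.w = 1.0
navigator.goToPose(goal)

while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # e.g. distance remaining, elapsed time

if navigator.getResult() == TaskResult.SUCCEEDED:
    print('Goal reached!')
```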

+ 2
- 2
ros2/obstacle_avoider.md View File

@ -12,7 +12,7 @@ LaserScanSpeckleFilter - We use this filter to remove phantom detections in the
LaserScanBoxFilter - Stretch is prone to returning false detections right over the mobile base. While navigating, since it’s safe to assume that Stretch is not standing right above an obstacle, we filter out any detections that are in a box shape over the mobile base.
Beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the [laser_filter_params.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/laser_filter_params.yaml) file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
Beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the [laser_filter_params.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/laser_filter_params.yaml) file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
![laser_filtering](https://user-images.githubusercontent.com/97639181/196327251-c39f3cbb-c898-48c8-ae28-2683564061d9.gif)
@ -42,7 +42,7 @@ ros2 launch stretch_core rplidar_keepout.launch.py
![avoidance](https://user-images.githubusercontent.com/97639181/196327294-1b2dde5e-2fdc-4a67-a188-ae6b1f5e6a06.gif)
## Code Breakdown:
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/avoider.py) to have a look at the entire script.
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/avoider.py) to have a look at the entire script.
The turning distance is defined by the distance attribute and the keepout distance is defined by the keepout attribute.
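To give a feel for the keepout logic (this is a minimal sketch of the idea, not the actual `avoider.py` implementation; it assumes the driver is in navigation mode and listening on `/stretch/cmd_vel`):
```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class KeepoutSketch(Node):
    def __init__(self):
        super().__init__('keepout_sketch')
        self.keepout = 0.4  # illustrative keepout distance in meters
        self.pub = self.create_publisher(Twist, '/stretch/cmd_vel', 10)
        self.sub = self.create_subscription(LaserScan, '/scan_filtered', self.on_scan, 10)

    def on_scan(self, msg):
        # Keep only valid returns, then check the closest hit against the keepout distance.
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        twist = Twist()
        if valid and min(valid) > self.keepout:
            twist.linear.x = 0.1  # creep forward only while the keepout zone is clear
        self.pub.publish(twist)

rclpy.init()
rclpy.spin(KeepoutSketch())
```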

+ 61
- 0
ros2/respeaker_mic_array.md View File

@ -0,0 +1,61 @@
# ReSpeaker Microphone Array
For this tutorial, we will get a high-level view of how to use Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/).
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
## Stretch Body Package
In this section we will use command line tools in the [Stretch_Body](https://github.com/hello-robot/stretch_body) package, a low-level Python API for Stretch's hardware, to directly interact with the ReSpeaker.
Begin by typing the following command in a new terminal.
```{.bash .shell-prompt}
stretch_respeaker_test.py
```
The following will be displayed in your terminal:
```{.bash .no-copy}
For use with S T R E T C H (TM) RESEARCH EDITION from Hello Robot Inc.
* waiting for audio...
* recording 3 seconds
* done
* playing audio
* done
```
The ReSpeaker Mic Array will wait until it hears audio loud enough to trigger its recording feature. Stretch will record audio for 3 seconds and then replay it through its speakers. This command-line tool is a good way to check that the hardware is working correctly.
To stop the Python script, type `Ctrl` + `c` in the terminal.
## ReSpeaker_ROS Package
A [ROS package for the ReSpeaker](https://index.ros.org/p/respeaker_ros/#melodic) is utilized for this section.
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
This will bring up the necessary nodes that will allow the ReSpeaker to implement a voice and sound interface with the robot.
## ReSpeaker Topics
Below are executables you can run to see the ReSpeaker results.
```{.bash .shell-prompt}
ros2 topic echo /sound_direction
ros2 topic echo /sound_localization
ros2 topic echo /is_speeching
ros2 topic echo /audio
ros2 topic echo /speech_audio
ros2 topic echo /speech_to_text
```
There's also another topic called `/status_led`. With this topic you can change the color of the ReSpeaker's LEDs by publishing the desired color from the terminal using `ros2 topic pub`. We will explore these topics in the next tutorial.
You can also set various parameters at runtime through `rqt_reconfigure` by running the following command in a new terminal.
```{.bash .shell-prompt}
ros2 run rqt_reconfigure rqt_reconfigure
```

+ 168
- 0
ros2/respeaker_topics.md View File

@ -0,0 +1,168 @@
# ReSpeaker Microphone Array Topics
In this tutorial we will look at the topics in more detail and get an idea of what the ReSpeaker can do. If you just landed here, it might be a good idea to first review the previous tutorial, which covered the basics of the ReSpeaker and the package being used.
## ReSpeaker Topics
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros respeaker.launch.py
```
This will bring up the necessary nodes that will allow the ReSpeaker to implement a voice and sound interface with the robot. To see the topics that are available, you can run `ros2 topic list -t` and look for the ones we are interested in.
Don't worry, these are the commands you can run to see the ReSpeaker results:
```{.bash .shell-prompt}
ros2 topic echo /sound_direction # Result of Direction (in Radians) of Audio
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values)
ros2 topic echo /is_speeching # Result of Voice Activity Detector
ros2 topic echo /audio # Raw audio data
ros2 topic echo /speech_audio # Raw audio data when there is speech
ros2 topic echo /speech_to_text # Voice recognition
ros2 topic pub /status_led ... # Modify LED color
```
Let's go through them one by one and see what to expect from each topic. The first is the `sound_direction` topic; in the terminal, execute the command you learned earlier:
```{.bash .shell-prompt}
ros2 topic echo /sound_direction # Result of Direction (in Radians) of Audio
```
This will give you the direction of arrival of the sound detected by the ReSpeaker:
```{.bash .no-copy}
data: 21
---
data: 138
---
data: -114
---
data: -65
---
```
The Direction of Arrival (DOA) for the ReSpeaker ranges from -180 to 180. To understand how this maps onto Stretch, take a look at this DOA diagram:
<p align="center">
<img height=500 src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/humble/images/respeaker_doa_up.png"/>
</p>
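If you want to consume this value from a node instead of echoing it, a minimal subscriber could look like this (assuming the topic carries a `std_msgs/Int32`, which is what the echoed output above suggests):
```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32

class DoaListener(Node):
    def __init__(self):
        super().__init__('doa_listener')
        # Message type assumed from the single integer values echoed above.
        self.sub = self.create_subscription(Int32, '/sound_direction', self.on_direction, 10)

    def on_direction(self, msg):
        self.get_logger().info('Sound direction: {}'.format(msg.data))

rclpy.init()
rclpy.spin(DoaListener())
```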
The next topic is `sound_localization`. This is similar to the `sound_direction` topic, but the result is expressed as a pose (quaternion values). Try it out by executing the command:
```{.bash .shell-prompt}
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values)
```
You will see something like this in your terminal:
```{.bash .no-copy}
---
header:
stamp:
sec: 1695325677
nanosec: 882383094
frame_id: respeaker_base
pose:
position:
x: -0.0
y: 0.0
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.43051109680829525
w: 0.9025852843498605
---
```
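Since only the z and w components are non-zero, this is a pure rotation about the vertical axis. As a quick worked example, you can recover the heading angle (yaw) from the echoed quaternion like this:
```python
import math

# Quaternion values taken from the /sound_localization echo above.
z = 0.43051109680829525
w = 0.9025852843498605

# For a rotation purely about the z-axis, yaw = 2 * atan2(z, w).
yaw = 2.0 * math.atan2(z, w)
print(math.degrees(yaw))  # roughly 51 degrees
```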
The next one on the list is the `is_speeching` topic, which gives the result of the voice activity detector. Let's try it out:
```{.bash .shell-prompt}
ros2 topic echo /is_speeching # Result of Voice Activity Detector
```
The result will be true or false in the data field, but keep in mind that non-speech sounds can also register as true, so be careful with this.
```{.bash .no-copy}
data: false
---
data: true
---
data: false
---
```
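To gate behavior on voice activity from your own code, a small subscriber does the trick (assuming the topic publishes a `std_msgs/Bool`, matching the true/false values echoed above):
```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool

class VadListener(Node):
    def __init__(self):
        super().__init__('vad_listener')
        self.sub = self.create_subscription(Bool, '/is_speeching', self.on_vad, 10)

    def on_vad(self, msg):
        if msg.data:
            self.get_logger().info('Voice activity detected')

rclpy.init()
rclpy.spin(VadListener())
```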
The `audio` topic outputs the raw audio data. If you want to see what this looks like, execute the command:
```{.bash .shell-prompt}
ros2 topic echo /audio # Raw audio data
```
Expect a lot of data from this topic; you will see output like this:
```{.bash .no-copy}
---
data:
- 229
- 0
- 135
- 0
- 225
- 0
- 149
- 0
- 94
- 0
- 15
```
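The numbers above are raw bytes. Assuming the ReSpeaker streams 16-bit little-endian PCM (which is what the alternating low/high byte pattern above suggests), you can regroup them into audio samples like this:
```python
import struct

# First few bytes from the /audio echo above.
data = bytes([229, 0, 135, 0, 225, 0, 149, 0, 94, 0])

# Interpret each consecutive pair of bytes as one signed 16-bit sample.
samples = struct.unpack('<{}h'.format(len(data) // 2), data)
print(samples)  # (229, 135, 225, 149, 94)
```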
The `speech_audio` topic gives the same kind of result as the `audio` topic, but only the raw data captured while speech is detected. Execute the next command and speak near the microphone array:
```{.bash .shell-prompt}
ros2 topic echo /speech_audio # Raw audio data when there is speech
```
If you guessed that this is almost the same as the `audio` topic, except that it only outputs data while you are talking, you guessed right; the result looks the same as before.
```{.bash .no-copy}
---
data:
- 17
- 254
- 70
- 254
```
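A handy use of `speech_audio` is to dump each message to a WAV file that you can play back later. This sketch assumes the stream is 16-bit mono PCM at 16 kHz and that the topic type is `audio_common_msgs/AudioData`; adjust those assumptions if your setup differs:
```python
import wave
import rclpy
from rclpy.node import Node
from audio_common_msgs.msg import AudioData  # message type assumed

class SpeechRecorder(Node):
    def __init__(self):
        super().__init__('speech_recorder')
        self.count = 0
        self.sub = self.create_subscription(AudioData, '/speech_audio', self.on_speech, 10)

    def on_speech(self, msg):
        # Write one WAV file per message received on /speech_audio.
        path = 'speech_{}.wav'.format(self.count)
        with wave.open(path, 'wb') as wav:
            wav.setnchannels(1)      # assumed mono
            wav.setsampwidth(2)      # assumed 16-bit samples
            wav.setframerate(16000)  # assumed 16 kHz
            wav.writeframes(bytes(msg.data))
        self.get_logger().info('Saved {}'.format(path))
        self.count += 1

rclpy.init()
rclpy.spin(SpeechRecorder())
```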
Moving on to the `speech_to_text` topic: with this one you can say a short sentence and it will output what you said. In the terminal, execute the next command and speak near the microphone array again:
```{.bash .shell-prompt}
ros2 topic echo /speech_to_text # Voice recognition
```
In this instance, "hello robot" was said. The following will be displayed in your terminal:
```{.bash .no-copy}
transcript:
- hello robot
confidence:
- ######
---
```
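To react to the recognized text from your own node, subscribe to the topic. The message type is assumed here to be `speech_recognition_msgs/SpeechRecognitionCandidates`, which matches the `transcript`/`confidence` fields echoed above:
```python
import rclpy
from rclpy.node import Node
from speech_recognition_msgs.msg import SpeechRecognitionCandidates  # message type assumed

class SpeechListener(Node):
    def __init__(self):
        super().__init__('speech_listener')
        self.sub = self.create_subscription(
            SpeechRecognitionCandidates, '/speech_to_text', self.on_speech, 10)

    def on_speech(self, msg):
        # transcript is a list of candidate strings; take the most likely one.
        if msg.transcript:
            self.get_logger().info('Heard: {}'.format(msg.transcript[0]))

rclpy.init()
rclpy.spin(SpeechListener())
```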
Finally, with the `status_led` topic you can set up custom LED patterns and effects. There are three ways to do this. The first one is using `rqt_publisher`; in the terminal, input:
```{.bash .shell-prompt}
ros2 run rqt_publisher rqt_publisher
```
This opens the rqt_publisher window. There you need to add the topic manually: search for the `/status_led` topic, then click the plus (add new publisher) button and the topic will be added. You can then set the RGBA values between 0 and 1 and that's it. Try it with the next example:
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/humble/images/status_led_rqt.png"/>
</p>
You will see a purple light coming from the ReSpeaker; you can change the rate and color if you want.
The next way is to do it directly from the terminal. Let's try again with the same values we used before; input this command in the terminal:
```bash
ros2 topic pub /status_led std_msgs/msg/ColorRGBA "r: 1.0
g: 0.0
b: 1.0
a: 1.0"
```
You can see that we get the same result as earlier, good job!
The final way is with a Python script. Here you can modify the lights just as we did before, but now you can also create your own color patterns. Try it out for yourself; input this in the terminal:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 led_color_change.py
```
With this we can change the colors as well, but the difference is that we are able to create our own patterns. The [ReSpeaker Documentation](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/#control-the-leds) has more options to customize and control the LEDs.
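For reference, a pattern script along these lines (a minimal sketch, not the actual `led_color_change.py`) just publishes `std_msgs/ColorRGBA` messages to `/status_led` on a timer, cycling through whatever colors you like:
```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import ColorRGBA

class LedPattern(Node):
    def __init__(self):
        super().__init__('led_pattern')
        self.pub = self.create_publisher(ColorRGBA, '/status_led', 10)
        # A simple red -> green -> blue cycle; add your own colors here.
        self.colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
        self.index = 0
        self.timer = self.create_timer(1.0, self.step)

    def step(self):
        r, g, b = self.colors[self.index]
        self.pub.publish(ColorRGBA(r=r, g=g, b=b, a=1.0))
        self.index = (self.index + 1) % len(self.colors)

rclpy.init()
rclpy.spin(LedPattern())
```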

+ 6
- 34
ros2/teleoperating_stretch.md View File

@ -3,23 +3,21 @@
!!! note
    Teleoperation support for Stretch in ROS 2 is under active development. Please reach out to us if you want to teleoperate Stretch in ROS 2.
Refer to the instructions below if you want to test this functionality in ROS 1.
### Xbox Controller Teleoperating
If you have not already had a look at the [Xbox Controller Teleoperation](https://docs.hello-robot.com/0.2/stretch-tutorials/getting_started/quick_start_guide_re2/#hello-world-demo) section in the Quick Start guide, now might be a good time to try it.
### Keyboard Teleoperating: Full Body
For full-body teleoperation with the keyboard, you first need to run the `stretch_driver.launch` in a terminal.
For full-body teleoperation with the keyboard, you first need to run the `stretch_driver.launch.py` in a terminal.
```{.bash .shell-prompt}
roslaunch stretch_core stretch_driver.launch
ros2 launch stretch_core stretch_driver.launch.py
```
Then in a new terminal, type the following
```{.bash .shell-prompt}
rosrun stretch_core keyboard_teleop
ros2 run stretch_core keyboard_teleop
```
Below are the keyboard commands that allow a user to control all of Stretch's joints.
@ -66,14 +64,13 @@ To stop the node from sending twist messages, type **Ctrl** + **c**.
Begin by running the following command in your terminal:
```{.bash .shell-prompt}
roslaunch stretch_core stretch_driver.launch
ros2 launch stretch_core stretch_driver.launch.py mode:=navigation
```
To teleoperate a Stretch's mobile base with the keyboard, you first need to switch the mode to *nagivation* for the robot to receive *Twist* messages. This is done using a rosservice call in a new terminal. In the same terminal run the teleop_twist_keyboard node with the argument remapping the *cmd_vel* topic name to *stretch/cmd_vel*.
To teleoperate Stretch's mobile base with the keyboard, you first need to switch the mode to *navigation* for the robot to receive *Twist* messages. Unlike ROS 1, where this required a rosservice call, it is handled by the driver launch command you just ran. Now, in another terminal, run the teleop_twist_keyboard node with the argument remapping the *cmd_vel* topic name to *stretch/cmd_vel*.
```{.bash .shell-prompt}
rosservice call /switch_to_navigation_mode
rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=stretch/cmd_vel
ros2 run teleop_twist_keyboard teleop_twist_keyboard cmd_vel:=stretch/cmd_vel
```
Below are the keyboard commands that allow a user to move Stretch's base.
@ -112,28 +109,3 @@ To stop the node from sending twist messages, type **Ctrl** + **c**.
### Create a node for Mobile Base Teleoperating
To move Stretch's mobile base using a python script, please look at [example 1](example_1.md) for reference.
<!-- ## Teleoperating in Gazebo
### Keyboard Teleoperating
For keyboard teleoperation, first [startup Stretch in simulation](gazebo_basics.md). Then run the following command in a new terminal.
```bash
roslaunch stretch_gazebo gazebo.launch
```
In a new terminal, type the following
```
roslaunch stretch_gazebo teleop_keyboard.launch
```
The same keyboard commands will be presented to a user to move the robot.
### Xbox Controller Teleoperating
An alternative for robot base teleoperation is to use an Xbox controller. Stop the keyboard teleoperation node by typing **Ctrl** + **c** in the terminal where the command was executed. Then connect the Xbox controller device to your local machine and run the following command.
```
roslaunch stretch_gazebo teleop_joy.launch
```
Note that the teleop_twist_joy package has a deadman switch by default which disables the drive commands to be published unless pressed. For a Logitech F310 joystick, this button is A. -->
