## ArUco Marker Detector | |||||
For this tutorial, we will go over how to detect Stretch's ArUco markers and review the files that hold the information for the tags. | |||||
### Visualize ArUco Markers in RViz | |||||
Begin by running the stretch driver launch file. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
To activate the RealSense camera and publish topics to be visualized, run the following launch file in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core d435i_high_resolution.launch.py | |||||
``` | |||||
Next, in a new terminal, run the stretch ArUco launch file which will bring up the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_aruco.launch.py | |||||
``` | |||||
Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/humble/rviz/aruco_detector_example.rviz) with the topics for the transform frames in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/aruco_detector_example.rviz | |||||
``` | |||||
You are going to need to teleoperate Stretch's head to detect the ArUco marker tags. Run the following command in a new terminal and control the head to point the camera toward the markers. | |||||
```{.bash .shell-prompt} | |||||
ros2 run stretch_core keyboard_teleop | |||||
``` | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/aruco_detector.gif"/> | |||||
</p> | |||||
### The ArUco Marker Dictionary | |||||
When defining the ArUco markers on Stretch, Hello Robot utilizes a YAML file, [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml), that holds the information about the markers.
If the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node doesn’t find an entry in [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) for a particular ArUco marker ID number, it uses the default entry. For example, most robots have shipped with the following default entry:
```yaml | |||||
'default': | |||||
'length_mm': 24 | |||||
'use_rgb_only': False | |||||
'name': 'unknown' | |||||
'link': None | |||||
``` | |||||
and the following entry for the ArUco marker on the top of the wrist:
```yaml | |||||
'133': | |||||
'length_mm': 23.5 | |||||
'use_rgb_only': False | |||||
'name': 'wrist_top' | |||||
'link': 'link_aruco_top_wrist' | |||||
``` | |||||
**Dictionary Breakdown** | |||||
```yaml | |||||
'133': | |||||
``` | |||||
The dictionary key for each entry is the ArUco marker’s ID number or `default`. For example, the entry shown above for the ArUco marker on the top of the wrist assumes that the marker’s ID number is `133`. | |||||
```yaml | |||||
'length_mm': 23.5 | |||||
``` | |||||
The `length_mm` value used by [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) is important for estimating the pose of an ArUco marker. | |||||
!!! note | |||||
    If the actual width and height of the marker do not match this value, then pose estimation will be poor. Thus, carefully measure custom ArUco markers.
```yaml | |||||
'use_rgb_only': False | |||||
``` | |||||
If `use_rgb_only` is `True`, [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) will ignore depth images from the [Intel RealSense D435i depth camera](https://www.intelrealsense.com/depth-camera-d435i/) when estimating the pose of the marker and will instead only use RGB images from the D435i. | |||||
```yaml | |||||
'name': 'wrist_top' | |||||
``` | |||||
`name` is used for the text string of the ArUco marker’s [ROS Marker](http://docs.ros.org/en/melodic/api/visualization_msgs/html/msg/Marker.html) in the [ROS MarkerArray](http://docs.ros.org/en/melodic/api/visualization_msgs/html/msg/MarkerArray.html) Message published by the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) ROS node. | |||||
```yaml | |||||
'link': 'link_aruco_top_wrist' | |||||
``` | |||||
`link` is currently used by [stretch_calibration](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_calibration/stretch_calibration/collect_head_calibration_data.py). It is the name of the link associated with a body-mounted ArUco marker in the [robot’s URDF](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_description/urdf/stretch_aruco.xacro). | |||||
It’s good practice to add an entry to [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) for each ArUco marker you use. | |||||
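For example, a hypothetical entry for a user-created marker with ID `10` could look like this (all values here are placeholders; measure and adapt them for your own marker):
```yaml
'10':
  'length_mm': 50.0
  'use_rgb_only': False
  'name': 'my_custom_marker'
  'link': None
```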
### Create a New ArUco Marker | |||||
At Hello Robot, we’ve used the following guide when generating new ArUco markers. | |||||
We generate ArUco markers using a 6x6-bit grid (36 bits) with 250 unique codes. This corresponds with [DICT_6X6_250 defined in OpenCV](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html). We generate markers using this [online ArUco marker generator](https://chev.me/arucogen/) by setting the Dictionary entry to 6x6, then setting the Marker ID and Marker size, mm as appropriate for the specific application. We strongly recommend measuring the actual marker by hand before adding an entry for it to [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml).
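As a quick sketch of how a marker image can also be generated offline (assuming `opencv-contrib-python` is installed; the ID and pixel size below are illustrative):
```python
import cv2
import cv2.aruco as aruco

# Use the same 6x6, 250-code dictionary that Stretch's detector expects
dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)
marker_id = 10       # choose an ID from the user-reserved range (see below)
side_pixels = 400    # rendered resolution; print at the physical size you need

# OpenCV >= 4.7 renamed drawMarker to generateImageMarker
draw = getattr(aruco, 'generateImageMarker', None) or aruco.drawMarker
cv2.imwrite('aruco_{}.png'.format(marker_id), draw(dictionary, marker_id, side_pixels))
```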
We select marker ID numbers using the following ranges. | |||||
* 0 - 99: reserved for users | |||||
* 100 - 249: reserved for official use by Hello Robot Inc. | |||||
* 100 - 199: reserved for robots with distinct sets of body-mounted markers | |||||
* Allows different robots near each other to use distinct sets of body-mounted markers to avoid confusion. This could be valuable for various uses of body-mounted markers, including calibration, visual servoing, visual motion capture, and multi-robot tasks. | |||||
* 5 markers per robot = 2 on the mobile base + 2 on the wrist + 1 on the shoulder | |||||
* 20 distinct sets = 100 available ID numbers / 5 ID numbers per robot | |||||
* 200 - 249: reserved for official accessories | |||||
* 245 for the prototype docking station | |||||
* 246-249 for large floor markers | |||||
When coming up with this guide, we expected the following: | |||||
* Body-mounted accessories with the same ID numbers mounted to different robots could be disambiguated using the expected range of 3D locations of the ArUco markers on the calibrated body. | |||||
* Accessories in the environment with the same ID numbers could be disambiguated using a map or nearby observable features of the environment. |
@ -0,0 +1,458 @@ | |||||
# Example 12 | |||||
For this example, we will send follow joint trajectory commands for the head camera to search and locate an ArUco tag. In this instance, a Stretch robot will try to locate the docking station's ArUco tag. | |||||
## Modifying Stretch Marker Dictionary YAML File | |||||
When defining the ArUco markers on Stretch, Hello Robot utilizes a YAML file, [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml), that holds the information about the markers. A further breakdown of the YAML file can be found in our [ArUco Marker Detection](aruco_marker_detection.md) tutorial.
Below is what needs to be included in the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/config/stretch_marker_dict.yaml) file so the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node can find the docking station's ArUco tag. | |||||
```yaml | |||||
'245': | |||||
'length_mm': 88.0 | |||||
'use_rgb_only': False | |||||
'name': 'docking_station' | |||||
'link': None | |||||
``` | |||||
## Getting Started | |||||
Begin by running the stretch driver launch file. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
To activate the RealSense camera and publish topics to be visualized, run the following launch file in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core d435i_high_resolution.launch.py | |||||
``` | |||||
Next, run the stretch ArUco launch file which will bring up the [detect_aruco_markers](https://github.com/hello-robot/stretch_ros2/blob/humble/stretch_core/stretch_core/detect_aruco_markers.py) node. In a new terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_aruco.launch.py | |||||
``` | |||||
Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/humble/rviz/aruco_detector_example.rviz) with the topics for the transform frames in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/aruco_detector_example.rviz | |||||
``` | |||||
Then run the [aruco_tag_locator.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/aruco_tag_locator.py) node. In a new terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/ | |||||
python3 aruco_tag_locator.py | |||||
``` | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/aruco_locator.gif"/> | |||||
</p> | |||||
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
# Import modules | |||||
import rclpy | |||||
import time | |||||
import tf2_ros | |||||
from tf2_ros import TransformException | |||||
from rclpy.time import Time | |||||
from math import pi | |||||
# Import hello_misc script for handling trajectory goals with an action client | |||||
import hello_helpers.hello_misc as hm | |||||
# We're going to subscribe to a JointState message type, so we need to import | |||||
# the definition for it | |||||
from sensor_msgs.msg import JointState | |||||
# Import the FollowJointTrajectory from the control_msgs.action package to | |||||
# control the Stretch robot | |||||
from control_msgs.action import FollowJointTrajectory | |||||
# Import JointTrajectoryPoint from the trajectory_msgs package to define | |||||
# robot trajectories | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
# Import TransformStamped from the geometry_msgs package for the publisher | |||||
from geometry_msgs.msg import TransformStamped | |||||
class LocateArUcoTag(hm.HelloNode): | |||||
""" | |||||
A class that actuates the RealSense camera to find the docking station's | |||||
ArUco tag and returns a Transform between the `base_link` and the requested tag. | |||||
""" | |||||
def __init__(self): | |||||
""" | |||||
A function that initializes the subscriber and other needed variables. | |||||
:param self: The self reference. | |||||
""" | |||||
        # Initialize the inherited hm.HelloNode class
hm.HelloNode.__init__(self) | |||||
hm.HelloNode.main(self, 'aruco_tag_locator', 'aruco_tag_locator', wait_for_first_pointcloud=False) | |||||
# Initialize subscriber | |||||
self.joint_states_sub = self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1) | |||||
# Initialize publisher | |||||
self.transform_pub = self.create_publisher(TransformStamped, 'ArUco_transform', 10) | |||||
# Initialize the variable that will store the joint state positions | |||||
self.joint_state = None | |||||
# Provide the min and max joint positions for the head pan. These values | |||||
# are needed for sweeping the head to search for the ArUco tag | |||||
self.min_pan_position = -3.8 | |||||
self.max_pan_position = 1.50 | |||||
# Define the number of steps for the sweep, then create the step size for | |||||
# the head pan joint | |||||
self.pan_num_steps = 10 | |||||
self.pan_step_size = abs(self.min_pan_position - self.max_pan_position)/self.pan_num_steps | |||||
# Define the min tilt position, number of steps, and step size | |||||
self.min_tilt_position = -0.75 | |||||
self.tilt_num_steps = 3 | |||||
self.tilt_step_size = pi/16 | |||||
# Define the head actuation rotational velocity | |||||
self.rot_vel = 0.5 # radians per sec | |||||
def joint_states_callback(self, msg): | |||||
""" | |||||
A callback function that stores Stretch's joint states. | |||||
:param self: The self reference. | |||||
:param msg: The JointState message type. | |||||
""" | |||||
self.joint_state = msg | |||||
def send_command(self, command): | |||||
''' | |||||
        Handles single joint control commands by constructing a FollowJointTrajectory.Goal
message and sending it to the trajectory_client created in hello_misc. | |||||
:param self: The self reference. | |||||
:param command: A dictionary message type. | |||||
''' | |||||
if (self.joint_state is not None) and (command is not None): | |||||
# Extract the string value from the `joint` key | |||||
joint_name = command['joint'] | |||||
# Set trajectory_goal as a FollowJointTrajectory.Goal and define | |||||
# the joint name | |||||
trajectory_goal = FollowJointTrajectory.Goal() | |||||
trajectory_goal.trajectory.joint_names = [joint_name] | |||||
# Create a JointTrajectoryPoint message type | |||||
point = JointTrajectoryPoint() | |||||
# Check to see if `delta` is a key in the command dictionary | |||||
if 'delta' in command: | |||||
# Get the current position of the joint and add the delta as a | |||||
# new position value | |||||
joint_index = self.joint_state.name.index(joint_name) | |||||
joint_value = self.joint_state.position[joint_index] | |||||
delta = command['delta'] | |||||
new_value = joint_value + delta | |||||
point.positions = [new_value] | |||||
# Check to see if `position` is a key in the command dictionary | |||||
elif 'position' in command: | |||||
# extract the head position value from the `position` key | |||||
point.positions = [command['position']] | |||||
# Set the rotational velocity | |||||
point.velocities = [self.rot_vel] | |||||
# Assign goal position with updated point variable | |||||
trajectory_goal.trajectory.points = [point] | |||||
# Specify the coordinate frame that we want (base_link) and set the time to be now. | |||||
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg() | |||||
trajectory_goal.trajectory.header.frame_id = 'base_link' | |||||
# Make the action call and send the goal. The last line of code waits | |||||
# for the result | |||||
self.trajectory_client.send_goal(trajectory_goal) | |||||
def find_tag(self, tag_name='docking_station'): | |||||
""" | |||||
A function that actuates the camera to search for a defined ArUco tag | |||||
marker. Then the function returns the pose. | |||||
:param self: The self reference. | |||||
:param tag_name: A string value of the ArUco marker name. | |||||
:returns transform: The docking station's TransformStamped message. | |||||
""" | |||||
# Create dictionaries to get the head in its initial position | |||||
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position} | |||||
self.send_command(pan_command) | |||||
tilt_command = {'joint': 'joint_head_tilt', 'position': self.min_tilt_position} | |||||
self.send_command(tilt_command) | |||||
# Nested for loop to sweep the joint_head_pan and joint_head_tilt in increments | |||||
for i in range(self.tilt_num_steps): | |||||
for j in range(self.pan_num_steps): | |||||
# Update the joint_head_pan position by the pan_step_size | |||||
pan_command = {'joint': 'joint_head_pan', 'delta': self.pan_step_size} | |||||
self.send_command(pan_command) | |||||
# Give time for system to do a Transform lookup before next step | |||||
time.sleep(0.2) | |||||
# Use a try-except block | |||||
try: | |||||
now = Time() | |||||
# Look up transform between the base_link and requested ArUco tag | |||||
transform = self.tf_buffer.lookup_transform('base_link', | |||||
tag_name, | |||||
now) | |||||
                    self.get_logger().info(f'Found Requested Tag: \n{transform}')
# Publish the transform | |||||
self.transform_pub.publish(transform) | |||||
# Return the transform | |||||
return transform | |||||
except TransformException as ex: | |||||
continue | |||||
# Begin sweep with new tilt angle | |||||
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position} | |||||
self.send_command(pan_command) | |||||
tilt_command = {'joint': 'joint_head_tilt', 'delta': self.tilt_step_size} | |||||
self.send_command(tilt_command) | |||||
time.sleep(0.25) | |||||
# Notify that the requested tag was not found | |||||
        self.get_logger().info(f"The requested tag '{tag_name}' was not found")
def main(self): | |||||
""" | |||||
        Function that initiates the find_tag() function.
:param self: The self reference. | |||||
""" | |||||
        # Create a StaticTransformBroadcaster Node. Also, start a tf buffer that
        # will store the tf information for a few seconds. Then set up a tf listener, which
# will subscribe to all of the relevant tf topics, and keep track of the information | |||||
self.static_broadcaster = tf2_ros.StaticTransformBroadcaster(self) | |||||
self.tf_buffer = tf2_ros.Buffer() | |||||
self.listener = tf2_ros.TransformListener(self.tf_buffer, self) | |||||
# Give the listener some time to accumulate transforms | |||||
time.sleep(1.0) | |||||
# Notify Stretch is searching for the ArUco tag with `get_logger().info()` | |||||
self.get_logger().info('Searching for docking ArUco tag') | |||||
# Search for the ArUco marker for the docking station | |||||
pose = self.find_tag("docking_station") | |||||
def main(): | |||||
try: | |||||
# Instantiate the `LocateArUcoTag()` object | |||||
node = LocateArUcoTag() | |||||
# Run the `main()` method | |||||
node.main() | |||||
node.new_thread.join() | |||||
    except KeyboardInterrupt:
node.get_logger().info('Interrupt received, so shutting down') | |||||
node.destroy_node() | |||||
rclpy.shutdown() | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script. | |||||
```python | |||||
import rclpy | |||||
import time | |||||
import tf2_ros | |||||
from tf2_ros import TransformException | |||||
from rclpy.time import Time | |||||
from math import pi | |||||
import hello_helpers.hello_misc as hm | |||||
from sensor_msgs.msg import JointState | |||||
from control_msgs.action import FollowJointTrajectory | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
from geometry_msgs.msg import TransformStamped | |||||
``` | |||||
You need to import `rclpy` if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). Import the other Python modules needed for this node. Import `FollowJointTrajectory` from the [control_msgs.action](http://wiki.ros.org/control_msgs) package to control the Stretch robot. Import `JointTrajectoryPoint` from the [trajectory_msgs](https://github.com/ros2/common_interfaces/tree/humble/trajectory_msgs) package to define robot trajectories. The [hello_helpers](https://github.com/hello-robot/stretch_ros2/tree/humble/hello_helpers) package consists of a module that provides various Python scripts used across [stretch_ros](https://github.com/hello-robot/stretch_ros2). In this instance, we are importing the `hello_misc` script.
```python | |||||
def __init__(self): | |||||
    # Initialize the inherited hm.HelloNode class
hm.HelloNode.__init__(self) | |||||
hm.HelloNode.main(self, 'aruco_tag_locator', 'aruco_tag_locator', wait_for_first_pointcloud=False) | |||||
# Initialize subscriber | |||||
self.joint_states_sub = self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1) | |||||
# Initialize publisher | |||||
self.transform_pub = self.create_publisher(TransformStamped, 'ArUco_transform', 10) | |||||
# Initialize the variable that will store the joint state positions | |||||
self.joint_state = None | |||||
``` | |||||
The `LocateArUcoTag` class inherits from the `HelloNode` class in `hm`, and its parent class is initialized here.
Set up a subscriber with `self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)`. We're going to subscribe to the topic `stretch/joint_states`, looking for `JointState` messages. When a message comes in, ROS is going to pass it to the function `joint_states_callback()` automatically. | |||||
`self.create_publisher(TransformStamped, 'ArUco_transform', 10)` declares that your node is publishing to the `ArUco_transform` topic using the message type `TransformStamped`. The `10` argument limits the number of queued messages if any subscriber is not receiving them fast enough.
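Once the node is running, you can watch the published transform from another terminal (assuming the topic resolves under the root namespace):
```{.bash .shell-prompt}
ros2 topic echo /ArUco_transform
```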
```python | |||||
self.min_pan_position = -3.8
self.max_pan_position = 1.50 | |||||
self.pan_num_steps = 10 | |||||
self.pan_step_size = abs(self.min_pan_position - self.max_pan_position)/self.pan_num_steps | |||||
``` | |||||
Provide the minimum and maximum joint positions for the head pan. These values are needed for sweeping the head to search for the ArUco tag. We also define the number of steps for the sweep, then create the step size for the head pan joint. | |||||
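With the values above, each pan step works out to `abs(-3.8 - 1.50)/10 = 0.53` radians.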
```python | |||||
self.min_tilt_position = -0.75 | |||||
self.tilt_num_steps = 3 | |||||
self.tilt_step_size = pi/16 | |||||
``` | |||||
Set the minimum position of the tilt joint, the number of steps, and the size of each step. | |||||
```python | |||||
self.rot_vel = 0.5 # radians per sec | |||||
``` | |||||
Define the head actuation rotational velocity. | |||||
```python | |||||
def joint_states_callback(self, msg): | |||||
self.joint_state = msg | |||||
``` | |||||
The `joint_states_callback()` function stores Stretch's joint states. | |||||
```python | |||||
def send_command(self, command): | |||||
if (self.joint_state is not None) and (command is not None): | |||||
joint_name = command['joint'] | |||||
trajectory_goal = FollowJointTrajectory.Goal() | |||||
trajectory_goal.trajectory.joint_names = [joint_name] | |||||
point = JointTrajectoryPoint() | |||||
``` | |||||
Extract the string value from the `joint` key. Then assign `trajectory_goal` as a `FollowJointTrajectory.Goal` message type and define the joint name. Also, assign `point` as a `JointTrajectoryPoint` message type.
```python | |||||
if 'delta' in command: | |||||
joint_index = self.joint_state.name.index(joint_name) | |||||
joint_value = self.joint_state.position[joint_index] | |||||
delta = command['delta'] | |||||
new_value = joint_value + delta | |||||
point.positions = [new_value] | |||||
``` | |||||
Check to see if `delta` is a key in the command dictionary. Then get the current position of the joint and add the delta as a new position value. | |||||
```python | |||||
elif 'position' in command: | |||||
point.positions = [command['position']] | |||||
``` | |||||
Check to see if `position` is a key in the command dictionary. Then extract the position value. | |||||
```python | |||||
point.velocities = [self.rot_vel] | |||||
trajectory_goal.trajectory.points = [point] | |||||
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg() | |||||
trajectory_goal.trajectory.header.frame_id = 'base_link' | |||||
self.trajectory_client.send_goal(trajectory_goal) | |||||
``` | |||||
Then `trajectory_goal.trajectory.points` is defined by the positions set in `point`. Specify the coordinate frame that we want (*base_link*) and set the time to be now. Make the action call and send the goal.
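As a usage sketch, a single call to `send_command()` looks like this (the delta and position values are illustrative):
```python
# Pan the head 0.1 rad relative to its current position
node.send_command({'joint': 'joint_head_pan', 'delta': 0.1})
# Or command an absolute tilt position instead
node.send_command({'joint': 'joint_head_tilt', 'position': -0.3})
```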
```python | |||||
def find_tag(self, tag_name='docking_station'): | |||||
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position} | |||||
self.send_command(pan_command) | |||||
tilt_command = {'joint': 'joint_head_tilt', 'position': self.min_tilt_position} | |||||
self.send_command(tilt_command) | |||||
``` | |||||
Create dictionaries to move the head to its initial position for the search and send the commands with the `send_command()` function.
```python | |||||
for i in range(self.tilt_num_steps): | |||||
for j in range(self.pan_num_steps): | |||||
pan_command = {'joint': 'joint_head_pan', 'delta': self.pan_step_size} | |||||
self.send_command(pan_command) | |||||
        time.sleep(0.2)
``` | |||||
Utilize a nested for loop to sweep the pan and tilt in increments. Update the `joint_head_pan` position by the `pan_step_size`, and use the `time.sleep()` function to give the system time to do a transform lookup before the next step.
```python | |||||
try: | |||||
now = Time() | |||||
transform = self.tf_buffer.lookup_transform('base_link', | |||||
tag_name, | |||||
now) | |||||
    self.get_logger().info(f'Found Requested Tag: \n{transform}')
self.transform_pub.publish(transform) | |||||
return transform | |||||
except TransformException as ex: | |||||
continue | |||||
``` | |||||
Use a try-except block to look up the transform between the *base_link* frame and the requested ArUco tag; a zero `Time()` requests the latest available transform. Then publish and return the `TransformStamped` message. If the lookup fails, the except block continues the sweep.
```python | |||||
pan_command = {'joint': 'joint_head_pan', 'position': self.min_pan_position} | |||||
self.send_command(pan_command) | |||||
tilt_command = {'joint': 'joint_head_tilt', 'delta': self.tilt_step_size} | |||||
self.send_command(tilt_command) | |||||
time.sleep(0.25)
``` | |||||
Begin sweep with new tilt angle. | |||||
```python | |||||
def main(self): | |||||
self.static_broadcaster = tf2_ros.StaticTransformBroadcaster(self) | |||||
self.tf_buffer = tf2_ros.Buffer() | |||||
self.listener = tf2_ros.TransformListener(self.tf_buffer, self) | |||||
time.sleep(1.0) | |||||
``` | |||||
Create a `StaticTransformBroadcaster` node. Also, start a tf buffer that will store the tf information for a few seconds. Then set up a tf listener, which will subscribe to all of the relevant tf topics and keep track of the information. Include `time.sleep(1.0)` to give the listener some time to accumulate transforms.
```python | |||||
self.get_logger().info('Searching for docking ArUco tag') | |||||
pose = self.find_tag("docking_station") | |||||
``` | |||||
Notify that Stretch is searching for the ArUco tag with a `self.get_logger().info()` call. Then search for the docking station's ArUco marker with `find_tag()`.
```python | |||||
def main(): | |||||
try: | |||||
node = LocateArUcoTag() | |||||
node.main() | |||||
node.new_thread.join() | |||||
    except KeyboardInterrupt:
node.get_logger().info('Interrupt received, so shutting down') | |||||
node.destroy_node() | |||||
rclpy.shutdown() | |||||
``` | |||||
Instantiate the `LocateArUcoTag()` object and run the `main()` method. |
@ -0,0 +1,208 @@ | |||||
## Example 5 | |||||
In this example, we will review a Python script that prints out the positions of a selected group of Stretch joints. This script is helpful if you need the joint positions after you teleoperated Stretch with the Xbox controller or physically moved the robot to the desired configuration after hitting the run stop button. | |||||
If you are looking for a continuous print of the joint states while Stretch is in action, then you can use the [ros2 topic command-line tool](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html) shown in the [Internal State of Stretch Tutorial](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/internal_state_of_stretch.md). | |||||
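For example, you can stream the joint states continuously with:
```{.bash .shell-prompt}
ros2 topic echo /joint_states
```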
Begin by starting up the stretch driver launch file. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
You can then hit the run-stop button (you should hear a beep and the LED light in the button blink) and move the robot's joints to a desired configuration. Once you are satisfied with the configuration, hold the run-stop button until you hear a beep. Then run the following command to execute the [joint_state_printer.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) which will print the joint positions of the lift, arm, and wrist. In a new terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/ | |||||
python3 joint_state_printer.py | |||||
``` | |||||
Your terminal will output the `position` information of the previously mentioned joints shown below. | |||||
```{.bash .no-copy} | |||||
name: ['joint_lift', 'wrist_extension', 'joint_wrist_yaw'] | |||||
position: [0.6043133175850597, 0.19873586673129257, 0.017257283863713464] | |||||
``` | |||||
!!! note | |||||
    Stretch's arm has four prismatic joints, and the sum of their positions gives the *wrist_extension* distance. The *wrist_extension* is needed when sending [joint trajectory commands](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/follow_joint_trajectory.md) to the robot. Further, you cannot actuate an individual arm joint. Here is an image of the arm joints for reference, followed by a sketch of how the extension relates to them:
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/joints.png"/> | |||||
</p> | |||||
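As a sketch of that relationship (joint names taken from Stretch's URDF), the *wrist_extension* value can be recovered from a `JointState` message by summing the four telescoping joints:
```python
# Sum the four telescoping arm segments to recover wrist_extension
arm_joints = ['joint_arm_l0', 'joint_arm_l1', 'joint_arm_l2', 'joint_arm_l3']
wrist_extension = sum(msg.position[msg.name.index(j)] for j in arm_joints)
```
The script below instead multiplies the `joint_arm_l0` position by four, which assumes all four segments extend equally.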
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
import rclpy | |||||
from rclpy.node import Node | |||||
import sys | |||||
import time | |||||
# We're going to subscribe to a JointState message type, so we need to import | |||||
# the definition for it | |||||
from sensor_msgs.msg import JointState | |||||
class JointStatePublisher(Node): | |||||
""" | |||||
A class that prints the positions of desired joints in Stretch. | |||||
""" | |||||
def __init__(self): | |||||
""" | |||||
Function that initializes the subscriber. | |||||
:param self: The self reference | |||||
""" | |||||
super().__init__('stretch_joint_state') | |||||
# Set up a subscriber. We're going to subscribe to the topic "joint_states" | |||||
self.sub = self.create_subscription(JointState, 'joint_states', self.callback, 1) | |||||
def callback(self, msg): | |||||
""" | |||||
Callback function to deal with the incoming JointState messages. | |||||
:param self: The self reference. | |||||
:param msg: The JointState message. | |||||
""" | |||||
# Store the joint messages for later use | |||||
self.get_logger().info('Receiving JointState messages') | |||||
self.joint_states = msg | |||||
def print_states(self, joints): | |||||
""" | |||||
print_states function to deal with the incoming JointState messages. | |||||
:param self: The self reference. | |||||
:param joints: A list of string values of joint names. | |||||
""" | |||||
# Create an empty list that will store the positions of the requested joints | |||||
joint_positions = [] | |||||
        # Use a for loop to parse the names of the requested joints list.
# The index() function returns the index at the first occurrence of | |||||
# the name of the requested joint in the self.joint_states.name list | |||||
for joint in joints: | |||||
if joint == "wrist_extension": | |||||
index = self.joint_states.name.index('joint_arm_l0') | |||||
joint_positions.append(4*self.joint_states.position[index]) | |||||
continue | |||||
index = self.joint_states.name.index(joint) | |||||
joint_positions.append(self.joint_states.position[index]) | |||||
# Print the joint position values to the terminal | |||||
print("name: " + str(joints)) | |||||
print("position: " + str(joint_positions)) | |||||
# Sends a signal to rclpy to shutdown the ROS interfaces | |||||
rclpy.shutdown() | |||||
# Exit the Python interpreter | |||||
sys.exit(0) | |||||
def main(args=None): | |||||
# Initialize the node | |||||
rclpy.init(args=args) | |||||
joint_publisher = JointStatePublisher() | |||||
time.sleep(1) | |||||
rclpy.spin_once(joint_publisher) | |||||
# Create a list of the joints and name them joints. These will be an argument | |||||
# for the print_states() function | |||||
joints = ["joint_lift", "wrist_extension", "joint_wrist_yaw"] | |||||
joint_publisher.print_states(joints) | |||||
# Give control to ROS. This will allow the callback to be called whenever new | |||||
# messages come in. If we don't put this line in, then the node will not work, | |||||
# and ROS will not process any messages | |||||
rclpy.spin(joint_publisher) | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script. | |||||
```python | |||||
import rclpy | |||||
import sys | |||||
import time | |||||
from rclpy.node import Node | |||||
from sensor_msgs.msg import JointState | |||||
``` | |||||
You need to import rclpy if you are writing a ROS 2 Node. Import `sensor_msgs.msg` so that we can subscribe to `JointState` messages. | |||||
```python | |||||
self.sub = self.create_subscription(JointState, 'joint_states', self.callback, 1) | |||||
``` | |||||
Set up a subscriber. We're going to subscribe to the topic `joint_states`, looking for `JointState` messages. When a message comes in, ROS is going to pass it to the `callback()` function automatically.
```python | |||||
def callback(self, msg): | |||||
self.joint_states = msg | |||||
``` | |||||
This is the callback function where the `JointState` messages are stored as `self.joint_states`. Further information about this message type can be found here: [JointState Message](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/JointState.html) | |||||
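`JointState` keeps `name`, `position`, `velocity`, and `effort` as parallel arrays, so the value for a particular joint is found by first looking up its index in `name`:
```python
# position, velocity, and effort share the index found in name
index = msg.name.index('joint_lift')
lift_position = msg.position[index]
```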
```python | |||||
def print_states(self, joints): | |||||
joint_positions = [] | |||||
``` | |||||
This is the `print_states()` function, which takes in a list of joints of interest as its argument. There is also an empty list, `joint_positions`, where the positions of the requested joints will be appended.
```python | |||||
for joint in joints: | |||||
if joint == "wrist_extension": | |||||
index = self.joint_states.name.index('joint_arm_l0') | |||||
joint_positions.append(4*self.joint_states.position[index]) | |||||
continue | |||||
index = self.joint_states.name.index(joint) | |||||
joint_positions.append(self.joint_states.position[index]) | |||||
``` | |||||
In this section of the code, a for loop is used to look up each requested joint in the `self.joint_states.name` list. The `index()` function returns the index of the first occurrence of the joint name, and the respective position is appended to the `joint_positions` list.
```python | |||||
rclpy.shutdown() | |||||
sys.exit(0) | |||||
``` | |||||
The first line of code initiates a clean shutdown of ROS. The second line of code exits the Python interpreter. | |||||
```python | |||||
rclpy.init(args=args) | |||||
joint_publisher = JointStatePublisher() | |||||
time.sleep(1) | |||||
``` | |||||
The next line, `rclpy.init(args=args)`, initializes rclpy. The node itself takes on the name 'stretch_joint_state', assigned in the class constructor via `super().__init__('stretch_joint_state')`. **NOTE:** the name must be a base name, i.e. it cannot contain any slashes "/".
Declare object, *joint_publisher*, from the `JointStatePublisher` class. | |||||
The use of the `time.sleep()` function is to allow the *joint_publisher* class to initialize all of its features before requesting to publish joint positions of desired joints (running the `print_states()` method). | |||||
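A sketch of a more robust alternative is to spin in short increments until the first message has actually arrived, rather than sleeping for a fixed interval:
```python
# Spin until the callback has stored a JointState message
while getattr(joint_publisher, 'joint_states', None) is None:
    rclpy.spin_once(joint_publisher, timeout_sec=0.1)
```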
```python | |||||
joints = ["joint_lift", "wrist_extension", "joint_wrist_yaw"] | |||||
#joints = ["joint_head_pan", "joint_head_tilt", "joint_gripper_finger_left", "joint_gripper_finger_right"]
joint_publisher.print_states(joints) | |||||
``` | |||||
Create a list of the desired joints that you want positions to be printed. Then use that list as an argument for the `print_states()` method. | |||||
```python | |||||
rclpy.spin(joint_publisher) | |||||
``` | |||||
Give control to ROS. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages. |
@ -0,0 +1,355 @@ | |||||
## Example 6 | |||||
In this example, we will review a Python script that prints and stores the effort values from a specified joint. If you are looking for a continuous print of the joint state efforts while Stretch is in action, then you can use the [ros2 topic command-line tool](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html) shown in the [Internal State of Stretch Tutorial](https://github.com/hello-robot/stretch_tutorials/blob/master/ros2/internal_state_of_stretch.md). | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/effort_sensing.gif"/> | |||||
</p> | |||||
Begin by running the following command in a terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
Unlike in ROS 1, there is no need to switch to position mode, since the launch file starts the driver in position mode by default. Then run the [effort_sensing.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/effort_sensing.py) node. In a new terminal, execute:
```{.bash .shell-prompt} | |||||
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/ | |||||
python3 effort_sensing.py | |||||
``` | |||||
This will send a `FollowJointTrajectory` command to move Stretch's arm or head while also printing the effort of the lift. | |||||
### The Code | |||||
```python | |||||
#!/usr/bin/env python3
import rclpy | |||||
import hello_helpers.hello_misc as hm | |||||
import os | |||||
import csv | |||||
import time | |||||
import pandas as pd | |||||
import matplotlib | |||||
matplotlib.use('tkagg') | |||||
import matplotlib.pyplot as plt | |||||
from matplotlib import animation | |||||
from datetime import datetime | |||||
from control_msgs.action import FollowJointTrajectory | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
class JointActuatorEffortSensor(hm.HelloNode): | |||||
def __init__(self, export_data=True, animate=True): | |||||
hm.HelloNode.__init__(self) | |||||
hm.HelloNode.main(self, 'issue_command', 'issue_command', wait_for_first_pointcloud=False) | |||||
self.joints = ['joint_lift'] | |||||
self.joint_effort = [] | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
self.export_data = export_data | |||||
self.result = False | |||||
self.file_name = datetime.now().strftime("effort_sensing_tutorial_%Y%m%d%I") | |||||
def issue_command(self): | |||||
trajectory_goal = FollowJointTrajectory.Goal() | |||||
trajectory_goal.trajectory.joint_names = self.joints | |||||
point0 = JointTrajectoryPoint() | |||||
point0.positions = [0.9] | |||||
trajectory_goal.trajectory.points = [point0] | |||||
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg() | |||||
trajectory_goal.trajectory.header.frame_id = 'base_link' | |||||
while self.joint_state is None: | |||||
time.sleep(0.1) | |||||
self._send_goal_future = self.trajectory_client.send_goal_async(trajectory_goal, self.feedback_callback) | |||||
self.get_logger().info('Sent position goal = {0}'.format(trajectory_goal)) | |||||
self._send_goal_future.add_done_callback(self.goal_response_callback) | |||||
def goal_response_callback(self, future): | |||||
goal_handle = future.result() | |||||
if not goal_handle.accepted: | |||||
self.get_logger().info('Failed') | |||||
return | |||||
        self.get_logger().info('Succeeded')
self._get_result_future = goal_handle.get_result_async() | |||||
self._get_result_future.add_done_callback(self.get_result_callback) | |||||
def get_result_callback(self, future): | |||||
self.result = future.result().result | |||||
self.get_logger().info('Sent position goal = {0}'.format(self.result)) | |||||
def feedback_callback(self, feedback_msg): | |||||
if 'wrist_extension' in self.joints: | |||||
self.joints.remove('wrist_extension') | |||||
self.joints.append('joint_arm_l0') | |||||
current_effort = [] | |||||
for joint in self.joints: | |||||
index = self.joint_state.name.index(joint) | |||||
current_effort.append(self.joint_state.effort[index]) | |||||
if not self.export_data: | |||||
print("name: " + str(self.joints)) | |||||
print("effort: " + str(current_effort)) | |||||
else: | |||||
self.joint_effort.append(current_effort) | |||||
if self.export_data: | |||||
file_name = self.file_name | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
with open(completeName, "w") as f: | |||||
writer = csv.writer(f) | |||||
writer.writerow(self.joints) | |||||
writer.writerows(self.joint_effort) | |||||
def plot_data(self, animate = True): | |||||
while not self.result: | |||||
time.sleep(0.1) | |||||
file_name = self.file_name | |||||
self.completeName = os.path.join(self.save_path, file_name) | |||||
self.data = pd.read_csv(self.completeName) | |||||
self.y_anim = [] | |||||
self.animate = animate | |||||
for joint in self.data.columns: | |||||
# Create figure, labels, and title | |||||
fig = plt.figure() | |||||
plt.title(joint + ' Effort Sensing') | |||||
plt.ylabel('Effort') | |||||
plt.xlabel('Data Points') | |||||
# Conditional statement for animation plotting | |||||
if self.animate: | |||||
self.effort = self.data[joint] | |||||
frames = len(self.effort)-1 | |||||
anim = animation.FuncAnimation(fig=fig,func=self.plot_animate, repeat=False,blit=False,frames=frames, interval =75) | |||||
plt.show() | |||||
## If you want to save a video, make sure to comment out plt.show(), | |||||
## right before this comment. | |||||
# save_path = str(self.completeName + '.mp4') | |||||
# anim.save(save_path, writer=animation.FFMpegWriter(fps=10)) | |||||
# Reset y_anim for the next joint effort animation | |||||
del self.y_anim[:] | |||||
# Conditional statement for regular plotting (No animation) | |||||
else: | |||||
self.data[joint].plot(kind='line') | |||||
# save_path = str(self.completeName + '.png') | |||||
# plt.savefig(save_path, bbox_inches='tight') | |||||
plt.show() | |||||
def plot_animate(self,i): | |||||
# Append self.effort values for given joint | |||||
self.y_anim.append(self.effort.values[i]) | |||||
plt.plot(self.y_anim, color='blue') | |||||
def main(self): | |||||
self.get_logger().info('issuing command') | |||||
self.issue_command() | |||||
def main(): | |||||
try: | |||||
node = JointActuatorEffortSensor(export_data=True, animate=True) | |||||
node.main() | |||||
node.plot_data() | |||||
node.new_thread.join() | |||||
except KeyboardInterrupt: | |||||
node.get_logger().info('interrupt received, so shutting down') | |||||
node.destroy_node() | |||||
rclpy.shutdown() | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
This code is similar to that of the [multipoint_command](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/multipoint_command.py) and [joint_state_printer](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) node. Therefore, this example will highlight sections that are different from those tutorials. Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python | |||||
import rclpy | |||||
import hello_helpers.hello_misc as hm | |||||
import os | |||||
import csv | |||||
import time | |||||
import pandas as pd | |||||
import matplotlib | |||||
matplotlib.use('tkagg') | |||||
import matplotlib.pyplot as plt | |||||
from matplotlib import animation | |||||
from datetime import datetime | |||||
from control_msgs.action import FollowJointTrajectory | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
``` | |||||
You need to import rclpy if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). Import the `FollowJointTrajectory` from the `control_msgs.action` package to control the Stretch robot. Import `JointTrajectoryPoint` from the `trajectory_msgs` package to define robot trajectories. The `hello_helpers` package consists of a module that provides various Python scripts used across [stretch_ros](https://github.com/hello-robot/stretch_ros2). In this instance, we are importing the `hello_misc` script. | |||||
```Python | |||||
class JointActuatorEffortSensor(hm.HelloNode): | |||||
    def __init__(self, export_data=True, animate=True):
hm.HelloNode.__init__(self) | |||||
hm.HelloNode.main(self, 'issue_command', 'issue_command', wait_for_first_pointcloud=False) | |||||
``` | |||||
The `JointActuatorEffortSensor` class inherits from the `HelloNode` class in `hm`. Because `HelloNode` already subscribes to the `joint_states` topic, we don't need to create our own subscriber.
```python | |||||
self.joints = ['joint_lift'] | |||||
``` | |||||
Create a list of the desired joints you want to print. | |||||
```Python | |||||
self.joint_effort = [] | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
self.export_data = export_data | |||||
self.result = False | |||||
self.file_name = datetime.now().strftime("effort_sensing_tutorial_%Y%m%d%I") | |||||
``` | |||||
Create an empty list to store the joint effort values. The `self.save_path` is the directory path where the file of effort values will be stored; you can change this path to a preferred directory. `self.export_data` is a boolean whose default value is `True`. If set to `False`, the joint efforts are printed in the terminal; otherwise they are stored in a file, which is what we want here so we can plot the data afterward. We also name the file up front with `datetime.now().strftime(...)`, so the name encodes the date and hour (e.g. `effort_sensing_tutorial_2022063011`), and keep it in `self.file_name` for later access.
```python | |||||
self._send_goal_future = self.trajectory_client.send_goal_async(trajectory_goal, self.feedback_callback) | |||||
self._send_goal_future.add_done_callback(self.goal_response_callback) | |||||
``` | |||||
The [ActionClient.send_goal_async()](https://docs.ros2.org/latest/api/rclpy/api/actions.html#rclpy.action.client.ActionClient.send_goal_async) method returns a future to a goal handle. We pass the goal along with the `feedback_callback` function. We also need to register a `goal_response_callback` for when that future is complete.
```python | |||||
def goal_response_callback(self,future): | |||||
goal_handle = future.result() | |||||
if not goal_handle.accepted: | |||||
self.get_logger().info('Failed') | |||||
return | |||||
    self.get_logger().info('Succeeded')
``` | |||||
Looking at the `goal_response_callback` in more detail: once the future completes, we check whether the goal was accepted and log the outcome.
```python | |||||
self._get_result_future = goal_handle.get_result_async() | |||||
self._get_result_future.add_done_callback(self.get_result_callback) | |||||
``` | |||||
We need the goal handle to request the result with the `get_result_async()` method. This returns a future that completes when the result is ready, so we register a callback for it.
```python | |||||
def get_result_callback(self, future): | |||||
self.result = future.result().result | |||||
    self.get_logger().info('Sent position goal = {0}'.format(self.result))
``` | |||||
In the result callback, we log the result of our position goal.
```python | |||||
def feedback_callback(self,feedback_msg): | |||||
``` | |||||
The feedback callback function takes in the feedback message of the `FollowJointTrajectory` action as its argument.
```python | |||||
if 'wrist_extension' in self.joints: | |||||
self.joints.remove('wrist_extension') | |||||
self.joints.append('joint_arm_l0') | |||||
``` | |||||
Use a conditional statement to replace `wrist_extension` with `joint_arm_l0`. This is because `joint_arm_l0` has the effort values that the `wrist_extension` is experiencing.
```python | |||||
current_effort = [] | |||||
for joint in self.joints: | |||||
    index = self.joint_state.name.index(joint)
    current_effort.append(self.joint_state.effort[index])
``` | |||||
Create an empty list to store the current effort values. Then use a for loop to parse the joint names and effort values. | |||||
```python | |||||
if not self.export_data: | |||||
print("name: " + str(self.joints)) | |||||
print("effort: " + str(current_effort)) | |||||
else: | |||||
self.joint_effort.append(current_effort) | |||||
``` | |||||
Use a conditional statement to print effort values in the terminal or store values into a list that will be used for exporting the data in a .txt file. | |||||
```python | |||||
if self.export_data: | |||||
file_name = self.file_name | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
with open(completeName, "w") as f: | |||||
writer = csv.writer(f) | |||||
writer.writerow(self.joints) | |||||
writer.writerows(self.joint_effort) | |||||
``` | |||||
A conditional statement is used to export the data to a CSV-formatted file. The file's name is set to the one we created at the beginning.
```python | |||||
def plot_data(self, animate = True): | |||||
while not self.result: | |||||
time.sleep(0.1) | |||||
file_name = self.file_name | |||||
self.completeName = os.path.join(self.save_path, file_name) | |||||
self.data = pd.read_csv(self.completeName) | |||||
self.y_anim = [] | |||||
self.animate = animate | |||||
``` | |||||
This function initializes the values needed to plot our data. We wait until the result arrives before plotting, because the file could still be storing values and we want to plot every data point. We also create an empty list, `y_anim`, for the animation.
```python | |||||
for joint in self.data.columns: | |||||
# Create figure, labels, and title | |||||
fig = plt.figure() | |||||
plt.title(joint + ' Effort Sensing') | |||||
plt.ylabel('Effort') | |||||
plt.xlabel('Data Points') | |||||
``` | |||||
Use a for loop to plot each joint's effort, creating a figure with the correct title and axis labels.
```python | |||||
if self.animate: | |||||
self.effort = self.data[joint] | |||||
frames = len(self.effort)-1 | |||||
anim = animation.FuncAnimation(fig=fig,func=self.plot_animate, repeat=False,blit=False,frames=frames, interval =75) | |||||
plt.show() | |||||
del self.y_anim[:] | |||||
else: | |||||
self.data[joint].plot(kind='line') | |||||
# save_path = str(self.completeName + '.png') | |||||
# plt.savefig(save_path, bbox_inches='tight') | |||||
plt.show() | |||||
``` | |||||
This conditional statement selects between the animated plot and a regular static plot.
```python | |||||
def plot_animate(self,i): | |||||
self.y_anim.append(self.effort.values[i]) | |||||
plt.plot(self.y_anim, color='blue') | |||||
``` | |||||
Finally, the `plot_animate()` function appends one value per frame so that the plot grows with every increment of the data frame.
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/stored_data/2022-06-30_11:26:20-AM.png"/> | |||||
</p> |
@ -0,0 +1,277 @@ | |||||
# Capture Image | |||||
## Example 7 | |||||
In this example, we will review a Python script that captures an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/). | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image.jpeg"/> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image_edge_detection.jpeg"/> | |||||
</p> | |||||
Begin by running the stretch driver launch file.
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
To activate the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/) and publish topics to be visualized, run the following launch file in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core d435i_low_resolution.launch.py | |||||
``` | |||||
Within this tutorial package, there is an RViz config file with the topics for perception already in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/perception_example.rviz | |||||
``` | |||||
## Capture Image with Python Script | |||||
In this section, we will use a Python node to capture an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/). Execute the [capture_image.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/capture_image.py) node to save a .jpeg image of the image topic `/camera/color/image_raw`. In a terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials | |||||
python3 capture_image.py | |||||
``` | |||||
An image named **camera_image.jpeg** is saved in the **stored_data** folder in this package. If you don't have this folder, you can create it yourself.
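For example:
```{.bash .shell-prompt}
mkdir -p ~/ament_ws/src/stretch_tutorials/stored_data
```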
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
import rclpy | |||||
import sys | |||||
import os | |||||
import cv2 | |||||
from rclpy.node import Node | |||||
from sensor_msgs.msg import Image | |||||
from cv_bridge import CvBridge, CvBridgeError | |||||
class CaptureImage(Node): | |||||
""" | |||||
A class that converts a subscribed ROS image to a OpenCV image and saves | |||||
the captured image to a predefined directory. | |||||
""" | |||||
def __init__(self): | |||||
""" | |||||
A function that initializes a CvBridge class, subscriber, and save path. | |||||
:param self: The self reference. | |||||
""" | |||||
super().__init__('stretch_capture_image') | |||||
self.bridge = CvBridge() | |||||
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.image_callback, 10) | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
self.br = CvBridge() | |||||
def image_callback(self, msg): | |||||
""" | |||||
A callback function that converts the ROS image to a CV2 image and stores the | |||||
image. | |||||
:param self: The self reference. | |||||
:param msg: The ROS image message type. | |||||
""" | |||||
try: | |||||
image = self.bridge.imgmsg_to_cv2(msg, 'bgr8') | |||||
        except CvBridgeError as e:
            self.get_logger().warn('CV Bridge error: {0}'.format(e))
            return
file_name = 'camera_image.jpeg' | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
cv2.imwrite(completeName, image) | |||||
rclpy.shutdown() | |||||
sys.exit(0) | |||||
def main(args=None): | |||||
rclpy.init(args=args) | |||||
capture_image = CaptureImage() | |||||
rclpy.spin(capture_image) | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script. | |||||
```python | |||||
import rclpy | |||||
import sys | |||||
import os | |||||
import cv2 | |||||
``` | |||||
You need to import `rclpy` if you are writing a ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html). There are functions from `sys`, `os`, and `cv2` that are required within this code. `cv2` is a library of Python functions that implements computer vision algorithms. Further information about cv2 can be found here: [OpenCV Python](https://www.geeksforgeeks.org/opencv-python-tutorial/). | |||||
```python | |||||
from rclpy.node import Node | |||||
from sensor_msgs.msg import Image | |||||
from cv_bridge import CvBridge, CvBridgeError | |||||
``` | |||||
The `sensor_msgs.msg` is imported so that we can subscribe to ROS `Image` messages. Import [CvBridge](http://wiki.ros.org/cv_bridge) to convert between ROS `Image` messages and OpenCV images, and `Node` is necessary to create a node in ROS2.
```python | |||||
def __init__(self): | |||||
""" | |||||
A function that initializes a CvBridge class, subscriber, and save path. | |||||
:param self: The self reference. | |||||
""" | |||||
super().__init__('stretch_capture_image') | |||||
self.bridge = CvBridge() | |||||
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.image_callback, 10) | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
``` | |||||
Initialize the node, CvBridge class, the subscriber, and the directory where the captured image will be stored. | |||||
```python | |||||
def image_callback(self, msg): | |||||
""" | |||||
A callback function that converts the ROS image to a cv2 image and stores the | |||||
image. | |||||
:param self: The self reference. | |||||
:param msg: The ROS image message type. | |||||
""" | |||||
        try:
            image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        except CvBridgeError as e:
            self.get_logger().warn('CV Bridge error: {0}'.format(e))
            return
``` | |||||
Try to convert the ROS `Image` message to an OpenCV image with the `imgmsg_to_cv2()` function. If the conversion fails, log a warning and return early so the callback does not try to save an undefined image.
```python | |||||
file_name = 'camera_image.jpeg' | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
cv2.imwrite(completeName, image) | |||||
``` | |||||
Join the directory and file name using the `path.join()` function. Then use the `imwrite()` function to save the image. | |||||
```python | |||||
rclpy.shutdown() | |||||
sys.exit(0) | |||||
``` | |||||
The first line of code initiates a clean shutdown of ROS. The second line of code exits the Python interpreter. | |||||
```python | |||||
rclpy.init(args=args) | |||||
capture_image = CaptureImage() | |||||
``` | |||||
The `rclpy.init()` method initializes ROS communications for the process. Then `capture_image = CaptureImage()` instantiates the `CaptureImage` class, which creates the node and names it `stretch_capture_image`.
!!! note | |||||
The name must be a base name, i.e. it cannot contain any slashes "/". | |||||
```python | |||||
rclpy.spin(capture_image) | |||||
``` | |||||
Give control to ROS. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages. | |||||
## Edge Detection | |||||
In this section, we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges in an image and converts the result back to a ROS image to be visualized in RViz. In a terminal, execute:
```{.bash .shell-prompt} | |||||
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials | |||||
python3 edge_detection.py | |||||
``` | |||||
The node will publish a new Image topic named `/image_edge_detection`. This can be visualized in RViz and a gif is provided below for reference. | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/camera_image_edge_detection.gif"/> | |||||
</p> | |||||
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
import rclpy | |||||
from rclpy.node import Node | |||||
import cv2 | |||||
from sensor_msgs.msg import Image | |||||
from cv_bridge import CvBridge, CvBridgeError | |||||
class EdgeDetection(Node): | |||||
""" | |||||
A class that converts a subscribed ROS image to a OpenCV image and saves | |||||
the captured image to a predefined directory. | |||||
""" | |||||
def __init__(self): | |||||
""" | |||||
A function that initializes a CvBridge class, subscriber, and other | |||||
parameter values. | |||||
:param self: The self reference. | |||||
""" | |||||
super().__init__('stretch_edge_detection') | |||||
self.bridge = CvBridge() | |||||
self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.callback, 1) | |||||
self.pub = self.create_publisher(Image, '/image_edge_detection', 1) | |||||
self.lower_thres = 100 | |||||
self.upper_thres = 200 | |||||
self.get_logger().info("Publishing the CV2 Image. Use RViz to visualize.") | |||||
def callback(self, msg): | |||||
""" | |||||
A callback function that converts the ROS image to a CV2 image and goes | |||||
through the Canny Edge filter in OpenCV for edge detection. Then publishes | |||||
that filtered image to be visualized in RViz. | |||||
:param self: The self reference. | |||||
:param msg: The ROS image message type. | |||||
""" | |||||
        try:
            image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        except CvBridgeError as e:
            self.get_logger().warn('CV Bridge error: {0}'.format(e))
            return
image = cv2.Canny(image, self.lower_thres, self.upper_thres) | |||||
image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough') | |||||
image_msg.header = msg.header | |||||
self.pub.publish(image_msg) | |||||
def main(args=None): | |||||
rclpy.init(args=args) | |||||
edge_detection = EdgeDetection() | |||||
rclpy.spin(edge_detection) | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
Since this node shares most of its structure with the capture image node, we will only break down the components that differ in the edge detection node.
First, the constructor defines the lower and upper bounds of the hysteresis thresholds used by the Canny filter: gradients above the upper threshold are accepted as edges, gradients below the lower threshold are rejected, and pixels in between are kept only if they connect to a strong edge.
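From the node's constructor:
```python
self.lower_thres = 100
self.upper_thres = 200
```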
```python | |||||
image = cv2.Canny(image, self.lower_thres, self.upper_thres) | |||||
``` | |||||
Run the Canny Edge function to detect edges from the cv2 image. | |||||
```python | |||||
image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough') | |||||
``` | |||||
Convert the cv2 image back to a ROS image so it can be published. | |||||
```python | |||||
image_msg.header = msg.header | |||||
self.pub.publish(image_msg) | |||||
``` | |||||
Publish the ROS image with the same header as the subscribed ROS message. |
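If you'd like to inspect the filtered stream without RViz, a minimal viewer node can subscribe to `/image_edge_detection` and display the frames with OpenCV. The sketch below is not part of the tutorial package; it only assumes the topic name and encoding shown above.
```python
#!/usr/bin/env python3
import rclpy
import cv2
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class EdgeViewer(Node):
    """A hypothetical helper node that displays /image_edge_detection frames."""
    def __init__(self):
        super().__init__('edge_viewer')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(Image, '/image_edge_detection', self.callback, 1)

    def callback(self, msg):
        # The edge image is single-channel, so 'passthrough' keeps it unchanged
        frame = self.bridge.imgmsg_to_cv2(msg, 'passthrough')
        cv2.imshow('edge detection', frame)
        cv2.waitKey(1)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(EdgeViewer())

if __name__ == '__main__':
    main()
```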
# Example 8 | |||||
This example will showcase how to save the interpreted speech from Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/) to a text file. | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/> | |||||
</p> | |||||
Begin by running the `respeaker.launch.py` file in a terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch respeaker_ros2 respeaker.launch.py | |||||
``` | |||||
Then run the [speech_text.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/speech_text.py) node. In a new terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 speech_text.py | |||||
``` | |||||
The ReSpeaker will listen, interpret speech, and save the transcript to a text file. To shut down the node, type `Ctrl` + `c` in the terminal.
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
# Import modules | |||||
import rclpy | |||||
import os | |||||
from rclpy.node import Node | |||||
# Import SpeechRecognitionCandidates from the speech_recognition_msgs package | |||||
from speech_recognition_msgs.msg import SpeechRecognitionCandidates | |||||
class SpeechText(Node): | |||||
def __init__(self): | |||||
super().__init__('stretch_speech_text') | |||||
# Initialize subscriber | |||||
self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1) | |||||
# Create path to save captured images to the stored data folder | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
# Create log message | |||||
self.get_logger().info("Listening to speech") | |||||
def callback(self,msg): | |||||
# Take all items in the iterable list and join them into a single string | |||||
transcript = ' '.join(map(str,msg.transcript)) | |||||
# Define the file name and create a complete path name | |||||
file_name = 'speech.txt' | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
        # Append the transcript to the end of the file
with open(completeName, "a+") as file_object: | |||||
file_object.write("\n") | |||||
file_object.write(transcript) | |||||
def main(args=None): | |||||
# Initialize the node and name it speech_text | |||||
rclpy.init(args=args) | |||||
# Instantiate the SpeechText class | |||||
speech_txt = SpeechText() | |||||
# Give control to ROS. This will allow the callback to be called whenever new | |||||
# messages come in. If we don't put this line in, then the node will not work, | |||||
# and ROS will not process any messages | |||||
rclpy.spin(speech_txt) | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script. | |||||
```python | |||||
import rclpy | |||||
import os | |||||
from rclpy.node import Node | |||||
``` | |||||
You need to import rclpy if you are writing a ROS Node. | |||||
```python | |||||
from speech_recognition_msgs.msg import SpeechRecognitionCandidates | |||||
``` | |||||
Import `SpeechRecognitionCandidates` from `speech_recognition_msgs.msg` so that we can receive the interpreted speech.
```python | |||||
def __init__(self): | |||||
super().__init__('stretch_speech_text') | |||||
self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1) | |||||
``` | |||||
Set up a subscriber. We're going to subscribe to the topic `speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS is going to pass it to the function "callback" automatically. | |||||
```python | |||||
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data' | |||||
``` | |||||
Define the directory to save the text file. | |||||
```python | |||||
transcript = ' '.join(map(str,msg.transcript)) | |||||
``` | |||||
Take all items in the iterable list and join them into a single string named transcript. | |||||
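For example, if the message's transcript field held `['hello', 'robot']`, the joined result would be the single string `'hello robot'`:
```python
transcript = ' '.join(map(str, ['hello', 'robot']))
print(transcript)  # hello robot
```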
```python | |||||
file_name = 'speech.txt' | |||||
completeName = os.path.join(self.save_path, file_name) | |||||
``` | |||||
Define the file name and create a complete path directory. | |||||
```python | |||||
with open(completeName, "a+") as file_object: | |||||
file_object.write("\n") | |||||
file_object.write(transcript) | |||||
``` | |||||
Append the transcript to the text file. | |||||
```python | |||||
def main(args=None): | |||||
rclpy.init(args=args) | |||||
speech_txt = SpeechText() | |||||
``` | |||||
The `rclpy.init()` method initializes ROS communications. The node itself is created when the `SpeechText()` class is instantiated, taking on the name `stretch_speech_text`.
!!! note | |||||
The name must be a base name, i.e. it cannot contain any slashes "/". | |||||
```python | |||||
rclpy.spin(speech_txt) | |||||
``` | |||||
Give control to ROS with `rclpy.spin()`. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages. |
## Example 9 | |||||
This example aims to combine the [ReSpeaker Microphone Array](respeaker_mic_array.md) and [Follow Joint Trajectory](follow_joint_trajectory.md) tutorials to voice teleoperate the mobile base of the Stretch robot. | |||||
Begin by running the following command in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
ros2 launch stretch_core stretch_driver.launch.py | |||||
``` | |||||
Then run the `respeaker.launch.py` file. In a new terminal, execute: | |||||
```{.bash .shell-prompt} | |||||
ros2 launch respeaker_ros2 respeaker.launch.py | |||||
``` | |||||
Then run the [voice_teleoperation_base.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/voice_teleoperation_base.py) node in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 voice_teleoperation_base.py | |||||
``` | |||||
In terminal 3, a menu of voice commands is printed. You can reference this menu layout below. | |||||
```{.bash .no-copy} | |||||
------------ VOICE TELEOP MENU ------------ | |||||
VOICE COMMANDS | |||||
"forward": BASE FORWARD | |||||
"back" : BASE BACK | |||||
"left" : BASE ROTATE LEFT | |||||
"right" : BASE ROTATE RIGHT | |||||
"stretch": BASE ROTATES TOWARDS SOUND | |||||
STEP SIZE | |||||
"big" : BIG | |||||
"medium" : MEDIUM | |||||
"small" : SMALL | |||||
"quit" : QUIT AND CLOSE NODE | |||||
------------------------------------------- | |||||
``` | |||||
To stop the node from sending twist messages, type `Ctrl` + `c` or say "**quit**". | |||||
### The Code | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
# Import modules | |||||
import math | |||||
import rclpy | |||||
import sys | |||||
from rclpy.duration import Duration | |||||
# Import JointState from the sensor_msgs package so we can subscribe to
# the robot's joint state messages
from sensor_msgs.msg import JointState
# Import the Int32 message type from the std_msgs package for the sound direction
from std_msgs.msg import Int32
# Import the FollowJointTrajectory from the control_msgs.msg package to | |||||
# control the Stretch robot | |||||
from control_msgs.action import FollowJointTrajectory | |||||
# Import JointTrajectoryPoint from the trajectory_msgs package to define | |||||
# robot trajectories | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
# Import hello_misc script for handling trajectory goals with an action client | |||||
import hello_helpers.hello_misc as hm | |||||
# Import SpeechRecognitionCandidates from the speech_recognition_msgs package | |||||
from speech_recognition_msgs.msg import SpeechRecognitionCandidates | |||||
class GetVoiceCommands: | |||||
def __init__(self, node): | |||||
self.node = node | |||||
# Set step size as medium by default | |||||
self.step_size = 'medium' | |||||
self.rad_per_deg = math.pi/180.0 | |||||
# Small step size parameters | |||||
self.small_deg = 5.0 | |||||
self.small_rad = self.rad_per_deg * self.small_deg | |||||
self.small_translate = 0.025 | |||||
# Medium step size parameters | |||||
self.medium_deg = 10.0 | |||||
self.medium_rad = self.rad_per_deg * self.medium_deg | |||||
self.medium_translate = 0.05 | |||||
# Big step size parameters | |||||
self.big_deg = 20.0 | |||||
self.big_rad = self.rad_per_deg * self.big_deg | |||||
self.big_translate = 0.1 | |||||
# Initialize the voice command | |||||
self.voice_command = None | |||||
# Initialize the sound direction | |||||
self.sound_direction = 0 | |||||
# Initialize subscribers | |||||
self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1) | |||||
self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1) | |||||
def callback_direction(self, msg): | |||||
self.sound_direction = msg.data * -self.rad_per_deg | |||||
def callback_speech(self,msg): | |||||
self.voice_command = ' '.join(map(str,msg.transcript)) | |||||
def get_inc(self): | |||||
if self.step_size == 'small': | |||||
inc = {'rad': self.small_rad, 'translate': self.small_translate} | |||||
if self.step_size == 'medium': | |||||
inc = {'rad': self.medium_rad, 'translate': self.medium_translate} | |||||
if self.step_size == 'big': | |||||
inc = {'rad': self.big_rad, 'translate': self.big_translate} | |||||
return inc | |||||
def print_commands(self): | |||||
""" | |||||
A function that prints the voice teleoperation menu. | |||||
:param self: The self reference. | |||||
""" | |||||
print(' ') | |||||
print('------------ VOICE TELEOP MENU ------------') | |||||
print(' ') | |||||
print(' VOICE COMMANDS ') | |||||
print(' "forward": BASE FORWARD ') | |||||
print(' "back" : BASE BACK ') | |||||
print(' "left" : BASE ROTATE LEFT ') | |||||
print(' "right" : BASE ROTATE RIGHT ') | |||||
print(' "stretch": BASE ROTATES TOWARDS SOUND ') | |||||
print(' ') | |||||
print(' STEP SIZE ') | |||||
print(' "big" : BIG ') | |||||
print(' "medium" : MEDIUM ') | |||||
print(' "small" : SMALL ') | |||||
print(' ') | |||||
print(' ') | |||||
print(' "quit" : QUIT AND CLOSE NODE ') | |||||
print(' ') | |||||
print('-------------------------------------------') | |||||
def get_command(self): | |||||
command = None | |||||
# Move base forward command | |||||
if self.voice_command == 'forward': | |||||
command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']} | |||||
# Move base back command | |||||
if self.voice_command == 'back': | |||||
command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']} | |||||
# Rotate base left command | |||||
if self.voice_command == 'left': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']} | |||||
# Rotate base right command | |||||
if self.voice_command == 'right': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']} | |||||
# Move base to sound direction command | |||||
if self.voice_command == 'stretch': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction} | |||||
# Set the step size of translational and rotational base motions | |||||
if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"): | |||||
self.step_size = self.voice_command | |||||
self.node.get_logger().info('Step size = {0}'.format(self.step_size)) | |||||
if self.voice_command == 'quit': | |||||
# Sends a signal to ros to shutdown the ROS interfaces | |||||
self.node.get_logger().info("done") | |||||
# Exit the Python interpreter | |||||
sys.exit(0) | |||||
# Reset voice command to None | |||||
self.voice_command = None | |||||
# return the updated command | |||||
return command | |||||
class VoiceTeleopNode(hm.HelloNode): | |||||
def __init__(self): | |||||
hm.HelloNode.__init__(self) | |||||
self.rate = 10.0 | |||||
self.joint_state = None | |||||
hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False) | |||||
self.speech = GetVoiceCommands(self) | |||||
def joint_states_callback(self, msg): | |||||
self.joint_state = msg | |||||
def send_command(self, command): | |||||
joint_state = self.joint_state | |||||
# Conditional statement to send joint trajectory goals | |||||
if (joint_state is not None) and (command is not None): | |||||
# Assign point as a JointTrajectoryPoint message type | |||||
point = JointTrajectoryPoint() | |||||
point.time_from_start = Duration(seconds=0).to_msg() | |||||
# Assign trajectory_goal as a FollowJointTrajectoryGoal message type | |||||
trajectory_goal = FollowJointTrajectory.Goal() | |||||
trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg() | |||||
# Extract the joint name from the command dictionary | |||||
joint_name = command['joint'] | |||||
trajectory_goal.trajectory.joint_names = [joint_name] | |||||
# Extract the increment type from the command dictionary | |||||
inc = command['inc'] | |||||
self.get_logger().info('inc = {0}'.format(inc)) | |||||
new_value = inc | |||||
# Assign the new_value position to the trajectory goal message type | |||||
point.positions = [new_value] | |||||
trajectory_goal.trajectory.points = [point] | |||||
trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg() | |||||
self.get_logger().info('joint_name = {0}, trajectory_goal = {1}'.format(joint_name, trajectory_goal)) | |||||
# Make the action call and send goal of the new joint position | |||||
self.trajectory_client.send_goal(trajectory_goal) | |||||
self.get_logger().info('Done sending command.') | |||||
# Reprint the voice command menu | |||||
self.speech.print_commands() | |||||
def timer_get_command(self): | |||||
# Get voice command | |||||
command = self.speech.get_command() | |||||
# Send voice command for joint trajectory goals | |||||
self.send_command(command) | |||||
def main(self): | |||||
self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1) | |||||
rate = self.create_rate(self.rate) | |||||
self.speech.print_commands() | |||||
self.sleep = self.create_timer(1/self.rate, self.timer_get_command) | |||||
def main(args=None): | |||||
try: | |||||
        # rclpy.init() is called inside hm.HelloNode.main(), so it is not needed here
node = VoiceTeleopNode() | |||||
node.main() | |||||
node.new_thread.join() | |||||
except KeyboardInterrupt: | |||||
node.get_logger().info('interrupt received, so shutting down') | |||||
node.destroy_node() | |||||
rclpy.shutdown() | |||||
if __name__ == '__main__': | |||||
main() | |||||
``` | |||||
### The Code Explained | |||||
This code is similar to that of the [multipoint_command](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/multipoint_command.py) and [joint_state_printer](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) node. Therefore, this example will highlight sections that are different from those tutorials. Now let's break the code down. | |||||
```python | |||||
#!/usr/bin/env python3 | |||||
``` | |||||
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python | |||||
import math | |||||
import rclpy | |||||
import sys | |||||
from rclpy.duration import Duration | |||||
from sensor_msgs.msg import JointState | |||||
from std_msgs.msg import Int32 | |||||
from control_msgs.action import FollowJointTrajectory | |||||
from trajectory_msgs.msg import JointTrajectoryPoint | |||||
import hello_helpers.hello_misc as hm | |||||
from speech_recognition_msgs.msg import SpeechRecognitionCandidates | |||||
``` | |||||
You need to import `rclpy` if you are writing a ROS Node. Import the `FollowJointTrajectory` from the `control_msgs.action` package to control the Stretch robot. Import `JointTrajectoryPoint` from the `trajectory_msgs` package to define robot trajectories. The `hello_helpers` package consists of a module that provides various Python scripts used across stretch_ros. In this instance, we are importing the `hello_misc` script. Import `sensor_msgs.msg` so that we can subscribe to JointState messages. | |||||
```python | |||||
class GetVoiceCommands: | |||||
``` | |||||
Create a class that subscribes to the `/speech_to_text` recognition messages, prints a voice command menu, and defines the step sizes for translational and rotational mobile base motion.
```python | |||||
self.node = node | |||||
self.step_size = 'medium' | |||||
self.rad_per_deg = math.pi/180.0 | |||||
``` | |||||
Set the default step size as medium and create a float value, `self.rad_per_deg`, to convert degrees to radians. | |||||
```python | |||||
self.small_deg = 5.0 | |||||
self.small_rad = self.rad_per_deg * self.small_deg | |||||
self.small_translate = 0.025 | |||||
self.medium_deg = 10.0 | |||||
self.medium_rad = self.rad_per_deg * self.medium_deg | |||||
self.medium_translate = 0.05 | |||||
self.big_deg = 20.0 | |||||
self.big_rad = self.rad_per_deg * self.big_deg | |||||
self.big_translate = 0.1 | |||||
``` | |||||
Define the three rotation and translation step sizes. | |||||
```python | |||||
self.voice_command = None | |||||
self.sound_direction = 0 | |||||
self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1) | |||||
self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1) | |||||
``` | |||||
Initialize the voice command and sound direction to values that will not result in moving the base. | |||||
Set up two subscribers. The first one subscribes to the topic `/speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS is going to pass it to the function `callback_speech` automatically. The second subscribes to `/sound_direction` message and passes it to the `callback_direction` function. | |||||
```python | |||||
def callback_direction(self, msg): | |||||
self.sound_direction = msg.data * -self.rad_per_deg | |||||
``` | |||||
The `callback_direction` function converts the `/sound_direction` reading from degrees to radians, flipping the sign so the resulting value matches the base's rotation convention.
```python | |||||
if self.step_size == 'small': | |||||
inc = {'rad': self.small_rad, 'translate': self.small_translate} | |||||
if self.step_size == 'medium': | |||||
inc = {'rad': self.medium_rad, 'translate': self.medium_translate} | |||||
if self.step_size == 'big': | |||||
inc = {'rad': self.big_rad, 'translate': self.big_translate} | |||||
return inc | |||||
``` | |||||
The `get_inc()` function returns the increment size for translational and rotational base motion in the dictionary `inc`. The increment size is contingent on the `self.step_size` string value.
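For instance, with the default `'medium'` step size, `get_inc()` returns the following dictionary (values follow from the step sizes defined in the constructor):
```python
import math

# step_size == 'medium': 10 degrees converted to radians, 0.05 m translation
inc = {'rad': math.pi / 180.0 * 10.0, 'translate': 0.05}
print(inc)  # {'rad': 0.17453292519943295, 'translate': 0.05}
```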
```python | |||||
command = None | |||||
if self.voice_command == 'forward': | |||||
command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']} | |||||
if self.voice_command == 'back': | |||||
command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']} | |||||
if self.voice_command == 'left': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']} | |||||
if self.voice_command == 'right': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']} | |||||
if self.voice_command == 'stretch': | |||||
command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction} | |||||
``` | |||||
In the `get_command()` function, `command` is initialized as `None` and is then set to a dictionary storing `joint` and `inc` values. The contents of `command` depend on the `self.voice_command` string value.
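As an illustration, saying "forward" with the medium step size would produce this dictionary, which `send_command` later unpacks into a trajectory goal:
```python
command = {'joint': 'translate_mobile_base', 'inc': 0.05}
```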
```python | |||||
if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"): | |||||
self.step_size = self.voice_command | |||||
self.node.get_logger().info('Step size = {0}'.format(self.step_size)) | |||||
``` | |||||
Based on the `self.voice_command` value set the step size for the increments. | |||||
```python | |||||
if self.voice_command == 'quit': | |||||
self.node.get_logger().info("done") | |||||
sys.exit(0) | |||||
``` | |||||
If `self.voice_command` is equal to `quit`, log a final message and exit the Python interpreter, which closes the node.
```python | |||||
class VoiceTeleopNode(hm.HelloNode): | |||||
def __init__(self): | |||||
hm.HelloNode.__init__(self) | |||||
self.rate = 10.0 | |||||
self.joint_state = None | |||||
hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False) | |||||
self.speech = GetVoiceCommands(self) | |||||
``` | |||||
`VoiceTeleopNode` inherits from `hm.HelloNode`, declares an object of the `GetVoiceCommands` class, and sends joint trajectory commands. The constructor calls `hm.HelloNode.main()` to initialize the node.
```python | |||||
def send_command(self, command): | |||||
joint_state = self.joint_state | |||||
if (joint_state is not None) and (command is not None): | |||||
point = JointTrajectoryPoint() | |||||
point.time_from_start = Duration(seconds=0).to_msg() | |||||
``` | |||||
The `send_command` function stores the joint state message and uses a conditional statement to send joint trajectory goals. Also, assign `point` as a `JointTrajectoryPoint` message type. | |||||
```python | |||||
trajectory_goal = FollowJointTrajectory.Goal() | |||||
trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg() | |||||
``` | |||||
Assign `trajectory_goal` as a `FollowJointTrajectory.Goal` message type. | |||||
```python | |||||
joint_name = command['joint'] | |||||
trajectory_goal.trajectory.joint_names = [joint_name] | |||||
``` | |||||
Extract the joint name from the command dictionary. | |||||
```python | |||||
inc = command['inc'] | |||||
self.get_logger().info('inc = {0}'.format(inc)) | |||||
new_value = inc | |||||
``` | |||||
Extract the increment type from the command dictionary. | |||||
```python | |||||
point.positions = [new_value] | |||||
trajectory_goal.trajectory.points = [point] | |||||
``` | |||||
Assign the new value position to the trajectory goal message type. | |||||
```python | |||||
self.trajectory_client.send_goal(trajectory_goal) | |||||
self.get_logger().info('Done sending command.') | |||||
``` | |||||
Make the action call and send the goal of the new joint position. | |||||
```python | |||||
self.speech.print_commands() | |||||
``` | |||||
Reprint the voice command menu after the trajectory goal is sent. | |||||
```python | |||||
    def main(self):
        self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
        rate = self.create_rate(self.rate)
        self.speech.print_commands()
        self.sleep = self.create_timer(1/self.rate, self.timer_get_command)
``` | |||||
The main function creates the joint state subscriber, prints the voice command menu, and starts a timer that calls `timer_get_command` at the rate we set in the constructor, polling for new voice commands and sending them as joint trajectory goals.
```python | |||||
try: | |||||
        # rclpy.init() is called inside hm.HelloNode.main(), so it is not needed here
node = VoiceTeleopNode() | |||||
node.main() | |||||
node.new_thread.join() | |||||
except KeyboardInterrupt: | |||||
node.get_logger().info('interrupt received, so shutting down') | |||||
node.destroy_node() | |||||
rclpy.shutdown() | |||||
``` | |||||
Declare a `VoiceTeleopNode` object and execute the `main()` method. A `KeyboardInterrupt` (`Ctrl` + `c`) triggers a clean shutdown of the node.
# ReSpeaker Microphone Array | |||||
For this tutorial, we will get a high-level view of how to use Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/). | |||||
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/> | |||||
</p> | |||||
## Stretch Body Package | |||||
In this section we will use command line tools in the [Stretch_Body](https://github.com/hello-robot/stretch_body) package, a low-level Python API for Stretch's hardware, to directly interact with the ReSpeaker. | |||||
Begin by typing the following command in a new terminal. | |||||
```{.bash .shell-prompt} | |||||
stretch_respeaker_test.py | |||||
``` | |||||
The following will be displayed in your terminal: | |||||
```{.bash .no-copy} | |||||
For use with S T R E T C H (TM) RESEARCH EDITION from Hello Robot Inc. | |||||
* waiting for audio... | |||||
* recording 3 seconds | |||||
* done | |||||
* playing audio | |||||
* done | |||||
``` | |||||
The ReSpeaker Mic Array will wait until it hears audio loud enough to trigger its recording feature. Stretch will record audio for 3 seconds and then replay it through its speakers. This command-line tool is a good way to check that the hardware is working correctly.
To stop the Python script, type `Ctrl` + `c` in the terminal.
## ReSpeaker_ROS Package | |||||
A [ROS package for the ReSpeaker](https://index.ros.org/p/respeaker_ros/#melodic) is utilized for this section. | |||||
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt} | |||||
ros2 launch respeaker_ros2 respeaker.launch.py | |||||
``` | |||||
This will bring up the necessary nodes that will allow the ReSpeaker to implement a voice and sound interface with the robot. | |||||
## ReSpeaker Topics | |||||
Below are commands you can run to see the ReSpeaker results.
```{.bash .shell-prompt} | |||||
ros2 topic echo /sound_direction | |||||
ros2 topic echo /sound_localization | |||||
ros2 topic echo /is_speeching | |||||
ros2 topic echo /audio | |||||
ros2 topic echo /speech_audio | |||||
ros2 topic echo /speech_to_text | |||||
``` | |||||
There's also another topic called `/status_led`. With it you can change the color of the ReSpeaker's LEDs by publishing the desired color with `ros2 topic pub`. We will explore this topic in the next tutorial.
You can also set various parameters at runtime through `rqt_reconfigure` by running the following command in a new terminal.
```{.bash .shell-prompt} | |||||
ros2 run rqt_reconfigure rqt_reconfigure
```
# ReSpeaker Microphone Array Topics | |||||
In this tutorial we will look at the topics in more detail and get an idea of what the ReSpeaker can do. If you just landed here, it might be a good idea to first review the previous tutorial, which covers the basics of the ReSpeaker and the package used.
## ReSpeaker Topics | |||||
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt} | |||||
ros2 launch respeaker_ros2 respeaker.launch.py
``` | |||||
This will bring up the necessary nodes that allow the ReSpeaker to implement a voice and sound interface with the robot. To see the topics that are available, run `ros2 topic list -t` and look for the ones we are interested in.
These are the commands you can run to see the ReSpeaker results.
```{.bash .shell-prompt} | |||||
ros2 topic echo /sound_direction # Result of Direction (in degrees) of Audio
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values) | |||||
ros2 topic echo /is_speeching # Result of Voice Activity Detector | |||||
ros2 topic echo /audio # Raw audio data | |||||
ros2 topic echo /speech_audio # Raw audio data when there is speech | |||||
ros2 topic echo /speech_to_text # Voice recognition | |||||
ros2 topic pub /status_led ... # Modify LED color | |||||
``` | |||||
Let's go one by one and see what to expect from each topic. The first is the `sound_direction` topic; in the terminal, execute the command that you learned earlier:
```{.bash .shell-prompt} | |||||
ros2 topic echo /sound_direction # Result of Direction (in degrees) of Audio
``` | |||||
This will give you the direction of the sound detected by the ReSpeaker, in degrees:
```{.bash .no-copy} | |||||
data: 21 | |||||
--- | |||||
data: 138 | |||||
--- | |||||
data: -114 | |||||
--- | |||||
data: -65 | |||||
--- | |||||
``` | |||||
The Direction of Arrival (DOA) for the ReSpeaker ranges from -180 to 180 degrees. To see how this maps onto Stretch, refer to this DOA diagram:
<p align="center"> | |||||
<img height=500 src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/humble/images/respeaker_doa_up.png"/> | |||||
</p> | |||||
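The values published on `/sound_direction` are integer degrees. To reuse one as a base rotation goal you would convert it to radians; the sign flip below mirrors what the voice teleoperation example does in its `callback_direction`:
```python
import math

sound_direction_deg = 138                     # sample reading from the topic
rad = sound_direction_deg * -math.pi / 180.0  # convert to radians and flip the sign
print(rad)  # -2.408...
```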
The next topic is `sound_localization`. This is similar to the `sound_direction` topic, but the result is expressed as a pose (quaternion values). Try it out by executing the command:
```{.bash .shell-prompt} | |||||
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values) | |||||
``` | |||||
You will see output like this in your terminal:
```{.bash .no-copy} | |||||
--- | |||||
header: | |||||
stamp: | |||||
sec: 1695325677 | |||||
nanosec: 882383094 | |||||
frame_id: respeaker_base | |||||
pose: | |||||
position: | |||||
x: -0.0 | |||||
y: 0.0 | |||||
z: 0.0 | |||||
orientation: | |||||
x: 0.0 | |||||
y: 0.0 | |||||
z: 0.43051109680829525 | |||||
w: 0.9025852843498605 | |||||
--- | |||||
``` | |||||
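Since this pose encodes a pure rotation about the z-axis, you can recover the yaw angle from the quaternion with `yaw = 2 * atan2(z, w)`. Using the sample values above:
```python
import math

z, w = 0.43051109680829525, 0.9025852843498605
yaw = 2.0 * math.atan2(z, w)
print(math.degrees(yaw))  # roughly 51 degrees
```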
The next one on the list is the `is_speeching` topic, which reports the result of the voice activity detector. Let's try it out:
```{.bash .shell-prompt} | |||||
ros2 topic echo /is_speeching # Result of Voice Activity Detector | |||||
``` | |||||
The result is `true` or `false` in the data field; be careful, though, as loud non-speech sounds can also register as `true`.
```{.bash .no-copy} | |||||
data: false | |||||
--- | |||||
data: true | |||||
--- | |||||
data: false | |||||
--- | |||||
``` | |||||
The `audio` topic outputs the raw audio data. If you want to see what this looks like, execute the command:
```{.bash .shell-prompt} | |||||
ros2 topic echo /audio # Raw audio data | |||||
``` | |||||
Expect a lot of data from this topic; the output will look like this:
```{.bash .no-copy} | |||||
--- | |||||
data: | |||||
- 229 | |||||
- 0 | |||||
- 135 | |||||
- 0 | |||||
- 225 | |||||
- 0 | |||||
- 149 | |||||
- 0 | |||||
- 94 | |||||
- 0 | |||||
- 15 | |||||
``` | |||||
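The byte stream is raw PCM audio. Assuming 16-bit little-endian samples (the ReSpeaker's usual format; this is an assumption, as the topic itself does not advertise an encoding), the byte pairs above decode like this:
```python
import struct

raw = bytes([229, 0, 135, 0, 225, 0, 149, 0])  # first bytes from the echo above
samples = struct.unpack('<4h', raw)            # assumed int16, little-endian
print(samples)  # (229, 135, 225, 149)
```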
For the `speech_audio` topic you can expect the same kind of result as the `audio` topic, but this time the raw data is published only when there is speech. Execute the next command and speak near the microphone array:
```{.bash .shell-prompt} | |||||
ros2 topic echo /speech_audio # Raw audio data when there is speech | |||||
``` | |||||
If you guessed that this is almost the same as the `audio` topic, just published only while you are talking, you guessed right; the result looks the same as before.
```{.bash .no-copy} | |||||
--- | |||||
data: | |||||
- 17 | |||||
- 254 | |||||
- 70 | |||||
- 254 | |||||
``` | |||||
Moving on to the `speech_to_text` topic: say a small sentence and it will output what you said. In the terminal, execute the next command and speak near the microphone array again:
```{.bash .shell-prompt} | |||||
ros2 topic echo /speech_to_text # Voice recognition | |||||
``` | |||||
In this instance, "hello robot" was said. The following will be displayed in your terminal: | |||||
```{.bash .no-copy} | |||||
transcript: | |||||
- hello robot | |||||
confidence: | |||||
- ###### | |||||
--- | |||||
``` | |||||
And for the final topic, `status_led`: with this you can set up custom LED patterns and effects. There are three ways to do it. The first is with `rqt_publisher`; in the terminal, input:
```{.bash .shell-prompt} | |||||
ros2 run rqt_publisher rqt_publisher | |||||
``` | |||||
The rqt_publisher window will open. There you need to add the topic manually: search for the `/status_led` topic, then click the plus (add new publisher) button and the topic will be added. You can then adjust the RGBA values, each between 0 and 1. Try it with the next example:
<p align="center"> | |||||
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/humble/images/status_led_rqt.png"/> | |||||
</p> | |||||
You will see a purple light coming from the ReSpeaker; you can change the rate and color if you want.
The next way is to do it from the terminal. Let's try the same values we used before; input this command:
```{.bash .shell-prompt}
ros2 topic pub /status_led std_msgs/msg/ColorRGBA "r: 1.0 | |||||
g: 0.0 | |||||
b: 1.0 | |||||
a: 1.0" | |||||
``` | |||||
And you can see that we have the same result as earlier, good job! | |||||
And the final way is with Python code. Here you can modify the lights just as we did before, but now you can create your own color patterns. Let's try it so you can see for yourself; input in the terminal:
```{.bash .shell-prompt} | |||||
cd ~/ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 led_color_change.py | |||||
``` | |||||
With this we can change the colors as well, but the difference is that we can create our own patterns. The [ReSpeaker Documentation](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/#control-the-leds) has more options to customize and control the LEDs.
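The actual `led_color_change.py` script may differ, but a minimal sketch of such a node, assuming only the `/status_led` topic and the `std_msgs/msg/ColorRGBA` message type used above, could look like this:
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from std_msgs.msg import ColorRGBA

class LedColorChange(Node):
    """A hypothetical sketch that cycles the ReSpeaker LEDs through a pattern."""
    def __init__(self):
        super().__init__('led_color_change')
        self.pub = self.create_publisher(ColorRGBA, '/status_led', 1)
        # A simple repeating pattern: red, green, blue, purple
        self.colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
        self.index = 0
        self.timer = self.create_timer(1.0, self.change_color)

    def change_color(self):
        r, g, b = self.colors[self.index]
        self.pub.publish(ColorRGBA(r=r, g=g, b=b, a=1.0))
        self.index = (self.index + 1) % len(self.colors)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(LedColorChange())

if __name__ == '__main__':
    main()
```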