
Added ReSpeaker tutorials

hello-jesus committed 1 year ago · branch pull/10/head · parent commit b0e5d384dd
4 changed files with 850 additions and 0 deletions

1. ros2/example_8.md (+150, -0)
2. ros2/example_9.md (+470, -0)
3. ros2/respeaker_mic_array.md (+61, -0)
4. ros2/respeaker_topics.md (+169, -0)

ros2/example_8.md (+150, -0)

@@ -0,0 +1,150 @@
# Example 8
This example will showcase how to save the interpreted speech from Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/) to a text file.
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
Then run the [speech_text.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/speech_text.py) node. In a new terminal, execute:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 speech_text.py
```
The ReSpeaker will be listening and will start to interpret speech and save the transcript to a text file. To shut down the node, type `Ctrl` + `c` in the terminal.
### The Code
```python
#!/usr/bin/env python3

# Import modules
import rclpy
import os
from rclpy.node import Node

# Import SpeechRecognitionCandidates from the speech_recognition_msgs package
from speech_recognition_msgs.msg import SpeechRecognitionCandidates

class SpeechText(Node):
    def __init__(self):
        super().__init__('stretch_speech_text')

        # Initialize subscriber
        self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1)

        # Create path for saving the text file to the stored data folder
        self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'

        # Create log message
        self.get_logger().info("Listening to speech")

    def callback(self, msg):
        # Take all items in the iterable list and join them into a single string
        transcript = ' '.join(map(str, msg.transcript))

        # Define the file name and create a complete path name
        file_name = 'speech.txt'
        completeName = os.path.join(self.save_path, file_name)

        # Append the transcript to the end of the file
        with open(completeName, "a+") as file_object:
            file_object.write("\n")
            file_object.write(transcript)

def main(args=None):
    # Initialize the node and name it speech_text
    rclpy.init(args=args)

    # Instantiate the SpeechText class
    speech_txt = SpeechText()

    # Give control to ROS. This will allow the callback to be called whenever new
    # messages come in. If we don't put this line in, then the node will not work,
    # and ROS will not process any messages
    rclpy.spin(speech_txt)

if __name__ == '__main__':
    main()
```
### The Code Explained
Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](https://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import rclpy
import os
from rclpy.node import Node
```
You need to import `rclpy` if you are writing a ROS node. The `os` module is imported to build the output file path.
```python
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
```
Import `SpeechRecognitionCandidates` from `speech_recognition_msgs.msg` so that we can receive the interpreted speech.
```python
def __init__(self):
    super().__init__('stretch_speech_text')
    self.sub = self.create_subscription(SpeechRecognitionCandidates, "speech_to_text", self.callback, 1)
```
Set up a subscriber. We're going to subscribe to the topic `speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS passes it to the `callback` function automatically.
```python
self.save_path = '/home/hello-robot/ament_ws/src/stretch_tutorials/stored_data'
```
Define the directory to save the text file.
```python
transcript = ' '.join(map(str,msg.transcript))
```
Take all items in the iterable list and join them into a single string named `transcript`.
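For instance, here is what the join does on its own (a standalone snippet, with a stand-in for `msg.transcript`):
```python
# Stand-in for msg.transcript: a list of recognized words
words = ['hello', 'robot']

# Convert each item to a string and join them with spaces
transcript = ' '.join(map(str, words))
print(transcript)  # -> hello robot
```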
```python
file_name = 'speech.txt'
completeName = os.path.join(self.save_path, file_name)
```
Define the file name and create the complete file path.
```python
with open(completeName, "a+") as file_object:
    file_object.write("\n")
    file_object.write(transcript)
```
Append the transcript to the text file.
```python
def main(args=None):
    rclpy.init(args=args)
    speech_txt = SpeechText()
```
The `rclpy.init()` method initializes ROS communications for the node. We then instantiate the `SpeechText()` class, which creates a node named `stretch_speech_text`.
!!! note
The name must be a base name, i.e. it cannot contain any slashes "/".
```python
rclpy.spin(speech_txt)
```
Give control to ROS with `rclpy.spin()`. This will allow the callback to be called whenever new messages come in. If we don't put this line in, then the node will not work, and ROS will not process any messages.
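The tutorial code relies on `Ctrl` + `c` to stop the node. If you want a tidier exit, a common rclpy pattern (a sketch, not part of the tutorial file) is to catch the interrupt around the spin:
```python
# Sketch: wrap the spin so the node is destroyed cleanly on Ctrl+c
def main(args=None):
    rclpy.init(args=args)
    speech_txt = SpeechText()
    try:
        rclpy.spin(speech_txt)
    except KeyboardInterrupt:
        pass
    speech_txt.destroy_node()
    rclpy.try_shutdown()
```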

ros2/example_9.md (+470, -0)

@@ -0,0 +1,470 @@
# Example 9
This example aims to combine the [ReSpeaker Microphone Array](respeaker_mic_array.md) and [Follow Joint Trajectory](follow_joint_trajectory.md) tutorials to voice teleoperate the mobile base of the Stretch robot.
Begin by running the following command in a new terminal.
```{.bash .shell-prompt}
ros2 launch stretch_core stretch_driver.launch.py
```
Then run the `respeaker.launch.py` file. In a new terminal, execute:
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
Then run the [voice_teleoperation_base.py](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/voice_teleoperation_base.py) node in a new terminal.
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 voice_teleoperation_base.py
```
A menu of voice commands is printed in the third terminal. You can reference the menu layout below.
```{.bash .no-copy}
------------ VOICE TELEOP MENU ------------
VOICE COMMANDS
"forward": BASE FORWARD
"back" : BASE BACK
"left" : BASE ROTATE LEFT
"right" : BASE ROTATE RIGHT
"stretch": BASE ROTATES TOWARDS SOUND
STEP SIZE
"big" : BIG
"medium" : MEDIUM
"small" : SMALL
"quit" : QUIT AND CLOSE NODE
-------------------------------------------
```
To stop the node from sending twist messages, type `Ctrl` + `c` or say "**quit**".
### The Code
```python
#!/usr/bin/env python3

# Import modules
import math
import rclpy
import sys
from rclpy.duration import Duration

# Import JointState from the sensor_msgs package so we can subscribe to
# joint state messages
from sensor_msgs.msg import JointState

# Import Int32 message type from the std_msgs package
from std_msgs.msg import Int32

# Import the FollowJointTrajectory from the control_msgs.action package to
# control the Stretch robot
from control_msgs.action import FollowJointTrajectory

# Import JointTrajectoryPoint from the trajectory_msgs package to define
# robot trajectories
from trajectory_msgs.msg import JointTrajectoryPoint

# Import hello_misc script for handling trajectory goals with an action client
import hello_helpers.hello_misc as hm

# Import SpeechRecognitionCandidates from the speech_recognition_msgs package
from speech_recognition_msgs.msg import SpeechRecognitionCandidates

class GetVoiceCommands:
    def __init__(self, node):
        self.node = node

        # Set step size as medium by default
        self.step_size = 'medium'
        self.rad_per_deg = math.pi/180.0

        # Small step size parameters
        self.small_deg = 5.0
        self.small_rad = self.rad_per_deg * self.small_deg
        self.small_translate = 0.025

        # Medium step size parameters
        self.medium_deg = 10.0
        self.medium_rad = self.rad_per_deg * self.medium_deg
        self.medium_translate = 0.05

        # Big step size parameters
        self.big_deg = 20.0
        self.big_rad = self.rad_per_deg * self.big_deg
        self.big_translate = 0.1

        # Initialize the voice command
        self.voice_command = None

        # Initialize the sound direction
        self.sound_direction = 0

        # Initialize subscribers
        self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1)
        self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1)

    def callback_direction(self, msg):
        self.sound_direction = msg.data * -self.rad_per_deg

    def callback_speech(self, msg):
        self.voice_command = ' '.join(map(str, msg.transcript))

    def get_inc(self):
        if self.step_size == 'small':
            inc = {'rad': self.small_rad, 'translate': self.small_translate}
        if self.step_size == 'medium':
            inc = {'rad': self.medium_rad, 'translate': self.medium_translate}
        if self.step_size == 'big':
            inc = {'rad': self.big_rad, 'translate': self.big_translate}
        return inc

    def print_commands(self):
        """
        A function that prints the voice teleoperation menu.
        :param self: The self reference.
        """
        print(' ')
        print('------------ VOICE TELEOP MENU ------------')
        print(' ')
        print('               VOICE COMMANDS              ')
        print(' "forward": BASE FORWARD                   ')
        print(' "back"   : BASE BACK                      ')
        print(' "left"   : BASE ROTATE LEFT               ')
        print(' "right"  : BASE ROTATE RIGHT              ')
        print(' "stretch": BASE ROTATES TOWARDS SOUND     ')
        print(' ')
        print('                 STEP SIZE                 ')
        print(' "big"    : BIG                            ')
        print(' "medium" : MEDIUM                         ')
        print(' "small"  : SMALL                          ')
        print(' ')
        print(' ')
        print(' "quit"   : QUIT AND CLOSE NODE            ')
        print(' ')
        print('-------------------------------------------')

    def get_command(self):
        command = None

        # Move base forward command
        if self.voice_command == 'forward':
            command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']}

        # Move base back command
        if self.voice_command == 'back':
            command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']}

        # Rotate base left command
        if self.voice_command == 'left':
            command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']}

        # Rotate base right command
        if self.voice_command == 'right':
            command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']}

        # Move base to sound direction command
        if self.voice_command == 'stretch':
            command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction}

        # Set the step size of translational and rotational base motions
        if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"):
            self.step_size = self.voice_command
            self.node.get_logger().info('Step size = {0}'.format(self.step_size))

        if self.voice_command == 'quit':
            # Log a final message before shutting down
            self.node.get_logger().info("done")

            # Exit the Python interpreter
            sys.exit(0)

        # Reset voice command to None
        self.voice_command = None

        # Return the updated command
        return command

class VoiceTeleopNode(hm.HelloNode):
    def __init__(self):
        hm.HelloNode.__init__(self)
        self.rate = 10.0
        self.joint_state = None
        hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False)
        self.speech = GetVoiceCommands(self)

    def joint_states_callback(self, msg):
        self.joint_state = msg

    def send_command(self, command):
        joint_state = self.joint_state

        # Conditional statement to send joint trajectory goals
        if (joint_state is not None) and (command is not None):
            # Assign point as a JointTrajectoryPoint message type
            point = JointTrajectoryPoint()
            point.time_from_start = Duration(seconds=0).to_msg()

            # Assign trajectory_goal as a FollowJointTrajectory.Goal message type
            trajectory_goal = FollowJointTrajectory.Goal()
            trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg()

            # Extract the joint name from the command dictionary
            joint_name = command['joint']
            trajectory_goal.trajectory.joint_names = [joint_name]

            # Extract the increment type from the command dictionary
            inc = command['inc']
            self.get_logger().info('inc = {0}'.format(inc))
            new_value = inc

            # Assign the new_value position to the trajectory goal message type
            point.positions = [new_value]
            trajectory_goal.trajectory.points = [point]
            trajectory_goal.trajectory.header.stamp = self.get_clock().now().to_msg()
            self.get_logger().info('joint_name = {0}, trajectory_goal = {1}'.format(joint_name, trajectory_goal))

            # Make the action call and send goal of the new joint position
            self.trajectory_client.send_goal(trajectory_goal)
            self.get_logger().info('Done sending command.')

            # Reprint the voice command menu
            self.speech.print_commands()

    def timer_get_command(self):
        # Get voice command
        command = self.speech.get_command()

        # Send voice command for joint trajectory goals
        self.send_command(command)

    def main(self):
        self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
        rate = self.create_rate(self.rate)
        self.speech.print_commands()
        self.sleep = self.create_timer(1/self.rate, self.timer_get_command)

def main(args=None):
    try:
        #rclpy.init(args=args)
        node = VoiceTeleopNode()
        node.main()
        node.new_thread.join()
    except KeyboardInterrupt:
        node.get_logger().info('interrupt received, so shutting down')
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```
### The Code Explained
This code is similar to that of the [multipoint_command](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/multipoint_command.py) and [joint_state_printer](https://github.com/hello-robot/stretch_tutorials/blob/humble/stretch_ros_tutorials/joint_state_printer.py) nodes. Therefore, this example will only highlight the sections that differ from those tutorials. Now let's break the code down.
```python
#!/usr/bin/env python3
```
Every Python ROS [Node](http://docs.ros.org/en/humble/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Nodes/Understanding-ROS2-Nodes.html) will have this declaration at the top. The first line makes sure your script is executed as a Python3 script.
```python
import math
import rclpy
import sys
from rclpy.duration import Duration
from sensor_msgs.msg import JointState
from std_msgs.msg import Int32
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
import hello_helpers.hello_misc as hm
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
```
You need to import `rclpy` if you are writing a ROS node. Import `FollowJointTrajectory` from the `control_msgs.action` package to control the Stretch robot. Import `JointTrajectoryPoint` from the `trajectory_msgs` package to define robot trajectories. The `hello_helpers` package consists of a module that provides various Python scripts used across stretch_ros; in this instance, we are importing the `hello_misc` script. Import `JointState` from the `sensor_msgs.msg` package so that we can subscribe to joint state messages.
```python
class GetVoiceCommands:
```
Create a class that subscribes to the `/speech_to_text` recognition messages, prints a voice command menu, and defines the step sizes for translational and rotational mobile base motion.
```python
self.node = node
self.step_size = 'medium'
self.rad_per_deg = math.pi/180.0
```
Set the default step size as medium and create a float value, `self.rad_per_deg`, to convert degrees to radians.
```python
self.small_deg = 5.0
self.small_rad = self.rad_per_deg * self.small_deg
self.small_translate = 0.025
self.medium_deg = 10.0
self.medium_rad = self.rad_per_deg * self.medium_deg
self.medium_translate = 0.05
self.big_deg = 20.0
self.big_rad = self.rad_per_deg * self.big_deg
self.big_translate = 0.1
```
Define the three rotation and translation step sizes.
```python
self.voice_command = None
self.sound_direction = 0
self.speech_to_text_sub = self.node.create_subscription(SpeechRecognitionCandidates, "/speech_to_text", self.callback_speech, 1)
self.sound_direction_sub = self.node.create_subscription(Int32, "/sound_direction", self.callback_direction, 1)
```
Initialize the voice command and sound direction to values that will not result in moving the base.
Set up two subscribers. The first subscribes to the topic `/speech_to_text`, looking for `SpeechRecognitionCandidates` messages. When a message comes in, ROS passes it to the `callback_speech` function automatically. The second subscribes to the `/sound_direction` topic and passes incoming messages to the `callback_direction` function.
```python
def callback_direction(self, msg):
    self.sound_direction = msg.data * -self.rad_per_deg
```
The `callback_direction` function converts the incoming sound direction from degrees to radians and flips its sign to match the rotation direction of the mobile base.
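As a quick check of the math (standalone arithmetic, using a sample reading from the `/sound_direction` topic):
```python
import math

rad_per_deg = math.pi / 180.0

# Sample /sound_direction reading, in degrees
msg_data = 21

# Same conversion as callback_direction: flip the sign and convert to radians
sound_direction = msg_data * -rad_per_deg
print(round(sound_direction, 4))  # -> -0.3665
```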
```python
if self.step_size == 'small':
    inc = {'rad': self.small_rad, 'translate': self.small_translate}
if self.step_size == 'medium':
    inc = {'rad': self.medium_rad, 'translate': self.medium_translate}
if self.step_size == 'big':
    inc = {'rad': self.big_rad, 'translate': self.big_translate}
return inc
```
The `get_inc()` function returns the increment size for translational and rotational base motion in `inc`. The increment size is contingent on the `self.step_size` string value.
```python
command = None
if self.voice_command == 'forward':
    command = {'joint': 'translate_mobile_base', 'inc': self.get_inc()['translate']}
if self.voice_command == 'back':
    command = {'joint': 'translate_mobile_base', 'inc': -self.get_inc()['translate']}
if self.voice_command == 'left':
    command = {'joint': 'rotate_mobile_base', 'inc': self.get_inc()['rad']}
if self.voice_command == 'right':
    command = {'joint': 'rotate_mobile_base', 'inc': -self.get_inc()['rad']}
if self.voice_command == 'stretch':
    command = {'joint': 'rotate_mobile_base', 'inc': self.sound_direction}
```
In the `get_command()` function, `command` is initialized as `None` and then set to a dictionary storing the `joint` and `inc` values. The dictionary contents depend on the `self.voice_command` string value.
```python
if (self.voice_command == "small") or (self.voice_command == "medium") or (self.voice_command == "big"):
    self.step_size = self.voice_command
    self.node.get_logger().info('Step size = {0}'.format(self.step_size))
```
Based on the `self.voice_command` value, set the step size for the increments.
```python
if self.voice_command == 'quit':
    self.node.get_logger().info("done")
    sys.exit(0)
```
If `self.voice_command` is equal to `quit`, log a final message and exit the Python interpreter, shutting down the node.
```python
class VoiceTeleopNode(hm.HelloNode):
    def __init__(self):
        hm.HelloNode.__init__(self)
        self.rate = 10.0
        self.joint_state = None
        hm.HelloNode.main(self, 'voice_teleop', 'voice_teleop', wait_for_first_pointcloud=False)
        self.speech = GetVoiceCommands(self)
```
The `VoiceTeleopNode` class inherits from the `HelloNode` class in `hm`, declares an object of the `GetVoiceCommands` class, and sends joint trajectory commands. Calling `hm.HelloNode.main()` instantiates the underlying node.
```python
def send_command(self, command):
    joint_state = self.joint_state
    if (joint_state is not None) and (command is not None):
        point = JointTrajectoryPoint()
        point.time_from_start = Duration(seconds=0).to_msg()
```
The `send_command` function stores the joint state message and uses a conditional statement to send joint trajectory goals. Also, assign `point` as a `JointTrajectoryPoint` message type.
```python
trajectory_goal = FollowJointTrajectory.Goal()
trajectory_goal.goal_time_tolerance = Duration(seconds=0).to_msg()
```
Assign `trajectory_goal` as a `FollowJointTrajectory.Goal` message type.
```python
joint_name = command['joint']
trajectory_goal.trajectory.joint_names = [joint_name]
```
Extract the joint name from the command dictionary.
```python
inc = command['inc']
self.get_logger().info('inc = {0}'.format(inc))
new_value = inc
```
Extract the increment type from the command dictionary.
```python
point.positions = [new_value]
trajectory_goal.trajectory.points = [point]
```
Assign the new value position to the trajectory goal message type.
```python
self.trajectory_client.send_goal(trajectory_goal)
self.get_logger().info('Done sending command.')
```
Make the action call and send the goal of the new joint position.
```python
self.speech.print_commands()
```
Reprint the voice command menu after the trajectory goal is sent.
```python
def main(self):
    self.create_subscription(JointState, '/stretch/joint_states', self.joint_states_callback, 1)
    rate = self.create_rate(self.rate)
    self.speech.print_commands()
```
The `main()` method initializes the joint state subscriber, creates a rate object from the rate we set earlier, and prints the voice command menu. In the full code, a timer then calls `timer_get_command` at that rate.
```python
try:
    #rclpy.init(args=args)
    node = VoiceTeleopNode()
    node.main()
    node.new_thread.join()
except KeyboardInterrupt:
    node.get_logger().info('interrupt received, so shutting down')
    node.destroy_node()
    rclpy.shutdown()
```
Declare a `VoiceTeleopNode` object and execute its `main()` method, then join the node's spin thread. On a keyboard interrupt, destroy the node and shut down ROS.

ros2/respeaker_mic_array.md (+61, -0)

@@ -0,0 +1,61 @@
# ReSpeaker Microphone Array
For this tutorial, we will get a high-level view of how to use Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/).
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
## Stretch Body Package
In this section we will use command line tools in the [Stretch_Body](https://github.com/hello-robot/stretch_body) package, a low-level Python API for Stretch's hardware, to directly interact with the ReSpeaker.
Begin by typing the following command in a new terminal.
```{.bash .shell-prompt}
stretch_respeaker_test.py
```
The following will be displayed in your terminal:
```{.bash .no-copy}
For use with S T R E T C H (TM) RESEARCH EDITION from Hello Robot Inc.
* waiting for audio...
* recording 3 seconds
* done
* playing audio
* done
```
The ReSpeaker Mic Array will wait until it hears audio loud enough to trigger its recording feature. Stretch will record audio for 3 seconds and then replay it through its speakers. This command-line tool is a good way to check that the hardware is working correctly.
To stop the Python script, type `Ctrl` + `c` in the terminal.
## ReSpeaker_ROS Package
A [ROS package for the ReSpeaker](https://index.ros.org/p/respeaker_ros/#melodic) is utilized for this section.
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
This will bring up the necessary nodes that will allow the ReSpeaker to implement a voice and sound interface with the robot.
## ReSpeaker Topics
Below are executables you can run to see the ReSpeaker results.
```{.bash .shell-prompt}
ros2 topic echo /sound_direction
ros2 topic echo /sound_localization
ros2 topic echo /is_speeching
ros2 topic echo /audio
ros2 topic echo /speech_audio
ros2 topic echo /speech_to_text
```
There's also a `/status_led` topic that lets you change the color of the LEDs on the ReSpeaker; to use it, publish the desired color in the terminal with `ros2 topic pub`. We will explore these topics in the next tutorial.
You can also set various parameters via `dynamic_reconfigure` by running the following command in a new terminal.
```{.bash .shell-prompt}
ros2 run rqt_reconfigure rqt_reconfigure
```

ros2/respeaker_topics.md (+169, -0)

@@ -0,0 +1,169 @@
# ReSpeaker Microphone Array Topics
In this tutorial we will look at the topics in more detail and get an idea of what the ReSpeaker can do. If you just landed here, it might be a good idea to first review the previous tutorial, which covers the basics of the ReSpeaker and the package used.
## ReSpeaker Topics
Begin by running the `respeaker.launch.py` file in a terminal.
```{.bash .shell-prompt}
ros2 launch respeaker_ros2 respeaker.launch.py
```
This will bring up the necessary nodes that will allow the ReSpeaker to implement a voice and sound interface with the robot. To see the topics available, run `ros2 topic list -t` and look for the ones below.
These are the commands you can run to see the ReSpeaker results:
```{.bash .shell-prompt}
ros2 topic echo /sound_direction    # Direction of arrival of audio (in degrees)
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values)
ros2 topic echo /is_speeching # Result of Voice Activity Detector
ros2 topic echo /audio # Raw audio data
ros2 topic echo /speech_audio # Raw audio data when there is speech
ros2 topic echo /speech_to_text # Voice recognition
ros2 topic pub /status_led ... # Modify LED color
```
Let's go one by one and see what we can expect of each topic. The first is the `sound_direction` topic; in the terminal, execute the command that you learned earlier:
```{.bash .shell-prompt}
ros2 topic echo /sound_direction    # Direction of arrival of audio (in degrees)
```
This will give you the direction of the sound detected by the ReSpeaker, in degrees:
```{.bash .no-copy}
data: 21
---
data: 138
---
data: -114
---
data: -65
---
```
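If you would rather consume these readings in code than with `ros2 topic echo`, a minimal subscriber could look like this (a sketch; the node and class names are our own):
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32

class SoundDirectionListener(Node):
    def __init__(self):
        super().__init__('sound_direction_listener')
        # /sound_direction publishes the direction of arrival as a std_msgs/Int32
        self.sub = self.create_subscription(Int32, '/sound_direction', self.callback, 1)

    def callback(self, msg):
        self.get_logger().info('Sound direction: {0} degrees'.format(msg.data))

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(SoundDirectionListener())

if __name__ == '__main__':
    main()
```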
The Direction of Arrival (DOA) for the ReSpeaker goes from -180 to 180. To understand how this maps onto Stretch, see this DOA diagram:
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
The next topic is `sound_localization`. This is similar to the `sound_direction` topic, but the result is expressed as a pose (quaternion values). Try it out by executing the command:
```{.bash .shell-prompt}
ros2 topic echo /sound_localization # Result of Direction as Pose (Quaternion values)
```
You will see something like this in your terminal:
```{.bash .no-copy}
---
header:
stamp:
sec: 1695325677
nanosec: 882383094
frame_id: respeaker_base
pose:
position:
x: -0.0
y: 0.0
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.43051109680829525
w: 0.9025852843498605
---
```
The next one on the list is the `is_speeching` topic, which gives you the result of the voice activity detector. Let's try it out:
```{.bash .shell-prompt}
ros2 topic echo /is_speeching # Result of Voice Activity Detector
```
The result is `true` or `false` in the `data` field, but be careful: loud non-speech sounds can also register as `true`.
```{.bash .no-copy}
data: false
---
data: true
---
data: false
---
```
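One practical use is gating other processing on voice activity. A minimal sketch (the node name is our own) that logs when speech starts and stops:
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool

class SpeechGate(Node):
    def __init__(self):
        super().__init__('speech_gate')
        self.speaking = False
        self.sub = self.create_subscription(Bool, '/is_speeching', self.callback, 1)

    def callback(self, msg):
        # Only log on transitions, not on every message
        if msg.data != self.speaking:
            self.speaking = msg.data
            self.get_logger().info('speech started' if msg.data else 'speech stopped')

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(SpeechGate())

if __name__ == '__main__':
    main()
```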
The `audio` topic outputs all the raw audio data. If you want to see what this does, execute the command:
```{.bash .shell-prompt}
ros2 topic echo /audio # Raw audio data
```
Expect a lot of data from this topic. You will see output like this:
```{.bash .no-copy}
---
data:
- 229
- 0
- 135
- 0
- 225
- 0
- 149
- 0
- 94
- 0
- 15
```
For the `speech_audio` topic you can expect the same kind of result as the `audio` topic, except the raw data is published only when there is speech. Execute the next command and speak near the microphone array:
```{.bash .shell-prompt}
ros2 topic echo /speech_audio # Raw audio data when there is speech
```
As you might have guessed, the output looks the same as before, but data appears only while you are talking.
```{.bash .no-copy}
---
data:
- 17
- 254
- 70
- 254
```
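The `data` field is a flat list of bytes. Assuming the stream is 16-bit little-endian PCM (an assumption worth verifying against your ReSpeaker configuration), each pair of bytes is one audio sample:
```python
import struct

# Example bytes lifted from the /speech_audio output above
data = bytes([17, 254, 70, 254])

# Assumption: 16-bit little-endian PCM, so every 2 bytes is one sample
samples = struct.unpack('<' + 'h' * (len(data) // 2), data)
print(samples)  # -> (-495, -442)
```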
Moving on to the `speech_to_text` topic: with this you can say a short sentence and it will output what you said. In the terminal, execute the next command and speak near the microphone array again:
```{.bash .shell-prompt}
ros2 topic echo /speech_to_text # Voice recognition
```
In this instance, "hello robot" was said. The following will be displayed in your terminal:
```{.bash .no-copy}
transcript:
- hello robot
confidence:
- ######
---
```
And for the final topic, `status_led`, with which you can set up custom LED patterns and effects. There are three ways to do it. The first is using `rqt_publisher`; in the terminal, input:
```{.bash .shell-prompt}
ros2 run rqt_publisher rqt_publisher
```
The `rqt_publisher` window will open. Add the topic manually: search for the `/status_led` topic, then click the plus button (the "add new publisher" button) and the topic will be added. You can then set the RGBA values between 0 and 1. Try it with the next example:
<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/noetic/images/respeaker.jpg"/>
</p>
You will see a purple light coming from the ReSpeaker; you can change the rate and color if you want.
The next way is from the terminal. Let's try again with the same values we used before; input this command in the terminal:
```bash
ros2 topic pub /status_led std_msgs/msg/ColorRGBA "r: 1.0
g: 0.0
b: 1.0
a: 1.0"
```
You can see that we get the same result as earlier. Good job!
The final way is with Python code. Here you can modify the lights just as we did before, but now you can also create color patterns. Try it yourself; input in the terminal:
```{.bash .shell-prompt}
cd ament_ws/src/stretch_tutorials/stretch_ros_tutorials/
python3 led_color_change.py
```
With this we can change the colors as well, but the difference is that we are able to create our own patterns. The [ReSpeaker documentation](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/#control-the-leds) has more options to customize and control the LEDs.
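As a sketch of what such a pattern could look like (our own minimal node, not the tutorial's `led_color_change.py`; it cycles red, green, and blue once per second):
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from std_msgs.msg import ColorRGBA

class LedPattern(Node):
    def __init__(self):
        super().__init__('led_pattern')
        self.pub = self.create_publisher(ColorRGBA, '/status_led', 1)
        self.colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
        self.index = 0
        self.timer = self.create_timer(1.0, self.step)

    def step(self):
        # Publish the next color in the cycle
        r, g, b = self.colors[self.index]
        self.pub.publish(ColorRGBA(r=r, g=g, b=b, a=1.0))
        self.index = (self.index + 1) % len(self.colors)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(LedPattern())

if __name__ == '__main__':
    main()
```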
