diff --git a/README.md b/README.md
index 9442b46..bba8802 100644
--- a/README.md
+++ b/README.md
@@ -12,10 +12,10 @@ This repo provides instructions on installing and using code on the Stretch RE1
 7. [MoveIt! Basics](moveit_basics.md)
 8. [Follow Joint Trajectory Commands](follow_joint_trajectory.md)
 9. [Perception](perception.md)
-10. [FUNMAP](https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap)
-11. Microphone Array
-11. ROS testing
-12. Other Nav Stack Features
+10. [ReSpeaker Microphone Array](respeaker_microphone_array.md)
+11. [FUNMAP](https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap)
+12. ROS testing
+13. Other Nav Stack Features
 
 ## Other ROS Examples
 
@@ -27,4 +27,4 @@ To help get you get started on your software development, here are examples of n
 4. [Give Stretch a Balloon](example_4.md) - Create a "balloon" marker that goes where ever Stretch goes.
 5. [Print Joint States](example_5.md) - Print the joint states of Stretch.
 6. [Store Effort Values](example_6.md) - Print, store, and plot the effort values of the Stretch robot.
-7. [Capture Image](example_7.md) - Capture images from the RealSense camera data.
+7. [Capture Image](example_7.md) - Capture images from the RealSense camera data.
diff --git a/example_7.md b/example_7.md
index ab9a841..36c770d 100644
--- a/example_7.md
+++ b/example_7.md
@@ -209,7 +209,7 @@ and ROS will not process any messages.
 
 ## Edge Detection
 
-In this section we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges from an image and converted back as a ROS image to be visualized in RViz. Begin by running the following commands.
+In this section, we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges of an image and convert the result back to a ROS image to be visualized in RViz.
+Begin by running the following commands.
 
 ```bash
 # Terminal 4
@@ -291,7 +291,7 @@ Define lower and upper bounds of the Hysteresis Thresholds.
 ```python
 image = cv2.Canny(image, self.lower_thres, self.upper_thres)
 ```
-Run the Canny Edge function to detect edges from the cv2 image. Further details of the function can be found here: [Canny Edge detection](https://www.geeksforgeeks.org/python-opencv-canny-function/).
+Run the Canny Edge function to detect edges from the cv2 image.
 
 ```python
 image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough')
diff --git a/images/respeaker.jpg b/images/respeaker.jpg
new file mode 100644
index 0000000..fc0959b
Binary files /dev/null and b/images/respeaker.jpg differ
diff --git a/navigation_stack.md b/navigation_stack.md
index 39ccbbb..43336a3 100644
--- a/navigation_stack.md
+++ b/navigation_stack.md
@@ -10,7 +10,7 @@ roslaunch stretch_navigation mapping.launch
 
 Rviz will show the robot and the map that is being constructed. With the terminal open, use the instructions printed by the teleop package to teleoperate the robot around the room. Avoid sharp turns and revisit previously visited spots to form loop closures.
-
+
 
 In Rviz, once you see a map that has reconstructed the space well enough, you can run the following commands to save the map to `stretch_user/`.
 
@@ -53,7 +53,7 @@ roslaunch stretch_navigation mapping.launch teleop_type:=joystick
 ```
-
+
 
 ### Using ROS Remote Master
 
diff --git a/perception.md b/perception.md
index 87e2c1c..cb02fd6 100644
--- a/perception.md
+++ b/perception.md
@@ -61,3 +61,6 @@ The `DepthCloud` display is visualized in the main RViz window. This display tak
 
 ## Deep Perception
 Hello Robot also has a ROS package that uses deep learning models for various detection demos. A link to the package is provided: [stretch_deep_perception](https://github.com/hello-robot/stretch_ros/tree/master/stretch_deep_perception).
+
+
+**Next Tutorial:** [ReSpeaker Microphone Array](respeaker_microphone_array.md)
diff --git a/respeaker_microphone_array.md b/respeaker_microphone_array.md
new file mode 100644
index 0000000..fd3dc3f
--- /dev/null
+++ b/respeaker_microphone_array.md
@@ -0,0 +1,78 @@
+## ReSpeaker Microphone Array
+
+In this tutorial, we will go over, at a high level, how to use Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/).
+
+
+
+
+
+
+### Stretch Body Package
+In this section, we will use command-line tools from the [Stretch_Body](https://github.com/hello-robot/stretch_body) package, a low-level Python API for Stretch's hardware, to interact directly with the ReSpeaker.
+
+Begin by typing the following command in a new terminal.
+
+```bash
+stretch_respeaker_test.py
+```
+
+The following will be displayed in your terminal:
+
+```bash
+hello-robot@stretch-re1-1005:~$ stretch_respeaker_test.py
+For use with S T R E T C H (TM) RESEARCH EDITION from Hello Robot Inc.
+
+
+* waiting for audio...
+* recording 3 seconds
+* done
+* playing audio
+* done
+
+```
+
+The ReSpeaker Mic Array will wait until it hears audio loud enough to trigger its recording feature. Stretch will then record audio for 3 seconds and replay it through its speakers. This command-line tool is a quick way to verify that the hardware is working correctly.
+
+To stop the Python script, type **Ctrl** + **c** in the terminal.
+
+### ReSpeaker_ROS Package
+
+A [ROS package for the ReSpeaker](https://index.ros.org/p/respeaker_ros/#melodic) is utilized in this section of the tutorial.
+
+Begin by running the `sample_respeaker.launch` file in a terminal.
+
+```bash
+# Terminal 1
+roslaunch respeaker_ros sample_respeaker.launch
+```
+
+This will bring up the nodes that allow the ReSpeaker to provide a voice and sound interface with the robot.
+
+Below are topics you can echo to see the ReSpeaker's results.
+
+```bash
+rostopic echo /sound_direction      # Direction of the audio source (in radians)
+rostopic echo /sound_localization   # Direction of the audio source as a pose (quaternion values)
+rostopic echo /is_speeching         # Output of the voice activity detector
+rostopic echo /audio                # Raw audio data
+rostopic echo /speech_audio         # Raw audio data when speech is detected
+rostopic echo /speech_to_text       # Voice recognition
+```
+
+For example, run the `speech_to_text` executable and speak near the microphone array.
+In this instance, "hello robot" was said.
+
+```bash
+# Terminal 2
+hello-robot@stretch-re1-1005:~$ rostopic echo /speech_to_text
+transcript: 
+  - hello robot
+confidence: []
+---
+```
+
+You can also set various parameters via `dynamic_reconfigure` by running the following command in a new terminal.
+
+```bash
+# Terminal 3
+rosrun rqt_reconfigure rqt_reconfigure
+```
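
A side note on the two direction topics added in this tutorial: `/sound_direction` reports the direction of arrival as a single planar angle in radians, while `/sound_localization` republishes that same direction as a pose whose orientation is a quaternion. The conversion between the two is an ordinary yaw-to-quaternion mapping about the z-axis. The helper below is a hypothetical sketch of that relationship, not code from `respeaker_ros`:

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a planar direction angle (radians, rotation about the
    z-axis) to a quaternion in (x, y, z, w) order."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# A sound source straight ahead (0 rad) maps to the identity rotation.
print(yaw_to_quaternion(0.0))  # (0.0, 0.0, 0.0, 1.0)
```

This is why echoing `/sound_localization` shows nonzero values only in the z and w components of the orientation when the microphone array is mounted flat.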