
Merged previous commits.

noetic
hello-sanchez 2 years ago
parent
commit
d1938c7531
6 changed files with 90 additions and 9 deletions
  1. +5
    -5
      README.md
  2. +2
    -2
      example_7.md
  3. BIN
     
  4. +2
    -2
      navigation_stack.md
  5. +3
    -0
      perception.md
  6. +78
    -0
      respeaker_microphone_array.md

+ 5
- 5
README.md

@@ -12,10 +12,10 @@ This repo provides instructions on installing and using code on the Stretch RE1
7. [MoveIt! Basics](moveit_basics.md)
8. [Follow Joint Trajectory Commands](follow_joint_trajectory.md)
9. [Perception](perception.md)
10. [FUNMAP](https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap)
11. Microphone Array
11. ROS testing
12. Other Nav Stack Features
10. [ReSpeaker Microphone Array](respeaker_microphone_array.md)
11. [FUNMAP](https://github.com/hello-robot/stretch_ros/tree/master/stretch_funmap)
12. ROS testing
13. Other Nav Stack Features
## Other ROS Examples
@@ -27,4 +27,4 @@ To help you get started on your software development, here are examples of n
4. [Give Stretch a Balloon](example_4.md) - Create a "balloon" marker that goes wherever Stretch goes.
5. [Print Joint States](example_5.md) - Print the joint states of Stretch.
6. [Store Effort Values](example_6.md) - Print, store, and plot the effort values of the Stretch robot.
7. [Capture Image](example_7.md) - Capture images from the RealSense camera data.

+ 2
- 2
example_7.md

@@ -209,7 +209,7 @@ and ROS will not process any messages.
## Edge Detection
In this section we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges from an image and converted back as a ROS image to be visualized in RViz. Begin by running the following commands.
In this section, we highlight a node that utilizes the [Canny Edge filter](https://www.geeksforgeeks.org/python-opencv-canny-function/) algorithm to detect the edges in an image; the result is converted back to a ROS image and visualized in RViz. Begin by running the following commands.
```bash
# Terminal 4
```
@@ -291,7 +291,7 @@ Define lower and upper bounds of the Hysteresis Thresholds.
```python
image = cv2.Canny(image, self.lower_thres, self.upper_thres)
```
Run the Canny Edge function to detect edges from the cv2 image. Further details of the function can be found here: [Canny Edge detection](https://www.geeksforgeeks.org/python-opencv-canny-function/).
Run the Canny Edge function to detect edges from the cv2 image.
```python
image_msg = self.bridge.cv2_to_imgmsg(image, 'passthrough')
```

BIN


+ 2
- 2
navigation_stack.md

@@ -10,7 +10,7 @@ roslaunch stretch_navigation mapping.launch
RViz will show the robot and the map being constructed. With the terminal open, use the instructions printed by the teleop package to teleoperate the robot around the room. Avoid sharp turns and revisit previously visited spots to form loop closures.
<p align="center">
<img height=600 src="images/mapping.gif"/>
<img src="images/mapping.gif"/>
</p>
In RViz, once you see a map that reconstructs the space well enough, you can run the following commands to save the map to `stretch_user/`.
@@ -53,7 +53,7 @@ roslaunch stretch_navigation mapping.launch teleop_type:=joystick
```
<p align="center">
<img height=600 src="images/gazebo_mapping.gif"/>
<img src="images/gazebo_mapping.gif"/>
</p>
### Using ROS Remote Master

+ 3
- 0
perception.md

@@ -61,3 +61,6 @@ The `DepthCloud` display is visualized in the main RViz window. This display tak
## Deep Perception
Hello Robot also has a ROS package that uses deep learning models for various detection demos. A link to the package is provided: [stretch_deep_perception](https://github.com/hello-robot/stretch_ros/tree/master/stretch_deep_perception).
**Next Tutorial:** [ReSpeaker Microphone Array](respeaker_microphone_array.md)

+ 78
- 0
respeaker_microphone_array.md

@@ -0,0 +1,78 @@
## ReSpeaker Microphone Array
In this tutorial, we will go over, at a high level, how to use Stretch's [ReSpeaker Mic Array v2.0](https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/).
<p align="center">
<img src="images/respeaker.jpg"/>
</p>
### Stretch Body Package
In this section, we will use command-line tools from the [Stretch_Body](https://github.com/hello-robot/stretch_body) package, a low-level Python API for Stretch's hardware, to interact directly with the ReSpeaker.
Begin by typing the following command in a new terminal.
```bash
stretch_respeaker_test.py
```
The following will be displayed in your terminal:
```bash
hello-robot@stretch-re1-1005:~$ stretch_respeaker_test.py
For use with S T R E T C H (TM) RESEARCH EDITION from Hello Robot Inc.
* waiting for audio...
* recording 3 seconds
* done
* playing audio
* done
```
The ReSpeaker Mic Array will wait until it hears audio loud enough to trigger its recording feature. Stretch will then record audio for 3 seconds and play it back through its speakers. This command-line tool is a quick way to check that the hardware is working correctly.
To stop the Python script, press **Ctrl** + **C** in the terminal.
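The "wait until it hears audio loud enough" behavior above can be sketched as a simple level check on each incoming audio frame. The RMS computation below is standard, but the threshold value and frame format are assumptions for illustration, not Stretch's actual implementation.

```python
# Illustrative sketch of a loudness trigger: compute the RMS level of a frame
# of int16 audio samples and fire once it crosses a threshold. The threshold
# (500.0) and frame length are placeholder assumptions.
import numpy as np

def frame_rms(samples):
    # Root-mean-square level of one frame of int16 audio samples.
    x = samples.astype(np.float64)
    return float(np.sqrt(np.mean(x * x)))

def is_triggered(samples, threshold=500.0):
    return frame_rms(samples) >= threshold

quiet = np.zeros(1024, dtype=np.int16)           # silence
loud = np.full(1024, 2000, dtype=np.int16)       # a loud, constant frame

print(is_triggered(quiet))  # False
print(is_triggered(loud))   # True
```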
### ReSpeaker_ROS Package
This section uses a [ROS package for the ReSpeaker](https://index.ros.org/p/respeaker_ros/#melodic).
Begin by running the `sample_respeaker.launch` file in a terminal.
```bash
# Terminal 1
roslaunch respeaker_ros sample_respeaker.launch
```
This will bring up the nodes that allow the ReSpeaker to provide a voice and sound interface to the robot.
Below are topics you can echo to see the ReSpeaker's results.
```bash
rostopic echo /sound_direction # Result of Direction (in Radians) of Audio
rostopic echo /sound_localization # Result of Direction as Pose (Quaternion values)
rostopic echo /is_speeching # Result of Voice Activity Detector
rostopic echo /audio # Raw audio data
rostopic echo /speech_audio # Raw audio data when there is speech
rostopic echo /speech_to_text # Voice recognition
```
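Note that `/sound_direction` reports an angle while `/sound_localization` reports the same direction as a pose orientation. Assuming the angle is a yaw about the z axis, the conversion between the two can be sketched as:

```python
import math

def yaw_to_quaternion(yaw):
    # Quaternion (x, y, z, w) for a rotation of `yaw` radians about the
    # z axis; this expresses a planar sound direction as a pose orientation.
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# A sound arriving from the robot's left (+y, 90 degrees):
q = yaw_to_quaternion(math.pi / 2.0)
print(q)
```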
For example, echo the `speech_to_text` topic and speak near the microphone array. In this instance, "hello robot" was said.
```bash
# Terminal 2
hello-robot@stretch-re1-1005:~$ rostopic echo /speech_to_text
transcript:
- hello robot
confidence: []
---
```
You can also set various parameters via `dynamic_reconfigure` by running the following command in a new terminal.
```bash
# Terminal 3
rosrun rqt_reconfigure rqt_reconfigure
```
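The `speech_to_text` output shown above can also be consumed programmatically. Below is a hedged sketch of a listener node; the `transcript` field (a list of strings) follows the echo output above, but the message type, import path, and matching logic are assumptions about `respeaker_ros` and may differ on your install.

```python
# Sketch of reacting to /speech_to_text transcripts. handle_speech is plain
# Python; main() contains the assumed ROS wiring and is only run on the robot.
def handle_speech(msg):
    # Join recognized phrases into one lowercase string for simple matching.
    text = ' '.join(msg.transcript).lower()
    if 'hello robot' in text:
        return 'greeting heard'
    return 'ignored: ' + text

def main():
    # Assumed wiring: respeaker_ros publishes SpeechRecognitionCandidates
    # messages on /speech_to_text while sample_respeaker.launch is running.
    import rospy
    from speech_recognition_msgs.msg import SpeechRecognitionCandidates
    rospy.init_node('speech_listener')
    rospy.Subscriber('/speech_to_text', SpeechRecognitionCandidates,
                     lambda m: print(handle_speech(m)))
    rospy.spin()
```

Calling `main()` on the robot prints one line per recognized utterance; `handle_speech` itself can be exercised without ROS.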
