
Change outdated galactic links to iron

pull/10/head
hello-jesus, 1 year ago · commit 9267d62eb5
5 changed files with 13 additions and 13 deletions
1. ros2/align_to_aruco.md (+3, -3)
2. ros2/deep_perception.md (+3, -3)
3. ros2/example_2.md (+1, -1)
4. ros2/navigation_simple_commander.md (+4, -4)
5. ros2/obstacle_avoider.md (+2, -2)

ros2/align_to_aruco.md (+3, -3)

@@ -4,7 +4,7 @@ ArUco markers are a type of fiducials that are used extensively in robotics for
## ArUco Detection
Stretch uses the OpenCV ArUco detection library and is configured to identify a specific set of ArUco markers belonging to the 6x6, 250 dictionary. To understand why this is important, please refer to [this](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html) handy guide provided by OpenCV.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/detect_aruco_markers.py) in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers [node](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/stretch_core/detect_aruco_markers.py) in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
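To make the dictionary idea concrete, here is a minimal standalone sketch of detecting 6x6, 250 markers with OpenCV's aruco module. This is an illustration, not the detect_aruco_markers node itself; the exact function names vary slightly across OpenCV versions, and the image file name is hypothetical.
```python
import cv2
import cv2.aruco as aruco

# Stretch's markers belong to the 6x6, 250 dictionary mentioned above.
dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)
parameters = aruco.DetectorParameters_create()

image = cv2.imread("frame.png")  # hypothetical camera frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns corner coordinates and IDs for every marker found in the frame.
corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=parameters)
print("Detected marker IDs:", ids)
```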
## Computing Transformations
If you have not already done so, now might be a good time to review the [tf listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial. Go on, we can wait…
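As a refresher, the core of a TF listener looks roughly like the sketch below, using the standard tf2_ros Python imports. The frame names here are placeholders for illustration, not necessarily the ones align_to_aruco.py uses.
```python
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener

class FrameListener(Node):
    def __init__(self):
        super().__init__('frame_listener')
        # The buffer stores incoming transforms; the listener fills it from /tf.
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            # Placeholder frame names: pose of the marker in the base frame.
            t = self.tf_buffer.lookup_transform('base_link', 'aruco_marker', Time())
            self.get_logger().info(f'x: {t.transform.translation.x:.2f} m')
        except Exception as ex:
            self.get_logger().info(f'Transform not yet available: {ex}')

def main():
    rclpy.init()
    rclpy.spin(FrameListener())
```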
@@ -50,7 +50,7 @@ ros2 launch stretch_core align_to_aruco.launch.py
</p>
## Code Breakdown
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py) to have a look at the entire script.
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/stretch_core/align_to_aruco.py) to have a look at the entire script.
We make use of two separate Python classes for this demo. The FrameListener class is derived from the Node class and is the place where we compute the TF transformations. For an explanation of this class, you can refer to the [TF listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial.
```python
...
```
@@ -118,4 +118,4 @@ The align_to_marker() method is where we command Stretch to the pose goal in thr
```python
def align_to_marker(self):
```
If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the [code](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py#L44) to the one you wish to detect. Also, don't forget to add the marker to the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/stretch_marker_dict.yaml) ArUco marker dictionary.
If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the [code](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/stretch_core/align_to_aruco.py#L44) to the one you wish to detect. Also, don't forget to add the marker to the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/config/stretch_marker_dict.yaml) ArUco marker dictionary.

ros2/deep_perception.md (+3, -3)

@@ -28,7 +28,7 @@ Voila! You just executed your first deep-learning model on Stretch!
## Code Breakdown
Luckily, the stretch_deep_perception package is extremely modular and is designed to work with a wide array of detectors. Although most of the heavy lifting in this tutorial is being done by the neural network, let's attempt to break down the code into functional blocks to understand the detection pipeline.
The control flow begins with executing the [detect_objects.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_objects.py) script. In the main() function, we create an instance of the ObjectDetector class from the [object_detect_pytorch.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/object_detect_pytorch.py) script where we configure the YOLOv5s model. Next, we pass this detector to an instance of the DetectionNode class from the [detection_node.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detection_node.py) script and call the main function.
The control flow begins with executing the [detect_objects.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detect_objects.py) script. In the main() function, we create an instance of the ObjectDetector class from the [object_detect_pytorch.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/object_detect_pytorch.py) script where we configure the YOLOv5s model. Next, we pass this detector to an instance of the DetectionNode class from the [detection_node.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detection_node.py) script and call the main function.
```python
def main():
    confidence_threshold = 0.0
```
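For intuition, the detector that the ObjectDetector class wraps can be exercised on its own along these lines. This is a hedged sketch using the public ultralytics/yolov5 hub entry point rather than the package's exact loading code, and 'frame.png' is a hypothetical image.
```python
import torch

# Load the pretrained YOLOv5s model from the public hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.0  # mirrors the confidence_threshold in the snippet above

results = model('frame.png')            # hypothetical camera frame
detections = results.pandas().xyxy[0]   # one row per detected object
print(detections[['name', 'confidence']])
```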
@@ -134,9 +134,9 @@ ros2 launch stretch_deep_perception stretch_detect_faces.launch.py
![detect_faces](https://user-images.githubusercontent.com/97639181/196327737-7091cd61-f79a-4ff0-a291-039ab3f7127a.gif)
## Code Breakdown
Ain't that something! If you followed the object detection breakdown, you'll find that the only change needed to detect faces, facial landmarks, or estimate head pose instead of objects is using a different deep learning model that does just that. For this, we will explore how to use the OpenVINO toolkit. Let's head to the detect_faces.py [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_faces.py) to begin.
Ain't that something! If you followed the object detection breakdown, you'll find that the only change needed to detect faces, facial landmarks, or estimate head pose instead of objects is using a different deep learning model that does just that. For this, we will explore how to use the OpenVINO toolkit. Let's head to the detect_faces.py [node](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/detect_faces.py) to begin.
In the main() method, we see a similar structure as with the object detection node. We first create an instance of the detector using the HeadPoseEstimator class from the [head_estimator.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/head_estimator.py) script to configure the deep learning models. Next, we pass this to an instance of the DetectionNode class from the detection_node.py script and call the main function.
In the main() method, we see a similar structure as with the object detection node. We first create an instance of the detector using the HeadPoseEstimator class from the [head_estimator.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_deep_perception/stretch_deep_perception/head_estimator.py) script to configure the deep learning models. Next, we pass this to an instance of the DetectionNode class from the detection_node.py script and call the main function.
```python
...
```
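If you have not used OpenVINO before, inference follows a read-compile-infer pattern along the lines of the sketch below. The model file name and 300x300 input size are assumptions borrowed from a typical Open Model Zoo face detector, not necessarily what head_estimator.py loads.
```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model('face-detection-retail-0004.xml')  # hypothetical IR file
compiled = core.compile_model(model, 'CPU')

image = cv2.imread('frame.png')  # hypothetical camera frame
# Resize to the model's expected input and reorder HWC -> NCHW.
blob = cv2.resize(image, (300, 300)).transpose(2, 0, 1)[np.newaxis, ...]
result = compiled([blob.astype(np.float32)])[compiled.output(0)]
print('Raw detection tensor shape:', result.shape)
```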

ros2/example_2.md (+1, -1)

@@ -5,7 +5,7 @@
This example aims to provide instructions on how to filter scan messages.
For robots with laser scanners, ROS provides a special Message type in the [sensor_msgs](https://github.com/ros2/common_interfaces/tree/galactic/sensor_msgs) package called [LaserScan](https://github.com/ros2/common_interfaces/blob/galactic/sensor_msgs/msg/LaserScan.msg) to hold information about a given scan. Let's take a look at the message specification itself:
For robots with laser scanners, ROS provides a special Message type in the [sensor_msgs](https://github.com/ros2/common_interfaces/tree/iron/sensor_msgs) package called [LaserScan](https://github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/LaserScan.msg) to hold information about a given scan. Let's take a look at the message specification itself:
```{.bash .no-copy}
# Laser scans angles are measured counter clockwise,
```
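As a concrete starting point, a bare-bones node that receives LaserScan messages and republishes only a forward-facing wedge might look like the sketch below. The topic names are the conventional ones and the 60-degree wedge is an arbitrary choice for illustration, not what the example itself does.
```python
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanFilter(Node):
    def __init__(self):
        super().__init__('scan_filter')
        self.sub = self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.pub = self.create_publisher(LaserScan, '/filtered_scan', 10)

    def on_scan(self, msg):
        # Keep readings within 30 degrees of straight ahead; blank the rest.
        half_width = math.radians(30)
        msg.ranges = [
            r if abs(msg.angle_min + i * msg.angle_increment) < half_width
            else float('inf')
            for i, r in enumerate(msg.ranges)
        ]
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ScanFilter())
```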

ros2/navigation_simple_commander.md (+4, -4)

@@ -2,7 +2,7 @@
In this tutorial, we will work with Stretch to explore the Simple Commander Python API to enable autonomous navigation programmatically. We will also demonstrate a security patrol routine for Stretch developed using this API. If you just landed here, it might be a good idea to first review the previous tutorial which covered mapping and navigation using RViz as an interface.
## The Simple Commander Python API
To develop complex behaviors with Stretch where navigation is just one aspect of the autonomy stack, we need to be able to plan and execute navigation routines as part of a bigger program. Luckily, the Nav2 stack exposes a Python API that abstracts the ROS layer and the Behavior Tree framework (more on that later!) from the user through a pre-configured library called the [robot navigator](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/robot_navigator.py). This library defines a class called BasicNavigator which wraps the planner, controller and recovery action servers and exposes methods such as `goToPose()`, `goToPoses()` and `followWaypoints()` to execute navigation behaviors.
To develop complex behaviors with Stretch where navigation is just one aspect of the autonomy stack, we need to be able to plan and execute navigation routines as part of a bigger program. Luckily, the Nav2 stack exposes a Python API that abstracts the ROS layer and the Behavior Tree framework (more on that later!) from the user through a pre-configured library called the [robot navigator](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_nav2/stretch_nav2/robot_navigator.py). This library defines a class called BasicNavigator which wraps the planner, controller and recovery action servers and exposes methods such as `goToPose()`, `goToPoses()` and `followWaypoints()` to execute navigation behaviors.
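In outline, using the API looks like the sketch below. Note the hedge: this follows the upstream nav2_simple_commander flavor of the class, and the copy bundled with stretch_nav2 may name its completion and result helpers slightly differently.
```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # block until the Nav2 servers are up

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 1.0  # placeholder goal one meter from the origin
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # distance remaining, elapsed time, etc.

if navigator.getResult() == TaskResult.SUCCEEDED:
    print('Goal reached!')
```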
Let's first see the demo in action and then explore the code to understand how this works!
@@ -16,7 +16,7 @@ stretch_robot_stow.py
## Setup
Let's set up the patrol route before executing this demo in your map. This requires reading the position of the robot at various locations in the map and entering the coordinates in the array called `security_route` in the [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30) file.
Let's set up the patrol route before executing this demo in your map. This requires reading the position of the robot at various locations in the map and entering the coordinates in the array called `security_route` in the [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30) file.
First, execute the following command while passing the correct map YAML. Then, press the 'Startup' button:
@@ -24,7 +24,7 @@ First, execute the following command while passing the correct map YAML. Then, p
```{.bash .no-copy}
ros2 launch stretch_nav2 navigation.launch.py map:=${HELLO_ROBOT_FLEET}/maps/<map_name>.yaml
```
Since we expect the first point in the patrol route to be at the origin of the map, the first coordinates should be (0.0, 0.0). Next, the easiest way to define the waypoints in the `security_route` array is to set the robot at various locations in the map using the '2D Pose Estimate' button in RViz as shown below. For each location, note the x and y coordinates in the position field of the base_footprint frame and add them to the `security_route` array in [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30).
Since we expect the first point in the patrol route to be at the origin of the map, the first coordinates should be (0.0, 0.0). Next, the easiest way to define the waypoints in the `security_route` array is to set the robot at various locations in the map using the '2D Pose Estimate' button in RViz as shown below. For each location, note the x and y coordinates in the position field of the base_footprint frame and add them to the `security_route` array in [simple_commander_demo.py](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_nav2/stretch_nav2/simple_commander_demo.py#L30).
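For reference, the waypoint list you end up with is just an array of map-frame coordinates; the values below are made up for illustration.
```python
# Hypothetical patrol route: (x, y) map coordinates read off RViz as
# described above, starting at the map origin.
security_route = [
    [0.0, 0.0],
    [1.7, 0.5],
    [1.2, 2.1],
    [-0.4, 1.3],
]
```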
<p align="center">
<img height=500 src="https://user-images.githubusercontent.com/97639181/206782270-e84b33c4-e155-468d-8a46-d926b88ba428.gif"/>
@@ -50,7 +50,7 @@ ros2 launch stretch_nav2 demo_security.launch.py map:=${HELLO_ROBOT_FLEET}/maps/
</p>
## Code Breakdown
Now, let's jump into the code to see how things work under the hood. Follow along in the [code](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_nav2/stretch_nav2/simple_commander_demo.py) to have a look at the entire script.
Now, let's jump into the code to see how things work under the hood. Follow along in the [code](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_nav2/stretch_nav2/simple_commander_demo.py) to have a look at the entire script.
First, we import the `BasicNavigator` class from the robot_navigator library, which comes standard with the Nav2 stack. This class wraps around the planner, controller, and recovery action servers.

ros2/obstacle_avoider.md (+2, -2)

@@ -12,7 +12,7 @@ LaserScanSpeckleFilter - We use this filter to remove phantom detections in the
LaserScanBoxFilter - Stretch is prone to returning false detections right over the mobile base. While navigating, since it’s safe to assume that Stretch is not standing right above an obstacle, we filter out any detections that are in a box shape over the mobile base.
Beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the [laser_filter_params.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/laser_filter_params.yaml) file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
Beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the [laser_filter_params.yaml](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/config/laser_filter_params.yaml) file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
![laser_filtering](https://user-images.githubusercontent.com/97639181/196327251-c39f3cbb-c898-48c8-ae28-2683564061d9.gif)
@@ -42,7 +42,7 @@ ros2 launch stretch_core rplidar_keepout.launch.py
![avoidance](https://user-images.githubusercontent.com/97639181/196327294-1b2dde5e-2fdc-4a67-a188-ae6b1f5e6a06.gif)
## Code Breakdown
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/avoider.py) to have a look at the entire script.
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/iron/stretch_core/stretch_core/avoider.py) to have a look at the entire script.
The turning distance is defined by the distance attribute and the keepout distance is defined by the keepout attribute.
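Putting the two attributes together, the control loop reduces to something like this sketch. The threshold values and topic names below are assumptions for illustration, not the values avoider.py ships with.
```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class Avoider(Node):
    def __init__(self):
        super().__init__('avoider')
        self.distance = 0.5  # assumed turning distance in meters
        self.keepout = 0.4   # assumed keepout distance in meters
        self.pub = self.create_publisher(Twist, '/stretch/cmd_vel', 1)
        self.sub = self.create_subscription(LaserScan, '/scan_filtered',
                                            self.on_scan, 1)

    def on_scan(self, msg):
        # Nearest valid reading; zero-range returns are sensor artifacts.
        nearest = min((r for r in msg.ranges if r > 0.0), default=float('inf'))
        twist = Twist()
        if nearest < self.keepout:
            pass                     # inside the keepout: stop in place
        elif nearest < self.distance:
            twist.angular.z = 0.3    # within turning distance: rotate away
        else:
            twist.linear.x = 0.1     # path clear: creep forward
        self.pub.publish(twist)

def main():
    rclpy.init()
    rclpy.spin(Avoider())
```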
