ArUco markers are a type of fiducial marker used extensively in robotics for identification and pose estimation. In this tutorial, we will learn how to identify ArUco markers with the ArUco detection node and enable Stretch to navigate and align itself with respect to the marker.
## ArUco Detection
Stretch uses the OpenCV ArUco detection library and is configured to identify a specific set of ArUco markers belonging to the 6x6, 250 dictionary. To understand why this is important, please refer to [this](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html) handy guide provided by OpenCV.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/detect_aruco_markers.py) in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
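For reference, here is a minimal, hedged sketch of what that detection boils down to with OpenCV's ArUco module and the 6x6, 250 dictionary. The image source and the exact API calls are illustrative (the older functional cv2.aruco API is shown); the node itself subscribes to the RealSense camera topics.
```python
import cv2

# Stand-in for one RGB frame; the detection node receives these from ROS topics.
image = cv2.imread('frame_with_marker.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The 6x6, 250 dictionary mentioned above.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

# Older functional API (pre OpenCV 4.7); newer versions wrap this in cv2.aruco.ArucoDetector.
corners, ids, rejected = cv2.aruco.detectMarkers(gray, aruco_dict)
print(ids)  # IDs of any detected markers
```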
## Computing Transformations
If you have not already done so, now might be a good time to review the [tf listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial. Go on, we can wait…
Now that we know how to program Stretch to return the transform between known reference frames, we can use this knowledge to compute the transform between the detected marker and the robot base_link. From its current pose, for Stretch to align itself in front of the marker, we need to command it to reach there. But even before that, we need to program Stretch to know the goal pose. We define the goal pose to be 0.5 m outward from the marker along the marker's negative y-axis (the green axis). This is easier to visualize through the figure below.
By monitoring the /aruco/marker_array and /aruco/axes topics, we can visualize the markers in RViz. The detection node also publishes the tf pose of the detected markers. This can be visualized by using the TF plugin and selecting the detected marker to inspect the pose. Next, we will use exactly that to compute the transform between the detected marker and the base_link of the robot.
Now, we can compute the transformation from the robot base_link frame to the goal pose and pass this as an SE3 pose to the mobile base.
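As a rough illustration of that computation, the sketch below composes a 4x4 homogeneous transform of the marker in the base_link frame with a fixed offset along the marker's negative y-axis. The function name and the translation-only treatment of the goal orientation are assumptions for illustration, not the script's actual code.
```python
import numpy as np

def goal_in_base_frame(base_T_marker, offset=0.5):
    """Return a goal pose `offset` metres out along the marker's negative y-axis,
    expressed in the base_link frame. base_T_marker is a 4x4 homogeneous matrix."""
    marker_T_goal = np.eye(4)
    marker_T_goal[1, 3] = -offset          # translate along the marker's -y axis
    return base_T_marker @ marker_T_goal   # compose: base -> marker -> goal
```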
Since we want Stretch to stop at a fixed distance with respect to the marker, we define a 0.5 m offset in the marker y-axis where Stretch would come to a stop. At the same time, we also want Stretch to align its orientation to point its arm towards the marker so as to make subsequent manipulation tasks easier to accomplish. This results in the end pose of the base_link shown in the above figure. Sweet! The next task is to generate a simple motion plan for the mobile base to reach this end pose. We do this in three steps (a short sketch follows the list):
1. Turn theta degrees towards the goal position. This is the angle formed between the robot x-axis and the line connecting the start and goal positions.
2. Travel straight to the goal position. This is the Euclidean distance between the start and goal positions.
3. Turn phi degrees to attain the goal orientation. This is the correction angle necessary to align the robot y-axis with the marker x-axis.
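Here is that sketch: a minimal, hedged computation of the three quantities, assuming the goal pose is already expressed in the base_link frame (so the robot starts at the origin with zero yaw); the function name is illustrative.
```python
import math

def plan_base_motion(x_goal, y_goal, yaw_goal):
    """Compute the turn-drive-turn primitives to a goal pose given in base_link."""
    theta = math.atan2(y_goal, x_goal)   # step 1: rotate to face the goal position
    dist = math.hypot(x_goal, y_goal)    # step 2: drive straight to the goal position
    phi = yaw_goal - theta               # step 3: rotate to the goal orientation
    return theta, dist, phi
```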
Luckily, we know how to command Stretch to execute a trajectory using the joint trajectory server. If you are just starting out, have a look at the [Follow Joint Trajectory Commands](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/follow_joint_trajectory/) tutorial to learn how to command Stretch using the joint trajectory server.
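For instance, a single rotation step could be wrapped in a FollowJointTrajectory goal like the hedged sketch below, assuming the stretch_driver exposes the rotate_mobile_base virtual joint as described in that tutorial; the helper name is illustrative.
```python
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint

def make_rotation_goal(theta):
    """Wrap an in-place rotation of `theta` radians as a trajectory goal."""
    goal = FollowJointTrajectory.Goal()
    goal.trajectory.joint_names = ['rotate_mobile_base']
    point = JointTrajectoryPoint()
    point.positions = [theta]
    goal.trajectory.points = [point]
    return goal
```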
## Warnings
Since we won't be using the arm for this demo, it's safer to stow Stretch's arm. Execute the command:
```bash
stretch_robot_stow.py
```
## See It In Action
First, we need to point the camera towards the marker. To do this, you could use the keyboard teleop node.
Let's jump into the code to see how things work under the hood. Follow along [here](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py) to have a look at the entire script.
We make use of two separate Python classes for this demo. The FrameListener class is derived from the Node class and is the place where we compute the TF transformations. For an explanation of this class, you can refer to the [TF listener](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_10/) tutorial.
```python
class FrameListener(Node):
```
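As a reminder of what such a node typically sets up, here is a minimal sketch of the tf2 buffer/listener pattern from the TF listener tutorial; the node name and attribute names are illustrative rather than copied from the script.
```python
from rclpy.node import Node
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener

class FrameListener(Node):
    def __init__(self):
        super().__init__('align_to_aruco_tf_listener')
        self.tf_buffer = Buffer()                                   # caches incoming transforms
        self.tf_listener = TransformListener(self.tf_buffer, self)  # fills the buffer from /tf
```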
The AlignToAruco class is where we command Stretch to the pose goal. This class is derived from the FrameListener class so that they can both share the node instance.
```python
class AlignToAruco(FrameListener):
```
The constructor initializes the joint trajectory action client. It also initializes the attribute called offset that determines the end distance between the marker and the robot.
```python
def __init__(self, node, offset=0.75):
    self.trans_base = TransformStamped()
```
The compute_difference() method is where we call the get_transform() method from the FrameListener class to obtain the transform between the detected marker and the robot base_link.
From this, it is relatively straightforward to compute the angle phi and the Euclidean distance to the goal position.
```python
z_rot_base = -phi + z_rot_base + 3.14159
```
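For reference, the planar offset and the base yaw used above could be read out of the TransformStamped returned by get_transform() with something like the hedged helper below; the function and variable names are illustrative.
```python
from tf_transformations import euler_from_quaternion

def planar_pose(transform_stamped):
    """Reduce a geometry_msgs TransformStamped to (x, y, yaw) in the plane."""
    t = transform_stamped.transform
    q = t.rotation
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    return t.translation.x, t.translation.y, yaw
```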
The align_to_marker() method is where we command Stretch to the pose goal in three steps using the Joint Trajectory action server. For an explanation on how to form the trajectory goal, you can refer to the [Follow Joint Trajectory Commands](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/follow_joint_trajectory/) tutorial.
```python
def align_to_marker(self):
```
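Since the three motions have to happen one after the other, the action client needs to wait for each goal to finish before sending the next. A hedged sketch of that send-and-wait pattern with rclpy is shown below; the helper name is illustrative, and the goal itself is assumed to come from a wrapper like the rotation goal sketched earlier.
```python
import rclpy

def send_and_wait(node, client, goal):
    """Send one FollowJointTrajectory goal and block until its result arrives."""
    send_future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, send_future)     # wait for goal acceptance
    result_future = send_future.result().get_result_async()
    rclpy.spin_until_future_complete(node, result_future)   # wait for execution to finish
    return result_future.result()
```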
If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the [code](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/stretch_core/align_to_aruco.py#L44) to the marker you wish to detect. Also, don't forget to add the marker to the [stretch_marker_dict.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/stretch_marker_dict.yaml) ArUco marker dictionary.
Ever wondered if there is a way to make a robot do awesome things without explicitly having to program it to do so? Deep Perception is a branch of Deep Learning that enables sensing the elements that make up an environment with the help of artificial neural networks without writing complicated code. Well, almost. The most wonderful thing about Stretch is that it comes preloaded with software that makes it a breeze to get started with topics such as Deep Learning. In this tutorial, we will deploy deep neural networks on Stretch using two popular Deep Learning frameworks, namely, PyTorch and OpenVino.
## YOLOv5 with PyTorch
[PyTorch](https://pytorch.org/) is an open source end-to-end machine learning framework that makes many pretrained production quality neural networks available for general use. In this tutorial we will use the YOLOv5s model trained on the COCO dataset.
[YOLOv5](https://github.com/ultralytics/yolov5) is a popular object detection model that divides a supplied image into a grid and detects objects in each cell of the grid recursively. The YOLOv5s model that we have deployed on Stretch has been pretrained on the [COCO dataset](https://cocodataset.org/#home), which allows Stretch to detect a wide range of day-to-day objects. However, that's not all; in this demo we want to go a step further and use this extremely versatile object detection model to extract useful information about the scene.
## Extracting Bounding Boxes and Depth Information
Often, it's not enough to simply identify an object. Stretch is a mobile manipulator and its job is to manipulate objects in its environment. But before it can do that, it needs to know where exactly the object is located with respect to itself so that a motion plan to reach the object can be generated. This is possible by knowing which pixels correspond to the object of interest in the image frame and then using that to extract the depth information in the camera frame. Once we have this information, it is possible to compute a transform of these points in the end effector frame for Stretch to generate a motion plan.
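As a rough illustration of that last step, the hedged sketch below takes the median depth of the pixels inside a bounding box of a registered depth image. It assumes a 16-bit depth image in millimetres, which is how the RealSense depth stream is commonly published; the function name and scale factor are illustrative.
```python
import numpy as np

def box_depth_m(depth_image, box, depth_scale=0.001):
    """Median depth (metres) of the valid pixels inside an (x_min, y_min, x_max, y_max) box."""
    x_min, y_min, x_max, y_max = box
    roi = depth_image[y_min:y_max, x_min:x_max].astype(np.float32)
    roi = roi[roi > 0]                      # discard invalid (zero) depth returns
    return float(np.median(roi)) * depth_scale if roi.size else None
```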
Voila! You just executed your first deep learning model on Stretch!
## Code Breakdown
Luckily, the stretch_deep_perception package is extremely modular and is designed to work with a wide array of detectors. Although most of the heavy lifting in this tutorial is being done by the neural network, let's attempt to break down the code into functional blocks to understand the detection pipeline.
The control flow begins with executing the [detect_objects.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_objects.py) script. In the main() function, we create an instance of the ObjectDetector class from the [object_detect_pytorch.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/object_detect_pytorch.py) script, where we configure the YOLOv5s model. Next, we pass this detector to an instance of the DetectionNode class from the [detection_node.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detection_node.py) script and call the main function.
```python
def main():
    confidence_threshold = 0.0
    ...

# In object_detect_pytorch.py, the ObjectDetector class loads the pretrained
# YOLOv5s model from the PyTorch Hub and stores the confidence threshold:
class ObjectDetector:
    ...
        self.model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x, custom
        self.confidence_threshold = confidence_threshold
```
The apply_to_image() method passes the stream of RGB images from the RealSense camera to the YOLOv5s model and returns detections in the form of a dictionary consisting of the class_id, label, confidence and bounding box coordinates. The last part is exactly what we need for further computations.
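To make that concrete, here is a hedged sketch of how raw YOLOv5 output can be reshaped into such a dictionary; the keys and the confidence filter are illustrative and may differ from the package's exact schema.
```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
results = model('image_with_objects.jpg')              # illustrative input image
detections = []
for *box, confidence, class_id in results.xyxy[0].tolist():
    if confidence < 0.3:                               # illustrative confidence threshold
        continue
    detections.append({
        'class_id': int(class_id),
        'label': results.names[int(class_id)],
        'confidence': confidence,
        'box': [int(v) for v in box],                  # x_min, y_min, x_max, y_max
    })
```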
## Face Detection, Facial Landmarks Detection and Head Pose Estimation with OpenVINO and OpenCV
Detecting objects is just one thing Stretch can do well; it can do much more using pretrained models. For this part of the tutorial, we will be using Intel's OpenVINO toolkit with OpenCV. The cool thing about this demo is that it uses three different models in tandem to not just detect human faces, but also important features of the human face such as the eyes, nose and lips, along with head pose information. This is important in the context of precise manipulation tasks such as assisted feeding, where we want to know the exact location of the facial features the end effector must reach.
[OpenVINO](https://docs.openvino.ai/latest/index.html) is a toolkit popularized by Intel to optimize and deploy machine learning inference that can utilize hardware acceleration dongles such as the Intel Neural Compute Stick with Intel based compute architectures. More convenient is the fact that most of the deep learning models in the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo) are accessible and configurable using the familiar OpenCV API with the [opencv-python-inference-engine](https://pypi.org/project/opencv-python-inference-engine/) library.
With that, let's jump right into it!
## Warning
Running inference on Stretch results in continuous high current draw by the CPU. Please ensure proper ventilation with the onboard fan. It is recommended to run the demo in tethered mode.
Ain't that something! If you followed the breakdown in object detection, you'll find that the only change needed to detect faces, facial landmarks or estimate head pose instead of detecting objects is to use a different deep learning model that does just that. For this, we will explore how to use the OpenVINO toolkit. Let's head to the detect_faces.py [node](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/detect_faces.py) to begin.
In the main() method, we see a similar structure as with the object detection node. We first create an instance of the detector using the HeadPoseEstimator class from the [head_estimator.py](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_deep_perception/stretch_deep_perception/head_estimator.py) script to configure the deep learning models. Next, we pass this to an instance of the DetectionNode class from the detection_node.py script and call the main function.
```python
...
node.main()
```
In addition to detecting faces, this class also enables detecting facial landmarks as well as estimating head pose. The constructor initializes and configures three separate models, namely head_detection_model, head_pose_model and landmarks_model, with the help of the renamed_cv2.dnn.readNet() wrappers. Note that renamed_cv2 is simply the opencv_python_inference_engine library compiled under a different namespace for use with Stretch, so that it does not conflict with the regular OpenCV library and functionality from both remains available to users concurrently.
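The loading pattern looks roughly like the hedged sketch below; the IR file names are placeholders for the Open Model Zoo files shipped with the robot, and renamed_cv2 is the namespaced OpenCV build described above.
```python
import renamed_cv2  # Stretch-specific build of opencv-python-inference-engine (see above)

# Placeholder file names for an Open Model Zoo face detection model in OpenVINO IR format.
head_detection_model = renamed_cv2.dnn.readNet('face_detection.xml', 'face_detection.bin')
head_detection_model.setPreferableBackend(renamed_cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
head_detection_model.setPreferableTarget(renamed_cv2.dnn.DNN_TARGET_CPU)
```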
The apply_to_image() method calls individual methods like detect_faces() to run face detection, head pose estimation and facial landmark detection on each incoming image.
The detection_node.py script then takes over, as we saw with the object detection tutorial, to publish the detections on pertinent topics.
Now go ahead and experiment with a few more pretrained models using PyTorch or OpenVINO on Stretch. If you are feeling extra motivated, try creating your own neural networks and training them. Stretch is ready to deploy them!
In this tutorial we will work with Stretch to detect and avoid obstacles using the onboard RPlidar A1 laser scanner and learn how to filter laser scan data. If you want to know more about the laser scanner setup on Stretch and how to get it up and running, we recommend visiting the previous tutorials on [Filtering Laser Scans](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_2/) and [Mobile Base Collision Avoidance](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_3/).
A major drawback of using any ToF (Time of Flight) sensor is the inherent inaccuracies as a result of occlusions and weird reflection and diffraction phenomena the light pulses are subject to in an unstructured environment. This results in unexpected and undesired noise that can get in the way of an otherwise extremely useful sensor. Fortunately, it is easy to account for and eliminate these inaccuracies to a great extent by filtering out the noise. We will do this with a ROS package called [laser_filters](http://wiki.ros.org/laser_filters) that comes prebuilt with some pretty handy laser scan message filters.
We will look at three filters from this package that have been tuned to work well with Stretch in an array of scenarios. By the end of this tutorial, you will be able to tweak them for your particular use case and publish and visualize the filtered scans on the /scan_filtered topic using RViz. So let's jump in!
LaserScanSpeckleFilter - We use this filter to remove phantom detections in the middle of otherwise empty space that result from sensor noise.
LaserScanBoxFilter - Stretch is prone to returning false detections right over the mobile base. While navigating, since it’s safe to assume that Stretch is not standing right above an obstacle, we filter out any detections that are in a box shape over the mobile base.
Beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the [laser_filter_params.yaml](https://github.com/hello-robot/stretch_ros2/blob/galactic/stretch_core/config/laser_filter_params.yaml) file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
Now, let’s use what we have learned so far to upgrade the collision avoidance demo in a way that Stretch is able to scan an entire room autonomously without bumping into things or people. To account for dynamic obstacles getting too close to the robot, we will define a keepout distance of 0.4 m - detections below this value stop the robot. To keep Stretch from getting too close to static obstacles, we will define another variable called turning distance of 0.75 m - frontal detections below this value make Stretch turn to the left until it sees a clear path ahead.
Building on the [teleoperation](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/example_1/) using velocity commands tutorial, let's implement a simple logic for obstacle avoidance (a short sketch follows the list). The logic can be broken down into three steps:
1. If the minimum value from the frontal scans is greater than 0.75 m, then continue to move forward
2. If the minimum value from the frontal scans is less than 0.75 m, then turn to the right until this is no longer true
3. If the minimum value from the overall scans is less than 0.4 m, then stop the robot
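Here is that sketch, written as a free function for clarity. The frontal angular window, the velocity magnitudes and the function name are illustrative assumptions; the node's actual callback publishes the resulting Twist on /stretch/cmd_vel.
```python
import numpy as np
from geometry_msgs.msg import Twist

def avoidance_command(scan, keepout=0.4, distance=0.75):
    """Turn a sensor_msgs LaserScan into a Twist following the three steps above."""
    angles = scan.angle_min + np.arange(len(scan.ranges)) * scan.angle_increment
    ranges = np.asarray(scan.ranges)
    valid = np.isfinite(ranges) & (ranges > scan.range_min)
    frontal = valid & (np.abs(angles) < 0.5)        # ~30 degrees either side of straight ahead

    cmd = Twist()                                   # zero velocities by default, i.e. stop
    if ranges[valid].size and ranges[valid].min() < keepout:
        return cmd                                  # step 3: something inside the keepout, stop
    if ranges[frontal].size and ranges[frontal].min() < distance:
        cmd.angular.z = -0.5                        # step 2: turn in place until the path clears
    else:
        cmd.linear.x = 0.3                          # step 1: path ahead is clear, move forward
    return cmd
```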
## Warnings
If you see Stretch try to run over your lazy cat or headbutt a wall, just press the bright runstop button on Stretch's head to calm it down. For pure navigation tasks, it's also safer to stow Stretch's arm. Execute the command:
The turning distance is defined by the distance attribute and the keepout distance is defined by the keepout attribute.
```python
self.keepout = 0.4 # robot stops at this distance
```
To pass velocity commands to the mobile base, we publish the translational and rotational velocities to the /stretch/cmd_vel topic. To subscribe to the filtered laser scans from the laser scanner, we subscribe to the /scan_filtered topic. While you are at it, go ahead and check the behavior by switching to the /scan topic instead. See why filtering is necessary?
```python
self.publisher_ = self.create_publisher(Twist, '/stretch/cmd_vel', 1) #/stretch_diff_drive_controller/cmd_vel for gazebo
```