
ROSCon demos initial commit

galactic
hello-chintan 2 years ago
parent
commit
f298fd062b
4 changed files with 70 additions and 0 deletions
  1. +27 -0 align_to_aruco.md
  2. +21 -0 deep_perception.md
  3. +0 -0
  4. +22 -0 obstacle_avoider.md

+27 -0 align_to_aruco.md

@@ -0,0 +1,27 @@
ArUco markers are a type of fiducial marker used extensively in robotics for identification and pose estimation. In this tutorial, we will learn how to identify ArUco markers with the ArUco detection node and enable Stretch to navigate and align itself with respect to a marker.
Stretch uses the OpenCV ArUco detection library and is configured to identify a specific set of ArUco markers belonging to the 6x6, 250 dictionary. To understand why this is important, please refer to this handy guide provided by OpenCV.
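Under the hood, the detection relies on OpenCV's aruco module. The following is a minimal standalone sketch (not the detect_aruco_markers node itself) showing how markers from the 6x6, 250 dictionary can be detected with OpenCV in Python; the image path is hypothetical:
```
import cv2

# Load a test image containing a marker (the path here is hypothetical).
image = cv2.imread("marker_test.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Stretch's detector is configured for the 6x6, 250 dictionary.
# (Classic aruco API; OpenCV 4.7+ also offers cv2.aruco.ArucoDetector.)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    print("Detected marker IDs:", ids.flatten())
```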
Stretch comes preconfigured to identify ArUco markers, and the ROS node that enables this is the detect_aruco_markers node in the stretch_core package. Thanks to this, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible, and quite convenient, to visualize the detections in RViz.
To do this, simply point the camera towards a marker and execute the following commands:
Terminal 1:
```
ros2 run stretch_core detect_aruco_markers
```
Terminal 2:
```
ros2 run rviz2 rviz2 -d `ros2 pkg prefix --share stretch_core`/rviz/stretch_simple_test.rviz
```
By monitoring the /aruco/marker_array and /aruco/axes topics, we can visualize the markers. The detection node also publishes the tf pose of the detected markers. This can be visualized by using the TF plugin and selecting the detected marker to inspect its pose. Next, we will use exactly that to compute the transform between the detected marker and the base_link of the robot.
If you have not already done so, now might be a good time to review the tf_transformation tutorial. Go on, we can wait…
Now that we know how to program Stretch to return the transform between known reference frames, we can use this knowledge to compute the transform between the detected marker and the robot's base_link.
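As a rough sketch, a node can query this transform with a tf2 buffer. The frame name 'aruco_marker' below is a placeholder; substitute the frame actually published by the detection node for your marker:
```
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class MarkerTransformNode(Node):
    def __init__(self):
        super().__init__('marker_transform_node')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.lookup_marker)

    def lookup_marker(self):
        try:
            # 'aruco_marker' is a placeholder frame name.
            trans = self.tf_buffer.lookup_transform('base_link', 'aruco_marker', Time())
            t = trans.transform.translation
            self.get_logger().info(f'Marker at x={t.x:.2f} y={t.y:.2f} z={t.z:.2f}')
        except Exception as ex:
            self.get_logger().info(f'Transform not available yet: {ex}')


def main():
    rclpy.init()
    rclpy.spin(MarkerTransformNode())


if __name__ == '__main__':
    main()
```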
Since we want Stretch to align itself with respect to the marker, we define a 0.5 m offset along the marker's y-axis where Stretch will come to a stop. At the same time, we also want Stretch to point the arm towards the marker so as to make subsequent manipulation tasks easier to accomplish. This results in the end pose of the base_link shown below. Sweet! The next task is to plan a trajectory for the mobile base to reach this end pose. We do this in three steps:
1. Turn theta degrees towards the goal position
2. Travel straight to the goal position
3. Turn phi degrees to attain the goal orientation
Luckily, we know how to command Stretch to execute a trajectory using the joint trajectory server. If not, have a look at this tutorial to learn how.
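Below is a minimal sketch of the three-step computation, assuming the goal pose has already been expressed as (x, y, yaw) in the base_link frame:
```
import math

def plan_base_motion(goal_x, goal_y, goal_yaw):
    """Split a 2D goal pose (in the base_link frame) into the three motions above."""
    theta = math.atan2(goal_y, goal_x)     # 1. turn to face the goal position
    distance = math.hypot(goal_x, goal_y)  # 2. drive straight to it
    phi = goal_yaw - theta                 # 3. turn to the goal orientation
    phi = math.atan2(math.sin(phi), math.cos(phi))  # normalize to [-pi, pi]
    return theta, distance, phi

# Example: a goal pose 1.2 m ahead, 0.4 m to the left, facing 90 degrees.
print(plan_base_motion(1.2, 0.4, math.pi / 2))
```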

+21 -0 deep_perception.md

@@ -0,0 +1,21 @@
PyTorch is an open-source, end-to-end machine learning framework that makes many pretrained, production-quality neural networks available for general use. In this tutorial, we will use the YOLOv5s model trained on the COCO dataset.
YOLOv5 is a popular object detection model that divides the supplied image into a grid and detects objects in each cell of the grid in a single pass. The YOLOv5s model that we have deployed on Stretch has been pretrained on the COCO dataset, which allows Stretch to detect a wide range of day-to-day objects. However, that's not all: in this demo we want to go a step further and use this extremely versatile object detection model to extract useful information about the scene.
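As an aside, the generic Ultralytics hub API gives a feel for what the model returns. This is only an illustration, not the stretch_deep_perception node, and the image path is hypothetical:
```
import torch

# Fetch the pretrained YOLOv5s model (downloads weights on first use;
# requires an internet connection and the pandas package for the table below).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on any RGB image; the path here is hypothetical.
results = model('living_room.jpg')

# Each row: x1, y1, x2, y2, confidence, class index, class name
print(results.pandas().xyxy[0])
```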
Often, it's not enough to simply identify an object. Stretch is a mobile manipulator and its job is to manipulate objects in its environment. But before it can do that, it needs to know where exactly the object is located with respect to itself so that a motion plan to reach the object can be generated. This is possible by knowing which pixels correspond to the object of interest in the image frame and then using them to extract the depth information in the camera frame. Once we have this information, it is possible to compute a transform of these points in the end effector frame for Stretch to generate a motion plan.
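For reference, the pixel-to-camera-frame step boils down to the pinhole camera model. The intrinsics (fx, fy, cx, cy) would normally come from the camera's camera_info topic; the values below are made up:
```
def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (in meters) into a
    3D point in the camera frame using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Example with made-up intrinsics and a 1.5 m depth reading at the image center.
print(pixel_to_camera_frame(320, 240, 1.5, fx=610.0, fy=610.0, cx=320.0, cy=240.0))
```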
For the sake of brevity, we will limit the scope of this tutorial to drawing bounding boxes around objects of interest to point to pixels in the image frame and drawing a detection plane corresponding to depth pixels in the camera frame. Go ahead and execute the following command to run the inference and visualize the detections in RViz:
```
ros2 launch stretch_deep_perception stretch_detect_objects.launch.py
```
Voila! You just executed your first deep learning model on Stretch!
That's not all. Detecting objects is just one thing Stretch can do well; it can also detect people and their faces. We will be using Intel's OpenVINO toolkit with OpenCV to achieve this. Like PyTorch, OpenVINO is a toolkit for optimizing and deploying machine learning inference; it was popularized by Intel and can utilize hardware acceleration dongles such as the Intel Neural Compute Stick alongside Intel-based compute architectures. More convenient is the fact that most of the neural network models in the Open Model Zoo are accessible and configurable through the familiar OpenCV API with the opencv-python-inference-engine library extension. Fortunately, these packages come preinstalled on Stretch, making it easy for us to hit the ground running!
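To get a feel for how an Open Model Zoo model is driven through the OpenCV API, here is a minimal sketch assuming OpenCV was built with the Inference Engine backend; the model files (face-detection-retail-0004) and the test image are examples you would supply yourself:
```
import cv2

# Example Open Model Zoo model in OpenVINO IR format; download the .xml/.bin
# pair yourself and adjust the paths. Requires OpenCV with the Inference
# Engine backend (e.g. via opencv-python-inference-engine).
net = cv2.dnn.readNet('face-detection-retail-0004.xml',
                      'face-detection-retail-0004.bin')

image = cv2.imread('person.jpg')  # hypothetical test image
blob = cv2.dnn.blobFromImage(image, size=(300, 300))  # this model expects 300x300 BGR
net.setInput(blob)
detections = net.forward()  # shape [1, 1, N, 7]

for det in detections[0, 0]:
    confidence = float(det[2])
    if confidence > 0.5:
        print('Face detected with confidence', round(confidence, 2))
```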
With that, let's jump right into it! The cool thing about the model we are using is that it not only detects human faces but also identifies important features of the human face, such as the eyes, nose, and lips. This is important in the context of precise assistive tasks such as feeding and combing hair, where we want to know the exact location of the facial features the end effector must reach. Alright! Let's execute the following command to see what it looks like:
```
ros2 launch stretch_deep_perception stretch_detect_faces.launch.py
```

+22 -0 obstacle_avoider.md

@@ -0,0 +1,22 @@
In this tutorial, we will work with Stretch to detect and avoid obstacles using the onboard RPLidar A1 laser scanner, and talk a bit about filtering laser scan data. If you want to know more about the laser scanner setup on Stretch and how to get it up and running, we recommend visiting the previous tutorials on filtering laser scans and mobile base collision avoidance.
A major drawback of any ToF (time-of-flight) sensor is the inherent inaccuracy caused by occlusions and by the reflection and diffraction phenomena that light pulses are subject to in an unstructured environment. This results in unexpected and undesired noise that can get in the way of an otherwise extremely useful sensor. Fortunately, it is easy to account for and eliminate these inaccuracies with a ROS package called laser_filters, which comes prebuilt with some pretty handy scan message filters.
We will look at three filters from this package that have been tuned to work well with Stretch in an array of scenarios. By the end of this tutorial, you will be able to tweak them for your particular use case and to publish and visualize the filtered scans on the /scan_filtered topic using RViz. Let's jump in!
- LaserScanAngularBoundsFilterInPlace: This filter removes laser scans belonging to an angular range. For Stretch, we use this filter to discount points that are occluded by the mast, because the mast, being a part of Stretch's body, is not an object we need to account for as an obstacle while navigating the mobile base.
- LaserScanSpeckleFilter: We use this filter to remove phantom detections in the middle of empty space that are a result of reflections around corners. These disjoint speckles can be detected as false positives and result in jerky motion of the base through empty space. Removing them returns a noise-free scan.
- LaserScanBoxFilter: Stretch is prone to returning false detections right over the mobile base. While navigating, since it is safe to assume that Stretch is not standing right above an obstacle, we filter out any detections that fall within a box over the mobile base.
If you want to tweak the values for your end application, you can do so by changing them in the laser_filter_params.yaml file. Also, if you want to use the unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
Now, let's use what we have learned so far to upgrade the collision avoidance demo so that Stretch can scan an entire room autonomously without bumping into things or people. To account for people getting too close to the robot, we will define a keepout distance of 0.4 m. To keep Stretch from getting too close to static obstacles, we will define another variable called the turning distance, which we set to 0.75 m; this enables Stretch to start turning if a static obstacle is less than 0.75 m away.
Building on the teleoperation tutorial, which enables Stretch's mobile base to be controlled using velocity commands, we implement a simple logic for obstacle avoidance. The logic can be broken down into three steps:
1. If the minimum value from the frontal scans is greater than 0.75 m, continue to move forward.
2. If the minimum value from the frontal scans is less than 0.75 m, turn to the right until this is no longer true.
3. If the minimum value from the overall scans is less than 0.4 m, stop the robot.
This simple algorithm is sufficient to account for both static and dynamic obstacles.
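A minimal sketch of this logic as an rclpy node is shown below; the topic names and the choice of frontal sector are assumptions and may need to be adapted to your setup:
```
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

KEEPOUT = 0.4         # stop if anything is closer than this (m)
TURN_DISTANCE = 0.75  # start turning if a frontal obstacle is closer than this (m)


class ObstacleAvoider(Node):
    def __init__(self):
        super().__init__('obstacle_avoider_sketch')
        # '/stretch/cmd_vel' is assumed to be the base velocity topic.
        self.pub = self.create_publisher(Twist, '/stretch/cmd_vel', 1)
        self.sub = self.create_subscription(LaserScan, '/scan_filtered', self.on_scan, 1)

    def on_scan(self, scan):
        valid = [r for r in scan.ranges if r > 0.0]
        # Treat the middle third of the scan as the frontal sector; the exact
        # slicing depends on how the lidar is mounted.
        third = len(scan.ranges) // 3
        frontal = [r for r in scan.ranges[third:2 * third] if r > 0.0]

        cmd = Twist()
        if valid and min(valid) < KEEPOUT:
            pass                      # something is too close: stop (zero command)
        elif frontal and min(frontal) < TURN_DISTANCE:
            cmd.angular.z = -0.5      # turn right until the way ahead is clear
        else:
            cmd.linear.x = 0.1        # path is clear: move forward
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(ObstacleAvoider())


if __name__ == '__main__':
    main()
```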
