
Update align aruco & obstacle avoidance tutorials

galactic · hello-chintan, 2 years ago · commit c2dced5006
2 changed files with 168 additions and 27 deletions:
1. align_to_aruco.md (+94, -15)
2. obstacle_avoider.md (+74, -12)

align_to_aruco.md (+94, -15)

## Align to ArUco
ArUco markers are a type of fiducial marker used extensively in robotics for identification and pose estimation. In this tutorial, we will learn how to identify ArUco markers with the ArUco detection node and enable Stretch to navigate and align itself with respect to the marker.
## ArUco Detection
Stretch uses the OpenCV ArUco detection library and is configured to identify a specific set of ArUco markers belonging to the 6x6, 250 dictionary. To understand why this is important, please refer to this handy guide provided by OpenCV.
Stretch comes preconfigured to identify ArUco markers. The ROS node that enables this is the detect_aruco_markers node in the stretch_core package. Thanks to this node, identifying and estimating the pose of a marker is as easy as pointing the camera at the marker and running the detection node. It is also possible and quite convenient to visualize the detections with RViz.
## Computing Transformations
If you have not already done so, now might be a good time to review the tf_transformation tutorial. Go on, we can wait…
Now that we know how to program Stretch to return the transform between known reference frames, we can use this knowledge to compute the transform between the detected marker and the robot base_link. Enter TF transformations! From its current pose, for Stretch to align itself in front of the marker, we need to command it to move there. But even before that, we need to program Stretch to know the goal pose. We define the goal pose to be 0.5 m outward from the marker along the marker's negative y-axis. This is easier to visualize through the figures below.
<!-- Add images to show alignment and transformations -->
By monitoring the /aruco/marker_array and /aruco/axes topics, we can visualize the markers in RViz. The detection node also publishes the tf pose of the detected markers. This can be visualized by using the TF plugin and selecting the detected marker to inspect the pose. Next, we will use exactly that to compute the transform between the detected marker and the base_link of the robot.
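Before moving on to the transforms, if you would rather inspect the detections programmatically than in RViz, a small subscriber like the hedged sketch below can print what the detector publishes. The message type visualization_msgs/MarkerArray is an assumption here; confirm the type with `ros2 topic info /aruco/marker_array` on your robot before relying on it.
```python
# Hypothetical helper, not part of the tutorial code: echo detected marker poses.
# Assumes /aruco/marker_array carries visualization_msgs/MarkerArray messages.
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import MarkerArray


class MarkerEcho(Node):
    def __init__(self):
        super().__init__('marker_echo')
        self.create_subscription(MarkerArray, '/aruco/marker_array', self.callback, 10)

    def callback(self, msg):
        for marker in msg.markers:
            p = marker.pose.position
            self.get_logger().info(f'marker {marker.id}: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')


def main():
    rclpy.init()
    rclpy.spin(MarkerEcho())


if __name__ == '__main__':
    main()
```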
Now, we can compute the transformation from the robot base_link frame to the goal pose and pass this as an SE3 pose to the mobile base.
Since we want Stretch to align with respect to the marker, we define a 0.5 m offset along the marker's negative y-axis where Stretch will come to a stop. At the same time, we also want Stretch to point the arm towards the marker so as to make subsequent manipulation tasks easier to accomplish. This results in the end pose of the base_link as shown below. Sweet! The next task is to plan a trajectory for the mobile base to reach this end pose. We do this in three steps:
1. Turn theta degrees towards the goal position
2. Travel straight to the goal position
3. Turn phi degrees to attain the goal orientation
Luckily, we know how to command Stretch to execute a trajectory using the joint trajectory server. If you are just starting out, have a look at the Follow Joint Trajectory tutorial to learn how to command Stretch using the joint trajectory server.
## Warnings
Since we won't be using the arm for this demo, it's safer to stow Stretch's arm. Execute the command:
```bash
stretch_robot_stow.py
```
## See It In Action
First, we need to point the camera towards the marker. To do this, you could use the keyboard teleop node.
When you are ready, execute the following command:
```bash
ros2 launch stretch_core align_to_aruco.launch.py
```
## Code Breakdown
We make use of two separate Python classes for this demo. The FrameListener() class is derived from the Node class and is the place where we compute the TF transformations. For an explanation of this class, you can refer to the TF listener tutorial.
```python
class FrameListener(Node):
```
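The listener's job here is to fill in the two transforms (trans_base and trans_camera) that the alignment class consumes. As a rough sketch of the underlying idea, and not the tutorial's exact implementation, a tf2 buffer and listener can be polled for the transform from base_link to the detected marker frame; the frame name base_right below is only an assumed example, so use the frame your marker actually publishes.
```python
# Rough sketch of the TF-listener idea, not the tutorial's exact code.
# 'base_right' is an assumed marker frame name.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class MinimalFrameListener(Node):
    def __init__(self):
        super().__init__('minimal_frame_listener')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            # Pose of the marker frame expressed in base_link
            t = self.tf_buffer.lookup_transform('base_link', 'base_right', Time())
            self.get_logger().info(
                f'marker at x={t.transform.translation.x:.2f}, y={t.transform.translation.y:.2f}')
        except Exception as ex:
            self.get_logger().info(f'transform not available yet: {ex}')
```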
The AlignToAruco() class is where we command Stretch to the pose goal. This class is derived from the FrameListener class so that they can both share the node instance.
```python
class AlignToAruco(FrameListener):
```
The constructor initializes the joint trajectory action client. It also initializes the attribute called offset that determines the end distance between the marker and the robot.
```python
def __init__(self, node, offset=0.75):
    self.trans_base = TransformStamped()
    self.trans_camera = TransformStamped()
    self.joint_state = JointState()
    self.offset = offset
    self.node = node
    self.trajectory_client = ActionClient(self.node, FollowJointTrajectory, '/stretch_controller/follow_joint_trajectory')
    server_reached = self.trajectory_client.wait_for_server(timeout_sec=60.0)
    if not server_reached:
        self.node.get_logger().error('Unable to connect to arm action server. Timeout exceeded.')
        sys.exit()
```
The joint_states_callback is the callback method that receives the most recent joint state messages published on the /stretch/joint_states topic.
```python
def joint_states_callback(self, joint_state):
    self.joint_state = joint_state
```
The compute_difference() method is where we call the get_transforms() method from the FrameListener class to compute the difference between the base_link and base_right frames with an offset of 0.5 m in the negative y-axis.
```python
def compute_difference(self):
    self.trans_base, self.trans_camera = self.node.get_transforms()
```
To compute the (x, y) coordinates of the SE3 pose goal, we compute the transformation as explained above.
```python
R = quaternion_matrix((x, y, z, w))
P_dash = np.array([[0], [-self.offset], [0], [1]])
P = np.array([[self.trans_base.transform.translation.x], [self.trans_base.transform.translation.y], [0], [1]])
X = np.matmul(R, P_dash)
P_base = X + P
base_position_x = P_base[0, 0]
base_position_y = P_base[1, 0]
```
From this, it is relatively straightforward to compute the angle phi and the Euclidean distance dist. We then compute the angle z_rot_base to perform the final angle correction.
```python
phi = atan2(base_position_y, base_position_x)
dist = sqrt(pow(base_position_x, 2) + pow(base_position_y, 2))
x_rot_base, y_rot_base, z_rot_base = euler_from_quaternion([x, y, z, w])
z_rot_base = -phi + z_rot_base + 3.14159
```
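To make the matrix algebra above concrete, here is a tiny standalone check with made-up numbers: an identity marker rotation, a marker 1.0 m ahead and 0.3 m to the left of base_link, and a 0.5 m offset. The import of quaternion_matrix from tf_transformations is an assumption about where the helper comes from.
```python
# Standalone numerical check of the pose-goal math with made-up values.
from math import atan2, sqrt

import numpy as np
from tf_transformations import quaternion_matrix  # assumed source of the helper used above

x, y, z, w = 0.0, 0.0, 0.0, 1.0               # identity quaternion (illustrative)
offset = 0.5                                   # stop 0.5 m out along the marker's negative y-axis

R = quaternion_matrix((x, y, z, w))            # 4x4 homogeneous rotation matrix
P_dash = np.array([[0], [-offset], [0], [1]])
P = np.array([[1.0], [0.3], [0], [1]])         # made-up marker position in base_link
P_base = np.matmul(R, P_dash) + P

base_position_x = P_base[0, 0]                 # 1.0
base_position_y = P_base[1, 0]                 # -0.2
phi = atan2(base_position_y, base_position_x)            # ~ -0.20 rad
dist = sqrt(base_position_x ** 2 + base_position_y ** 2)  # ~ 1.02 m to travel
print(base_position_x, base_position_y, phi, dist)
```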
The align_to_marker() method is where we command Stretch to the pose goal in three steps using the Joint Trajectory action server. For an explanation on how to form the trajectory goal, you can refer to the Follow Joint Trajectory tutorial.
```python
def align_to_marker(self):
```
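The method body is not reproduced in this breakdown, but to give a flavor of how one of the three motions can be packaged for the joint trajectory action server, here is a hedged sketch. The rotation_goal() helper and the virtual joint name rotate_mobile_base are assumptions based on the Follow Joint Trajectory tutorial, not a copy of this demo's code.
```python
# Hedged sketch: packaging the first of the three motions (rotate towards the goal)
# as a FollowJointTrajectory goal. 'rotate_mobile_base' is an assumed virtual joint name.
from builtin_interfaces.msg import Duration
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint


def rotation_goal(angle_rad):
    point = JointTrajectoryPoint()
    point.positions = [angle_rad]              # incremental base rotation in radians
    point.time_from_start = Duration(sec=4)

    goal = FollowJointTrajectory.Goal()
    goal.trajectory.joint_names = ['rotate_mobile_base']
    goal.trajectory.points = [point]
    return goal

# For example, the first step could then be dispatched with the action client
# created in the constructor: self.trajectory_client.send_goal_async(rotation_goal(phi))
```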
The End! If you want to work with a different ArUco marker than the one we used in this tutorial, you can do so by changing line 44 in the code to the marker you wish to detect. Also, don't forget to add the marker to the ArUco marker dictionary.

obstacle_avoider.md (+74, -12)

## Obstacle Avoider
In this tutorial we will work with Stretch to detect and avoid obstacles using the onboard RPlidar A1 laser scanner and learn how to filter laser scan data. If you want to know more about the laser scanner setup on Stretch and how to get it up and running, we recommend visiting the previous tutorials on filtering laser scans and mobile base collision avoidance.
A major drawback of using any ToF (Time of Flight) sensor is the inherent inaccuracies as a result of occlusions and weird reflection and diffraction phenomena the light pulses are subject to in an unstructured environment. This results in unexpected and undesired noise that can get in the way of an otherwise extremely useful sensor. Fortunately, it is easy to account for and eliminate these inaccuracies to a great extent by filtering out the noise. We will do this with a ROS package called laser_filters that comes prebuilt with some pretty handy laser scan message filters.
By the end of this tutorial, you will be able to tweak them for your particular use case and publish and visualize them on the /scan_filtered topic using RViz. So let’s jump in! We will look at three filters from this package that have been tuned to work well with Stretch in an array of scenarios.
## LaserScan Filtering
LaserScanAngularBoundsFilterInPlace - This filter removes laser scans belonging to an angular range. For Stretch, we use this filter to discount points that are occluded by the mast because it is a part of Stretch’s body and not really an object we need to account for as an obstacle while navigating the mobile base.
LaserScanSpeckleFilter - We use this filter to remove phantom detections in the middle of empty space that are a result of reflections around corners. These disjoint speckles can be detected as false positives and result in jerky motion of the base through empty space. Removing them returns a relatively noise-free scan.
LaserScanBoxFilter - Stretch is prone to returning false detections right over the mobile base. While navigating, since it’s safe to assume that Stretch is not standing right above an obstacle, we filter out any detections that are in a box shape over the mobile base.
However, beware that filtering laser scans comes at the cost of a sparser scan that might not be ideal for all applications. If you want to tweak the values for your end application, you could do so by changing the values in the laser_filter_params.yaml file and by following the laser_filters package wiki. Also, if you are feeling zany and want to use the raw unfiltered scans from the laser scanner, simply subscribe to the /scan topic instead of the /scan_filtered topic.
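For reference, hooking the filter chain into your own launch file generally amounts to starting the scan_to_scan_filter_chain node from the laser_filters package with your parameter file, roughly as in the hedged sketch below; the YAML path and the topic remappings are placeholders, not the exact wiring used by stretch_core.
```python
# Generic sketch of running the laser_filters chain with a custom parameter file.
# The YAML path is a placeholder; point it at your own copy of laser_filter_params.yaml.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='laser_filters',
            executable='scan_to_scan_filter_chain',
            parameters=['/path/to/laser_filter_params.yaml'],
            remappings=[('scan', '/scan'), ('scan_filtered', '/scan_filtered')],
        ),
    ])
```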
## Avoidance Logic
Now, let's use what we have learned so far to upgrade the collision avoidance demo so that Stretch is able to scan an entire room autonomously without bumping into things or people. To account for dynamic obstacles getting too close to the robot, we will define a keepout distance of 0.4 m - detections below this value stop the robot. To keep Stretch from getting too close to static obstacles, we will define another variable called the turning distance and set it to 0.75 m - frontal detections below this value make Stretch turn to the left until it sees a clear path ahead.
Building up on the teleoperation using velocity commands tutorial, let's implement a simple logic for obstacle avoidance. The logic can be broken down into three steps:
1. If the minimum value from the frontal scans is greater than 0.75 m then continue to move forward
2. If the minimum value from the frontal scans is less than 0.75 m then turn to the left until this is no longer true
3. If the minimum value from the overall scans is less than 0.4 m then stop the robot
This simple algorithm is sufficient to account for both static and dynamic obstacles.
## Warnings
If you see Stretch try to run over your lazy cat or headbutt a wall, just press the bright runstop button on Stretch's head to calm it down. For pure navigation tasks, it's also safer to stow Stretch's arm. Execute the command:
```bash
stretch_robot_stow.py
```
## See It In Action
Alright, let's see it in action! Execute the following command to run the scripts:
```bash
ros2 launch stretch_core rplidar_keepout.launch.py
```
## Code Breakdown
The turning distance is defined by the distance attribute and the keepout distance is defined by the keepout attribute.
```python
self.distance = 0.75 # robot turns at this distance
self.keepout = 0.4 # robot stops at this distance
```
To pass velocity commands to the mobile base we publish the translational and rotational velocities to the /stretch/cmd_vel topic. To subscribe to the filtered laser scans from the laser scanner, we subscribe to the /scan_filtered topic. While you are at it, go ahead and check the behavior by switching to the /scan topic instead. You're welcome!
```python
self.publisher_ = self.create_publisher(Twist, '/stretch/cmd_vel', 1) #/stretch_diff_drive_controller/cmd_vel for gazebo
self.subscriber_ = self.create_subscription(LaserScan, '/scan_filtered', self.lidar_callback, 10)
```
This is the callback function for the laser scanner that gets called every time a new message is received.
```python
def lidar_callback(self, msg):
```
When the scan message is filtered, all the ranges that are filtered out are assigned the nan (not a number) value. This can get in the way of computing the minimum. Therefore, we reassign these values to inf (infinity).
```python
all_points = [r if (not isnan(r)) else inf for r in msg.ranges]
```
Next, we compute the two minimums that are necessary for the avoidance logic to work - the overall minimum and the frontal minimum, named min_all and min_front respectively.
```python
front_points = [r * sin(theta) if (theta < -2.5 or theta > 2.5) else inf for r,theta in zip(msg.ranges, angles)]
front_ranges = [r if abs(y) < self.extent else inf for r,y in zip(msg.ranges, front_points)]
min_front = min(front_ranges)
min_all = min(all_points)
```
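One detail the snippet above relies on is an angles list that gives the bearing of each range reading. It is not shown in this excerpt, but it can be derived from the standard LaserScan message fields, for example as in the sketch below (the tutorial code may compute it differently).
```python
# Build the per-ray angle list paired with msg.ranges from sensor_msgs/LaserScan fields.
angles = [msg.angle_min + i * msg.angle_increment for i in range(len(msg.ranges))]
```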
Finally, we check the minimum values against the distance and keepout attributes to set the rotational and linear velocities of the mobile base with the set_speed() method.
```python
if(min_all < self.keepout):
    lin_vel = 0.0
    rot_vel = 0.0
elif(min_front < self.distance):
    lin_vel = 0.0
    rot_vel = 0.25
else:
    lin_vel = 0.5
    rot_vel = 0.0
self.set_speed(lin_vel, rot_vel)
```
That wasn't too hard, was it? Feel free to play with this code and change the attributes to see how it affects Stretch's behavior.
