From 40bb0e9ed690d3bb62cb0f9c15941c1fe0ab8af7 Mon Sep 17 00:00:00 2001
From: hello-jesus <141784078+hello-jesus@users.noreply.github.com>
Date: Wed, 23 Aug 2023 14:07:16 -0700
Subject: [PATCH] Create perception tutorial for ROS2.md

---
 ros2/perception.md | 48 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 ros2/perception.md

diff --git a/ros2/perception.md b/ros2/perception.md
new file mode 100644
index 0000000..7f1569d
--- /dev/null
+++ b/ros2/perception.md
@@ -0,0 +1,48 @@
+## Perception Introduction
+
+The Stretch robot is equipped with the [Intel RealSense D435i camera](https://www.intelrealsense.com/depth-camera-d435i/), an essential component that allows the robot to measure and analyze the world around it. In this tutorial, we will showcase how to visualize the various topics published by the camera.
+
+Begin by running the stretch driver launch file:
+
+```{.bash .shell-prompt}
+ros2 launch stretch_core stretch_driver.launch.py
+```
+
+To activate the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/) and publish topics to be visualized, run the following launch file in a new terminal:
+
+```{.bash .shell-prompt}
+ros2 launch stretch_core d435i_low_resolution.launch.py
+```
+
+Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/noetic/rviz/perception_example.rviz) with the perception topics already in the Displays tree. You can visualize these topics and the robot model by running the command below in a new terminal:
+
+```{.bash .shell-prompt}
+ros2 run rviz2 rviz2 -d /home/hello-robot/ament_ws/src/stretch_tutorials/rviz/perception_example.rviz
+```
+
+### PointCloud2 Display
+
+The list of displays on the left side of the interface visualizes the camera data. Each display has its own properties and a status that indicates whether topic messages are being received.
+
+For the `PointCloud2` display, [sensor_msgs/PointCloud2](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/PointCloud2.html) messages are received on the `/camera/depth/color/points` topic, and the GIF below demonstrates the various display properties when visualizing the data.
+
+*(GIF: the PointCloud2 display visualized in RViz)*
+
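+You can also inspect the cloud programmatically with a small subscriber. Below is a minimal `rclpy` sketch that listens on the `/camera/depth/color/points` topic and logs the size of each incoming cloud; the node name is an illustrative choice, not a name from the Stretch packages.
+
+```python
+import rclpy
+from rclpy.node import Node
+from rclpy.qos import qos_profile_sensor_data
+from sensor_msgs.msg import PointCloud2
+
+
+class PointCloudListener(Node):
+    """Logs the size of each incoming PointCloud2 message."""
+
+    def __init__(self):
+        super().__init__('point_cloud_listener')  # hypothetical node name
+        # Sensor-data QoS (best effort) is the usual choice for camera streams
+        self.create_subscription(
+            PointCloud2, '/camera/depth/color/points',
+            self.callback, qos_profile_sensor_data)
+
+    def callback(self, msg):
+        # width * height gives the number of points in the cloud
+        self.get_logger().info(
+            f'Received a cloud with {msg.width * msg.height} points')
+
+
+def main():
+    rclpy.init()
+    rclpy.spin(PointCloudListener())
+    rclpy.shutdown()
+
+
+if __name__ == '__main__':
+    main()
+```
+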
+### Image Display
+
+When toggled, the `Image` display opens a new rendering window that visualizes [sensor_msgs/Image](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/Image.html) messages from the */camera/color/image_raw* topic. This shows the image data from the camera; however, the image comes out sideways. You can select */camera/color/image_raw_upright_view* from the **Image Topic** options to get an upright view of the image.
+
+*(GIF: the Image display rendering window in RViz)*
+
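+If you want to produce an upright image yourself, a small node can subscribe to the raw stream, rotate each frame, and republish it. The sketch below uses `cv_bridge` and OpenCV; the node name, the output topic, and the rotation direction are assumptions for illustration, not the implementation behind */camera/color/image_raw_upright_view*.
+
+```python
+import cv2
+import rclpy
+from cv_bridge import CvBridge
+from rclpy.node import Node
+from rclpy.qos import qos_profile_sensor_data
+from sensor_msgs.msg import Image
+
+
+class UprightImage(Node):
+    """Rotates the sideways camera image and republishes it."""
+
+    def __init__(self):
+        super().__init__('upright_image')  # hypothetical node name
+        self.bridge = CvBridge()
+        self.create_subscription(
+            Image, '/camera/color/image_raw',
+            self.callback, qos_profile_sensor_data)
+        # Hypothetical output topic for the rotated image
+        self.publisher = self.create_publisher(
+            Image, '/camera/color/image_upright', 1)
+
+    def callback(self, msg):
+        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
+        # Rotation direction is an assumption; applying the wrong 90-degree
+        # rotation leaves the image upside down, so flip it if needed
+        upright = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
+        self.publisher.publish(self.bridge.cv2_to_imgmsg(upright, encoding='bgr8'))
+
+
+def main():
+    rclpy.init()
+    rclpy.spin(UprightImage())
+    rclpy.shutdown()
+
+
+if __name__ == '__main__':
+    main()
+```
+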
+### DepthCloud Display
+
+The `DepthCloud` display is visualized in the main RViz window. It takes in the depth image and the RGB image provided by the RealSense camera to register and visualize a colorized point cloud.
+
+*(GIF: the DepthCloud display in the main RViz window)*
+
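+Registering depth against color requires frames that belong together in time. The sketch below shows one way to pair the two image streams with an approximate-time synchronizer from `message_filters`; the node name, the depth topic (*/camera/depth/image_rect_raw*), and the slop value are assumptions for illustration.
+
+```python
+import rclpy
+from message_filters import ApproximateTimeSynchronizer, Subscriber
+from rclpy.node import Node
+from rclpy.qos import qos_profile_sensor_data
+from sensor_msgs.msg import Image
+
+
+class DepthColorPairs(Node):
+    """Pairs depth and color frames that arrive close together in time."""
+
+    def __init__(self):
+        super().__init__('depth_color_pairs')  # hypothetical node name
+        self.depth_sub = Subscriber(
+            self, Image, '/camera/depth/image_rect_raw',  # assumed topic
+            qos_profile=qos_profile_sensor_data)
+        self.color_sub = Subscriber(
+            self, Image, '/camera/color/image_raw',
+            qos_profile=qos_profile_sensor_data)
+        # Queue of 10 pending messages; accept pairs whose
+        # timestamps differ by at most 0.1 s
+        self.sync = ApproximateTimeSynchronizer(
+            [self.depth_sub, self.color_sub], 10, 0.1)
+        self.sync.registerCallback(self.callback)
+
+    def callback(self, depth, color):
+        self.get_logger().info(
+            f'Paired depth {depth.width}x{depth.height} '
+            f'with color {color.width}x{color.height}')
+
+
+def main():
+    rclpy.init()
+    rclpy.spin(DepthColorPairs())
+    rclpy.shutdown()
+
+
+if __name__ == '__main__':
+    main()
+```
+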
+## Deep Perception
+
+Hello Robot also has a ROS package that uses deep learning models for various detection demos. Tutorials for trying out these models are available at [stretch_deep_perception](https://docs.hello-robot.com/0.2/stretch-tutorials/ros2/deep_perception/).