## Perception Introduction

The Stretch robot is equipped with the [Intel RealSense D435i camera](https://www.intelrealsense.com/depth-camera-d435i/), an essential component that allows the robot to measure and analyze the world around it. In this tutorial, we are going to showcase how to visualize the various topics published by the camera.
Begin by running the `stretch_driver.launch` file.

```bash
# Terminal 1
roslaunch stretch_core stretch_driver.launch
```

To activate the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/) and publish topics to be visualized, run the following launch file in a new terminal.

```bash
# Terminal 2
roslaunch stretch_core d435i_low_resolution.launch
```

Within this tutorial package, there is an [RViz config file](https://github.com/hello-robot/stretch_tutorials/blob/main/rviz/perception_example.rviz) with the topics for perception already in the Display tree. You can visualize these topics and the robot model by running the command below in a new terminal.

```bash
# Terminal 3
rosrun rviz rviz -d /home/hello-robot/catkin_ws/src/stretch_tutorials/rviz/perception_example.rviz
```
### PointCloud2 Display

The list of displays on the left side of the interface visualizes the camera data. Each display has its own properties and a status that notifies the user whether topic messages are being received.

For the `PointCloud2` display, a [sensor_msgs/PointCloud2](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/PointCloud2.html) message named */camera/depth/color/points* is received, and the GIF below demonstrates the various display properties when visualizing the data.

<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/main/images/perception_rviz.gif"/>
</p>
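Under the hood, a `PointCloud2` message is a flat byte buffer whose layout is described by its `fields`, `point_step`, and `width`/`height`. The sketch below illustrates how the XYZ coordinates could be unpacked from such a buffer, assuming little-endian float32 `x`, `y`, `z` stored consecutively at offset 0 of each point record (a common layout for the D435i cloud; this is an illustration only — in a real node you would use `sensor_msgs.point_cloud2.read_points` instead).

```python
import struct

def unpack_xyz(data, point_step, num_points, xyz_offset=0):
    """Unpack (x, y, z) float32 triples from a PointCloud2-style byte buffer.

    Assumes little-endian float32 x, y, z stored consecutively at
    `xyz_offset` within each `point_step`-byte point record.
    """
    points = []
    for i in range(num_points):
        base = i * point_step + xyz_offset
        x, y, z = struct.unpack_from("<fff", data, base)
        points.append((x, y, z))
    return points

# Build a synthetic 2-point buffer with point_step=16 (12 bytes of xyz
# followed by 4 bytes of padding, standing in for packed RGB).
buf = b"".join(struct.pack("<fff", *p) + b"\x00\x00\x00\x00"
               for p in [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
print(unpack_xyz(buf, point_step=16, num_points=2))
# [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```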
### Image Display

The `Image` display, when toggled, creates a new rendering window that visualizes a [sensor_msgs/Image](http://docs.ros.org/en/lunar/api/sensor_msgs/html/msg/Image.html) message, */camera/color/image_raw*. This feature shows the image data from the camera; however, the image comes out sideways. Thus, you can select */camera/color/image_raw_upright_view* from the **Image Topic** options to get an upright view of the image.

<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/main/images/perception_image.gif"/>
</p>
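The upright-view topic exists because the camera is mounted sideways on Stretch, so producing an upright frame amounts to a 90-degree rotation of the pixel array. A minimal NumPy sketch of that idea (illustrative only; the actual */camera/color/image_raw_upright_view* topic is republished by the robot's software, and the rotation direction here is an assumption):

```python
import numpy as np

def to_upright(image):
    """Rotate a sideways camera frame 90 degrees clockwise.

    `image` is an (H, W, 3) array, as you would get from converting a
    sensor_msgs/Image with cv_bridge. Rotating swaps height and width.
    """
    return np.rot90(image, k=-1)  # k=-1 rotates clockwise

# A tiny 2x3 "image": rotating it yields a 3x2 image.
frame = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)
upright = to_upright(frame)
print(frame.shape, "->", upright.shape)  # (2, 3, 3) -> (3, 2, 3)
```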
### Camera Display

The `Camera` display is similar to the `Image` display. In this setting, the rendering window also visualizes other displays, such as the PointCloud2, RobotModel, and Grid displays. The **Visibility** property can toggle which displays you are interested in visualizing.

<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/main/images/perception_camera.gif"/>
</p>
### DepthCloud Display

The `DepthCloud` display is visualized in the main RViz window. This display takes in the depth image and RGB image provided by the RealSense camera to visualize and register a point cloud.

<p align="center">
<img src="https://raw.githubusercontent.com/hello-robot/stretch_tutorials/main/images/perception_depth.gif"/>
</p>
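Building a point cloud from a depth image is a pinhole back-projection: each pixel (u, v) with depth z maps to x = (u − cx)·z/fx and y = (v − cy)·z/fy. A minimal NumPy sketch of that math (the intrinsics fx, fy, cx, cy below are made-up illustration values; on the robot they come from the camera's CameraInfo topic):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an (H*W, 3) point array
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A 2x2 depth image with every point 1 m away; illustrative intrinsics.
depth = np.ones((2, 2))
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```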
## Deep Perception

Hello Robot also has a ROS package that uses deep learning models for various detection demos. A link to the package is provided: [stretch_deep_perception](https://github.com/hello-robot/stretch_ros/tree/master/stretch_deep_perception).