diff --git a/example_1.md b/example_1.md
index 3edb527..e4fce85 100644
--- a/example_1.md
+++ b/example_1.md
@@ -12,7 +12,7 @@ Begin by running the following command in a new terminal.
roslaunch stretch_core stretch_driver.launch
```
-Switch the mode to *navigation* mode using a rosservice call. Then drive the robot forward with the `move` node.
+Switch to *navigation* mode using a rosservice call. Then drive the robot forward with the [move.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/move.py) node.
```bash
# Terminal 2
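
For context, the core of a node like move.py can be quite small. The sketch below publishes velocity commands while the driver is in *navigation* mode; the `/stretch/cmd_vel` topic name is an assumption, so check the driver's advertised topics on your robot.
```python
#!/usr/bin/env python3
# Sketch: drive the base forward by publishing Twist messages.
# Assumes the driver is in navigation mode and subscribes to /stretch/cmd_vel.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('move_sketch')
pub = rospy.Publisher('/stretch/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)        # publish at 10 Hz
command = Twist()
command.linear.x = 0.1       # forward velocity in m/s

while not rospy.is_shutdown():
    pub.publish(command)
    rate.sleep()
```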
diff --git a/example_10.md b/example_10.md
index cea0163..748806e 100644
--- a/example_10.md
+++ b/example_10.md
@@ -18,7 +18,7 @@ Within this tutorial package, there is an RViz config file with the topics for t
# Terminal 2
rosrun rviz rviz -d /home/hello-robot/catkin_ws/src/stretch_tutorials/rviz/tf2_broadcaster_example.rviz
```
-Then run the tf2 broadcaster node to visualize three static frames.
+Then run the [tf2_broadcaster.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/tf2_broadcaster.py) node to visualize three static frames.
```bash
# Terminal 3
@@ -31,7 +31,7 @@ The gif below visualizes what happens when running the previous node.
-**OPTIONAL**: If you would like to see how the static frames update while the robot is in motion, run the stow command node and observe the tf frames in RViz.
+**OPTIONAL**: If you would like to see how the static frames update while the robot is in motion, run the [stow_command.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/stow_command.py) node and observe the tf frames in RViz.
```bash
# Terminal 4
@@ -202,14 +202,14 @@ Begin by starting up the stretch driver launch file.
# Terminal 1
roslaunch stretch_core stretch_driver.launch
```
-Then run the tf2 broadcaster node to create the three static frames.
+Then run the [tf2_broadcaster.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/tf2_broadcaster.py) node to create the three static frames.
```bash
# Terminal 2
cd catkin_ws/src/stretch_tutorials/src/
python3 tf2_broadcaster.py
```
-Finally, run the tf2 listener node to print the transform between two links.
+Finally, run the [tf2_listener.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/tf2_listener.py) node to print the transform between two links.
```bash
# Terminal 3
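
For context, broadcasting a static frame with tf2 takes only a few lines, and the listener side mirrors it with a `tf2_ros.Buffer` plus `lookup_transform()`. The frame names in this sketch are placeholders, not necessarily the ones tf2_broadcaster.py uses.
```python
#!/usr/bin/env python3
# Sketch: attach one static frame to an existing link with tf2.
# 'link_mast' and 'fk_link_mast' are placeholder frame names.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('tf2_broadcaster_sketch')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'link_mast'     # parent frame
t.child_frame_id = 'fk_link_mast'   # new static frame
t.transform.translation.z = 1.0     # 1 m along the parent's z-axis
t.transform.rotation.w = 1.0        # identity rotation

broadcaster.sendTransform(t)
rospy.spin()
```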
diff --git a/example_11.md b/example_11.md
index a35d832..06bae3d 100644
--- a/example_11.md
+++ b/example_11.md
@@ -14,7 +14,7 @@ To activate the [RealSense camera](https://www.intelrealsense.com/depth-camera-d
# Terminal 2
roslaunch stretch_core d435i_low_resolution.launch
```
-Then run the `PointCloud` transformer node.
+Then run the [pointcloud_transformer.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/pointcloud_transformer.py) node.
```bash
# Terminal 3
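
For context, the transform step at the heart of such a node looks roughly like this sketch; the input topic and target frame are assumptions, and `tf2_sensor_msgs` is one way to apply the transform.
```python
#!/usr/bin/env python3
# Sketch: re-express an incoming PointCloud2 in the base frame.
# The topic '/camera/depth/color/points' and frame 'base_link' are assumptions.
import rospy
import tf2_ros
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node('pointcloud_transformer_sketch')
buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buffer)
pub = rospy.Publisher('transformed_cloud', PointCloud2, queue_size=1)

def callback(cloud):
    try:
        # Transform from the cloud's native frame into the base frame.
        transform = buffer.lookup_transform('base_link', cloud.header.frame_id,
                                            rospy.Time())
        pub.publish(do_transform_cloud(cloud, transform))
    except tf2_ros.TransformException:
        rospy.logwarn('Transform not available yet')

rospy.Subscriber('/camera/depth/color/points', PointCloud2, callback)
rospy.spin()
```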
diff --git a/example_12.md b/example_12.md
index 64ae220..6378875 100644
--- a/example_12.md
+++ b/example_12.md
@@ -46,7 +46,7 @@ Within this tutorial package, there is an RViz config file with the topics for t
rosrun rviz rviz -d /home/hello-robot/catkin_ws/src/stretch_tutorials/rviz/aruco_detector_example.rviz
```
-Then run the aruco tag locator node.
+Then run the [aruco_tag_locator.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/aruco_tag_locator.py) node.
```bash
# Terminal 5
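
For context, once the detector is publishing a tf frame for each visible tag, locating a tag reduces to a tf2 lookup like the sketch below; the tag frame name is a placeholder for whatever the detector advertises.
```python
#!/usr/bin/env python3
# Sketch: report where a detected ArUco tag sits relative to the base.
# 'base_right' is a placeholder for the tag frame the detector publishes.
import rospy
import tf2_ros

rospy.init_node('aruco_locator_sketch')
buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buffer)
rate = rospy.Rate(1.0)

while not rospy.is_shutdown():
    try:
        trans = buffer.lookup_transform('base_link', 'base_right', rospy.Time())
        t = trans.transform.translation
        rospy.loginfo('Tag at x=%.2f y=%.2f z=%.2f m', t.x, t.y, t.z)
    except tf2_ros.TransformException:
        rospy.logwarn('Tag frame not visible yet')
    rate.sleep()
```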
diff --git a/example_13.md b/example_13.md
index c2e6923..4410e79 100644
--- a/example_13.md
+++ b/example_13.md
@@ -21,7 +21,7 @@ Where `${HELLO_FLEET_PATH}` is the path of the `.yaml` file.
-Now we are going to use a node to send a a move base goal half a meter in front of the map's origin. run the following command to execute the [navigation.py](https://github.com/hello-robot/stretch_tutorials/blob/main/src/navigation.py) node.
+Now we are going to use a node to send a move base goal half a meter in front of the map's origin. Run the following command to execute the [navigation.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/navigation.py) node.
```bash
# Terminal 2
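
For context, sending such a goal through `move_base` follows the standard actionlib pattern; this sketch is the usual minimal form, not necessarily how navigation.py structures it.
```python
#!/usr/bin/env python3
# Sketch: send a move_base goal 0.5 m in front of the map origin.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('navigation_sketch')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 0.5     # half a meter along the map x-axis
goal.target_pose.pose.orientation.w = 1.0  # keep the identity heading

client.send_goal(goal)
client.wait_for_result()
```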
diff --git a/example_2.md b/example_2.md
index 8b71bcd..a89c302 100644
--- a/example_2.md
+++ b/example_2.md
@@ -50,7 +50,7 @@ Then in a new terminal run the `rplidar.launch` file from `stretch_core`.
roslaunch stretch_core rplidar.launch
```
-To filter the lidar scans for ranges that are directly in front of Stretch (width of 1 meter) run the scan filter node by typing the following in a new terminal.
+To filter the lidar scans for ranges that are directly in front of Stretch (width of 1 meter), run the [scan_filter.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/scan_filter.py) node by typing the following in a new terminal.
```bash
# Terminal 3
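
For context, a 1 m wide front-facing filter only needs the geometry of each return. A sketch along these lines (topic names assumed) marks everything outside the corridor as out of range.
```python
#!/usr/bin/env python3
# Sketch: keep only lidar returns inside a 1 m wide corridor ahead of the
# robot; everything else is reported as out of range.
import math
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('scan_filter_sketch')
pub = rospy.Publisher('filtered_scan', LaserScan, queue_size=1)

def callback(scan):
    ranges = list(scan.ranges)
    for i, r in enumerate(ranges):
        angle = scan.angle_min + i * scan.angle_increment
        x = r * math.cos(angle)  # distance ahead of the robot
        y = r * math.sin(angle)  # lateral offset
        if not (x > 0.0 and abs(y) < 0.5):
            ranges[i] = float('inf')
    scan.ranges = ranges
    pub.publish(scan)

rospy.Subscriber('/scan', LaserScan, callback)
rospy.spin()
```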
diff --git a/example_3.md b/example_3.md
index 7e469bb..cf2c90d 100644
--- a/example_3.md
+++ b/example_3.md
@@ -14,7 +14,7 @@ Then in a new terminal type the following to activate the LiDAR sensor.
roslaunch stretch_core rplidar.launch
```
-To set *navigation* mode and to activate the avoider node, type the following in a new terminal.
+To set *navigation* mode and to activate the [avoider.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/avoider.py) node, type the following in a new terminal.
```bash
# Terminal 3
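
For context, a simple avoider can close the loop from the lidar straight to base velocity, as in this sketch; the `/stretch/cmd_vel` topic and the 0.6 m set point are assumptions.
```python
#!/usr/bin/env python3
# Sketch: back away when the nearest lidar return is too close and creep
# forward when it is too far. Assumes navigation mode and /stretch/cmd_vel.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

rospy.init_node('avoider_sketch')
pub = rospy.Publisher('/stretch/cmd_vel', Twist, queue_size=1)

def callback(scan):
    nearest = min((r for r in scan.ranges if r > 0.0), default=float('inf'))
    command = Twist()
    # Proportional response around a 0.6 m set point, clamped to +/- 0.1 m/s.
    command.linear.x = max(-0.1, min(0.1, 0.5 * (nearest - 0.6)))
    pub.publish(command)

rospy.Subscriber('/scan', LaserScan, callback)
rospy.spin()
```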
diff --git a/example_4.md b/example_4.md
index 89f4978..1432d84 100644
--- a/example_4.md
+++ b/example_4.md
@@ -9,7 +9,7 @@ Let's bringup stretch in the willowgarage world from the [gazebo basics tutorial
# Terminal 1
roslaunch stretch_gazebo gazebo.launch world:=worlds/willowgarage.world rviz:=true
```
-the `rviz` flag will open an RViz window to visualize a variety of ROS topics. In a new terminal run the following commands to create a marker.
+The `rviz` flag will open an RViz window to visualize a variety of ROS topics. In a new terminal, run the following commands to execute the [marker.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/marker.py) node.
```bash
# Terminal 2
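
For context, publishing an RViz marker is a handful of lines; this sketch places a sphere above the base frame, which is roughly what a marker node boils down to.
```python
#!/usr/bin/env python3
# Sketch: publish a red sphere above the base so RViz can display it.
import rospy
from visualization_msgs.msg import Marker

rospy.init_node('marker_sketch')
pub = rospy.Publisher('visualization_marker', Marker, queue_size=1)

marker = Marker()
marker.header.frame_id = 'base_link'
marker.type = Marker.SPHERE
marker.action = Marker.ADD
marker.pose.orientation.w = 1.0
marker.pose.position.z = 2.0                             # 2 m above the base
marker.scale.x = marker.scale.y = marker.scale.z = 0.5   # 0.5 m diameter
marker.color.r = 1.0
marker.color.a = 1.0                                     # fully opaque

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    marker.header.stamp = rospy.Time.now()
    pub.publish(marker)
    rate.sleep()
```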
diff --git a/example_5.md b/example_5.md
index 754a772..2cf7c5b 100644
--- a/example_5.md
+++ b/example_5.md
@@ -10,7 +10,7 @@ Begin by starting up the stretch driver launch file.
# Terminal 1
roslaunch stretch_core stretch_driver.launch
```
-You can then hit the run-stop button (you should hear a beep and the LED light in the button blink) and move the robot's joints to a desired configuration. Once you are satisfied with the configuration, hold the run-stop button until you hear a beep. Then run the following command to print the joint positions of the lift, arm, and wrist.
+You can then hit the run-stop button (you should hear a beep and the LED light in the button blink) and move the robot's joints to a desired configuration. Once you are satisfied with the configuration, hold the run-stop button until you hear a beep. Then run the following command to execute the [joint_state_printer.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/joint_state_printer.py) node, which will print the joint positions of the lift, arm, and wrist.
```bash
cd catkin_ws/src/stretch_tutorials/src/
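
For context, printing joint positions is a single `wait_for_message` call plus an index lookup; the topic and joint names in this sketch are representative assumptions, so match them to what `rostopic echo` shows on your robot.
```python
#!/usr/bin/env python3
# Sketch: grab one JointState message and print a few joint positions.
# The 'joint_states' topic and the joint names are assumptions.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('joint_state_printer_sketch')
msg = rospy.wait_for_message('joint_states', JointState)

for name in ['joint_lift', 'joint_wrist_yaw']:
    index = msg.name.index(name)  # position array parallels the name array
    print('{0}: {1:.4f}'.format(name, msg.position[index]))
```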
diff --git a/example_6.md b/example_6.md
index 11aaede..04db622 100644
--- a/example_6.md
+++ b/example_6.md
@@ -12,7 +12,7 @@ Begin by running the following command in a terminal.
# Terminal 1
roslaunch stretch_core stretch_driver.launch
```
-Switch the mode to *position* mode using a rosservice call. Then run the single effort sensing node.
+Switch to *position* mode using a rosservice call. Then run the [effort_sensing.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/effort_sensing.py) node.
```bash
# Terminal 2
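
For context, the effort a joint is exerting arrives in the same `JointState` messages as its position, so a minimal effort monitor looks like this sketch ('joint_lift' is a representative name, not necessarily the one effort_sensing.py uses).
```python
#!/usr/bin/env python3
# Sketch: log the effort reported for one joint as the robot moves.
# 'joint_lift' and the bare 'joint_states' topic are assumptions.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('effort_sensing_sketch')

def callback(msg):
    if 'joint_lift' in msg.name:
        index = msg.name.index('joint_lift')
        rospy.loginfo('joint_lift effort: %.2f', msg.effort[index])

rospy.Subscriber('joint_states', JointState, callback)
rospy.spin()
```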
diff --git a/example_7.md b/example_7.md
index bd7b93f..ea7a7c1 100644
--- a/example_7.md
+++ b/example_7.md
@@ -46,7 +46,7 @@ rosrun image_view image_saver image:=/camera/color/image_raw_upright_view
## Capture Image with Python Script
-In this section, you can use a Python node to capture an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/). Run the following commands to save a .jpeg image of the image topic */camera/color/image_raw_upright_view*.
+In this section, you can use a Python node to capture an image from the [RealSense camera](https://www.intelrealsense.com/depth-camera-d435i/). Execute the [capture_image.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/capture_image.py) node to save a .jpeg image from the image topic */camera/color/image_raw_upright_view*.
```bash
# Terminal 4
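
For context, saving one frame from an image topic takes a `wait_for_message` plus `cv_bridge`, roughly as in this sketch; the output filename is arbitrary.
```python
#!/usr/bin/env python3
# Sketch: save a single frame from the upright image topic as a .jpeg file.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node('capture_image_sketch')
msg = rospy.wait_for_message('/camera/color/image_raw_upright_view', Image)

bridge = CvBridge()
frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')  # ROS -> OpenCV
cv2.imwrite('capture.jpeg', frame)  # output path is arbitrary
```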
diff --git a/example_8.md b/example_8.md
index da1e8c6..3efad5f 100644
--- a/example_8.md
+++ b/example_8.md
@@ -11,7 +11,7 @@ Begin by running the `respeaker.launch` file in a terminal.
# Terminal 1
roslaunch respeaker_ros sample_respeaker.launch
```
-Then run the speech_text node.
+Then run the [speech_text.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/speech_text.py) node.
```bash
# Terminal 2
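
For context, the ReSpeaker pipeline publishes recognized speech on a topic a node can simply subscribe to; the topic and message type in this sketch follow respeaker_ros conventions, so verify them with `rostopic info` on your setup.
```python
#!/usr/bin/env python3
# Sketch: print what the ReSpeaker's speech-to-text pipeline hears.
# Topic and message type are assumptions based on respeaker_ros.
import rospy
from speech_recognition_msgs.msg import SpeechRecognitionCandidates

def callback(msg):
    # transcript holds the candidate strings, best guess first
    rospy.loginfo('Heard: %s', ' '.join(msg.transcript))

rospy.init_node('speech_text_sketch')
rospy.Subscriber('/speech_to_text', SpeechRecognitionCandidates, callback)
rospy.spin()
```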
diff --git a/example_9.md b/example_9.md
index 2946a17..adfc5aa 100644
--- a/example_9.md
+++ b/example_9.md
@@ -8,14 +8,14 @@ Begin by running the following command in a new terminal.
# Terminal 1
roslaunch stretch_core stretch_driver.launch
```
-Switch the mode to *position* mode using a rosservice call. Then run the `respeaker.launch`.
+Switch to *position* mode using a rosservice call. Then run the `respeaker.launch` file.
```bash
# Terminal 2
rosservice call /switch_to_position_mode
roslaunch stretch_core respeaker.launch
```
-Then run the voice teleoperation base node in a new terminal.
+Then run the [voice_teleoperation_base.py](https://github.com/hello-robot/stretch_tutorials/blob/noetic/src/voice_teleoperation_base.py) node in a new terminal.
```bash
# Terminal 3
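
For context, voice teleoperation in *position* mode amounts to mapping recognized keywords onto incremental base commands. In this sketch the action name, the virtual 'translate_mobile_base' joint, and the speech topic are all assumptions drawn from stretch_ros and respeaker_ros conventions, not necessarily what voice_teleoperation_base.py uses.
```python
#!/usr/bin/env python3
# Sketch: turn keywords heard on /speech_to_text into incremental base motion
# through the driver's FollowJointTrajectory action (names are assumptions).
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint
from speech_recognition_msgs.msg import SpeechRecognitionCandidates

def move_base(increment):
    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = ['translate_mobile_base']
    point = JointTrajectoryPoint()
    point.positions = [increment]               # incremental translation in m
    point.time_from_start = rospy.Duration(1.0)
    goal.trajectory.points = [point]
    client.send_goal(goal)

def callback(msg):
    words = ' '.join(msg.transcript).lower()
    if 'forward' in words:
        move_base(0.1)
    elif 'back' in words:
        move_base(-0.1)

rospy.init_node('voice_teleop_sketch')
client = actionlib.SimpleActionClient('/stretch_controller/follow_joint_trajectory',
                                      FollowJointTrajectoryAction)
client.wait_for_server()
rospy.Subscriber('/speech_to_text', SpeechRecognitionCandidates, callback)
rospy.spin()
```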