OpenAI's text generation models, such as GPT-4 and GPT-3.5, have undergone extensive training to comprehend both natural and formal language. These models generate text based on given inputs, often referred to as "prompts". To use a model like GPT-4 effectively, you design prompts that serve as instructions or examples to guide the model in successfully completing a given task. GPT-4 can be applied to a wide range of tasks, including content or code generation, summarization, conversation, creative writing, and more.
These text generation models process text in chunks called tokens. Tokens represent commonly occurring sequences of characters. You can use OpenAI's [tokenizer tool](https://platform.openai.com/tokenizer) to test specific strings and see how they are translated into tokens. Why are tokens so important? Because the number of tokens you use, together with the kind of model (text generation, image, or audio models), determines how much you pay. The good news is that it's not that expensive and it's really useful; just be careful when dealing with image generation, as the cost is calculated per image, in contrast to text, which is priced per 1,000 tokens, or audio, which is billed per minute. You can take a look at the [pricing](https://openai.com/pricing) page from OpenAI for more information.
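If you want to count tokens from code rather than with the web tool, the `tiktoken` package (a separate install that the rest of this tutorial does not require) can encode a string with the tokenizer the chat models use; a minimal sketch:

```python
import tiktoken

# Load the tokenizer used by gpt-3.5-turbo and count the tokens in a prompt
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
tokens = encoding.encode("Stretch, move forward and turn left.")
print(f"{len(tokens)} tokens: {tokens}")
```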
In this tutorial we are using the GPT-3.5-turbo model, one of the newer text generation models alongside GPT-4 and GPT-4-turbo. We will use it with the [Chat Completion API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) so that we can "chat" with Stretch and command some basic movements using natural language. If you want to learn more and maybe build some applications of your own, take a look at [these examples](https://platform.openai.com/examples).
As you can see, there are different roles in the chat completion: the system, the user, and the assistant, each one with its own content. This is the base of the Chat Completion, and even the base for creating your own chatbot. Take a look at the roles below; a minimal example of how they fit together follows the list:
1. The system: here you write direct instructions. Shorter and clearer is usually better, but if the AI needs a long context to understand the task, you have to be more specific (you'll see this in the tutorial).
2. The user: this is the input text the model acts on. It can be a question or a normal conversation, and it depends on the context of the system message as well; if you tell the model it knows everything about robotics and then ask something about chemistry or biotechnology, it will output a message saying it cannot process your request.
3. The assistant: here you can help the model understand what you are going to do; this can also be a pre-crafted bot response. Take a look at the tutorial to understand this better.
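As a minimal illustration of how the three roles fit together, here is a sketch of the kind of `messages` list the Chat Completion API expects; the contents are placeholders, not the prompts used later in this tutorial:

```python
messages = [
    # The system message sets the overall behavior and context for the model
    {"role": "system", "content": "You are a helpful assistant that controls a mobile robot."},
    # The user message is the request typed by the person
    {"role": "user", "content": "Move forward and then turn left."},
    # The assistant message can seed an example reply for the model to imitate
    {"role": "assistant", "content": "['move_forward', 'turn_left']"},
]
```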
!!! note
    For your safety, put Stretch in an open area when you try this tutorial.
For this tutorial, we'll guide Stretch to move around and perform actions by writing our instructions in natural language in the terminal. Copy the Python code below and paste it into your own folder; we are only going to use Stretch Body and the OpenAI Python library. If you haven't installed the library yet, don't worry: follow [this link](https://platform.openai.com/docs/quickstart/developer-quickstart) and read the quickstart guide. There you can create an OpenAI account and set up your API key as well; this is important and the key is only yours, so be careful where you save it! To install the library, just run this in your terminal:
```{.bash .shell-prompt}
pip3 install --upgrade openai
```
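If you followed the quickstart, you likely already saved your API key as an environment variable; as a reminder, one common way to make it available to the library in your current terminal session is shown below (the value is just a placeholder for your own secret key):

```{.bash .shell-prompt}
export OPENAI_API_KEY="your-api-key-here"
```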
Inside the script, every movement gets its own method, for example `turn_left`:

```python
def turn_left(robot):
    ...
    robot.base.wait_until_at_setpoint()
    time.sleep(0.1)
```
We will need to make a method like this for every movement: for the base, we will need Stretch to move forward, move backward, turn right 90 degrees, or turn left 90 degrees. Keep in mind that rotations are measured in radians; if you wish to make adjustments, be sure to perform the necessary conversion.
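As a reference, here is a minimal sketch of what two more of these methods can look like with Stretch Body; the 0.5 m distance and the -1.57 rad angle are example values, not necessarily the ones used in the tutorial's script:

```python
import time

def move_forward(robot):
    # Translate the mobile base 0.5 m forward (example distance)
    robot.base.translate_by(0.5)
    robot.push_command()
    robot.base.wait_until_at_setpoint()
    time.sleep(0.1)

def turn_right(robot):
    # Rotate the base by -1.57 rad, roughly 90 degrees clockwise
    robot.base.rotate_by(-1.57)
    robot.push_command()
    robot.base.wait_until_at_setpoint()
    time.sleep(0.1)
```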
We will need the Large Language Model (LLM) to know what the action for each movement is, along with a description of it; we want the LLM to give us an explanation of the movement it made, and that's why we include this description.
```python
def chatter(input_text):
    ...
print(f"CHATGPT RESPONSE: {response_text}")
print(f"CHATGPT RESPONSE: {response_text}")
return response_text
return response_text
```
Let's commence with the chatter method. Here we initialize the LLM by specifying the system message and the assistant message. Precision in these initializations is crucial for the correct execution of our code. For the user role, we provide input via the terminal. To print the response, we use `response.choices[0].message.content`, as outlined in the documentation. To ensure uniformity and ease of handling, we employ the `strip()` and `lower()` methods: `strip()` removes any leading and trailing whitespace, including spaces or tabs, while `lower()` converts the response to lowercase. For instance, if the LLM outputs "MOVE_FORWARD", we transform it into "move_forward". This enhances consistency in handling the model's outputs.
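The snippet above only shows the end of the function. As a rough sketch (not the tutorial's exact code), a chatter implementation with the current OpenAI Python client can look like this; the system and assistant messages here are placeholders, while the tutorial's real prompts describe each of Stretch's movements in detail:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def chatter(input_text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # Placeholder prompts; the real ones list every movement and its description
            {"role": "system", "content": "Translate the user's request into a list of Stretch movements."},
            {"role": "assistant", "content": "Example answer: ['move_forward', 'turn_left']"},
            {"role": "user", "content": input_text},
        ],
    )
    # Normalize the reply so 'MOVE_FORWARD' and 'move_forward' are handled the same way
    response_text = response.choices[0].message.content.strip().lower()
    print(f"CHATGPT RESPONSE: {response_text}")
    return response_text
```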
Next comes the method that extracts the list of actions from the model's reply:

```python
def extract_action_sequence(response_text):
    ...
```

A few caveats to keep in mind when you run the code:
- Occasionally, the model can skip one movement (normally one of the final ones), such as those involving the arm or lift. For instance, if you instruct it to move up/down or front/back twice and then return to the starting position, there's a possibility it might execute only one backward movement instead of two. Unfortunately, there isn't a real solution to this; however, you can mitigate this type of error by providing more specific instructions.
- Sometimes the list of actions can appear in a different form. What the code does is look inside the response text for the list that starts and ends with square brackets `[]` (a sketch of this kind of bracket search follows this list) and then perform the movements. However, a malformed reply can occur: in one case the list of movements appeared as `['arm_front"`, and since the code searches for a complete bracketed list, it found nothing. While this is a rare occurrence, it did happen once; in such cases, the only thing to do is to stop the code and run it again.
- As mentioned in the tutorial, issues may arise with Stretch's movement when you want it to trace a geometric figure, for example; it can take a wrong turn and keep moving. Keep this in mind when you move Stretch around, and try to keep it in an open area just in case.
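For reference, here is a minimal sketch of the kind of bracket search described above; it is an illustrative reconstruction, not the tutorial's exact `extract_action_sequence` implementation:

```python
import ast

def extract_action_sequence(response_text):
    # Find the first '[' and the last ']' in the model's reply and try to
    # parse everything between them as a Python list of action names.
    start = response_text.find("[")
    end = response_text.rfind("]")
    if start == -1 or end == -1 or end < start:
        return []  # no complete bracketed list was found in the response
    try:
        return ast.literal_eval(response_text[start:end + 1])
    except (ValueError, SyntaxError):
        return []  # the bracketed text was not a well-formed list
```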