---
title: Arguments
---
<Card
  title="New: Streaming responses in Python"
  icon="arrow-up-right"
  href="/usage/python/streaming-response"
>
  Learn how to build Open Interpreter into your application.
</Card>
#### `messages`

This property holds the list of messages exchanged between the user and the interpreter:

```python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# This would output:

[
  {
    "role": "user",
    "message": "Hi! Can you print hello world?"
  },
  {
    "role": "assistant",
    "message": "Sure!"
  },
  {
    "role": "assistant",
    "language": "python",
    "code": "print('Hello, World!')",
    "output": "Hello, World!"
  }
]
```

You can use it to restore `interpreter` to a previous conversation:

```python
interpreter.messages = messages  # A list that resembles the one above
```
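Since `messages` is a plain list of dictionaries, you can also persist a conversation yourself; a minimal sketch using the standard `json` module (the message structure shown above is assumed):

```python
import json

# A conversation in the shape shown above
messages = [
    {"role": "user", "message": "Hi! Can you print hello world?"},
    {"role": "assistant", "message": "Sure!"},
]

# Save the conversation to disk...
with open("conversation.json", "w") as f:
    json.dump(messages, f)

# ...then load it back in a later session
with open("conversation.json") as f:
    restored = json.load(f)

# interpreter.messages = restored  # would resume the conversation
print(restored == messages)  # → True
```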
#### `offline`

<Info>This replaced `interpreter.local` in the New Computer Update (`0.2.0`).</Info>

This boolean flag determines whether to run in offline mode, which disables online features like [open procedures](https://open-procedures.replit.app/).

```python
interpreter.offline = True  # Don't check for updates, don't use procedures
interpreter.offline = False  # Check for updates, use procedures
```

Use this in conjunction with the `model` parameter to set your language model.
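For example, offline mode is often paired with a locally served model. A minimal sketch, assuming an OpenAI-compatible server (such as LM Studio) listening on `localhost:1234` — the model name and endpoint here are placeholders for your own setup:

```python
from interpreter import interpreter

interpreter.offline = True  # disable online features
interpreter.llm.model = "openai/local-model"  # hypothetical model name
interpreter.llm.api_base = "http://localhost:1234/v1"  # assumed local endpoint
interpreter.llm.api_key = "dummy-key"  # many local servers ignore this
```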
#### `auto_run`

Setting this flag to `True` allows Open Interpreter to automatically run the generated code without user confirmation.

```python
interpreter.auto_run = True  # Don't require user confirmation
interpreter.auto_run = False  # Require user confirmation (default)
```

#### `verbose`

Use this boolean flag to toggle verbose mode on or off. Verbose mode will print information at every step to help diagnose problems.

```python
interpreter.verbose = True  # Turns on verbose mode
interpreter.verbose = False  # Turns off verbose mode
```
#### `max_output`

This property caps the number of characters of code output that are sent back to the language model.

```python
interpreter.max_output = 2000
```
#### `conversation_history`

A boolean flag that controls whether the conversation history is stored.

```python
interpreter.conversation_history = True  # To store history
interpreter.conversation_history = False  # To not store history
```

#### `conversation_filename`

This property sets the filename the conversation history will be stored under.

```python
interpreter.conversation_filename = "my_conversation.json"
```

#### `conversation_history_path`

You can set the directory where conversation histories will be stored.

```python
import os

interpreter.conversation_history_path = os.path.join("my_folder", "conversations")
```
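The two settings above combine to determine where a conversation lands on disk. A small illustration using `os.path` — the exact resolution logic is Open Interpreter's, so treat the filename-under-path assumption as exactly that:

```python
import os

history_path = os.path.join("my_folder", "conversations")
filename = "my_conversation.json"

# Assumed: the filename is resolved relative to the history path
full_path = os.path.join(history_path, filename)
print(full_path)  # e.g. my_folder/conversations/my_conversation.json on POSIX
```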
#### `model`

Specifies the language model to be used.

```python
interpreter.llm.model = "gpt-3.5-turbo"
```

#### `temperature`

Sets the randomness level of the model's output. Higher values produce more varied responses; lower values are more deterministic.

```python
interpreter.llm.temperature = 0.7
```

#### `system_message`

This stores the model's system message as a string. Explore or modify it:

```python
interpreter.system_message += "\nRun all shell commands with -y."
```
#### `context_window`

This manually sets the context window size in tokens. Open Interpreter tries to guess the right context window size for your model, but you can override it with this parameter.

```python
interpreter.llm.context_window = 16000
```
#### `max_tokens`

Sets the maximum number of tokens the model can generate in a single response.

```python
interpreter.llm.max_tokens = 100
```

#### `api_base`

If you are using a custom API, you can specify its base URL here.

```python
interpreter.llm.api_base = "https://api.example.com"
```

#### `api_key`

Set your API key for authentication.

```python
interpreter.llm.api_key = "your_api_key_here"
```

#### `max_budget`

This property sets the maximum budget limit for the session, in USD.

```python
interpreter.max_budget = 0.01  # 1 cent
```