---
title: Arguments
---

<Card
  title="New: Streaming responses in Python"
  icon="arrow-up-right"
  href="/usage/python/streaming-response"
>
  Learn how to build Open Interpreter into your application.
</Card>

#### `messages`

This property holds the list of messages exchanged between the user and the interpreter.

You can use it to inspect or restore a conversation:

```python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# This would output:

[
   {
      "role": "user",
      "message": "Hi! Can you print hello world?"
   },
   {
      "role": "assistant",
      "message": "Sure!"
   },
   {
      "role": "assistant",
      "language": "python",
      "code": "print('Hello, World!')",
      "output": "Hello, World!"
   }
]
```

You can use this to restore `interpreter` to a previous conversation.

```python
interpreter.messages = messages # A list that resembles the one above
```
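Because `messages` is plain Python data, one way to persist a conversation across sessions is to serialize it with the standard `json` module. This is a minimal sketch; the file name and message contents below are hypothetical:

```python
import json

# A conversation shaped like interpreter.messages (hypothetical contents)
messages = [
    {"role": "user", "message": "Hi! Can you print hello world?"},
    {"role": "assistant", "message": "Sure!"},
]

# Save the conversation to disk...
with open("conversation.json", "w") as f:
    json.dump(messages, f)

# ...and load it back later
with open("conversation.json") as f:
    restored = json.load(f)

# The restored list can then be assigned back:
# interpreter.messages = restored
```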

---

#### `offline`

<Info>This replaced `interpreter.local` in the New Computer Update (`0.2.0`).</Info>

This boolean flag determines whether to disable some online features, like update checks and [open procedures](https://open-procedures.replit.app/).

```python
interpreter.offline = True  # Don't check for updates, don't use procedures
interpreter.offline = False  # Check for updates, use procedures (default)
```

Use this in conjunction with the `model` parameter (`interpreter.llm.model`) to run a local language model.

---

#### `auto_run`

Setting this flag to `True` allows Open Interpreter to automatically run the generated code without user confirmation.

```python
interpreter.auto_run = True  # Don't require user confirmation
interpreter.auto_run = False  # Require user confirmation (default)
```

---

#### `verbose`

Use this boolean flag to toggle verbose mode on or off. Verbose mode will print information at every step to help diagnose problems.

```python
interpreter.verbose = True  # Turns on verbose mode
interpreter.verbose = False  # Turns off verbose mode
```

---

#### `max_output`

This property sets the maximum number of characters of code output that will be returned to the language model.

```python
interpreter.max_output = 2000
```

---

#### `conversation_history`

A boolean flag indicating whether the conversation history should be stored.

```python
interpreter.conversation_history = True  # To store history
interpreter.conversation_history = False  # To not store history
```

---

#### `conversation_filename`

This property sets the filename where the conversation history will be stored.

```python
interpreter.conversation_filename = "my_conversation.json"
```

---

#### `conversation_history_path`

You can set the path where the conversation history will be stored.

```python
import os
interpreter.conversation_history_path = os.path.join("my_folder", "conversations")
```
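For example, you could group saved conversations into dated subfolders. This is a sketch; the folder names are hypothetical:

```python
import os
from datetime import date

# Hypothetical layout: one subfolder of conversations per day
folder = os.path.join("my_folder", "conversations", date.today().isoformat())
os.makedirs(folder, exist_ok=True)

# interpreter.conversation_history_path = folder
```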

---

#### `model`

Specifies the language model to be used.

```python
interpreter.llm.model = "gpt-3.5-turbo"
```

---

#### `temperature`

Sets the randomness level of the model's output.

```python
interpreter.llm.temperature = 0.7
```

---

#### `system_message`

This stores the model's system message as a string. Explore or modify it:

```python
interpreter.system_message += "\nRun all shell commands with -y."
```

---

#### `context_window`

This manually sets the context window size in tokens.

We try to guess the right context window size for your model, but you can override it with this parameter.

```python
interpreter.llm.context_window = 16000
```
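To sanity-check whether a conversation fits before sending it, you can estimate its token count with the rough heuristic of about four characters per token. This is an approximation for illustration only, not an exact tokenizer:

```python
def estimate_tokens(messages):
    """Rough token estimate: ~4 characters per token (heuristic only)."""
    text = " ".join(
        m.get("message", "") + m.get("code", "") + m.get("output", "")
        for m in messages
    )
    return len(text) // 4

# Hypothetical conversation
messages = [{"role": "user", "message": "Hi! Can you print hello world?"}]
context_window = 16000

fits = estimate_tokens(messages) < context_window
```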

---

#### `max_tokens`

Sets the maximum number of tokens the model can generate in a single response.

```python
interpreter.llm.max_tokens = 100
```

---

#### `api_base`

If you are using a custom API, you can specify its base URL here.

```python
interpreter.llm.api_base = "https://api.example.com"
```

---

#### `api_key`

Set your API key for authentication.

```python
interpreter.llm.api_key = "your_api_key_here"
```
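Rather than hard-coding the key, you might read it from an environment variable. `OPENAI_API_KEY` is the variable OpenAI-compatible backends typically expect, but adjust the name for your provider:

```python
import os

# Hypothetical: fall back to a placeholder if the variable is unset
api_key = os.environ.get("OPENAI_API_KEY", "your_api_key_here")

# interpreter.llm.api_key = api_key
```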

---

#### `max_budget`

This property sets the maximum budget limit for the session in USD.

```python
interpreter.max_budget = 0.01  # 1 cent
```