---
title: All Settings
---
Set your `model`, `api_key`, `temperature`, and other language model settings.
Change your `system_message`, set your interpreter to run `offline`, and adjust other interpreter behavior.
Modify `interpreter.computer`, which handles code execution.
# Language Model
### Model Selection
Specifies which language model to use. Check out the [models](/language-models/) section for a list of available models. Open Interpreter uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood to support over 100 models.
```bash Terminal
interpreter --model "gpt-3.5-turbo"
```
```python Python
interpreter.llm.model = "gpt-3.5-turbo"
```
```yaml Profile
llm:
  model: gpt-3.5-turbo
```
### Temperature
Sets the randomness level of the model's output. The default temperature is 0; you can set it to any value between 0 and 1. The higher the temperature, the more random and creative the output will be.
```bash Terminal
interpreter --temperature 0.7
```
```python Python
interpreter.llm.temperature = 0.7
```
```yaml Profile
llm:
  temperature: 0.7
```
### Context Window
Manually set the context window size in tokens for the model. For local models, a smaller context window uses less RAM, which makes it more suitable for most devices.
```bash Terminal
interpreter --context_window 16000
```
```python Python
interpreter.llm.context_window = 16000
```
```yaml Profile
llm:
  context_window: 16000
```
### Max Tokens
Sets the maximum number of tokens that the model can generate in a single response.
```bash Terminal
interpreter --max_tokens 100
```
```python Python
interpreter.llm.max_tokens = 100
```
```yaml Profile
llm:
  max_tokens: 100
```
### Max Output
Set the maximum number of characters for code outputs.
```bash Terminal
interpreter --max_output 1000
```
```python Python
interpreter.llm.max_output = 1000
```
```yaml Profile
llm:
  max_output: 1000
```
### API Base
If you are using a custom API, specify its base URL with this argument.
```bash Terminal
interpreter --api_base "https://api.example.com"
```
```python Python
interpreter.llm.api_base = "https://api.example.com"
```
```yaml Profile
llm:
  api_base: https://api.example.com
```
### API Key
Set your API key for authentication when making API calls. For OpenAI models, you can get your API key [here](https://platform.openai.com/api-keys).
```bash Terminal
interpreter --api_key "your_api_key_here"
```
```python Python
interpreter.llm.api_key = "your_api_key_here"
```
```yaml Profile
llm:
  api_key: your_api_key_here
```
### API Version
Optionally set the API version to use with your selected model. (This will override environment variables.)
```bash Terminal
interpreter --api_version 2.0.2
```
```python Python
interpreter.llm.api_version = '2.0.2'
```
```yaml Profile
llm:
  api_version: 2.0.2
```
### LLM Supports Functions
Inform Open Interpreter that the language model you're using supports function calling.
```bash Terminal
interpreter --llm_supports_functions
```
```python Python
interpreter.llm.supports_functions = True
```
```yaml Profile
llm:
  supports_functions: true
```
### LLM Does Not Support Functions
Inform Open Interpreter that the language model you're using does not support function calling.
```bash Terminal
interpreter --no-llm_supports_functions
```
```python Python
interpreter.llm.supports_functions = False
```
```yaml Profile
llm:
  supports_functions: false
```
### LLM Supports Vision
Inform Open Interpreter that the language model you're using supports vision. Defaults to `False`.
```bash Terminal
interpreter --llm_supports_vision
```
```python Python
interpreter.llm.supports_vision = True
```
```yaml Profile
llm:
  supports_vision: true
```
# Interpreter
### Vision Mode
Enables vision mode, which adds some special instructions to the prompt and switches to `gpt-4-vision-preview`.
```bash Terminal
interpreter --vision
```
```python Python
interpreter.llm.model = "gpt-4-vision-preview" # Any vision-supporting model
interpreter.llm.supports_vision = True
interpreter.llm.supports_functions = False # Required because gpt-4-vision-preview doesn't support function calling
interpreter.custom_instructions = """The user will show you an image of the code you write. You can view images directly.
For HTML: This will be run STATELESSLY. You may NEVER write '<!-- previous code here... -->' or '<!-- header will go here -->' or anything like that. It is CRITICAL TO NEVER WRITE PLACEHOLDERS. Placeholders will BREAK it. You must write the FULL HTML CODE EVERY TIME. Therefore you cannot write HTML piecemeal—write all the HTML, CSS, and possibly Javascript **in one step, in one code block**. The user will help you review it visually.
If the user submits a filepath, you will also see the image. The filepath and user image will both be in the user's message.
If you use `plt.show()`, the resulting image will be sent to you. However, if you use `PIL.Image.show()`, the resulting image will NOT be sent to you."""
```
```yaml Profile
force_task_completion: true
llm:
  model: "gpt-4-vision-preview"
  temperature: 0
  supports_vision: true
  supports_functions: false
  context_window: 110000
  max_tokens: 4096
custom_instructions: >
  The user will show you an image of the code you write. You can view images directly.
  For HTML: This will be run STATELESSLY. You may NEVER write '<!-- previous code here... -->' or '<!-- header will go here -->' or anything like that. It is CRITICAL TO NEVER WRITE PLACEHOLDERS. Placeholders will BREAK it. You must write the FULL HTML CODE EVERY TIME. Therefore you cannot write HTML piecemeal—write all the HTML, CSS, and possibly Javascript **in one step, in one code block**. The user will help you review it visually.
  If the user submits a filepath, you will also see the image. The filepath and user image will both be in the user's message.
  If you use `plt.show()`, the resulting image will be sent to you. However, if you use `PIL.Image.show()`, the resulting image will NOT be sent to you.
```
### OS Mode
Enables OS mode for multimodal models. Currently not available in Python. Check out more information on OS mode [here](/guides/os-mode).
```bash Terminal
interpreter --os
```
```yaml Profile
os: true
```
### Version
Get the current installed version number of Open Interpreter.
```bash Terminal
interpreter --version
```
### Open Local Models Directory
Opens the models directory. All downloaded Llamafiles are saved here.
```bash Terminal
interpreter --local_models
```
### Open Profiles Directory
Opens the profiles directory. New YAML profile files can be added to this directory; see the example below.
```bash Terminal
interpreter --profiles
```
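As a sketch, a profile in this directory can combine several of the settings documented on this page. The filename and values here are hypothetical:

```yaml Profile
# my_profile.yaml (hypothetical example)
llm:
  model: gpt-3.5-turbo
  temperature: 0.3
  context_window: 16000
auto_run: false
custom_instructions: "Prefer Python for scripting tasks."
```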
### Select Profile
Select a profile to use. If no profile is specified, the default profile will be used.
```bash Terminal
interpreter --profile local.yaml
```
### Help
Display all available terminal arguments.
```bash Terminal
interpreter --help
```
### Force Task Completion
Runs Open Interpreter in a loop, requiring it to admit to completing or failing every task.
```bash Terminal
interpreter --force_task_completion
```
```python Python
interpreter.force_task_completion = True
```
```yaml Profile
force_task_completion: true
```
### Verbose
Run the interpreter in verbose mode. Debug information will be printed at each step to help diagnose issues.
```bash Terminal
interpreter --verbose
```
```python Python
interpreter.verbose = True
```
```yaml Profile
verbose: true
```
### Safe Mode
Enable or disable experimental safety mechanisms like code scanning. Valid options are `off`, `ask`, and `auto`.
```bash Terminal
interpreter --safe_mode ask
```
```python Python
interpreter.safe_mode = 'ask'
```
```yaml Profile
safe_mode: ask
```
### Auto Run
Automatically run the interpreter without requiring user confirmation.
```bash Terminal
interpreter --auto_run
```
```python Python
interpreter.auto_run = True
```
```yaml Profile
auto_run: true
```
### Max Budget
Sets the maximum budget limit for the session in USD.
```bash Terminal
interpreter --max_budget 0.01
```
```python Python
interpreter.max_budget = 0.01
```
```yaml Profile
max_budget: 0.01
```
### Local Mode
Run the model locally. Check the [models page](/language-models/local-models/lm-studio) for more information.
```bash Terminal
interpreter --local
```
```python Python
from interpreter import interpreter
interpreter.offline = True # Disables online features like Open Procedures
interpreter.llm.model = "openai/x" # Tells OI to send messages in OpenAI's format
interpreter.llm.api_key = "fake_key" # LiteLLM, which we use to talk to local models, requires this
interpreter.llm.api_base = "http://localhost:1234/v1" # Point this at any OpenAI compatible server
interpreter.chat()
```
```yaml Profile
local: true
```
### Fast Mode
Sets the model to `gpt-3.5-turbo` and encourages it to only write code without confirmation.
```bash Terminal
interpreter --fast
```
```yaml Profile
fast: true
```
### Custom Instructions
Appends custom instructions to the system message. This is useful for adding information about your system, preferred languages, etc.
```bash Terminal
interpreter --custom_instructions "This is a custom instruction."
```
```python Python
interpreter.custom_instructions = "This is a custom instruction."
```
```yaml Profile
custom_instructions: "This is a custom instruction."
```
### System Message
We don't recommend modifying the system message, as doing so opts you out of future updates to the core system message. Use `--custom_instructions` instead to add relevant information to the system message. If you must modify it, you can do so with this argument or by editing a profile file.
```bash Terminal
interpreter --system_message "You are Open Interpreter..."
```
```python Python
interpreter.system_message = "You are Open Interpreter..."
```
```yaml Profile
system_message: "You are Open Interpreter..."
```
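In Python, `system_message` is a plain string, so if you only need to extend it you can append rather than replace, which keeps the core system message intact (a sketch; the appended rule is illustrative):

```python Python
from interpreter import interpreter

# Appending preserves the default system message while adding your own rule (illustrative)
interpreter.system_message += "\nRun shell commands with -y so the user doesn't have to confirm them."
interpreter.chat()
```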
### Disable Telemetry
Opt out of [telemetry](telemetry/telemetry).
```bash Terminal
interpreter --disable_telemetry
```
```python Python
interpreter.anonymized_telemetry = False
```
```yaml Profile
disable_telemetry: true
```
### Offline
This boolean flag determines whether to enable or disable some online features like [open procedures](https://open-procedures.replit.app/). Use this in conjunction with the `model` parameter to set your language model.
```bash Terminal
interpreter --offline true
```
```python Python
interpreter.offline = True
```
```yaml Profile
offline: true
```
### Messages
This property holds a list of `messages` between the user and the interpreter.
You can use it to restore a conversation:
```python Python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# This would output:
# [
#     {
#         "role": "user",
#         "message": "Hi! Can you print hello world?"
#     },
#     {
#         "role": "assistant",
#         "message": "Sure!"
#     },
#     {
#         "role": "assistant",
#         "language": "python",
#         "code": "print('Hello, World!')",
#         "output": "Hello, World!"
#     }
# ]

# You can use this to restore `interpreter` to a previous conversation:
interpreter.messages = messages  # A list that resembles the one above
```
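Because `messages` is a list of plain dictionaries, one way to persist a conversation across sessions is to serialize it to JSON (a minimal sketch; the filename is arbitrary):

```python Python
import json

from interpreter import interpreter

interpreter.chat("Hi! Can you print hello world?")

# Save the conversation (arbitrary filename)
with open("conversation.json", "w") as f:
    json.dump(interpreter.messages, f)

# Later, restore it before resuming the chat
with open("conversation.json") as f:
    interpreter.messages = json.load(f)
```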
# Computer
The `computer` object in `interpreter.computer` is a virtual computer that the AI controls. Its primary interface/function is to execute code and return the output in real-time.
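You can also call it directly. For example, `computer.run` executes a code snippet in a given language and returns the output (a minimal sketch):

```python Python
from interpreter import interpreter

# Execute code in the same environment the AI uses and capture the result
output = interpreter.computer.run("python", "print('Hello, World!')")
print(output)
```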
### Offline
Running the `computer` in offline mode will disable some online features, like the hosted [Computer API](https://api.openinterpreter.com/). Inherits from `interpreter.offline`.
```python Python
interpreter.computer.offline = True
```
```yaml Profile
computer.offline: true
```
### Verbose
This is primarily used for debugging `interpreter.computer`. Inherits from `interpreter.verbose`.
```python Python
interpreter.computer.verbose = True
```
```yaml Profile
computer.verbose: true
```
### Emit Images
The `emit_images` attribute in `interpreter.computer` controls whether the computer should emit images or not. This is inherited from `interpreter.llm.supports_vision`.
This distinguishes multimodal from text-only models. If `emit_images` is `True`, `computer.display.view()` will return an actual screenshot for multimodal models. If it's `False`, `computer.display.view()` will return all the text on the screen.
Many other functions of the computer can produce image/text outputs, and this parameter controls that.
```python Python
interpreter.computer.emit_images = True
```
```yaml Profile
computer.emit_images: true
```
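To illustrate the difference (a sketch; both calls follow the behavior documented in this section):

```python Python
# Multimodal model: view() returns an actual screenshot
interpreter.computer.emit_images = True
screenshot = interpreter.computer.display.view()

# Text-only model: view() returns the text on the screen instead
interpreter.computer.emit_images = False
screen_text = interpreter.computer.display.view()
```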