---
title: LM Studio
---
Open Interpreter can use any OpenAI-compatible server to run models locally (LM Studio, jan.ai, Ollama, etc.).

Simply run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):
```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```
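The same pattern works for the other servers mentioned above. For example, assuming Ollama is running and exposing its OpenAI-compatible endpoint on its default port 11434 (and that you have pulled a model named `llama3`, which is illustrative here), you could run:

```shell
interpreter --api_base "http://localhost:11434/v1" --api_key "fake_key" --model "openai/llama3"
```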
Alternatively, you can use Llamafile without installing any third-party software, just by running
```shell
interpreter --local
```
For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).
**How to run LM Studio in the background.**
1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
2. Select a model then click **↓ Download**.
3. Click the **↔️** button on the left (below 💬).
4. Select your model at the top, then click **Start Server**.
Once the server is running, you can begin your conversation with Open Interpreter.

(When you run the command `interpreter --local` and select LM Studio, these steps will be displayed.)
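If you want to verify that the server is reachable before starting a conversation, you can query its models endpoint (this sketch assumes LM Studio's default port, 1234):

```shell
curl http://localhost:1234/v1/models
```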
<Info>
Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000.
If your model has different requirements, [set these parameters
manually.](/settings#language-model)
</Info>
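For example, if your model supports a larger context, you could override both values when launching the terminal interface (the flag names follow the settings page linked above; the numbers here are illustrative):

```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key" --context_window 8000 --max_tokens 2000
```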
# Python

Compared to the terminal interface, our Python package gives you more granular control over each setting.

You can point `interpreter.llm.api_base` at any OpenAI-compatible server (including one running locally).

For example, to connect to [LM Studio](https://lmstudio.ai/), use these settings:
```python
from interpreter import interpreter

interpreter.offline = True  # Disables online features like Open Procedures
interpreter.llm.model = "openai/x"  # Tells OI to send messages in OpenAI's format
interpreter.llm.api_key = "fake_key"  # LiteLLM, which we use to talk to LM Studio, requires this
interpreter.llm.api_base = "http://localhost:1234/v1"  # Point this at any OpenAI-compatible server

interpreter.chat()
```
Simply ensure that **LM Studio**, or any other OpenAI-compatible server, is running at `api_base`.
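As in the terminal interface, local models often need the default token limits adjusted. A minimal sketch, assuming your model supports a larger context (the numbers are illustrative):

```python
from interpreter import interpreter

interpreter.offline = True
interpreter.llm.model = "openai/x"
interpreter.llm.api_key = "fake_key"
interpreter.llm.api_base = "http://localhost:1234/v1"

# Match these to your model's actual limits; local mode defaults to 3000 / 1000.
interpreter.llm.context_window = 8000  # Total tokens the model can attend to
interpreter.llm.max_tokens = 2000  # Maximum tokens generated per completion

interpreter.chat()
```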