---
title: Introduction
---
**Open Interpreter** works with both hosted and local language models.

Hosted models are faster and more capable, but require payment. Local models are private and free, but are often less capable.

For this reason, we recommend starting with a **hosted** model, then switching to a local model once you've explored Open Interpreter's capabilities.
<CardGroup>

<Card
  title="Hosted setup"
  icon="cloud"
  href="/language-models/hosted-models"
>
  Connect to a hosted language model like GPT-4 **(recommended)**
</Card>

<Card
  title="Local setup"
  icon="microchip"
  href="/language-models/local-models"
>
  Set up a local language model like Mistral
</Card>

</CardGroup>
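For a quick sense of what a hosted setup looks like, here is a minimal sketch of selecting a hosted model from Python. The exact attribute names (such as `interpreter.llm.model`) and the model string may vary between Open Interpreter versions, so treat this as an illustration rather than the definitive setup; the hosted-setup guide linked above covers the details.

```python
# Sketch: pointing Open Interpreter at a hosted model (names may vary by version).
from interpreter import interpreter

# Choose a hosted model; model strings follow LiteLLM-style naming.
interpreter.llm.model = "gpt-4"

# The provider's API key is typically read from the environment (e.g. OPENAI_API_KEY).
interpreter.chat("Plot the first 20 Fibonacci numbers.")
```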
<br/>
<br/>

<span class="opacity-50">Thank you to the incredible [LiteLLM](https://litellm.ai/) team for their efforts in connecting Open Interpreter to hosted providers.</span>