---
title: Anyscale
---
To use Open Interpreter with a model from Anyscale, set the `model` flag:
<CodeGroup>

```bash Terminal
interpreter --model anyscale/<model-name>
```

```python Python
from interpreter import interpreter

# Set the model to use from Anyscale:
interpreter.llm.model = "anyscale/<model-name>"
interpreter.chat()
```

</CodeGroup>
# Supported Models

We support the following completion models from Anyscale:

- Llama 2 7B Chat
- Llama 2 13B Chat
- Llama 2 70B Chat
- Mistral 7B Instruct
- CodeLlama 34B Instruct
<CodeGroup>

```bash Terminal
interpreter --model anyscale/meta-llama/Llama-2-7b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-13b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-70b-chat-hf
interpreter --model anyscale/mistralai/Mistral-7B-Instruct-v0.1
interpreter --model anyscale/codellama/CodeLlama-34b-Instruct-hf
```

```python Python
interpreter.llm.model = "anyscale/meta-llama/Llama-2-7b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"
```

</CodeGroup>
# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description                            | Where to Find                                                               |
| -------------------- | -------------------------------------- | --------------------------------------------------------------------------- |
| `ANYSCALE_API_KEY`   | The API key for your Anyscale account. | [Anyscale Account Settings](https://app.endpoints.anyscale.com/credentials) |
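If you prefer not to export the variable in your shell, one option is to set it from Python before starting a chat. A minimal sketch, assuming the key shown is a hypothetical placeholder you replace with your own:

```python
import os

# Make the Anyscale key visible to Open Interpreter for this process.
# "esecret_your_key_here" is a placeholder, not a real key.
os.environ["ANYSCALE_API_KEY"] = "esecret_your_key_here"
```

Environment variables set this way apply only to the current process, so run this before `interpreter.chat()` in the same script or session.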