---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy](https://cohere.com/privacy). You’ll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
base_model: CohereForAI/aya-expanse-8b
tags:
- mlx
---
# mlx-community/aya-expanse-8b
The model [mlx-community/aya-expanse-8b](https://huggingface.co/mlx-community/aya-expanse-8b) was converted to MLX format from [CohereForAI/aya-expanse-8b](https://huggingface.co/CohereForAI/aya-expanse-8b) using mlx-lm version **0.19.1**.
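The conversion can be reproduced with the `mlx_lm.convert` tool that ships with mlx-lm. A minimal sketch, assuming the CLI flags of recent mlx-lm releases; the output path `./aya-expanse-8b-mlx` is a hypothetical example:
```bash
# Convert the Hugging Face checkpoint to MLX format.
# -q applies mlx-lm's default quantization; the output path is a hypothetical example.
python -m mlx_lm.convert \
    --hf-path CohereForAI/aya-expanse-8b \
    --mlx-path ./aya-expanse-8b-mlx \
    -q
```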
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/aya-expanse-8b")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
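mlx-lm also ships a command-line generator that can be used instead of the Python API. A minimal sketch, assuming the `mlx_lm.generate` CLI of recent mlx-lm releases, which applies the tokenizer's chat template when one is present:
```bash
# Generate a response from the command line.
python -m mlx_lm.generate \
    --model mlx-community/aya-expanse-8b \
    --prompt "hello" \
    --max-tokens 100
```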