---
language:
- en
license: apache-2.0
tags:
- function calling
- on-device language model
- android
- mlx
base_model: google/gemma-2b
inference: false
space: false
spaces: false
model-index:
- name: Octopus-V2-2B
  results: []
---

# mlx-community/Octopus-v2-4bit

This model was converted to MLX format from [`NexaAIDev/Octopus-v2`](https://huggingface.co/NexaAIDev/Octopus-v2) using mlx-lm version **0.7.0**.
Refer to the [original model card](https://huggingface.co/NexaAIDev/Octopus-v2) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Octopus-v2-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
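Octopus-v2 is tuned for function calling, so a plain `"hello"` prompt will not exercise its main capability. The sketch below wraps a user query in the function-calling prompt template described in the upstream Octopus-v2 model card; the helper name `build_octopus_prompt` and the example query are illustrative, not part of any API.

```python
def build_octopus_prompt(query: str) -> str:
    # Function-calling prompt template, per the upstream
    # NexaAIDev/Octopus-v2 model card (an assumption to verify there).
    return (
        "Below is the query from the users, please call the correct function "
        "and generate the parameters to call the function.\n\n"
        f"Query: {query} \n\nResponse:"
    )

if __name__ == "__main__":
    # Import here so the helper above stays usable without mlx installed.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Octopus-v2-4bit")
    prompt = build_octopus_prompt("Take a selfie for me with the front camera")
    response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The model is expected to answer with a function token and its arguments rather than free-form text; see the original model card for the function vocabulary it was trained on.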