---
base_model: Spestly/Ava-1.0-12B
library_name: transformers
license: apache-2.0
datasets:
- nvidia/HelpSteer2
tags:
- unsloth
- llama-cpp
- gguf-my-repo
---
# Triangle104/Ava-1.0-12B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Spestly/Ava-1.0-12B`](https://huggingface.co/Spestly/Ava-1.0-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Ava-1.0-12B) for more details on the model.
---
## Model details

Ava 1.0 is a cutting-edge conversational AI model, fine-tuned from Mistral's NeMo to deliver exceptional conversational capabilities. Designed to be your go-to AI for engaging, accurate, and context-aware dialogues, Ava 1.0 incorporates updated knowledge and enhanced natural language understanding to provide an unparalleled user experience.
### Key Features

- Enhanced Conversational Skills: Ava 1.0 demonstrates fluid and human-like dialogue generation with improved contextual understanding.
- Updated Knowledge Base: Trained on the latest datasets, Ava 1.0 ensures responses are relevant and informed.
- Multi-Turn Conversation: Handles complex, multi-turn interactions seamlessly, maintaining coherence and focus.
- Personalized Assistance: Adapts responses based on user preferences and context.
- Multilingual Support: Capable of understanding and responding in multiple languages with high accuracy.
### Why Ava 1.0?

Ava 1.0 is built to excel in a wide range of applications:

- Customer Support: Provides intelligent, empathetic, and accurate responses to customer queries.
- Education: Acts as an interactive tutor, offering explanations and personalized guidance.
- Personal Assistance: Supports daily tasks, scheduling, and answering general queries with ease.
- Creative Collaboration: Assists with brainstorming, writing, and other creative processes.
### Usage

Using Ava 1.0 in your project is straightforward. Here’s a quick setup guide.

#### Installation

Ensure you have the necessary libraries and dependencies installed. Use the following command:

```bash
pip install transformers
```
#### Implementation

Here’s a sample Python script to interact with Ava 1.0:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Ava-1.0-12B")

# OR

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Ava-1.0-12B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Ava-1.0-12B")
```
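As a quick illustration of how the pipeline above might then be called, here is a minimal sketch; the prompt and sampling settings are illustrative assumptions, not part of the original card:

```python
# Minimal usage sketch for the pipeline loaded above.
# The prompt and sampling settings are illustrative assumptions.
prompt = "Give me three tips for writing clear documentation."
outputs = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)

# The text-generation pipeline returns a list of dicts with a "generated_text" field.
print(outputs[0]["generated_text"])
```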
### Training Highlights

Ava 1.0 was fine-tuned with the following enhancements:

- Extensive Conversational Dataset: Leveraging a wide array of open-domain and specialized conversational datasets.
- Knowledge Integration: Incorporating recent advancements and updates to provide cutting-edge insights.
- Fine-Tuning on Mistral NeMo: Utilizing the powerful Mistral NeMo framework for robust and efficient training.
### Limitations

- Contextual Challenges: In rare cases, Ava 1.0 may misinterpret ambiguous inputs.
- Hardware Requirements: Optimal performance requires a robust system with GPU acceleration.
### Roadmap

- Ava 2.0: Introducing real-time learning capabilities and broader conversational adaptability.
- Lightweight Model: Developing a lightweight version optimized for edge devices.
- Domain-Specific Fine-Tunes: Specialized versions for industries like healthcare, education, and finance.
### License

Ava 1.0 is released under the Apache 2.0 license.
### Contact

For inquiries, feedback, or support, feel free to reach out:

- Email: aayan.mishra@proton.me
- GitHub: Spestly
- Website: Ava Project Page
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Ava-1.0-12B-Q4_K_S-GGUF --hf-file ava-1.0-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Ava-1.0-12B-Q4_K_S-GGUF --hf-file ava-1.0-12b-q4_k_s.gguf -c 2048
```
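Once the server is running, it can be queried over HTTP. Below is a minimal sketch in Python, assuming the server is listening on its default address `http://localhost:8080` and using the `/completion` endpoint; adjust the host, port, and parameters to match your setup.

```python
# Minimal sketch: query a locally running llama-server.
# Assumes the default address http://localhost:8080 and the /completion endpoint.
import json
import urllib.request

payload = json.dumps({
    "prompt": "The meaning to life and the universe is",
    "n_predict": 64,  # number of tokens to generate (illustrative value)
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# The response is JSON; the generated text is returned in the "content" field.
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```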
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Ava-1.0-12B-Q4_K_S-GGUF --hf-file ava-1.0-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Ava-1.0-12B-Q4_K_S-GGUF --hf-file ava-1.0-12b-q4_k_s.gguf -c 2048
```