
Llama 3 Verus 8b (name subject to change) is an LLM trained specifically to help people understand Verus, an open-source multi-chain blockchain protocol with unlimited scale. It's trained mainly on QA-style data produced by Verustoolkit.

Everything about Llama 3 Verus is open-source, from the data generation code to the datasets themselves to the training configs.

This model has been trained both to memorize facts about Verus and to use RAG. It should achieve good performance without RAG as long as enough latent-space activation is supplied via the system prompt (shown below) -- but RAG adds an extra guard against hallucinations, especially at the lower quantization levels.
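As a rough illustration of the RAG setup, here is a minimal sketch of how retrieved documentation could be folded into the system prompt before calling the model. The `build_messages` helper and the placeholder system prompt are assumptions for this example, not part of this repo; use whatever retriever and system prompt you actually have.

```python
# Minimal RAG sketch (assumption: `system_prompt` is the system prompt shipped
# with this repo, and `retrieved_chunks` comes from whatever retriever you use).
def build_messages(question: str, retrieved_chunks: list[str], system_prompt: str) -> list[dict]:
    """Prepend retrieved Verus docs to the system prompt as extra grounding."""
    context = "\n\n".join(retrieved_chunks)
    return [
        {"role": "system", "content": f"{system_prompt}\n\nRelevant documentation:\n{context}"},
        {"role": "user", "content": question},
    ]
```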

Axolotl configs are provided, as are a number of GGUF quants created using the imatrix method. The imatrix itself is also included in this repo.

Prompt template: ChatML.
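For reference, a ChatML-formatted prompt looks like the following (the placeholders are illustrative; the actual system prompt is in the repo files and is not reproduced here):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```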

Recommended: use the LM Studio config in the Files section of this repo!

Recommended temperature: 0.05, or another similarly low value.

Recommended: if you're running on a desktop or laptop, use ggml-model-Q8_0.gguf -- it is the quant with the highest precision and best quality.
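To make the recommendations above concrete, here is a minimal sketch of downloading the Q8_0 quant and chatting with it via llama-cpp-python. The repo id and filename are taken from this page, but the surrounding details (context size, placeholder system prompt) are assumptions rather than an official setup script; LM Studio or llama.cpp will work just as well with the same file.

```python
# Minimal local-inference sketch using llama-cpp-python
# (assumption: n_ctx and the placeholder system prompt are illustrative).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the recommended Q8_0 quant from this repo.
model_path = hf_hub_download(
    repo_id="VerusCommunity/llama-3-verus-8-epochs-revision-1",
    filename="ggml-model-Q8_0.gguf",
)

# Load the GGUF with the ChatML chat format.
llm = Llama(model_path=model_path, n_ctx=8192, chat_format="chatml")

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "<system prompt from this repo>"},  # placeholder; see repo files
        {"role": "user", "content": "What is Verus?"},
    ],
    temperature=0.05,  # recommended low temperature
)
print(response["choices"][0]["message"]["content"])
```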

Example conversation (No RAG used!)

[Screenshot of an example conversation]

Created with 💙 by Evan Armstrong for the Verus Community

Built with Meta Llama 3

If you're curious about Verus and its mission, feel free to check out https://verus.io/
