---

GGUF quants for https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1

> Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook).

| Layers | Context | [Template](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/blob/19186e70e5679c47aaef473ae2fd56e20765088d/tokenizer_config.json#L59) |
| --- | --- | --- |
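The linked chat template appears to be ChatML-style (`<|im_start|>` / `<|im_end|>` turn markers); verify against the `tokenizer_config.json` above before relying on it. As a minimal sketch, assuming that format, a prompt for a GGUF runtime that does not apply the template for you could be built like this:

```python
# Sketch: render chat messages into a ChatML-style prompt string.
# The turn markers are an assumption based on the linked chat template;
# check tokenizer_config.json for the authoritative Jinja template.

def format_chat(messages):
    """Render a list of {"role", "content"} dicts into one prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = format_chat([{"role": "user", "content": "Hello!"}])
```

Runtimes such as llama.cpp can also read the template embedded in the GGUF metadata, in which case manual formatting like this is unnecessary.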