<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/genz-logo.png" width=150></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together.</i></p>

Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.

<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/MTBench_CompareChart_28July2023.png" width="500"></p>

The smaller, quantized versions of our models make them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.
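As a quick illustration, a 4-bit checkpoint can typically be loaded on a single consumer GPU. The sketch below uses Hugging Face Transformers with a bitsandbytes 4-bit configuration; the Hub id `budecosystem/genz-13b-v2`, the prompt, and the generation settings are illustrative assumptions, not details taken from this card.

```python
# Minimal sketch: load a GenZ checkpoint in 4-bit and generate text.
# Requires: pip install transformers accelerate bitsandbytes
# The model id below is an assumption; swap in the checkpoint you actually want.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "budecosystem/genz-13b-v2"  # assumed Hub id

# 4-bit quantization keeps the 13B weights within a single consumer GPU's memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU/CPU automatically
)

prompt = "Write a short note introducing GenZ to the open-source community."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```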
---

<img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/screenshot_genz13bv2.png" width="100%">

| ![Python](https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/Python.gif) | ![Poem](https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/Poem.gif) | ![Email](https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/Email.gif) |
|:--:|:--:|:--:|
| *Code Generation* | *Poem Generation* | *Email Generation* |

<!--
<p align="center"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Python.gif" width="33%" alt="Python Code"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Poem.gif" width="33%"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Email.gif" width="33%"></p>
-->
We're proud to say that our model performs at a level close to the Llama-70B-chat model on MT Bench, and it sits at the top of the list among 13B models.

<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/blob/main/assets/mt_bench_score.png" width="500"></p>

In the transition from GenZ V1 to V2, we noticed some fascinating performance shifts. While we saw a slight dip in coding performance, two other areas, Roleplay and Math, saw noticeable improvements.

---

Check out the code on GitHub -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)