## Model Details

- **Base Models:**
  - [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) (60%)
  - [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) (40%)
- **Model Size:** 8B parameters
- **Architecture:** Llama 3.1
- **Mixing Ratio:** 6:4 (SuperNova-Lite:Meta-Llama-3.1-8B-Instruct-abliterated)

## Key Features

- **SuperNova-Lite Contributions (60%):** Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture.
- **Meta-Llama-3.1-8B-Instruct-abliterated Contributions (40%):** An uncensored version of Llama 3.1 8B Instruct created with abliteration.

## Usage

You can use this mixed model in your applications by loading it with Hugging Face's `transformers` library:
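A minimal loading sketch is shown below. The model id `your-username/merged-model` is a placeholder (this card does not state the repository id); substitute the actual Hugging Face repo name for this model.

```python
# Minimal sketch: load the merged 8B model with transformers.
# NOTE: "your-username/merged-model" is a placeholder model id,
# not the real repository name — replace it before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s)/CPU
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a Llama 3.1 instruct-style model, you may get better results by formatting prompts with `tokenizer.apply_chat_template` rather than passing raw text.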