---
license: apache-2.0
base_model:
- Qwen/Qwen2-7B
datasets:
- Replete-AI/Everything_Instruct_8k_context_filtered
tags:
- unsloth
language:
- en
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Replete-LLM-Qwen2-7b-GGUF
This is a quantized version of [Replete-AI/Replete-LLM-Qwen2-7b](https://huggingface.co/Replete-AI/Replete-LLM-Qwen2-7b), created using llama.cpp.

# Original Model Card

Replete-LLM-Qwen2-7b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/q9gC-_O4huL2pK4nY-Y2x.png)

Thank you to TensorDock for sponsoring **Replete-LLM**.
You can check out their website for cloud compute rental below.
- https://tensordock.com
_____________________________________________________________
**Replete-LLM** is **Replete-AI**'s flagship model. We take pride in releasing a fully open-source, low-parameter, competitive AI model that not only surpasses its predecessor **Qwen2-7B-Instruct** in performance, but also competes with (if not surpasses) closed-source flagship models such as **gpt-3.5-turbo**, as well as open-source models such as **gemma-2-9b-it** and **Meta-Llama-3.1-8B-Instruct**, in terms of overall performance across all fields and categories. You can find the dataset that this model was trained on linked below:

- https://huggingface.co/datasets/Replete-AI/Everything_Instruct_8k_context_filtered

Try bartowski's quantizations:

- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-exl2

- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-GGUF

Can't run the model locally? Then use the Hugging Face Space instead:

- https://huggingface.co/spaces/rombodawg/Replete-LLM-Qwen2-7b

Some statistics about the data the model was trained on can be found in the image and details below, while a more comprehensive look can be found in the model card for the dataset (linked above):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/75SR21J3-zbTGKYbeoBzX.png)

**Replete-LLM-Qwen2-7b** is a versatile model fine-tuned to excel on any imaginable task. The following types of generations were included in the fine-tuning process:

- **Science**: (General, Physical Reasoning)
- **Social Media**: (Reddit, Twitter)
- **General Knowledge**: (Character-Codex), (Famous Quotes), (Steam Video Games), (How-To? Explanations)
- **Cooking**: (Cooking Preferences, Recipes)
- **Writing**: (Poetry, Essays, General Writing)
- **Medicine**: (General Medical Data)
- **History**: (General Historical Data)
- **Law**: (Legal Q&A)
- **Role-Play**: (Couple-RP, Roleplay Conversations)
- **News**: (News Generation)
- **Coding**: (3 million rows of coding data in over 100 coding languages)
- **Math**: (Math data from TIGER-Lab/MathInstruct)
- **Function Calling**: (Function calling data from "glaiveai/glaive-function-calling-v2")
- **General Instruction**: (All of teknium/OpenHermes-2.5, fully filtered and uncensored)
______________________________________________________________________________________________
## Prompt Template: ChatML
```
<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
```

## End token (eot_token)
```
<|endoftext|>
```
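
When driving the model through a raw-completion interface rather than a chat API, the template above can be assembled in plain Python. A minimal sketch; the system and user strings are placeholder examples:

```python
# Build a ChatML prompt string matching the template above.
# The trailing "<|im_start|>assistant\n" leaves the assistant turn
# open for the model to complete.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "Write a haiku about autumn.")
print(prompt)
```

Stop generation when the model emits the `<|endoftext|>` end token shown above.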
______________________________________________________________________________________________
Want to know the secret sauce of how this model was made? Find the write-up below:

**Continuous Fine-tuning Without Loss Using Lora and Mergekit**

https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
______________________________________________________________________________________________

The code to fine-tune this AI model can be found below:

- https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing

- Note: this model in particular was fine-tuned on an H100 rented from TensorDock.com, running their PyTorch OS image. To use the Unsloth code with TensorDock, you first need to run the commands below to reinstall the drivers before Unsloth will work. After they run, your virtual machine will reboot; SSH back into it, and then you can run the normal Unsloth code in order.

```shell
# Check current size of /dev/shm
df -h /dev/shm

# Increase size temporarily
sudo mount -o remount,size=16G /dev/shm

# Increase size permanently
echo "tmpfs /dev/shm tmpfs defaults,size=16G 0 0" | sudo tee -a /etc/fstab

# Remount /dev/shm
sudo mount -o remount /dev/shm

# Verify the changes
df -h /dev/shm

nvcc --version

export TORCH_DISTRIBUTED_DEBUG=DETAIL
export NCCL_DEBUG=INFO
python -c "import torch; print(torch.version.cuda)"
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export NCCL_P2P_LEVEL=NVL
export NCCL_DEBUG_SUBSYS=ALL
export TORCH_DISTRIBUTED_DEBUG=INFO
export TORCHELASTIC_ERROR_FILE=/PATH/TO/torcherror.log

# Purge the existing NVIDIA/CUDA packages and reinstall the drivers
sudo apt-get remove --purge -y '^nvidia-.*'
sudo apt-get remove --purge -y '^cuda-.*'
sudo apt-get autoremove -y
sudo apt-get autoclean -y
sudo apt-get update -y
sudo apt-get install -y nvidia-driver-535 cuda-12-1
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:graphics-drivers/ppa -y
sudo apt-get update -y

# Install the newest driver available from the PPA, then reboot
latest_driver=$(apt-cache search '^nvidia-driver-[0-9]' | grep -oP 'nvidia-driver-\K[0-9]+' | sort -n | tail -1) && sudo apt-get install -y nvidia-driver-$latest_driver
sudo reboot
```
_______________________________________________________________________________

## Join the Replete-AI Discord! We are a great and loving community!

- https://discord.gg/ZZbnsmVnjD