akridge committed on
Commit
0d70670
1 Parent(s): 576a06b

Update README.md

Files changed (1)
  1. README.md +18 -4
README.md CHANGED
@@ -2,10 +2,24 @@
 tags:
 - llama2
 - llama-2-7b-chat-hf
+language:
+- en
 ---
 # Llama-2-7b-chat-hf-GGUF
-Based on Llama-2-7b-chat-hf.
+Based on Llama-2-7b-chat-hf by Meta. This version has been converted to:
+- GGML_VERSION = "gguf"
+- Conversion = float16
+- Quantization method = q4_k_s (uses Q4_K for all tensors; "q" + the number of bits + the variant used)
+
+Learn More:
 
-GGML_VERSION = "gguf"
-Conversion = float16
-Quantization method = q4_k_s (uses Q4_K for all tensors; "q" + the number of bits + the variant used)
+Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.
+- This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.
+
+Model Details
+- Model Developers: Meta
+- Input: Models input text only.
+- Output: Models generate text only.
+- Model Dates: Llama 2 was trained between January 2023 and July 2023.
+- Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
+- Model Architecture: Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
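
For readers who want to try the quantized file described in this README, the sketch below shows one common way to load a q4_k_s GGUF model locally. It assumes the third-party llama-cpp-python bindings are installed and that the converted file sits under a hypothetical local filename; neither the package nor that filename is specified by this repository.

```python
# Minimal sketch: load a q4_k_s GGUF file with the llama-cpp-python bindings.
# The model filename below is an assumption for illustration only.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat-hf.Q4_K_S.gguf",  # assumed local path to the quantized file
    n_ctx=2048,                                    # context window size
)

# Simple one-shot completion; the prompt format is illustrative, not prescribed by the README.
output = llm("Q: What does q4_k_s quantization trade off? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```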