bartowski committed on
Commit 4e7d98c
Parent: f89e2c7

Upload README.md with huggingface_hub

Files changed (1): README.md (+3 -11)
README.md CHANGED
@@ -1,17 +1,11 @@
 ---
 quantized_by: bartowski
 pipeline_tag: text-generation
-tags:
-- language
-- granite-3.1
-license: apache-2.0
-inference: false
-base_model: ibm-granite/granite-3.1-8b-instruct
 ---
 
 ## Llamacpp imatrix Quantizations of granite-3.1-8b-instruct
 
-Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4369">b4369</a> for quantization.
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4381">b4381</a> for quantization.
 
 Original model: https://huggingface.co/ibm-granite/granite-3.1-8b-instruct
 
@@ -22,14 +16,12 @@ Run them in [LM Studio](https://lmstudio.ai/)
 ## Prompt format
 
 ```
-<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|>
-<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|>
-<|start_of_role|>assistant<|end_of_role|>
+<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|> <|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|> <|start_of_role|>assistant<|end_of_role|>
 ```
 
 ## What's new:
 
-Fix tokenizer
+Fix chat template
 
 ## Download a file (not the whole branch) from below:
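For reference, the updated prompt format in this commit can be rendered with a small helper. This is a hypothetical sketch: `build_prompt` is not part of llama.cpp, which normally applies the chat template embedded in the GGUF file; the string layout below just follows the template shown in the diff.

```python
# Hypothetical helper mirroring the granite-3.1 prompt format from the diff.
# build_prompt is illustrative only, not a llama.cpp or huggingface_hub API.

def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render one chat turn in the granite-3.1-8b-instruct format."""
    return (
        f"<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|> "
        f"<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|> "
        f"<|start_of_role|>assistant<|end_of_role|>"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```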