AtakanTekparmak committed
Commit bca4a03
1 Parent(s): dcb5992

feat: Updated README to add disclaimer

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -13,6 +13,9 @@ base_model:
13   # AtakanTekparmak/llama-3-20b-instruct-Q8_0-GGUF
14   This model was converted to GGUF format from [`AtakanTekparmak/llama-3-20b-instruct`](https://huggingface.co/AtakanTekparmak/llama-3-20b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
15   Refer to the [original model card](https://huggingface.co/AtakanTekparmak/llama-3-20b-instruct) for more details on the model.
16 +
17 + # DISCLAIMER: THIS MODEL DOES NOT WORK VERY WELL
18 + This model is part of my learning about and experimenting with model merging. It should not be used for ANY purpose, as you will see from its outputs if you decide to try it anyway.
19   ## Use with llama.cpp
20
21   Install llama.cpp through brew.
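
The "Use with llama.cpp" context lines reference installing llama.cpp through Homebrew and running the quantized model. A minimal sketch of that step, assuming the Homebrew `llama.cpp` formula and a GGUF filename following the repo's Q8_0 naming convention (the exact filename is not shown in this diff):

```bash
# Install llama.cpp via Homebrew (assumes the `llama.cpp` formula is available)
brew install llama.cpp

# Run the quantized model directly from the Hugging Face repo with llama-cli.
# The --hf-file value below is an assumption based on the repo's Q8_0 naming;
# check the repository's file list for the actual GGUF filename.
llama-cli --hf-repo AtakanTekparmak/llama-3-20b-instruct-Q8_0-GGUF \
  --hf-file llama-3-20b-instruct-q8_0.gguf \
  -p "Hello, my name is"
```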