---
license: llama3
---

GGUF files of nisten/llamagnific-3-87b - a 99-layer llama-3-70b frankenstein

For fun use I recommend this file:

```bash
wget https://huggingface.co/nisten/llamagnific-3-87b-gguf/resolve/main/llamagnific_OPTIMAL_IQ_4_XS.gguf
```

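A download this large occasionally gets truncated. Every valid GGUF file begins with the 4-byte ASCII magic `GGUF`, so a quick check after the `wget` finishes is a cheap sanity test; this helper is a convenience sketch (the file name is the one fetched above), not part of the original instructions:

```shell
# Sanity-check a download: a valid GGUF file begins with the ASCII magic "GGUF".
check_gguf() {
  if [ "$(head -c 4 "$1" 2>/dev/null)" = "GGUF" ]; then
    echo "ok: $1 has the GGUF magic"
  else
    echo "suspect: $1 does not start with GGUF (truncated or wrong file?)"
  fi
}

check_gguf llamagnific_OPTIMAL_IQ_4_XS.gguf
```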
Make a prompt file named `prompt.txt` and put this in it:

```bash
…
```

That's it, that's your prompt template. To run it in conversation mode, do this (add `-ngl 99` if you have a 24GB GPU, or fewer layers for smaller cards, e.g. `-ngl 50` for 16GB; the model itself is 99 layers, so this flag sets how many of them you offload to the GPU; by default it's 0):

```bash
./llama-cli --temp 0.4 -m llamagnific_OPTIMAL_IQ_4_XS.gguf -fa -co -cnv -i -f prompt.txt
```
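`llama-cli` fails with a terse error if any of its inputs are missing from the working directory, so before launching it's worth confirming that the binary, the model, and the prompt file are all in place. This pre-flight check is a convenience sketch, not part of the original instructions:

```shell
# Pre-flight check: verify the binary, model, and prompt file all exist
# in the current directory before starting llama-cli.
preflight() {
  missing=0
  for f in "$@"; do
    if [ -e "$f" ]; then
      echo "found:   $f"
    else
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}

preflight ./llama-cli llamagnific_OPTIMAL_IQ_4_XS.gguf prompt.txt \
  || echo "fetch or build the missing pieces before running"
```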

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/K-ZX8HE_ph5eRlieFbEQj.png)