The model often enters infinite generation loops · 13 comments · #32 opened 2 months ago by sszymczyk
Unable to load 4-bit quantized variant with llama.cpp · #31 opened 2 months ago by sunnykusawa
Garbage output? · 10 comments · #30 opened 2 months ago by danielus
Question about chat template and fine-tuning · 3 comments · #23 opened 2 months ago by tblattner
Issues loading model with oobabooga text-generation-webui · 5 comments · #20 opened 2 months ago by Kenji776
What is the right tokenizer to use for Llama 3.1 8B? · 2 comments · #19 opened 2 months ago by calebl
The sample code on the model card page is not right · #18 opened 2 months ago by kmtao
My alternative quantizations · 7 comments · #16 opened 2 months ago by ZeroWw
ValueError: `rope_scaling` must be a dictionary with two fields · 41 comments · #15 opened 2 months ago by jsemrau
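For context on the `rope_scaling` thread above: this error is commonly reported when an older `transformers` release, which validated `rope_scaling` as a strict two-field dictionary, loads a Llama 3.1 config that ships a richer dictionary. A minimal sketch of the mismatch, assuming the published Llama 3.1 config values; `old_style_validate` is a hypothetical helper mimicking the older strict check, not the library's actual code:

```python
# Llama 3.1 ships an extended rope_scaling dict in its config.json.
llama31_rope_scaling = {
    "factor": 8.0,
    "low_freq_factor": 1.0,
    "high_freq_factor": 4.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3",
}

def old_style_validate(rope_scaling):
    # Hypothetical stand-in for the stricter check in older
    # transformers releases, which expected exactly two fields.
    if not isinstance(rope_scaling, dict) or len(rope_scaling) != 2:
        raise ValueError(
            "`rope_scaling` must be a dictionary with two fields"
        )

try:
    old_style_validate(llama31_rope_scaling)
except ValueError as e:
    print(e)  # the error reported in the thread
```

The commonly reported fix in such threads is upgrading `transformers` to a release that understands the extended `rope_type: "llama3"` schema.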
Independently benchmarked HumanEval and EvalPlus scores · 2 comments · #13 opened 2 months ago by VaibhavSahai
DO NOT MERGE v2: make sure vllm and transformers work · #12 opened 2 months ago by ArthurZ
DO NOT MERGE test for vllm · 2 comments · #11 opened 2 months ago by ArthurZ
Please do not include original PTH files · 4 comments · #10 opened 2 months ago by Qubitium
Utterly based · 1 comment · #9 opened 2 months ago by llama-anon