Why do various companies keep using a hard-coded system prompt in the chat template? · 1 reply · #17 opened 4 months ago by pseudotensor
How do I erase this after downloading it locally? · #16 opened 4 months ago by malihos
[AUTOMATED] Model Memory Requirements · #15 opened 6 months ago by model-sizer-bot
The model stops engaging in conversation · 2 replies · #14 opened 6 months ago by Albihany
generation_config.json adds a mapping for the special token '<|im_end|>' to fix non-stop generation when <|im_end|> is encountered · #13 opened 6 months ago by zjyhf
The tokenizer adds the special token '<|im_end|>' to fix non-stop generation when <|im_end|> is encountered · #12 opened 6 months ago by zjyhf
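The two threads above (#13 and #12) describe the same fix: registering '<|im_end|>' as an end-of-sequence token, via generation_config.json or the tokenizer, so decoding halts when the model emits it. A minimal sketch of why that works; the token ids below are hypothetical placeholders, not the model's real ids (those come from the tokenizer):

```python
# Toy decode loop: generation stops as soon as the emitted token id is in
# the end-of-sequence set. Adding '<|im_end|>' to eos_token_id puts its id
# in this set. Both ids here are hypothetical illustrations.
IM_END_ID = 128009              # hypothetical id for '<|im_end|>'
EOS_IDS = {128001, IM_END_ID}   # original eos id plus the added '<|im_end|>'

def decode(token_stream, eos_ids=EOS_IDS):
    """Collect token ids until an end-of-sequence id appears."""
    out = []
    for tok_id in token_stream:
        if tok_id in eos_ids:   # halt instead of generating past <|im_end|>
            break
        out.append(tok_id)
    return out
```

Without the added id, the loop runs straight past <|im_end|> and keeps emitting tokens, which is the "non-stop generation" both threads report.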
About tokens used in this model · 1 reply · #8 opened 6 months ago by icoicqico
Multi-lang? · 1 reply · #6 opened 6 months ago by DalyD
Upload to ollama · #5 opened 6 months ago by nonetrix
Adding `safetensors` variant of this model · #4 opened 6 months ago by lucataco
🚩 Report: Legal issue(s) · 3 replies · #3 opened 6 months ago by deleted
Should be "Llama 3ChatQA-1.5-70B" · 3 replies · #2 opened 6 months ago by just1moremodel
Concerns regarding Prompt Format · 6 replies · #1 opened 6 months ago by wolfram