#9 · Why does it say 4.98b params when the original model is 34b? Was that a typo? · opened 12 months ago by lambdac
#8 · Experiencing empty output if text input is long · opened 12 months ago by lambdac
#7 · [AUTOMATED] Model Memory Requirements · opened 12 months ago by model-sizer-bot
#6 · Running into issues when trying to run with TGI · 1 reply · opened about 1 year ago by viraniaman
#5 · Main branch has a problem using infill · opened about 1 year ago by jy00520336
#4 · Can I run this model on two NVIDIA RTX A5000 GPUs with 24 GB each? · 3 replies · opened about 1 year ago by nashid
#3 · Is the 34B Llama 2 GPTQ actually working? · 4 replies · opened over 1 year ago by mzbac
#2 · Contradiction in model description · 1 reply · opened over 1 year ago by m9e
#1 · Could you please specify which dataset was used for quantization finetuning? · 2 replies · opened over 1 year ago by Badal