gitllama · #11 opened 11 days ago by thiageshhs
Code used to generate the model · #10 opened 4 months ago by Proryanator
How to use provided model? · 1 reply · #9 opened 9 months ago by Vader20FF
maxContextLength of just 64 tokens · #8 opened about 1 year ago by ronaldmannak
Unable to load model in SwitChat example · 6 replies · #7 opened about 1 year ago by ltouati
M1 8G RAM macbook air run at 0.01token/second, something is wrong right? · 5 replies · #6 opened over 1 year ago by DanielCL
When i try to load the model · 4 replies · #5 opened over 1 year ago by SriBalaaji
Understanding CoreML conversion of llama 2 7b · 15 replies · #4 opened over 1 year ago by kharish89