Lavanya KV
lkv
AI & ML interests: None yet
Recent Activity
New activity 1 day ago · google/gemma-2-27b-it: Does mac book pro m3max 48GB can load gemma2-27b?
New activity 2 days ago · google/gemma-2b-it: Extract only output
New activity 2 days ago · google/gemma-2-2b: Gemma2-2b training uses much more momory!
lkv's activity
Does mac book pro m3max 48GB can load gemma2-27b? · 2 · #23 opened 5 months ago by omenlyd
Extract only output · 3 · #55 opened 13 days ago by tkaintura
Gemma2-2b training uses much more momory! · 2 · #23 opened 3 months ago by bubbleseller
Conversion to onnx · 3 · #29 opened 4 months ago by Parma7876
Extract output · 1 · #49 opened 13 days ago by tkaintura
Does the processor contain apply_chat_template?? · 1 · #4 opened 6 months ago by damerajee
Batch Inference causes degraded performance · 3 · #43 opened 3 months ago by tanliboy
Using KV Cache when the new input is more than one token · 1 · #2 opened 6 months ago by skoneru
Training version 896 · 5 · #9 opened 4 months ago by lcolonn
What is the context length? · 2 · #11 opened 4 months ago by Embered
finetunr error. "triu_tril_cuda_template" not implemented for 'BFloat16' · 2 · #17 opened 8 months ago by Saicy
float32 vs bf16 · 5 · #5 opened 8 months ago by janimo
How did you convert to gguf? · 2 · #3 opened 7 months ago by scott0x
Transformers version for recurrent_gemma? · 2 · #4 opened 8 months ago by mitkox
Question about Gemma2:2b system prompt template and usage in Langflow · 1 · #26 opened 3 months ago by MarcusWey
Does Gemma 2 9B Support All Listed Languages on the Gemini 1.5 Page? · 2 · #33 opened 4 months ago by i18n-site
RuntimeError: cutlassF: no kernel found to launch! · 2 · #32 opened 8 months ago by kehkok
How to get a different responce from the model using the same input · 5 · #59 opened 9 months ago by mans-0987
Pending request · 2 · #13 opened 5 months ago by shiron8bit