Nikita Balakin
Kotokin
AI & ML interests
None yet
Recent Activity
New activity 5 days ago in ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3: The question of training the model.
New activity 8 days ago in MikeRoz/mistralai_Mistral-Large-Instruct-2411-3.0bpw-h6-exl2: Request for a 3.5 bpw
Liked a model 9 days ago: MikeRoz/mistralai_Mistral-Large-Instruct-2411-4.0bpw-h6-exl2
Organizations
None yet
Kotokin's activity
The question of training the model. (1)
#2 opened 5 days ago by Kotokin
Request for a 3.5 bpw
#1 opened 9 days ago by Kotokin
Where config.json? (10)
#1 opened 10 days ago by TheDrummer
Release behemoth. (2)
#4 opened about 2 months ago by Kotokin
Update README.md
#1 opened about 2 months ago by rombodawg
question about the next model (2)
#1 opened 4 months ago by Kotokin
The question is about the next model. (2)
#8 opened 4 months ago by Kotokin
What does exl2-4bpw-rpcal in the model name mean? (1)
#1 opened 7 months ago by BigDeeper
frankenmerge (9)
#4 opened 8 months ago by ryzen88
A request for quantization.
#1 opened 8 months ago by Kotokin
Hi, I made gptq quant. (12)
#3 opened 9 months ago by Kotokin
A request for quantization. (3)
#1 opened 10 months ago by Kotokin
A request for quantization. (1)
#1 opened 10 months ago by Kotokin
iMatrix, IQ2_XS & IQ2_XXS (13)
#2 opened 10 months ago by Nexesenex
A request for quantization. (1)
#1 opened 10 months ago by Kotokin
Hi, could you please add miquliz-120b? (3)
#4 opened 10 months ago by Kotokin
quantize to 4bit (1)
#2 opened 10 months ago by Kotokin
More quant? (2)
#1 opened 10 months ago by Kotokin