MetaIX
AI & ML interests: None yet
Organizations: None yet

MetaIX's activity
Pweh-pweh · 3 · #1 opened about 1 year ago by MetaIX
Psst, You are divine · 2 · #1 opened over 1 year ago by MetaIX
llama.cpp breaks quantized ggml file format · 4 · #11 opened over 1 year ago by Waldschrat
Prompt Format? · 1 · #4 opened over 1 year ago by gsaivinay
Please reconvert to new GGML format · 4 · #6 opened over 1 year ago by Delta36652
Best model I tested, but seems to have an issue on some tokens · 2 · #12 opened over 1 year ago by kbrkbr
Any chance of a 4_2 or 4_0 ggml quantization? · 4 · #4 opened over 1 year ago by spanielrassler
Upload gpt4-x-alpasta-4bit.safetensors · #5 opened over 1 year ago by MetaIX
Performance worse than plain alpaca lora 30b? · 2 · #2 opened over 1 year ago by Davidliudev
What is this? · 6 · #1 opened over 1 year ago by vdruts
Upload 8 files · #1 opened over 1 year ago by MetaIX
Model size for int4 fine-tuning on RTX 3090 · 3 · #2 opened over 1 year ago by KnutJaegersberg
Loaded the model but it won't respond and is stuck saying "typing" while GPU usage is at 100% · 1 · #6 opened over 1 year ago by barncroft
Please, help :< · 17 · #7 opened over 1 year ago by ANGIPO
Error: Internal: src/sentencepiece_processor.cc in Ooba and KAI 4bit · 4 · #8 opened over 1 year ago by Co0ode
LoRA model or standard finetune? · 2 · #1 opened over 1 year ago by ghogan42