Hallucinated replies when the same prompt is sent several times (Ollama and phi3.5)
#25 opened 12 days ago by sepihala
curl run on Colab
#23 opened about 2 months ago by sdyy (1 reply)
KeyError: 'phi3' when attempting to train in Oobabooga
#21 opened 2 months ago by Bob4444 (1 reply)
Please support phi-3.5-mini-instruct in llama.cpp
#20 opened 3 months ago by ThiloteE (2 replies)
Can't install flash_attn package on WSL2
#19 opened 3 months ago by aytugkaya
How to add a new token or special token?
#16 opened 3 months ago by hahaMiao
Evaluation of Phi-3.5 on the long-context BABILong benchmark
#12 opened 3 months ago by yurakuratov (1 reply)
Phi3ForCausalLM.forward() got an unexpected keyword argument 'decoder_input_ids'
#11 opened 3 months ago by Tejarao
tokenizer.model_max_length=2048 in sample_finetune.py
#10 opened 3 months ago by anakin87
Very safe model!
#6 opened 3 months ago by SicariusSicariiStuff (9 replies)
Will there be int4 ONNX DirectML versions in the future?
#5 opened 3 months ago by PaulTheHuman (2 replies)
My quants and my "silly" version.
#4 opened 3 months ago by ZeroWw
Why not include MedQA in your benchmarks?
#1 opened 3 months ago by Hugman2345 (3 replies)