Quant Infos
- quants computed with an importance matrix to reduce quantization loss
- gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt)
- wide coverage of gguf quant types, from Q8_0 down to IQ1_S
- Quantized with llama.cpp commit dc685be46622a8fabfd57cfa804237c8f15679b8 (master as of 2024-05-12)
- Imatrix generated with this multi-purpose dataset:
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
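The imatrix file produced above is then fed into llama.cpp's `quantize` binary via its `--imatrix` flag. A minimal sketch of that follow-up step, assuming the same `$model_name`/`$out_path` variables as the command above and an IQ2_S target picked purely as an example:

```shell
model_name=Yi-1.5-34B-Chat   # assumption: matches the f16 gguf converted from bf16
out_path=.
# Build the quantize invocation that consumes the importance matrix
# (run from the llama.cpp build directory, same as ./imatrix above).
cmd="./quantize --imatrix $out_path/imat-f16-gmerged.dat $model_name-f16.gguf $model_name-IQ2_S.gguf IQ2_S"
echo "$cmd"
```

The same pattern repeats for each quant type in the repo; only the final two arguments change.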
Original Model Card:
GitHub • Discord • Twitter • WeChat
Paper • FAQ • Learning Hub
Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
| Model | Context Length | Pre-trained Tokens |
|---|---|---|
| Yi-1.5 | 4K | 3.6T |
Models
Chat models

| Name | Download |
|---|---|
| Yi-1.5-34B-Chat | Hugging Face • ModelScope |
| Yi-1.5-9B-Chat | Hugging Face • ModelScope |
| Yi-1.5-6B-Chat | Hugging Face • ModelScope |

Base models

| Name | Download |
|---|---|
| Yi-1.5-34B | Hugging Face • ModelScope |
| Yi-1.5-9B | Hugging Face • ModelScope |
| Yi-1.5-6B | Hugging Face • ModelScope |
Benchmarks
Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.
Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.
Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.
Yi-1.5-9B is the top performer among similarly sized open-source models.
Quick Start
To get up and running with the Yi-1.5 models quickly, see the README.
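As a quick sanity check, a quantized gguf from this repo can be run directly with llama.cpp's `main` binary. A minimal sketch, assuming you downloaded the IQ4_XS quant (the file name and prompt are placeholders):

```shell
gguf=Yi-1.5-34B-Chat-IQ4_XS.gguf   # assumption: whichever quant file you downloaded
# Build the invocation; -c 4096 matches the model's 4K context length from the table above.
cmd="./main -m $gguf -c 4096 -n 256 -p 'Write a haiku about quantization.'"
echo "$cmd"
```

Run the printed command from the llama.cpp build directory; swap in any other quant from this repo by changing `$gguf`.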
Model tree for qwp4w3hyb/Yi-1.5-34B-Chat-iMat-GGUF
Base model
01-ai/Yi-1.5-34B-Chat