---
license: other
---

# WizardLM: An Instruction-following LLM Using Evol-Instruct

These files are the result of merging the delta weights with the original Llama 7B model.

The code for merging is provided in the WizardLM official Github repo.

The original WizardLM deltas are in float32, so merging produces an HF repo that is also float32 and therefore much larger than a normal 7B Llama model.

Therefore, for this repo I converted the model to float16 to produce a standard-size 7B model.

This was achieved by running `model = model.half()` prior to saving.
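The conversion step can be sketched as follows. This is a minimal illustration of the `model.half()` call described above, using a small stand-in `torch.nn` module rather than the full 7B model; with the real model you would load it via transformers' `AutoModelForCausalLM.from_pretrained(...)` and finish with `model.save_pretrained(...)` instead.

```python
import torch
import torch.nn as nn

# Stand-in for the merged model; PyTorch parameters default to float32.
model = nn.Linear(4, 4)
assert next(model.parameters()).dtype == torch.float32

# .half() casts all parameters and buffers to float16,
# roughly halving the size of the saved checkpoint.
model = model.half()
assert next(model.parameters()).dtype == torch.float16

# For a real HF model you would then call:
#   model.save_pretrained(output_dir)
```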

## WizardLM-7B HF

This repo contains the full unquantised model files in HF format, for GPU inference and as a base for quantisation/conversion.

## Other repositories available

## Original model info

Full details are available in the WizardLM official GitHub repo.

## Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions across a range of difficulty levels and skills, in order to improve the performance of LLMs.

Although WizardLM-7B outperforms ChatGPT on the high-complexity instructions in our complexity-balanced test set, it still lags behind ChatGPT on the test set as a whole, and we consider WizardLM to still be at an early stage. We will continue to improve WizardLM: training at larger scale, adding more training data, and developing more advanced training methods for large models.
