---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 14B
## Model Description
RWKV-4 14B is an L40-D5120 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
```
args.n_layer = 40
args.n_embd = 5120
```
Use https://github.com/BlinkDL/ChatRWKV to run it.
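As a quick-start illustration, a minimal generation script using the `rwkv` pip package (the runtime behind ChatRWKV) might look like the sketch below; the checkpoint name, strategy string, and sampling settings here are examples, not requirements:

```python
# Minimal sketch, assuming `pip install rwkv`, a locally downloaded checkpoint,
# and the 20B_tokenizer.json file from the ChatRWKV repo.
import os
os.environ["RWKV_JIT_ON"] = "1"  # optional: enable the JIT-compiled kernels

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Model path is given without the .pth extension, per the rwkv package
# convention. "cuda fp16" needs roughly 28 GB of VRAM for the 14B model; see
# ChatRWKV for lighter strategies such as "cuda fp16i8" or "cpu fp32".
model = RWKV(model="RWKV-4-Pile-14B-20230213-8019", strategy="cuda fp16")
pipeline = PIPELINE(model, "20B_tokenizer.json")

args = PIPELINE_ARGS(temperature=1.0, top_p=0.85)
print(pipeline.generate("\nIn a shocking finding,", token_count=100, args=args))
```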
RWKV-4-Pile-14B-2023xxxx-ctx8192-testxxx.pth: fine-tuned to ctx_len 8192.
* The best general model.
---
RWKV 14B Alpaca test models (both are fine-tuned from the ctx8192 checkpoint; the ctx1024 version is likely better for usual Q&A, so please test and compare):
https://huggingface.co/BlinkDL/rwkv-4-pile-14b/blob/main/RWKV-4-Pile-14B-Instruct-test4-20230327-ctx1024.pth
https://huggingface.co/BlinkDL/rwkv-4-pile-14b/blob/main/RWKV-4-Pile-14B-Instruct-test4-20230327-ctx4096.pth
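The checkpoints can also be fetched programmatically; a minimal download sketch using the `huggingface_hub` library (pick either filename from the links above):

```python
# Minimal download sketch using the huggingface_hub library.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="BlinkDL/rwkv-4-pile-14b",
    filename="RWKV-4-Pile-14B-Instruct-test4-20230327-ctx1024.pth",
)
print(ckpt_path)  # local path of the cached checkpoint
```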
(Update ChatRWKV v2 to the latest version first.) It is recommended to prefix prompts with +i for "Alpaca Instruct" mode. Examples:
```
+i Explain the following metaphor: "Life is like cats".
+i write a python function to read data from an excel file.
```
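For context, +i wraps your text in an Alpaca-style instruction template before it is fed to the model. The sketch below shows the generic Alpaca format; ChatRWKV's exact internal wording may differ, so treat it as an approximation:

```python
# Generic Alpaca-style instruct template; ChatRWKV's internal +i template
# may differ slightly, so this is an approximation for illustration only.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt('Explain the following metaphor: "Life is like cats".'))
```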
---
RWKV-4-Pile-14B-20230213-8019.pth: trained on the Pile for 331B tokens.
* Pile loss 1.7579 (ctx_len 1024)
* LAMBADA ppl 3.81, acc 71.05%
* PIQA acc 77.42%
* SC2016 acc 75.57%
* Hellaswag acc_norm 70.24%
* WinoGrande acc 62.98%
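For reference, a Pile loss of 1.7579 nats/token corresponds to a token-level perplexity of exp(1.7579) ≈ 5.80.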