---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---

# RWKV-4 14B

## Model Description

RWKV-4 14B is an L40-D5120 (40 layers, 5120-dimensional embedding) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

**Note: It's a BF16 model, and it may overflow if you are using FP16 (fixable by rescaling the weights).**
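
As a sketch of what that rescaling can look like: the snippet below progressively halves two block output projections with depth before casting the checkpoint to FP16. This is a minimal sketch, assuming typical RWKV checkpoint key names (`blocks.<i>.att.output.weight`, `blocks.<i>.ffn.value.weight`) and an every-6-layers schedule; it is not the canonical fix, so see the repo linked above for the authoritative approach.

```python
# Hypothetical FP16 rescaling sketch. The key names and the
# every-6-layers schedule are assumptions, not the official fix.
import torch

RESCALE_EVERY = 6  # assumed schedule: halve projections every 6 layers

sd = torch.load("RWKV-4-Pile-14B-20221231-4514.pth", map_location="cpu")

for name, w in sd.items():
    # Scale down the block output projections progressively with depth
    # so intermediate activations stay within FP16 range.
    if ".att.output.weight" in name or ".ffn.value.weight" in name:
        layer_id = int(name.split(".")[1])  # keys look like "blocks.<i>..."
        w.div_(2 ** (layer_id // RESCALE_EVERY))

torch.save({k: v.half() for k, v in sd.items()},
           "RWKV-4-Pile-14B-fp16-rescaled.pth")
```

At inference time, a matching adjustment (e.g., halving the hidden state every `RESCALE_EVERY` layers) would then be needed to keep the outputs unchanged.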

At the moment, you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.

* ctx_len = 1024
* n_layer = 40
* n_embd = 5120
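
If you want to sanity-check a downloaded checkpoint against these hyperparameters without any repo code, the shapes in the state dict are enough. This minimal sketch assumes the typical RWKV key names `blocks.<i>...` and `emb.weight`:

```python
# Infer n_layer and n_embd from checkpoint tensor shapes (plain PyTorch).
# The key names are assumptions based on typical RWKV checkpoints.
import torch

sd = torch.load("RWKV-4-Pile-14B-20221231-4514.pth", map_location="cpu")

n_layer = 1 + max(int(k.split(".")[1]) for k in sd if k.startswith("blocks."))
n_embd = sd["emb.weight"].shape[1]  # embedding table: (vocab_size, n_embd)

print(f"n_layer={n_layer}, n_embd={n_embd}")  # expect 40 and 5120
```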

Preview checkpoint: RWKV-4-Pile-14B-20221231-4514.pth, trained on the Pile for 186B tokens.

* Pile loss 1.8014
* LAMBADA ppl 3.92, acc 69.96%
* PIQA acc 77.42%
* SC2016 acc 74.18%
* Hellaswag acc_norm 68.47%
* WinoGrande acc 62.35%