Part of the LLM training collection (small-scale pretraining experiments of mine).
The API widget is off, as it isn't supported by HF yet; try the Colab instead.
This is a pretraining experiment on the Jamba architecture as a "smol MoE".
Details:

- It achieves the results below on the evaluation set (most recent dataset).
- If I pretrain it further, new versions will be published in new repos with an incremented version number (this is v0.13).
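
Since the hosted widget is off, here is a minimal loading sketch with `transformers`. The repo id and `trust_remote_code=True` come from the eval config below; the prompt, dtype, and generation settings are illustrative assumptions.

```python
# Minimal sketch: load the checkpoint directly since the hosted widget is off.
# Assumes `transformers` and `torch` are installed; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pszemraj/jamba-H1024_L12-v0.13-KIx2"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,   # custom Jamba modeling code lives in the repo
    torch_dtype=torch.float32,  # the quick eval below used dtype=float
)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```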
Quick eval for: pszemraj/jamba-H1024_L12-v0.13-KIx2
hf (pretrained=pszemraj/jamba-H1024_L12-v0.13-KIx2,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: 0.9999, num_fewshot: None, batch_size: 8
| Tasks          | Version | Filter | n-shot | Metric     |    Value |   | Stderr |
|----------------|--------:|--------|-------:|------------|---------:|---|-------:|
| winogrande     |       1 | none   |      0 | acc        |   0.5067 | ± | 0.0141 |
| piqa           |       1 | none   |      0 | acc        |   0.5912 | ± | 0.0138 |
|                |         | none   |      0 | acc_norm   |   0.5951 | ± | 0.0138 |
| openbookqa     |       1 | none   |      0 | acc        |   0.1800 | ± | 0.0172 |
|                |         | none   |      0 | acc_norm   |   0.2920 | ± | 0.0204 |
| lambada_openai |       1 | none   |      0 | perplexity | 103.1241 | ± | 8.5843 |
|                |         | none   |      0 | acc        |   0.2502 | ± | 0.0122 |
| boolq          |       2 | none   |      0 | acc        |   0.6196 | ± | 0.0136 |
| arc_easy       |       1 | none   |      0 | acc        |   0.3836 | ± | 0.0137 |
|                |         | none   |      0 | acc_norm   |   0.3694 | ± | 0.0136 |
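
The config line above is the harness's own summary; a sketch of reproducing the same run programmatically, assuming a recent EleutherAI lm-evaluation-harness (v0.4+, where `lm_eval.simple_evaluate` is available) and the task list taken from the table:

```python
# Sketch: reproduce the quick eval with lm-evaluation-harness (assumed v0.4+).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=pszemraj/jamba-H1024_L12-v0.13-KIx2,trust_remote_code=True,dtype=float",
    tasks=["winogrande", "piqa", "openbookqa", "lambada_openai", "boolq", "arc_easy"],
    batch_size=8,
    limit=0.9999,  # matches the limit shown in the harness config line above
)
print(results["results"])  # per-task metrics, as in the table above
```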
Training results:
| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Input Tokens Seen |
|--------------:|-------:|-----:|----------------:|---------:|------------------:|
| 3.2013        | 0.4241 |  200 | 3.0653          | 0.4479   |         419430400 |
| 3.1976        | 0.8481 |  400 | 3.0434          | 0.4506   |         838860800 |
| 3.1485        | 1.2722 |  600 | 3.0375          | 0.4513   |        1258291200 |
| 3.1871        | 1.6963 |  800 | 3.0366          | 0.4514   |        1677721600 |
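
The "Input Tokens Seen" column grows linearly with step count, so the tokens processed per optimizer step can be read straight off the table; a small sanity-check sketch (the batch-size/context-length split in the comment is an assumption, not a recorded hyperparameter):

```python
# Sanity check on "Input Tokens Seen": the column is linear in step count.
steps = [200, 400, 600, 800]
tokens_seen = [419_430_400, 838_860_800, 1_258_291_200, 1_677_721_600]

per_step = {s: t // s for s, t in zip(steps, tokens_seen)}
print(per_step)  # every row gives 2_097_152 (= 2**21) tokens per step
# That would correspond to, e.g., a global batch of 1024 sequences at 2048
# tokens each -- but that split is an assumption, not a logged hyperparameter.
```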