---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 1830108115
    num_examples: 60325
  download_size: 237289718
  dataset_size: 1830108115
configs:
- config_name: default
  data_files:
  - split: train
    path: data/ada/train-*
---
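
A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub and that the `datasets` library is installed; the repo id below is a placeholder, substitute this repository's actual Hub id:

```python
from datasets import load_dataset

# Placeholder repo id for illustration; replace with this repository's Hub id.
dataset = load_dataset("TrevorDohm/Stack_TokLlama", split="train")

# Each example contains pre-tokenized input_ids and the matching attention_mask,
# as declared in the dataset_info features above.
example = dataset[0]
print(len(example["input_ids"]), example["attention_mask"][:10])
```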