---
dataset_info:
  features:
  - name: bert_token
    sequence: int64
  - name: gpt2_token
    sequence: int64
  splits:
  - name: train
    num_bytes: 173553456.7202345
    num_examples: 551455
  - name: test
    num_bytes: 261864.0
    num_examples: 1000
  download_size: 42652803
  dataset_size: 173815320.7202345
---
# Dataset Card for "amazon_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)