Streaming dataset generation
The current wikipedia.py script spawns a process for every raw data file and keeps the entire dataset in memory until the end of the process. With 100+ files in the input dataset, it's nearly impossible to build the dataset on lower-resourced machines.
This PR modifies wikipedia.py to support streaming dataset generation as follows:
- Yield each Wikipedia article instead of reading entire files into memory
- Only spawn up to os.cpu_count() worker processes
- Use shared queues with a bounded buffer between the main process and worker processes (see the sketch after this list)
- Yield each processed article to iteratively build the dataset
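To make the flow concrete, here is a minimal sketch of the pattern described above. It is not the actual wikipedia.py implementation; the `process_file()` parser, the queue size, and the example path are illustrative assumptions. File paths are fed to the workers over a task queue, and processed articles flow back through a bounded output queue so the main process can yield them one at a time.

```python
# Minimal sketch of the streaming pattern, assuming a hypothetical process_file()
# parser and a buffer of 1024 items - not the actual wikipedia.py code.
import os
from multiprocessing import Process, Queue

def process_file(path, outputs):
    """Parse a raw data file and push each article onto the bounded output queue."""
    # Placeholder parser: the real script would extract and clean Wikipedia articles.
    with open(path, encoding="utf-8") as infile:
        for line in infile:
            outputs.put({"source": path, "text": line.strip()})

def worker(tasks, outputs):
    """Consume file paths until a None sentinel arrives, then signal completion."""
    while True:
        path = tasks.get()
        if path is None:
            break
        process_file(path, outputs)
    outputs.put(None)

def stream_articles(paths, buffer=1024):
    """Yield processed articles one at a time instead of holding them all in memory."""
    tasks, outputs = Queue(), Queue(buffer)
    count = min(len(paths), os.cpu_count() or 1)

    workers = [Process(target=worker, args=(tasks, outputs)) for _ in range(count)]
    for process in workers:
        process.start()

    for path in paths:
        tasks.put(path)
    for _ in workers:
        tasks.put(None)  # one stop sentinel per worker

    done = 0
    while done < count:
        article = outputs.get()
        if article is None:
            done += 1  # a worker finished
        else:
            yield article

    for process in workers:
        process.join()

if __name__ == "__main__":
    # Illustrative usage with a placeholder input path
    for article in stream_articles(["data/part-0001.txt"]):
        print(article["text"][:80])
```

Because the output queue is bounded, workers block when the buffer is full, so memory usage stays roughly constant regardless of how many input files there are.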
With these changes, dataset generation scales with the resources available.
Thank you so much for making these changes @davidmezzetti . The original process kept killing my kernel/machine even when I ran it on an n1-highcpu-96 (96 vCPU, 48 core, 86.4 GB memory) on GCP. After switching to your branch, everything worked perfectly with no issues at all. @Tristan please consider merging.
Thanks for tagging me. Hugging Face actually cancelled this project and I don't work for them anymore, so I haven't been paying attention. Merged! In the future, tagging someone on the HF team would probably get a faster response. Thanks for bearing with my slowness on this one!
Glad to hear it's helped, thank you for merging!