---
license: mit
dataset_info:
  features:
    - name: type
      dtype: string
    - name: text
      dtype: string
    - name: created_at
      dtype: string
    - name: author
      dtype: string
    - name: author_did
      dtype: string
    - name: uri
      dtype: string
    - name: embedded_array
      list:
        - name: alt
          dtype: string
        - name: blob
          dtype: string
        - name: type
          dtype: string
    - name: langs
      sequence: string
    - name: reply_to
      dtype: string
  splits:
    - name: train
      num_bytes: 1754581344
      num_examples: 5000000
  download_size: 740945960
  dataset_size: 1754581344
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Five Million Bluesky Posts

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/bkAI0CJNPcZMrrP7VgGIC.png)

This dataset contains 5 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

It was inspired by Alpindale's original 2-million-post dataset and expands on it with much more data.

Alpindale's dataset did not include author handles or the image URLs and metadata embedded in posts. Since the images and their captions could potentially be invaluable for training, they have been collected here.

This is the small version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.

## Dataset Structure

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/9FA7LTPkffQDwrSL4F2z5.png)

- **Curated by:** Roro
- **License:** MIT

## Uses

The dataset could be used for:

- Studying social media trends
- Researching social media content moderation
- Studying conversation structures and reply networks, as sketched below
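
For example, the `uri` and `reply_to` columns can be joined to reconstruct reply threads. A minimal sketch, assuming missing `reply_to` values are null (adapt the filter if they are stored as empty strings):

```python
from datasets import load_dataset

# Load the train split and convert to pandas (see the loading instructions below).
df = load_dataset("Roronotalt/bluesky", split="train").to_pandas()

# Reply edges: child post URI -> parent post URI; rows without reply_to are thread roots.
edges = df.loc[df["reply_to"].notna(), ["uri", "reply_to"]]

# Direct replies per post, a rough measure of conversation activity.
reply_counts = edges.groupby("reply_to").size().sort_values(ascending=False)
```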

I have not been able to figure out how to parse the atproto image ref bytes into an image or blob URL. I would appreciate a PR for that.
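
As a starting point, decoding the raw ref bytes into a CID string might work with the `multiformats` package's `CID.decode`, though I have not verified this against the dataset. Given a CID string and the author's DID, the public `com.atproto.sync.getBlob` XRPC endpoint should serve the raw blob bytes; a sketch (the DID and CID below are placeholders):

```python
import requests

def blob_url(author_did: str, blob_cid: str) -> str:
    # com.atproto.sync.getBlob serves a blob given the owning repo's DID and the blob's CID.
    return (
        "https://bsky.social/xrpc/com.atproto.sync.getBlob"
        f"?did={author_did}&cid={blob_cid}"
    )

# Placeholder values; take author_did and the decoded blob CID from a dataset row.
image_bytes = requests.get(blob_url("did:plc:example", "bafkrei..."), timeout=30).content
```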

The dataset is meant to be downloaded with the Hugging Face `load_dataset()` function. From there you can either iterate over the dataset as a stream, so you do not have to worry about memory, or convert it to a pandas dataframe.

Note that you will need to install the following libraries:

```bash
pip install pandas pyarrow datasets huggingface_hub
```

To download/load the Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Roronotalt/bluesky")
```
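
To stream the dataset instead, so the full download never has to fit in memory, a minimal sketch:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset whose examples are fetched lazily.
stream = load_dataset("Roronotalt/bluesky", split="train", streaming=True)

for post in stream:
    print(post["author"], post["text"][:80])
    break  # remove this to iterate over all 5 million posts
```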

To pandas:

```python
# load_dataset returns a DatasetDict, so select the train split before converting.
new_dataset = dataset["train"].to_pandas()
```

You can then save the pandas dataframe as a CSV.
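
For example (the output file name is arbitrary):

```python
new_dataset.to_csv("bluesky_posts.csv", index=False)
```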

Alternatively, if you download the provided dataset Parquet file in `/data`, you can convert it to a CSV with the following command:

```bash
python -c "
import pandas as pd
df = pd.read_parquet('train-0000.parquet', engine='pyarrow')
df.to_csv('output_file.csv', index=False)
"
```

Credit to @TyrantsMuse on Twitter for the code snippet.

## Dataset Curation

The dataset is not filtered; sorting or filtering it for quality or moderation may make it more valuable for your use cases. The dataset is provided as-is, and no liability is assumed.

Deduplication was done on the post URIs, and the dataset is sorted by the author column.
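
For reference, these curation steps roughly correspond to the following pandas operations (reusing the Parquet file name from the conversion example above):

```python
import pandas as pd

df = pd.read_parquet("train-0000.parquet", engine="pyarrow")

# Drop duplicate posts by their AT-URI, then sort by author,
# mirroring the curation described above.
df = df.drop_duplicates(subset="uri").sort_values("author").reset_index(drop=True)
```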