Modalities: Text
Formats: Parquet
Languages: English
Size: < 1K rows
Libraries: Datasets, Dask
Each record in the dataset has two string fields: dataset_name (e.g. [kaggle]depression, [openml]penguins, [openml]ecoli) and table, which stores the corresponding table serialized as a Python dict of column lists.

This repository contains a total of 483 tabular datasets with meaningful column names, collected from the OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper: https://arxiv.org/abs/2305.09696. You can use the script below to load all of the datasets into a dictionary of pd.DataFrame objects.

An example script can be found below:

from datasets import load_dataset
import pandas as pd
import numpy as np

# Load the single 'train' split; it has two columns, 'dataset_name' and
# 'table', where each 'table' entry is a table serialized as a Python dict
# of column lists.
dataset = load_dataset(path='ztphs980/taptap_datasets')
dataset = dataset['train'].to_dict()

# Rebuild every table as a pd.DataFrame, keyed by its dataset name.
data = {}
for table_name, table in zip(dataset['dataset_name'], dataset['table']):
    # Map the bare name 'nan' to np.nan so missing values evaluate correctly.
    table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
    data[table_name] = table
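
Since the last column of every table is the label, one way to split a loaded table into features and label is sketched below; the dataset name used here is just one example key from the dictionary, and any of the 483 names works the same way:

df = data['[openml]penguins']   # pick any table from the dictionary built above
X = df.iloc[:, :-1]             # feature columns
y = df.iloc[:, -1]              # label column (always the last one)
print(X.shape, y.value_counts())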