Dataset not loading
from datasets import load_dataset

dataset = load_dataset('shawhin/imdb-truncated')
The code above fails to load the dataset and keeps throwing the error below.
Downloading and preparing dataset None/None to /root/.cache/huggingface/datasets/shawhin___parquet/shawhin--imdb-truncated-8ce402c82689e29d/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
TypeError: expected str, bytes or os.PathLike object, not NoneType
Kindly point out if I'm missing something here.
I'm not able to replicate the error. What version of datasets are you using?
Here is the env file for the conda env I used: https://github.com/ShawhinT/YouTube-Blog/blob/main/LLMs/fine-tuning/ft-env.yml
I got the same issue when using Google Colab.
Downloading and preparing dataset None/None to /root/.cache/huggingface/datasets/shawhin___parquet/shawhin--imdb-truncated-8ce402c82689e29d/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%
3/3 [00:00<00:00, 120.03it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 120.12it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-7e098517d633> in <cell line: 4>()
2 dataset_name = "shawhin/imdb-truncated"
3
----> 4 dataset = load_dataset(dataset_name)
5
6 dataset
9 frames
/usr/lib/python3.10/posixpath.py in basename(p)
140 def basename(p):
141 """Returns the final component of a pathname"""
--> 142 p = os.fspath(p)
143 sep = _get_sep(p)
144 i = p.rfind(sep) + 1
TypeError: expected str, bytes or os.PathLike object, not NoneType
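The TypeError itself is just the standard library refusing a None path: the bottom frame of the traceback is `posixpath.basename`, which starts by calling `os.fspath(p)`, and an older `datasets` release ends up passing None where a file path string is expected. The failure is easy to reproduce in isolation with plain `os.fspath`, no Hugging Face code involved:

```python
import os

# posixpath.basename (the last frame in the traceback) begins with
# os.fspath(p); when p is None, fspath raises exactly the error seen above.
try:
    os.fspath(None)
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not NoneType
```

So the bug is not in the dataset itself but in the `datasets` version resolving the repo to a None path, which is why pinning the library version fixes it.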
Try using datasets version 2.14.4.
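In Colab you can pin that version with pip before importing the library (a minimal sketch, assuming a standard pip environment; restart the runtime after installing):

```shell
# Pin the datasets library to the known-good release
pip install datasets==2.14.4

# Confirm the installed version
python -c "import datasets; print(datasets.__version__)"
```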
You can try using the conda env file linked below. If that doesn't fix it, you can use the original dataset: https://huggingface.co/datasets/imdb
https://github.com/ShawhinT/YouTube-Blog/blob/main/LLMs/fine-tuning/ft-env.yml
Hi, I resolved this issue by manually downloading the provided parquet files and uploading them to Colab. Please find the code below.
!curl -X GET "https://huggingface.co/api/datasets/shawhin/imdb-truncated/parquet/default/train"

import pandas as pd
from datasets import Dataset

# Load the Parquet file into a pandas DataFrame
parquet_file_path = "0000.parquet"
df = pd.read_parquet(parquet_file_path)

# Convert the DataFrame to a datasets Dataset
dataset = Dataset.from_pandas(df)
Glad it's working, thanks for sharing your solution!