Dataset Card for Dataset Name
This dataset contains 1 billion observation-action pairs for cloning the behavior of the LaCAM approach to solving the MAPF (multi-agent pathfinding) problem via a transformer (GPT) model.
Dataset Details
- Curated by: Anton Andreychuk (@aandreychuk), Alexey Skrynnik (@tviskaron)
- License: MIT
- Repository: GitHub
- Model: MAPF-GPT
- Paper: ArXiv
Dataset Structure
The dataset contains `train` and `validation` parts. The `train` part contains 1,000 * 2^20 observation-action pairs, divided into 500 `.arrow` files. The `validation` part contains 2^20 observation-action pairs, saved in a single `.arrow` file. The dataset requires 256 GB of disk space.

More details about the creation of the dataset, the source data, the structure of the observations, etc. are provided in the paper.