url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.41B
2.44B
| node_id
stringlengths 18
19
| number
int64 5.13k
7.09k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
3
| milestone
dict | comments
sequencelengths 0
30
| created_at
unknown | updated_at
unknown | closed_at
unknown | author_association
stringclasses 4
values | active_lock_reason
float64 | body
stringlengths 1
33.9k
โ | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
float64 | state_reason
stringclasses 3
values | draft
float64 0
1
โ | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7085/comments | https://api.github.com/repos/huggingface/datasets/issues/7085/events | https://github.com/huggingface/datasets/issues/7085 | 2,440,008,618 | I_kwDODunzps6Rb5Oq | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AjayP13",
"id": 5404177,
"login": "AjayP13",
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AjayP13"
} | [] | open | false | null | [] | null | [
"@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our tests failing in case it helps you figure out where this is coming from. I found it hard to reason through the resumable IterableDataset code though, so hopefully you have more intuition to implement a proper fix."
] | "2024-07-31T13:01:59" | "2024-07-31T13:36:34" | null | NONE | null | ### Describe the bug
In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is it version 2.20.0 should behave the same as 2.17.0.
### Environment info
`datasets==2.20.0` on any platform. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7085/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7084/comments | https://api.github.com/repos/huggingface/datasets/issues/7084/events | https://github.com/huggingface/datasets/issues/7084 | 2,439,519,534 | I_kwDODunzps6RaB0u | 7,084 | More easily support streaming local files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2024-07-31T09:03:15" | "2024-07-31T09:05:58" | null | NONE | null | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and currently trying to stream the dataset from the local files. I have both the raw parquet files using `hugginface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`.
Streaming the files locally does not work well for both file types for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
**Parquet files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other".
### Your contribution
I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added.
IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7084/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7083/comments | https://api.github.com/repos/huggingface/datasets/issues/7083/events | https://github.com/huggingface/datasets/pull/7083 | 2,439,518,466 | PR_kwDODunzps5292hC | 7,083 | fix streaming from arrow files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt"
} | [] | open | false | null | [] | null | [] | "2024-07-31T09:02:42" | "2024-07-31T09:02:42" | null | NONE | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7083/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7083",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7083"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7082/comments | https://api.github.com/repos/huggingface/datasets/issues/7082/events | https://github.com/huggingface/datasets/pull/7082 | 2,437,354,975 | PR_kwDODunzps522dTJ | 7,082 | Support HTTP authentication in non-streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-07-30T09:25:49" | "2024-07-31T08:58:54" | null | MEMBER | null | Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode.
For example, this is necessary if a remote HTTP host requires authentication to download the data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7082/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7082",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7082"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7081/comments | https://api.github.com/repos/huggingface/datasets/issues/7081/events | https://github.com/huggingface/datasets/pull/7081 | 2,437,059,657 | PR_kwDODunzps521cGm | 7,081 | Set load_from_disk path type as PathLike | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005688) | 0.004130 / 0.011008 (-0.006878) | 0.064231 / 0.038508 (0.025723) | 0.030738 / 0.023109 (0.007628) | 0.251896 / 0.275898 (-0.024002) | 0.275182 / 0.323480 (-0.048298) | 0.003364 / 0.007986 (-0.004621) | 0.003569 / 0.004328 (-0.000759) | 0.049407 / 0.004250 (0.045157) | 0.048177 / 0.037052 (0.011124) | 0.253739 / 0.258489 (-0.004751) | 0.304087 / 0.293841 (0.010246) | 0.030457 / 0.128546 (-0.098089) | 0.012762 / 0.075646 (-0.062885) | 0.214312 / 0.419271 (-0.204959) | 0.036673 / 0.043533 (-0.006860) | 0.251838 / 0.255139 (-0.003301) | 0.274049 / 0.283200 (-0.009151) | 0.021133 / 0.141683 (-0.120550) | 1.143743 / 1.452155 (-0.308412) | 1.203681 / 1.492716 (-0.289036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094668 / 0.018006 (0.076662) | 0.300323 / 0.000490 (0.299833) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018565 / 0.037411 (-0.018846) | 0.066096 / 0.014526 (0.051570) | 0.075700 / 0.176557 (-0.100857) | 0.122185 / 0.737135 (-0.614950) | 0.077688 / 0.296338 (-0.218651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288804 / 0.215209 (0.073595) | 2.838336 / 2.077655 (0.760681) | 1.530575 / 1.504120 (0.026455) | 1.406716 / 1.541195 (-0.134478) | 1.438885 / 1.468490 (-0.029605) | 0.744809 / 4.584777 (-3.839968) | 2.447992 / 3.745712 (-1.297721) | 3.126261 / 5.269862 (-2.143601) | 1.999687 / 4.565676 (-2.565990) | 0.081536 / 0.424275 (-0.342739) | 0.005827 / 0.007607 (-0.001780) | 0.346367 / 0.226044 (0.120323) | 3.373268 / 2.268929 (1.104339) | 1.890293 / 55.444624 (-53.554332) | 1.590384 / 6.876477 (-5.286093) | 1.652101 / 2.142072 (-0.489971) | 0.805888 / 4.805227 (-3.999339) | 0.137687 / 6.500664 (-6.362977) | 0.044536 / 0.075469 (-0.030933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.998393 / 1.841788 (-0.843395) | 12.392241 / 8.074308 (4.317933) | 10.055638 / 10.191392 (-0.135754) | 0.132347 / 0.680424 (-0.548077) | 0.014635 / 0.534201 (-0.519566) | 0.301939 / 0.579283 (-0.277344) | 0.266756 / 0.434364 (-0.167608) | 0.342730 / 0.540337 (-0.197608) | 0.435463 / 1.386936 (-0.951473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006421 / 0.011353 (-0.004932) | 0.004494 / 0.011008 (-0.006514) | 0.051315 / 0.038508 (0.012806) | 0.035570 / 0.023109 (0.012460) | 0.271635 / 0.275898 (-0.004263) | 0.297082 / 0.323480 (-0.026398) | 0.004572 / 0.007986 (-0.003414) | 0.002886 / 0.004328 (-0.001443) | 0.049152 / 0.004250 (0.044902) | 0.043000 / 0.037052 (0.005948) | 0.281921 / 0.258489 (0.023432) | 0.321097 / 0.293841 (0.027256) | 0.033488 / 0.128546 (-0.095058) | 0.012835 / 0.075646 (-0.062811) | 0.061831 / 0.419271 (-0.357441) | 0.034674 / 0.043533 (-0.008858) | 0.272885 / 0.255139 (0.017746) | 0.292726 / 0.283200 (0.009527) | 0.019906 / 0.141683 (-0.121777) | 1.132234 / 1.452155 (-0.319920) | 1.155359 / 1.492716 (-0.337357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096943 / 0.018006 (0.078937) | 0.308980 / 0.000490 (0.308490) | 0.000225 / 0.000200 (0.000025) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.081682 / 0.014526 (0.067156) | 0.090987 / 0.176557 (-0.085569) | 0.132542 / 0.737135 (-0.604593) | 0.092844 / 0.296338 (-0.203494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304190 / 0.215209 (0.088981) | 2.958591 / 2.077655 (0.880936) | 1.610211 / 1.504120 (0.106091) | 1.488216 / 1.541195 (-0.052978) | 1.525429 / 1.468490 (0.056939) | 0.752811 / 4.584777 (-3.831966) | 0.967887 / 3.745712 (-2.777825) | 2.982760 / 5.269862 (-2.287102) | 1.996623 / 4.565676 (-2.569053) | 0.080783 / 0.424275 (-0.343492) | 0.005337 / 0.007607 (-0.002270) | 0.354996 / 0.226044 (0.128951) | 3.540788 / 2.268929 (1.271860) | 1.997445 / 55.444624 (-53.447179) | 1.682232 / 6.876477 (-5.194245) | 1.883198 / 2.142072 (-0.258875) | 0.814444 / 4.805227 (-3.990783) | 0.135798 / 6.500664 (-6.364867) | 0.041750 / 0.075469 (-0.033719) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048688 / 1.841788 (-0.793099) | 13.122809 / 8.074308 (5.048501) | 10.893354 / 10.191392 (0.701962) | 0.133710 / 0.680424 (-0.546713) | 0.016357 / 0.534201 (-0.517844) | 0.304364 / 0.579283 (-0.274919) | 0.126457 / 0.434364 (-0.307907) | 0.345747 / 0.540337 (-0.194591) | 0.441620 / 1.386936 (-0.945316) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#27ea8e8ead3e76bb07aa645f882945495d238ef3 \"CML watermark\")\n"
] | "2024-07-30T07:00:38" | "2024-07-30T08:30:37" | "2024-07-30T08:21:50" | MEMBER | null | Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7081/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7081",
"merged_at": "2024-07-30T08:21:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7081"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | https://api.github.com/repos/huggingface/datasets/issues/7080/events | https://github.com/huggingface/datasets/issues/7080 | 2,434,275,664 | I_kwDODunzps6RGBlQ | 7,080 | Generating train split takes a long time | {
"avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4",
"events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}",
"followers_url": "https://api.github.com/users/alexanderswerdlow/followers",
"following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}",
"gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexanderswerdlow",
"id": 35648800,
"login": "alexanderswerdlow",
"node_id": "MDQ6VXNlcjM1NjQ4ODAw",
"organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs",
"received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events",
"repos_url": "https://api.github.com/users/alexanderswerdlow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexanderswerdlow"
} | [] | open | false | null | [] | null | [] | "2024-07-29T01:42:43" | "2024-07-29T01:42:43" | null | NONE | null | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7079/comments | https://api.github.com/repos/huggingface/datasets/issues/7079/events | https://github.com/huggingface/datasets/issues/7079 | 2,433,363,298 | I_kwDODunzps6RCi1i | 7,079 | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye"
} | [] | closed | false | null | [] | null | [
"same issue here. @albertvillanova @lhoestq ",
"Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_reuter_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_wp_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_essay_reduced\r\n\r\nOddly enough, the system status looks good: https://status.huggingface.co/",
"Hey how to download these datasets using git cloning?",
"Also reported here\r\nhttps://github.com/huggingface/huggingface_hub/issues/2425",
"I have been getting the same error for the past 8 hours as well",
"Same error since yesterday, fails on any new dataset created",
"Same here. I cannot download the HelpSteer2 dataset: https://huggingface.co/datasets/nvidia/HelpSteer2 which has been uploaded about a month ago",
"> Hey how to download these datasets using git cloning?\n\nYou'll find a guide [here](https://huggingface.co/docs/hub/en/datasets-downloading) ๐๐ป",
"Same here for imdb dataset",
"It also happens with this dataset: https://huggingface.co/datasets/ylacombe/jenny-tts-6h-tagged",
"same here for all datsets in the sentence-tramsformers repo and related collections.\r\n\r\nsame issue with dataset that i recently uploaded on my repo.\r\nseems that the upload date of the datset is not relevat (getting this issue with both old datasets and newer ones)\r\n\r\nfor some reason, i was able to get the dataset by turning it private and accessing it with the id token (accessing it as public while providing the token doesn not work)..... but i can say if that is just a random coincidence.\r\n\r\nseems not much deterministic, for a specific dataset (sentence-transformer nq ) , that was \"down\" since some hours , worked for like 5-10 minutes, then stopped again\r\n\r\nnow even this dataset (that worked since some min ago, and that i'm in the middle of processing steps) stopped working: _https://huggingface.co/datasets/bobox/msmarco-bm25-EduScore/_\r\n\r\nas already pointed out, there are no updates on **_https://status.huggingface.co/_**\r\n\r\n\\n\r\n\\n\r\n\r\nan example of the whole error message:\r\n``` \r\nHfHubHTTPError \r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\r\n 2592 \r\n 2593 # Create a dataset builder\r\n-> 2594 builder_instance = load_dataset_builder(\r\n 2595 path=path,\r\n 2596 name=name,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\r\n 2264 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 2265 download_config.storage_options.update(storage_options)\r\n-> 2266 dataset_module = dataset_module_factory(\r\n 2267 path,\r\n 2268 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1912 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1913 ) from None\r\n-> 1914 raise e1 from None\r\n 1915 else:\r\n 1916 raise FileNotFoundError(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1832 hf_api = HfApi(config.HF_ENDPOINT)\r\n 1833 try:\r\n-> 1834 dataset_info = hf_api.dataset_info(\r\n 1835 repo_id=path,\r\n 1836 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 113 \r\n--> 114 return fn(*args, **kwargs)\r\n 115 \r\n 116 return _inner_fn # type: ignore\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in dataset_info(self, repo_id, revision, timeout, files_metadata, token)\r\n 2362 \r\n 2363 r = get_session().get(path, headers=headers, timeout=timeout, params=params)\r\n-> 2364 hf_raise_for_status(r)\r\n 2365 data = r.json()\r\n 2366 return DatasetInfo(**data)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\r\n 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 370 # as well (request id and/or server error message)\r\n--> 371 raise HfHubHTTPError(str(e), response=response) from e\r\n 372 \r\n 373 \r\n\r\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/bobox/xSum-processed (Request ID: Root=1-66a527f0-756cfbc35cc466f075382289;7d5dc06a-37e9-4c22-874d-92b0b1023276)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!\r\n``` ",
"we're working on a fix !",
"We fixed the issue, you can load datasets again, sorry for the inconvenience !",
"I can confirm, it's working now. I can load the dataset, yay. Thank you @lhoestq ",
"@lhoestq thank you so much! "
] | "2024-07-27T08:21:03" | "2024-07-27T20:06:44" | "2024-07-27T19:52:30" | NONE | null | ### Describe the bug
newly uploaded datasets, since yesterday, yields an error.
old datasets, works fine.
Seems like the datasets api server returns a 500
I'm getting the same error, when I invoke `load_dataset` with my dataset.
Long discussion about it here, but I'm not sure anyone from huggingface have seen it.
https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1
### Steps to reproduce the bug
this api url:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
respond with:
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Expected behavior
return no error with newer datasets.
With older datasets I can load the datasets fine.
### Environment info
# Browser
When I access the api in the browser:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Request headers
```
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br, zstd
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host huggingface.co
Priority u=1
Sec-Fetch-Dest document
Sec-Fetch-Mode navigate
Sec-Fetch-Site cross-site
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```
### Response headers
```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7078/comments | https://api.github.com/repos/huggingface/datasets/issues/7078/events | https://github.com/huggingface/datasets/pull/7078 | 2,433,270,271 | PR_kwDODunzps52oq4n | 7,078 | Fix CI test_convert_to_parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005262 / 0.011353 (-0.006090) | 0.003733 / 0.011008 (-0.007275) | 0.062619 / 0.038508 (0.024111) | 0.029491 / 0.023109 (0.006382) | 0.248947 / 0.275898 (-0.026951) | 0.278741 / 0.323480 (-0.044739) | 0.003173 / 0.007986 (-0.004813) | 0.002777 / 0.004328 (-0.001551) | 0.049344 / 0.004250 (0.045094) | 0.043103 / 0.037052 (0.006051) | 0.252402 / 0.258489 (-0.006087) | 0.288030 / 0.293841 (-0.005811) | 0.029425 / 0.128546 (-0.099121) | 0.012058 / 0.075646 (-0.063589) | 0.204509 / 0.419271 (-0.214762) | 0.035721 / 0.043533 (-0.007812) | 0.249121 / 0.255139 (-0.006018) | 0.272171 / 0.283200 (-0.011029) | 0.019515 / 0.141683 (-0.122168) | 1.130088 / 1.452155 (-0.322067) | 1.148856 / 1.492716 (-0.343860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093613 / 0.018006 (0.075607) | 0.300830 / 0.000490 (0.300340) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.061515 / 0.014526 (0.046989) | 0.074370 / 0.176557 (-0.102186) | 0.120751 / 0.737135 (-0.616384) | 0.074971 / 0.296338 (-0.221367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280499 / 0.215209 (0.065290) | 2.763114 / 2.077655 (0.685459) | 1.458696 / 1.504120 (-0.045424) | 1.331214 / 1.541195 (-0.209981) | 1.343157 / 1.468490 (-0.125333) | 0.732775 / 4.584777 (-3.852002) | 2.381485 / 3.745712 (-1.364227) | 2.930117 / 5.269862 (-2.339745) | 1.887617 / 4.565676 (-2.678059) | 0.080543 / 0.424275 (-0.343732) | 0.005136 / 0.007607 (-0.002471) | 0.336924 / 0.226044 (0.110879) | 3.343071 / 2.268929 (1.074142) | 1.823677 / 55.444624 (-53.620948) | 1.572300 / 6.876477 (-5.304176) | 1.564040 / 2.142072 (-0.578032) | 0.802369 / 4.805227 (-4.002858) | 0.135198 / 6.500664 (-6.365466) | 0.041499 / 0.075469 (-0.033970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961202 / 1.841788 (-0.880585) | 11.275695 / 8.074308 (3.201387) | 9.508052 / 10.191392 (-0.683340) | 0.136921 / 0.680424 (-0.543503) | 0.014055 / 0.534201 (-0.520146) | 0.300076 / 0.579283 (-0.279208) | 0.263403 / 0.434364 (-0.170961) | 0.340871 / 0.540337 (-0.199466) | 0.433452 / 1.386936 (-0.953484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005683 / 0.011353 (-0.005670) | 0.003596 / 0.011008 (-0.007412) | 0.049913 / 0.038508 (0.011405) | 0.033275 / 0.023109 (0.010166) | 0.266011 / 0.275898 (-0.009887) | 0.295182 / 0.323480 (-0.028298) | 0.004336 / 0.007986 (-0.003649) | 0.002787 / 0.004328 (-0.001541) | 0.049035 / 0.004250 (0.044784) | 0.039833 / 0.037052 (0.002781) | 0.283520 / 0.258489 (0.025031) | 0.317437 / 0.293841 (0.023596) | 0.032578 / 0.128546 (-0.095968) | 0.011744 / 0.075646 (-0.063902) | 0.060174 / 0.419271 (-0.359097) | 0.034182 / 0.043533 (-0.009351) | 0.271821 / 0.255139 (0.016682) | 0.292189 / 0.283200 (0.008989) | 0.017045 / 0.141683 (-0.124638) | 1.127742 / 1.452155 (-0.324413) | 1.180621 / 1.492716 (-0.312095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093798 / 0.018006 (0.075792) | 0.310715 / 0.000490 (0.310226) | 0.000213 / 0.000200 (0.000013) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.076823 / 0.014526 (0.062298) | 0.088086 / 0.176557 (-0.088471) | 0.128926 / 0.737135 (-0.608210) | 0.089187 / 0.296338 (-0.207151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293982 / 0.215209 (0.078773) | 2.930932 / 2.077655 (0.853277) | 1.576425 / 1.504120 (0.072305) | 1.445163 / 1.541195 (-0.096031) | 1.462118 / 1.468490 (-0.006372) | 0.725816 / 4.584777 (-3.858961) | 0.949767 / 3.745712 (-2.795945) | 2.832821 / 5.269862 (-2.437041) | 1.897064 / 4.565676 (-2.668612) | 0.079853 / 0.424275 (-0.344423) | 0.005352 / 0.007607 (-0.002255) | 0.344551 / 0.226044 (0.118507) | 3.442506 / 2.268929 (1.173578) | 1.938700 / 55.444624 (-53.505925) | 1.662205 / 6.876477 (-5.214272) | 1.769061 / 2.142072 (-0.373011) | 0.818089 / 4.805227 (-3.987139) | 0.134612 / 6.500664 (-6.366052) | 0.040419 / 0.075469 (-0.035050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032267 / 1.841788 (-0.809521) | 11.902598 / 8.074308 (3.828290) | 10.342229 / 10.191392 (0.150837) | 0.140509 / 0.680424 (-0.539915) | 0.015593 / 0.534201 (-0.518608) | 0.303326 / 0.579283 (-0.275957) | 0.127391 / 0.434364 (-0.306973) | 0.342095 / 0.540337 (-0.198243) | 0.438978 / 1.386936 (-0.947958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30000fb6ca53126917ee17e1b1987f94f07a1569 \"CML watermark\")\n"
] | "2024-07-27T05:32:40" | "2024-07-27T05:50:57" | "2024-07-27T05:44:32" | MEMBER | null | Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7078/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7078",
"merged_at": "2024-07-27T05:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7078"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | https://api.github.com/repos/huggingface/datasets/issues/7077/events | https://github.com/huggingface/datasets/issues/7077 | 2,432,345,489 | I_kwDODunzps6Q-qWR | 7,077 | column_names ignored by load_dataset() when loading CSV file | {
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luismsgomes",
"id": 9130265,
"login": "luismsgomes",
"node_id": "MDQ6VXNlcjkxMzAyNjU=",
"organizations_url": "https://api.github.com/users/luismsgomes/orgs",
"received_events_url": "https://api.github.com/users/luismsgomes/received_events",
"repos_url": "https://api.github.com/users/luismsgomes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luismsgomes"
} | [] | open | false | null | [] | null | [
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug if you pass `names` instead of `column_names`."
] | "2024-07-26T14:18:04" | "2024-07-30T07:52:26" | null | NONE | null | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7076/comments | https://api.github.com/repos/huggingface/datasets/issues/7076/events | https://github.com/huggingface/datasets/pull/7076 | 2,432,275,393 | PR_kwDODunzps52lTDe | 7,076 | ๐งช Do not mock create_commit | {
"avatar_url": "https://avatars.githubusercontent.com/u/342922?v=4",
"events_url": "https://api.github.com/users/coyotte508/events{/privacy}",
"followers_url": "https://api.github.com/users/coyotte508/followers",
"following_url": "https://api.github.com/users/coyotte508/following{/other_user}",
"gists_url": "https://api.github.com/users/coyotte508/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/coyotte508",
"id": 342922,
"login": "coyotte508",
"node_id": "MDQ6VXNlcjM0MjkyMg==",
"organizations_url": "https://api.github.com/users/coyotte508/orgs",
"received_events_url": "https://api.github.com/users/coyotte508/received_events",
"repos_url": "https://api.github.com/users/coyotte508/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/coyotte508/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coyotte508/subscriptions",
"type": "User",
"url": "https://api.github.com/users/coyotte508"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-07-26T13:44:42" | "2024-07-27T05:48:17" | "2024-07-27T05:48:17" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7076/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7076",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7076"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7075/comments | https://api.github.com/repos/huggingface/datasets/issues/7075/events | https://github.com/huggingface/datasets/pull/7075 | 2,432,027,412 | PR_kwDODunzps52kciD | 7,075 | Update required soxr version from pre-release to release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005717 / 0.011353 (-0.005636) | 0.004102 / 0.011008 (-0.006906) | 0.064343 / 0.038508 (0.025835) | 0.031510 / 0.023109 (0.008400) | 0.254534 / 0.275898 (-0.021364) | 0.275080 / 0.323480 (-0.048400) | 0.004243 / 0.007986 (-0.003742) | 0.002782 / 0.004328 (-0.001546) | 0.049554 / 0.004250 (0.045303) | 0.045291 / 0.037052 (0.008239) | 0.264118 / 0.258489 (0.005629) | 0.296476 / 0.293841 (0.002635) | 0.030298 / 0.128546 (-0.098248) | 0.012646 / 0.075646 (-0.063000) | 0.208403 / 0.419271 (-0.210869) | 0.036365 / 0.043533 (-0.007168) | 0.250294 / 0.255139 (-0.004845) | 0.276057 / 0.283200 (-0.007143) | 0.018687 / 0.141683 (-0.122996) | 1.128970 / 1.452155 (-0.323184) | 1.170923 / 1.492716 (-0.321793) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134953 / 0.018006 (0.116947) | 0.301722 / 0.000490 (0.301232) | 0.000242 / 0.000200 (0.000042) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019650 / 0.037411 (-0.017761) | 0.063404 / 0.014526 (0.048878) | 0.074883 / 0.176557 (-0.101674) | 0.122846 / 0.737135 (-0.614289) | 0.077410 / 0.296338 (-0.218928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287710 / 0.215209 (0.072501) | 2.813834 / 2.077655 (0.736179) | 1.454710 / 1.504120 (-0.049410) | 1.327303 / 1.541195 (-0.213891) | 1.375064 / 1.468490 (-0.093426) | 0.746831 / 4.584777 (-3.837946) | 2.361008 / 3.745712 (-1.384705) | 3.080869 / 5.269862 (-2.188993) | 1.969927 / 4.565676 (-2.595749) | 0.081045 / 0.424275 (-0.343230) | 0.005168 / 0.007607 (-0.002440) | 0.342657 / 0.226044 (0.116613) | 3.404883 / 2.268929 (1.135955) | 1.840761 / 55.444624 (-53.603863) | 1.535400 / 6.876477 (-5.341076) | 1.584613 / 2.142072 (-0.557460) | 0.828003 / 4.805227 (-3.977224) | 0.135564 / 6.500664 (-6.365100) | 0.042717 / 0.075469 (-0.032752) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985301 / 1.841788 (-0.856487) | 11.945913 / 8.074308 (3.871605) | 9.887577 / 10.191392 (-0.303815) | 0.141261 / 0.680424 (-0.539163) | 0.014961 / 0.534201 (-0.519240) | 0.304134 / 0.579283 (-0.275150) | 0.264733 / 0.434364 (-0.169631) | 0.349993 / 0.540337 (-0.190345) | 0.440390 / 1.386936 (-0.946546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006145 / 0.011353 (-0.005207) | 0.004259 / 0.011008 (-0.006749) | 0.051245 / 0.038508 (0.012737) | 0.034873 / 0.023109 (0.011764) | 0.274149 / 0.275898 (-0.001749) | 0.299761 / 0.323480 (-0.023719) | 0.004457 / 0.007986 (-0.003529) | 0.002938 / 0.004328 (-0.001390) | 0.049547 / 0.004250 (0.045297) | 0.042441 / 0.037052 (0.005389) | 0.284961 / 0.258489 (0.026472) | 0.322197 / 0.293841 (0.028356) | 0.033850 / 0.128546 (-0.094696) | 0.012615 / 0.075646 (-0.063031) | 0.061967 / 0.419271 (-0.357304) | 0.035229 / 0.043533 (-0.008304) | 0.273941 / 0.255139 (0.018802) | 0.293395 / 0.283200 (0.010195) | 0.020566 / 0.141683 (-0.121117) | 1.173423 / 1.452155 (-0.278732) | 1.219948 / 1.492716 (-0.272768) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096131 / 0.018006 (0.078125) | 0.305548 / 0.000490 (0.305059) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023847 / 0.037411 (-0.013564) | 0.079536 / 0.014526 (0.065010) | 0.088889 / 0.176557 (-0.087667) | 0.129181 / 0.737135 (-0.607954) | 0.090879 / 0.296338 (-0.205460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299315 / 0.215209 (0.084106) | 2.952656 / 2.077655 (0.875001) | 1.587354 / 1.504120 (0.083234) | 1.453420 / 1.541195 (-0.087774) | 1.501784 / 1.468490 (0.033294) | 0.711481 / 4.584777 (-3.873296) | 0.971790 / 3.745712 (-2.773922) | 2.897636 / 5.269862 (-2.372226) | 1.947086 / 4.565676 (-2.618591) | 0.079700 / 0.424275 (-0.344575) | 0.005395 / 0.007607 (-0.002212) | 0.351340 / 0.226044 (0.125296) | 3.416472 / 2.268929 (1.147543) | 2.007559 / 55.444624 (-53.437066) | 1.660401 / 6.876477 (-5.216076) | 1.837049 / 2.142072 (-0.305024) | 0.817306 / 4.805227 (-3.987921) | 0.135176 / 6.500664 (-6.365488) | 0.041477 / 0.075469 (-0.033992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030033 / 1.841788 (-0.811755) | 12.528661 / 8.074308 (4.454353) | 10.603212 / 10.191392 (0.411820) | 0.142434 / 0.680424 (-0.537989) | 0.015603 / 0.534201 (-0.518598) | 0.304516 / 0.579283 (-0.274767) | 0.125324 / 0.434364 (-0.309040) | 0.343092 / 0.540337 (-0.197245) | 0.443359 / 1.386936 (-0.943577) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45c5b3daacd7af212cae4c848a56e14d3cac291f \"CML watermark\")\n"
] | "2024-07-26T11:24:35" | "2024-07-26T11:46:52" | "2024-07-26T11:40:49" | MEMBER | null | Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7075/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7075",
"merged_at": "2024-07-26T11:40:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7075"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7074/comments | https://api.github.com/repos/huggingface/datasets/issues/7074/events | https://github.com/huggingface/datasets/pull/7074 | 2,431,772,703 | PR_kwDODunzps52jkw4 | 7,074 | Fix CI by temporarily marking test_convert_to_parquet as expected to fail | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7074). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005168 / 0.011353 (-0.006185) | 0.003572 / 0.011008 (-0.007436) | 0.062755 / 0.038508 (0.024247) | 0.030371 / 0.023109 (0.007262) | 0.250240 / 0.275898 (-0.025658) | 0.268091 / 0.323480 (-0.055389) | 0.003260 / 0.007986 (-0.004726) | 0.002706 / 0.004328 (-0.001622) | 0.048957 / 0.004250 (0.044706) | 0.044441 / 0.037052 (0.007389) | 0.251801 / 0.258489 (-0.006688) | 0.289401 / 0.293841 (-0.004440) | 0.028991 / 0.128546 (-0.099555) | 0.011871 / 0.075646 (-0.063775) | 0.203722 / 0.419271 (-0.215549) | 0.035911 / 0.043533 (-0.007622) | 0.248070 / 0.255139 (-0.007069) | 0.266480 / 0.283200 (-0.016720) | 0.019831 / 0.141683 (-0.121852) | 1.143429 / 1.452155 (-0.308726) | 1.160102 / 1.492716 (-0.332614) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096740 / 0.018006 (0.078734) | 0.302473 / 0.000490 (0.301983) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018367 / 0.037411 (-0.019045) | 0.062346 / 0.014526 (0.047820) | 0.074416 / 0.176557 (-0.102140) | 0.120507 / 0.737135 (-0.616628) | 0.076536 / 0.296338 (-0.219802) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284093 / 0.215209 (0.068884) | 2.738805 / 2.077655 (0.661150) | 1.469263 / 1.504120 (-0.034856) | 1.349122 / 1.541195 (-0.192073) | 1.355578 / 1.468490 (-0.112912) | 0.720364 / 4.584777 (-3.864413) | 2.360339 / 3.745712 (-1.385373) | 2.941134 / 5.269862 (-2.328728) | 1.888692 / 4.565676 (-2.676984) | 0.077111 / 0.424275 (-0.347164) | 0.005070 / 0.007607 (-0.002537) | 0.334122 / 0.226044 (0.108078) | 3.298378 / 2.268929 (1.029450) | 1.868514 / 55.444624 (-53.576111) | 1.528561 / 6.876477 (-5.347916) | 1.535319 / 2.142072 (-0.606754) | 0.778591 / 4.805227 (-4.026636) | 0.131364 / 6.500664 (-6.369300) | 0.041697 / 0.075469 (-0.033773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970243 / 1.841788 (-0.871544) | 11.324752 / 8.074308 (3.250443) | 9.612381 / 10.191392 (-0.579011) | 0.138842 / 0.680424 (-0.541582) | 0.014479 / 0.534201 (-0.519722) | 0.309415 / 0.579283 (-0.269868) | 0.264654 / 0.434364 (-0.169710) | 0.343695 / 0.540337 (-0.196642) | 0.435323 / 1.386936 (-0.951613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005680 / 0.011353 (-0.005673) | 0.003614 / 0.011008 (-0.007394) | 0.060575 / 0.038508 (0.022067) | 0.031103 / 0.023109 (0.007994) | 0.269083 / 0.275898 (-0.006815) | 0.291556 / 0.323480 (-0.031923) | 0.004354 / 0.007986 (-0.003632) | 0.002739 / 0.004328 (-0.001589) | 0.049056 / 0.004250 (0.044806) | 0.039759 / 0.037052 (0.002707) | 0.280608 / 0.258489 (0.022119) | 0.324798 / 0.293841 (0.030957) | 0.032030 / 0.128546 (-0.096516) | 0.011862 / 0.075646 (-0.063784) | 0.060011 / 0.419271 (-0.359261) | 0.033960 / 0.043533 (-0.009573) | 0.271114 / 0.255139 (0.015975) | 0.293922 / 0.283200 (0.010722) | 0.019497 / 0.141683 (-0.122185) | 1.137871 / 1.452155 (-0.314284) | 1.180656 / 1.492716 (-0.312061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094201 / 0.018006 (0.076194) | 0.306657 / 0.000490 (0.306167) | 0.000215 / 0.000200 (0.000015) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022562 / 0.037411 (-0.014850) | 0.077170 / 0.014526 (0.062644) | 0.088915 / 0.176557 (-0.087642) | 0.129455 / 0.737135 (-0.607680) | 0.091571 / 0.296338 (-0.204767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300753 / 0.215209 (0.085544) | 2.941929 / 2.077655 (0.864274) | 1.613451 / 1.504120 (0.109331) | 1.498365 / 1.541195 (-0.042830) | 1.517124 / 1.468490 (0.048634) | 0.709209 / 4.584777 (-3.875568) | 0.950478 / 3.745712 (-2.795235) | 2.799328 / 5.269862 (-2.470533) | 1.872895 / 4.565676 (-2.692782) | 0.078233 / 0.424275 (-0.346042) | 0.005613 / 0.007607 (-0.001994) | 0.349590 / 0.226044 (0.123545) | 3.500213 / 2.268929 (1.231284) | 2.001155 / 55.444624 (-53.443469) | 1.704845 / 6.876477 (-5.171632) | 1.810722 / 2.142072 (-0.331350) | 0.795326 / 4.805227 (-4.009901) | 0.132913 / 6.500664 (-6.367751) | 0.041209 / 0.075469 (-0.034260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029513 / 1.841788 (-0.812274) | 12.005617 / 8.074308 (3.931309) | 10.119379 / 10.191392 (-0.072013) | 0.139767 / 0.680424 (-0.540657) | 0.015241 / 0.534201 (-0.518960) | 0.301164 / 0.579283 (-0.278119) | 0.121563 / 0.434364 (-0.312801) | 0.336672 / 0.540337 (-0.203666) | 0.431526 / 1.386936 (-0.955410) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#92bdab56a3c7d5ded10e8ae4134c943e32d3bc86 \"CML watermark\")\n"
] | "2024-07-26T09:03:33" | "2024-07-26T09:23:33" | "2024-07-26T09:16:12" | MEMBER | null | As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail.
Fix #7073.
Revert once root cause is fixed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7074/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7074",
"merged_at": "2024-07-26T09:16:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7074"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7073/comments | https://api.github.com/repos/huggingface/datasets/issues/7073/events | https://github.com/huggingface/datasets/issues/7073 | 2,431,706,568 | I_kwDODunzps6Q8OXI | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.\r\nInvalid rev id: refs/pr/1\r\n```\r\n@Wauplin @huggingface/datasets @huggingface/moon-landing @huggingface/moon-landing-back ",
"I have temporarily fixed the CI with:\r\n- #7074\r\n\r\nHowever, the underlying issue must be fixed and #7074 must be reverted.",
"Hmm does it do the preupload call before creating the ref cc @Wauplin ?\r\n\r\n(in that case it should do a preupload call on the base branch with `?create_pr=1`)",
"@coyotte508, the CI test was implemented 2 months ago and it was working OK until yesterday. See the CI status of the commits in the main branch of `datasets`: https://github.com/huggingface/datasets/commits/main/",
"Yes i get that\r\n\r\nWe changed the preupload response to return the commit id in https://github.com/huggingface-internal/moon-landing/pull/10756\r\n\r\nThis line is probably causing the error: https://github.com/huggingface-internal/moon-landing/pull/10756/files#diff-558f6f9865e30bfa091b94d6a4a900138103ddb4eb0bec96b6deec5bf5626fa0R2322\r\n\r\nIt's weird the error is returned, it means that maybe a ref with 0 history (not even the first commit) was created\r\n\r\nDoes this change have any impact in production, or just the CI test? If it's just the CI test it should be fixed on your side, if it impacts production we can look at a solution",
"@coyotte508 it impacts production: `convert_to_parquet` raises the above error when the dataset has more that one configs/subsets:\r\n- First subset calls `push_to_hub` with `create_pr=True`\r\n- Second subset uses the `refs/pr/#` returned by the call above, and calls `push_to_hub` with `revision=\"refs/pr/#\"`",
"I tried removing the `mock_commit` call: https://github.com/huggingface/datasets/pull/7076\r\n\r\nAnd the tests seem to work.\r\n\r\nSo it's probably because the commit is not actually called, it doesn't actually create the pull request on the remote (and the associated `refs/pr/1`). But the `preupload` call is not mocked.\r\n\r\nAnyway it shouldn't impact production, since production isn't mocked",
"@coyotte508 thanks a lot for the investigation and sorry for the noise. \r\nI promise not trying to fix things when I have a slight fever: my head does not work well.\r\n\r\nWe need indeed to mock `preupload_lfs_files`: before it was not necessary, but now it is.",
"I fixed the test in:\r\n- #7078\r\n\r\nThanks again, @coyotte508."
] | "2024-07-26T08:27:41" | "2024-07-27T05:48:02" | "2024-07-26T09:16:13" | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7073/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7072/comments | https://api.github.com/repos/huggingface/datasets/issues/7072/events | https://github.com/huggingface/datasets/issues/7072 | 2,430,577,916 | I_kwDODunzps6Q36z8 | 7,072 | nm | {
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brettdavies",
"id": 26392883,
"login": "brettdavies",
"node_id": "MDQ6VXNlcjI2MzkyODgz",
"organizations_url": "https://api.github.com/users/brettdavies/orgs",
"received_events_url": "https://api.github.com/users/brettdavies/received_events",
"repos_url": "https://api.github.com/users/brettdavies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brettdavies"
} | [] | closed | false | null | [] | null | [] | "2024-07-25T17:03:24" | "2024-07-25T20:36:11" | "2024-07-25T20:36:11" | NONE | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7072/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7071/comments | https://api.github.com/repos/huggingface/datasets/issues/7071/events | https://github.com/huggingface/datasets/issues/7071 | 2,430,313,011 | I_kwDODunzps6Q26Iz | 7,071 | Filter hangs | {
"avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4",
"events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}",
"followers_url": "https://api.github.com/users/lucienwalewski/followers",
"following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucienwalewski",
"id": 61711045,
"login": "lucienwalewski",
"node_id": "MDQ6VXNlcjYxNzExMDQ1",
"organizations_url": "https://api.github.com/users/lucienwalewski/orgs",
"received_events_url": "https://api.github.com/users/lucienwalewski/received_events",
"repos_url": "https://api.github.com/users/lucienwalewski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucienwalewski"
} | [] | open | false | null | [] | null | [] | "2024-07-25T15:29:05" | "2024-07-25T15:36:59" | null | NONE | null | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I ctrl+C and I obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning! This can even seem to cause some computers to crash.
### Expected behavior
Should return the filtered dataset
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7071/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7070/comments | https://api.github.com/repos/huggingface/datasets/issues/7070/events | https://github.com/huggingface/datasets/issues/7070 | 2,430,285,235 | I_kwDODunzps6Q2zWz | 7,070 | how set_transform affects batch size? | {
"avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4",
"events_url": "https://api.github.com/users/VafaKnm/events{/privacy}",
"followers_url": "https://api.github.com/users/VafaKnm/followers",
"following_url": "https://api.github.com/users/VafaKnm/following{/other_user}",
"gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VafaKnm",
"id": 103993288,
"login": "VafaKnm",
"node_id": "U_kgDOBjLPyA",
"organizations_url": "https://api.github.com/users/VafaKnm/orgs",
"received_events_url": "https://api.github.com/users/VafaKnm/received_events",
"repos_url": "https://api.github.com/users/VafaKnm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VafaKnm"
} | [] | open | false | null | [] | null | [] | "2024-07-25T15:19:34" | "2024-07-25T15:19:34" | null | NONE | null | ### Describe the bug
I am trying to fine-tune w2v-bert for ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So i change the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
input_length = len(input_features)
labels = processor.tokenizer(batch["text"], padding=False).input_ids
batch = {
"input_features": [input_features],
"input_length": [input_length],
"labels": [labels]
}
return batch
train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
processor: Wav2Vec2BertProcessor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# Separate input_features and labels
input_features = [{"input_features": feature["input_features"][0]} for feature in features]
labels = [feature["labels"][0] for feature in features]
# Pad input features
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
# Pad and process labels
label_features = self.processor.tokenizer.pad(
{"input_ids": labels},
padding=self.padding,
return_tensors="pt",
)
labels = label_features["input_ids"]
attention_mask = label_features["attention_mask"]
# Replace padding with -100 to ignore these tokens during loss calculation
labels = labels.masked_fill(attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
```
But now a strange thing is happening, no matter how much I increase the batch size, the amount of V-RAM GPU usage does not change, while the number of total steps in the progress-bar (logging) changes. Is this normal or have I made a mistake?
### Steps to reproduce the bug
i can share my code if needed
### Expected behavior
Equal to the batch size value, the set_transform function is applied to the dataset and given to the model as a batch.
### Environment info
all updated versions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7070/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7069/comments | https://api.github.com/repos/huggingface/datasets/issues/7069/events | https://github.com/huggingface/datasets/pull/7069 | 2,429,281,339 | PR_kwDODunzps52betB | 7,069 | Fix push_to_hub by not calling create_branch if PR branch | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"cc @Wauplin maybe it's a `huggingface_hub` bug ?\r\n\r\nEDIT: ah actually the issue is opened at https://github.com/huggingface/huggingface_hub/issues/2419",
"I think we need to make this fix anyway, ~~unless we pin the lower version of huggingface-hub (once they release the patch)~~.\r\n- Calling create_branch with a PR ref raises an error",
"Comment by @Wauplin: https://github.com/huggingface/huggingface_hub/pull/2426#issuecomment-2257657543\r\n> I think this should be something to fix in datasets directly. Having a 400 Bad request when trying to create the branch refs/pr/1 seems normal to me since it's not a branch.",
"does this mean we should use `create_pull_request()` in that case ?",
"> does this mean we should use create_pull_request() in that case ?\r\n\r\nIf user wants to push some data to a new PR, they can already pass `create_pr=True` which will automatically do the job for you (without using `revision`). If user is passing `revision=\"refs/pr/1\"` explicitly, you should assume the PR already exists.",
"ah yes we do pass create_pr in `preupload_lfs_files()` ! sounds good then",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005806 / 0.011353 (-0.005547) | 0.004082 / 0.011008 (-0.006927) | 0.064277 / 0.038508 (0.025769) | 0.032289 / 0.023109 (0.009180) | 0.242066 / 0.275898 (-0.033832) | 0.272574 / 0.323480 (-0.050906) | 0.003281 / 0.007986 (-0.004705) | 0.002957 / 0.004328 (-0.001371) | 0.049930 / 0.004250 (0.045679) | 0.047306 / 0.037052 (0.010253) | 0.252216 / 0.258489 (-0.006273) | 0.286678 / 0.293841 (-0.007163) | 0.030182 / 0.128546 (-0.098364) | 0.012967 / 0.075646 (-0.062680) | 0.204949 / 0.419271 (-0.214323) | 0.036999 / 0.043533 (-0.006534) | 0.243577 / 0.255139 (-0.011562) | 0.265044 / 0.283200 (-0.018156) | 0.021149 / 0.141683 (-0.120534) | 1.112293 / 1.452155 (-0.339862) | 1.186483 / 1.492716 (-0.306233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093239 / 0.018006 (0.075233) | 0.286372 / 0.000490 (0.285883) | 0.000224 / 0.000200 (0.000024) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019042 / 0.037411 (-0.018369) | 0.063690 / 0.014526 (0.049164) | 0.075034 / 0.176557 (-0.101523) | 0.123053 / 0.737135 (-0.614083) | 0.076843 / 0.296338 (-0.219495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276554 / 0.215209 (0.061345) | 2.749338 / 2.077655 (0.671683) | 1.442764 / 1.504120 (-0.061356) | 1.327860 / 1.541195 (-0.213335) | 1.369885 / 1.468490 (-0.098606) | 0.722645 / 4.584777 (-3.862132) | 2.430707 / 3.745712 (-1.315005) | 3.105293 / 5.269862 (-2.164568) | 1.961617 / 4.565676 (-2.604060) | 0.077728 / 0.424275 (-0.346547) | 0.005189 / 0.007607 (-0.002418) | 0.335511 / 0.226044 (0.109467) | 3.315618 / 2.268929 (1.046690) | 1.858254 / 55.444624 (-53.586371) | 1.552173 / 6.876477 (-5.324304) | 1.627086 / 2.142072 (-0.514987) | 0.790871 / 4.805227 (-4.014356) | 0.136958 / 6.500664 (-6.363706) | 0.043207 / 0.075469 (-0.032262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969314 / 1.841788 (-0.872473) | 12.145318 / 8.074308 (4.071010) | 9.834839 / 10.191392 (-0.356553) | 0.141896 / 0.680424 (-0.538528) | 0.014304 / 0.534201 (-0.519897) | 0.306091 / 0.579283 (-0.273192) | 0.260994 / 0.434364 (-0.173369) | 0.348096 / 0.540337 (-0.192242) | 0.441458 / 1.386936 (-0.945478) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005989 / 0.011353 (-0.005363) | 0.003907 / 0.011008 (-0.007102) | 0.050819 / 0.038508 (0.012310) | 0.033178 / 0.023109 (0.010069) | 0.279059 / 0.275898 (0.003161) | 0.300161 / 0.323480 (-0.023319) | 0.004383 / 0.007986 (-0.003603) | 0.002834 / 0.004328 (-0.001495) | 0.048779 / 0.004250 (0.044528) | 0.040502 / 0.037052 (0.003450) | 0.291786 / 0.258489 (0.033297) | 0.323827 / 0.293841 (0.029986) | 0.032175 / 0.128546 (-0.096371) | 0.012157 / 0.075646 (-0.063489) | 0.060796 / 0.419271 (-0.358476) | 0.033924 / 0.043533 (-0.009609) | 0.278047 / 0.255139 (0.022908) | 0.297878 / 0.283200 (0.014678) | 0.019137 / 0.141683 (-0.122546) | 1.138996 / 1.452155 (-0.313158) | 1.172731 / 1.492716 (-0.319985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.110148 / 0.018006 (0.092142) | 0.307232 / 0.000490 (0.306742) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023082 / 0.037411 (-0.014330) | 0.076590 / 0.014526 (0.062065) | 0.088444 / 0.176557 (-0.088113) | 0.129293 / 0.737135 (-0.607842) | 0.090470 / 0.296338 (-0.205868) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305016 / 0.215209 (0.089807) | 2.931671 / 2.077655 (0.854016) | 1.586055 / 1.504120 (0.081935) | 1.463517 / 1.541195 (-0.077678) | 1.479654 / 1.468490 (0.011164) | 0.726194 / 4.584777 (-3.858583) | 0.970512 / 3.745712 (-2.775200) | 2.850496 / 5.269862 (-2.419365) | 1.920112 / 4.565676 (-2.645564) | 0.079921 / 0.424275 (-0.344354) | 0.005367 / 0.007607 (-0.002240) | 0.347022 / 0.226044 (0.120978) | 3.472425 / 2.268929 (1.203497) | 1.965400 / 55.444624 (-53.479225) | 1.669116 / 6.876477 (-5.207361) | 1.859504 / 2.142072 (-0.282568) | 0.802703 / 4.805227 (-4.002525) | 0.134776 / 6.500664 (-6.365888) | 0.041800 / 0.075469 (-0.033669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.039665 / 1.841788 (-0.802122) | 12.024071 / 8.074308 (3.949763) | 10.338743 / 10.191392 (0.147351) | 0.139495 / 0.680424 (-0.540929) | 0.015249 / 0.534201 (-0.518952) | 0.298580 / 0.579283 (-0.280703) | 0.124625 / 0.434364 (-0.309739) | 0.341868 / 0.540337 (-0.198470) | 0.431396 / 1.386936 (-0.955540) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#65b9499348fa4c6e5bfa977ee9b5e8574bf64eea \"CML watermark\")\n"
] | "2024-07-25T07:50:04" | "2024-07-31T07:10:07" | "2024-07-30T10:51:01" | MEMBER | null | Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`).
Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`).
EDIT:
~~Fix push_to_hub by not calling create_branch if branch exists.~~
Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met:
- exist_ok is passed
- the branch already exists
- the user does not have WRITE permission
Fix #7067.
Related issue:
- https://github.com/huggingface/huggingface_hub/issues/2419 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7069/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7069",
"merged_at": "2024-07-30T10:51:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7069"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7068/comments | https://api.github.com/repos/huggingface/datasets/issues/7068/events | https://github.com/huggingface/datasets/pull/7068 | 2,426,657,434 | PR_kwDODunzps52SwXS | 7,068 | Fix prepare_single_hop_path_and_storage_options | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005725 / 0.011353 (-0.005628) | 0.004149 / 0.011008 (-0.006859) | 0.065051 / 0.038508 (0.026543) | 0.030220 / 0.023109 (0.007111) | 0.256768 / 0.275898 (-0.019130) | 0.269767 / 0.323480 (-0.053713) | 0.003256 / 0.007986 (-0.004730) | 0.003378 / 0.004328 (-0.000951) | 0.049407 / 0.004250 (0.045156) | 0.046041 / 0.037052 (0.008988) | 0.270977 / 0.258489 (0.012488) | 0.288771 / 0.293841 (-0.005070) | 0.030401 / 0.128546 (-0.098145) | 0.012203 / 0.075646 (-0.063443) | 0.227365 / 0.419271 (-0.191906) | 0.036356 / 0.043533 (-0.007176) | 0.262763 / 0.255139 (0.007624) | 0.268172 / 0.283200 (-0.015028) | 0.020698 / 0.141683 (-0.120984) | 1.171679 / 1.452155 (-0.280476) | 1.155353 / 1.492716 (-0.337363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.138740 / 0.018006 (0.120733) | 0.300962 / 0.000490 (0.300473) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019056 / 0.037411 (-0.018355) | 0.062922 / 0.014526 (0.048396) | 0.075339 / 0.176557 (-0.101218) | 0.122587 / 0.737135 (-0.614548) | 0.078622 / 0.296338 (-0.217716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273878 / 0.215209 (0.058669) | 2.753188 / 2.077655 (0.675533) | 1.446877 / 1.504120 (-0.057243) | 1.325034 / 1.541195 (-0.216160) | 1.332849 / 1.468490 (-0.135641) | 0.721042 / 4.584777 (-3.863735) | 2.457241 / 3.745712 (-1.288471) | 3.008013 / 5.269862 (-2.261848) | 1.925773 / 4.565676 (-2.639903) | 0.077725 / 0.424275 (-0.346550) | 0.005232 / 0.007607 (-0.002375) | 0.331398 / 0.226044 (0.105354) | 3.273689 / 2.268929 (1.004761) | 1.818291 / 55.444624 (-53.626334) | 1.532233 / 6.876477 (-5.344244) | 1.545236 / 2.142072 (-0.596837) | 0.809853 / 4.805227 (-3.995374) | 0.137571 / 6.500664 (-6.363093) | 0.042829 / 0.075469 (-0.032640) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962599 / 1.841788 (-0.879189) | 11.593394 / 8.074308 (3.519086) | 9.564848 / 10.191392 (-0.626544) | 0.131547 / 0.680424 (-0.548876) | 0.014724 / 0.534201 (-0.519477) | 0.309343 / 0.579283 (-0.269940) | 0.263476 / 0.434364 (-0.170888) | 0.350755 / 0.540337 (-0.189582) | 0.445279 / 1.386936 (-0.941657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005818 / 0.011353 (-0.005534) | 0.004028 / 0.011008 (-0.006980) | 0.050337 / 0.038508 (0.011829) | 0.033234 / 0.023109 (0.010125) | 0.273498 / 0.275898 (-0.002400) | 0.299130 / 0.323480 (-0.024350) | 0.004391 / 0.007986 (-0.003595) | 0.002854 / 0.004328 (-0.001474) | 0.048616 / 0.004250 (0.044365) | 0.040354 / 0.037052 (0.003302) | 0.287980 / 0.258489 (0.029491) | 0.323940 / 0.293841 (0.030099) | 0.033031 / 0.128546 (-0.095515) | 0.012539 / 0.075646 (-0.063108) | 0.061129 / 0.419271 (-0.358143) | 0.034410 / 0.043533 (-0.009123) | 0.276367 / 0.255139 (0.021228) | 0.295266 / 0.283200 (0.012066) | 0.018558 / 0.141683 (-0.123125) | 1.149051 / 1.452155 (-0.303104) | 1.207995 / 1.492716 (-0.284721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095732 / 0.018006 (0.077726) | 0.305774 / 0.000490 (0.305284) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023680 / 0.037411 (-0.013731) | 0.077147 / 0.014526 (0.062621) | 0.088850 / 0.176557 (-0.087706) | 0.130219 / 0.737135 (-0.606917) | 0.090582 / 0.296338 (-0.205756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306099 / 0.215209 (0.090890) | 2.952515 / 2.077655 (0.874861) | 1.593090 / 1.504120 (0.088970) | 1.471887 / 1.541195 (-0.069308) | 1.484277 / 1.468490 (0.015787) | 0.741158 / 4.584777 (-3.843619) | 0.976520 / 3.745712 (-2.769192) | 2.904631 / 5.269862 (-2.365231) | 1.940287 / 4.565676 (-2.625389) | 0.079828 / 0.424275 (-0.344447) | 0.005482 / 0.007607 (-0.002125) | 0.353376 / 0.226044 (0.127332) | 3.502412 / 2.268929 (1.233483) | 1.976571 / 55.444624 (-53.468053) | 1.675141 / 6.876477 (-5.201336) | 1.821075 / 2.142072 (-0.320998) | 0.814290 / 4.805227 (-3.990937) | 0.135227 / 6.500664 (-6.365437) | 0.041631 / 0.075469 (-0.033838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.041495 / 1.841788 (-0.800293) | 12.275647 / 8.074308 (4.201339) | 10.569540 / 10.191392 (0.378148) | 0.143136 / 0.680424 (-0.537288) | 0.015010 / 0.534201 (-0.519191) | 0.302177 / 0.579283 (-0.277106) | 0.125924 / 0.434364 (-0.308440) | 0.340977 / 0.540337 (-0.199360) | 0.438467 / 1.386936 (-0.948469) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#baea190799dfa22493621fe06584b006b57f16ce \"CML watermark\")\n"
] | "2024-07-24T05:52:34" | "2024-07-29T07:02:07" | "2024-07-29T06:56:15" | MEMBER | null | Fix `_prepare_single_hop_path_and_storage_options`:
- Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs
- Do not overwrite passed `storage_options` nested values:
- Before, when passed
```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```,
it was overwritten to
```{"https": {"client_kwargs": {"trust_env": True}}}```
- Now, the result combines both:
```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7068/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7068",
"merged_at": "2024-07-29T06:56:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7068"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7067/comments | https://api.github.com/repos/huggingface/datasets/issues/7067/events | https://github.com/huggingface/datasets/issues/7067 | 2,425,460,168 | I_kwDODunzps6QkZXI | 7,067 | Convert_to_parquet fails for datasets with multiple configs | {
"avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4",
"events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangZhen02/followers",
"following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangZhen02",
"id": 97585031,
"login": "HuangZhen02",
"node_id": "U_kgDOBdEHhw",
"organizations_url": "https://api.github.com/users/HuangZhen02/orgs",
"received_events_url": "https://api.github.com/users/HuangZhen02/received_events",
"repos_url": "https://api.github.com/users/HuangZhen02/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangZhen02"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n",
"Thanks for reporting.\r\n\r\nI will make the code more robust.",
"I have opened an issue in the huggingface-hub repo:\r\n- https://github.com/huggingface/huggingface_hub/issues/2419\r\n\r\nI am opening a PR to avoid calling `create_branch` if the branch already exists."
] | "2024-07-23T15:09:33" | "2024-07-30T10:51:02" | "2024-07-30T10:51:02" | NONE | null | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
dataset.push_to_hub(
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)
Bad request:
Invalid reference for a branch: refs/pr/1
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7067/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7066/comments | https://api.github.com/repos/huggingface/datasets/issues/7066/events | https://github.com/huggingface/datasets/issues/7066 | 2,425,125,160 | I_kwDODunzps6QjHko | 7,066 | One subset per file in repo ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [] | "2024-07-23T12:43:59" | "2024-07-23T12:43:59" | null | MEMBER | null | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
โโโ train0.jsonl
โโโ train1.jsonl
โโโ train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
โโโ animals.jsonl
โโโ trees.jsonl
โโโ metadata.jsonl
```
It would be nice to detect those subsets automatically using a simple heuristic. For example we can group files together if their paths names are the same except some digits ? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7066/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7065/comments | https://api.github.com/repos/huggingface/datasets/issues/7065/events | https://github.com/huggingface/datasets/issues/7065 | 2,424,734,953 | I_kwDODunzps6QhoTp | 7,065 | Cannot get item after loading from disk and then converting to iterable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4",
"events_url": "https://api.github.com/users/happyTonakai/events{/privacy}",
"followers_url": "https://api.github.com/users/happyTonakai/followers",
"following_url": "https://api.github.com/users/happyTonakai/following{/other_user}",
"gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/happyTonakai",
"id": 21305646,
"login": "happyTonakai",
"node_id": "MDQ6VXNlcjIxMzA1NjQ2",
"organizations_url": "https://api.github.com/users/happyTonakai/orgs",
"received_events_url": "https://api.github.com/users/happyTonakai/received_events",
"repos_url": "https://api.github.com/users/happyTonakai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/happyTonakai"
} | [] | open | false | null | [] | null | [] | "2024-07-23T09:37:56" | "2024-07-23T09:37:56" | null | NONE | null | ### Describe the bug
The dataset generated from local file works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
But after saving it to disk and then loading it from disk, I cannot get data as expected.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ds.save_to_disk("./train")
ds = datasets.load_from_disk("./train")
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
After a long time waiting, an error occurs:
```
Loading dataset from disk: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 165/165 [00:00<00:00, 6422.18it/s]
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module>
for batch in dataloader:
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e
RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly
```
It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable?
### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset
### Expected behavior
Get items faster than the original dataset generated from dict.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7065/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7064/comments | https://api.github.com/repos/huggingface/datasets/issues/7064/events | https://github.com/huggingface/datasets/pull/7064 | 2,424,613,104 | PR_kwDODunzps52Lz2- | 7,064 | Add `batch` method to `Dataset` class | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lappemic",
"id": 61876623,
"login": "lappemic",
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"repos_url": "https://api.github.com/users/lappemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lappemic"
} | [] | closed | false | null | [] | null | [
"Looks good to me ! :)\r\n\r\nyou might want to add the `map` num_proc argument as well, for people who want to make it run faster",
"Thanks for the feedback @lhoestq! The last commits include:\r\n- Adding the `num_proc` parameter to `batch`\r\n- Adding tests similar to the one done for `IterableDataset.batch()`\r\n- Updated the documentation -> I think they are actually misplaced in the `Stream` page. But could not find a better place atm. Where would you put this documentation?\r\n\r\nWDYT?",
"You can put the documentation in process.mdx :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7064). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I reset the head to the commit before I added the `Dataset.batch()` documentation to `stream.mdx` and instead added the documentation to `process.mdx`. ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005736 / 0.011353 (-0.005617) | 0.003959 / 0.011008 (-0.007049) | 0.063259 / 0.038508 (0.024751) | 0.030705 / 0.023109 (0.007596) | 0.245706 / 0.275898 (-0.030192) | 0.278766 / 0.323480 (-0.044714) | 0.003354 / 0.007986 (-0.004632) | 0.004246 / 0.004328 (-0.000082) | 0.049346 / 0.004250 (0.045095) | 0.046439 / 0.037052 (0.009386) | 0.257930 / 0.258489 (-0.000559) | 0.295562 / 0.293841 (0.001722) | 0.030529 / 0.128546 (-0.098017) | 0.012465 / 0.075646 (-0.063182) | 0.205595 / 0.419271 (-0.213677) | 0.036319 / 0.043533 (-0.007214) | 0.243872 / 0.255139 (-0.011267) | 0.275834 / 0.283200 (-0.007366) | 0.020330 / 0.141683 (-0.121353) | 1.108337 / 1.452155 (-0.343817) | 1.150406 / 1.492716 (-0.342310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.113498 / 0.018006 (0.095491) | 0.306654 / 0.000490 (0.306164) | 0.000238 / 0.000200 (0.000038) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019092 / 0.037411 (-0.018319) | 0.063180 / 0.014526 (0.048654) | 0.078244 / 0.176557 (-0.098313) | 0.126106 / 0.737135 (-0.611030) | 0.078651 / 0.296338 (-0.217687) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284132 / 0.215209 (0.068923) | 2.781250 / 2.077655 (0.703595) | 1.471864 / 1.504120 (-0.032256) | 1.354661 / 1.541195 (-0.186534) | 1.362839 / 1.468490 (-0.105651) | 0.719126 / 4.584777 (-3.865651) | 2.396969 / 3.745712 (-1.348743) | 2.987924 / 5.269862 (-2.281938) | 1.910555 / 4.565676 (-2.655121) | 0.078612 / 0.424275 (-0.345663) | 0.005170 / 0.007607 (-0.002437) | 0.333876 / 0.226044 (0.107832) | 3.298340 / 2.268929 (1.029412) | 1.853332 / 55.444624 (-53.591292) | 1.551919 / 6.876477 (-5.324557) | 1.585677 / 2.142072 (-0.556395) | 0.802487 / 4.805227 (-4.002741) | 0.134828 / 6.500664 (-6.365837) | 0.041966 / 0.075469 (-0.033503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992277 / 1.841788 (-0.849511) | 11.626887 / 8.074308 (3.552578) | 9.715623 / 10.191392 (-0.475769) | 0.140306 / 0.680424 (-0.540117) | 0.014528 / 0.534201 (-0.519673) | 0.306247 / 0.579283 (-0.273036) | 0.263067 / 0.434364 (-0.171297) | 0.342325 / 0.540337 (-0.198013) | 0.432299 / 1.386936 (-0.954637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006004 / 0.011353 (-0.005349) | 0.003890 / 0.011008 (-0.007118) | 0.050408 / 0.038508 (0.011900) | 0.031880 / 0.023109 (0.008771) | 0.273114 / 0.275898 (-0.002784) | 0.296653 / 0.323480 (-0.026826) | 0.004569 / 0.007986 (-0.003416) | 0.002831 / 0.004328 (-0.001497) | 0.050032 / 0.004250 (0.045782) | 0.040468 / 0.037052 (0.003415) | 0.284718 / 0.258489 (0.026229) | 0.321754 / 0.293841 (0.027913) | 0.033863 / 0.128546 (-0.094684) | 0.012183 / 0.075646 (-0.063463) | 0.060805 / 0.419271 (-0.358466) | 0.034919 / 0.043533 (-0.008614) | 0.274354 / 0.255139 (0.019215) | 0.293477 / 0.283200 (0.010277) | 0.019418 / 0.141683 (-0.122265) | 1.151571 / 1.452155 (-0.300584) | 1.217174 / 1.492716 (-0.275542) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097326 / 0.018006 (0.079320) | 0.316277 / 0.000490 (0.315787) | 0.000225 / 0.000200 (0.000025) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022932 / 0.037411 (-0.014479) | 0.077455 / 0.014526 (0.062929) | 0.088949 / 0.176557 (-0.087608) | 0.129447 / 0.737135 (-0.607688) | 0.093705 / 0.296338 (-0.202634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303918 / 0.215209 (0.088709) | 2.973866 / 2.077655 (0.896211) | 1.593165 / 1.504120 (0.089045) | 1.465312 / 1.541195 (-0.075883) | 1.484503 / 1.468490 (0.016013) | 0.731849 / 4.584777 (-3.852928) | 0.953337 / 3.745712 (-2.792375) | 2.887815 / 5.269862 (-2.382047) | 1.923618 / 4.565676 (-2.642058) | 0.080073 / 0.424275 (-0.344202) | 0.005460 / 0.007607 (-0.002148) | 0.359876 / 0.226044 (0.133832) | 3.532251 / 2.268929 (1.263323) | 1.987778 / 55.444624 (-53.456846) | 1.685572 / 6.876477 (-5.190905) | 1.827141 / 2.142072 (-0.314932) | 0.815953 / 4.805227 (-3.989274) | 0.136698 / 6.500664 (-6.363967) | 0.042185 / 0.075469 (-0.033285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032508 / 1.841788 (-0.809280) | 12.526918 / 8.074308 (4.452610) | 10.202942 / 10.191392 (0.011550) | 0.145920 / 0.680424 (-0.534504) | 0.015643 / 0.534201 (-0.518558) | 0.300465 / 0.579283 (-0.278818) | 0.126786 / 0.434364 (-0.307578) | 0.342885 / 0.540337 (-0.197453) | 0.438139 / 1.386936 (-0.948797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c98e069b47d40a219b6f27e62ed85a5bb17449e \"CML watermark\")\n"
] | "2024-07-23T08:40:43" | "2024-07-25T13:51:25" | "2024-07-25T13:45:20" | CONTRIBUTOR | null | This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation uses as well the existing `map` method for efficient batching of examples.
Key changes:
- Add `batch` method to `Dataset` class in `arrow_dataset.py`
- Utilize `map` method for batching
Closes #7063
Once the approach is approved, i will create the tests and update the documentation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7064/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7064",
"merged_at": "2024-07-25T13:45:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7064"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7063/comments | https://api.github.com/repos/huggingface/datasets/issues/7063/events | https://github.com/huggingface/datasets/issues/7063 | 2,424,488,648 | I_kwDODunzps6QgsLI | 7,063 | Add `batch` method to `Dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lappemic",
"id": 61876623,
"login": "lappemic",
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"repos_url": "https://api.github.com/users/lappemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lappemic"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | "2024-07-23T07:36:59" | "2024-07-25T13:45:21" | "2024-07-25T13:45:21" | CONTRIBUTOR | null | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7063/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7062/comments | https://api.github.com/repos/huggingface/datasets/issues/7062/events | https://github.com/huggingface/datasets/pull/7062 | 2,424,467,484 | PR_kwDODunzps52LUPR | 7,062 | Avoid calling http_head for non-HTTP URLs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005591 / 0.011353 (-0.005761) | 0.003992 / 0.011008 (-0.007016) | 0.063932 / 0.038508 (0.025424) | 0.034572 / 0.023109 (0.011463) | 0.252532 / 0.275898 (-0.023366) | 0.271233 / 0.323480 (-0.052247) | 0.005146 / 0.007986 (-0.002840) | 0.002844 / 0.004328 (-0.001484) | 0.049555 / 0.004250 (0.045305) | 0.044111 / 0.037052 (0.007059) | 0.270131 / 0.258489 (0.011642) | 0.318109 / 0.293841 (0.024269) | 0.030247 / 0.128546 (-0.098300) | 0.012438 / 0.075646 (-0.063209) | 0.205160 / 0.419271 (-0.214112) | 0.036228 / 0.043533 (-0.007305) | 0.250664 / 0.255139 (-0.004475) | 0.263884 / 0.283200 (-0.019315) | 0.018141 / 0.141683 (-0.123541) | 1.128504 / 1.452155 (-0.323650) | 1.182543 / 1.492716 (-0.310173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094576 / 0.018006 (0.076570) | 0.301153 / 0.000490 (0.300664) | 0.000246 / 0.000200 (0.000046) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019143 / 0.037411 (-0.018268) | 0.062788 / 0.014526 (0.048262) | 0.074688 / 0.176557 (-0.101869) | 0.121799 / 0.737135 (-0.615336) | 0.076200 / 0.296338 (-0.220138) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277002 / 0.215209 (0.061793) | 2.735738 / 2.077655 (0.658083) | 1.430408 / 1.504120 (-0.073712) | 1.309795 / 1.541195 (-0.231400) | 1.339083 / 1.468490 (-0.129407) | 0.702540 / 4.584777 (-3.882237) | 2.352468 / 3.745712 (-1.393244) | 2.913698 / 5.269862 (-2.356164) | 1.871739 / 4.565676 (-2.693938) | 0.077054 / 0.424275 (-0.347221) | 0.005055 / 0.007607 (-0.002552) | 0.330550 / 0.226044 (0.104505) | 3.272556 / 2.268929 (1.003627) | 1.805268 / 55.444624 (-53.639356) | 1.504791 / 6.876477 (-5.371686) | 1.511361 / 2.142072 (-0.630712) | 0.784451 / 4.805227 (-4.020776) | 0.132182 / 6.500664 (-6.368482) | 0.042516 / 0.075469 (-0.032954) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946939 / 1.841788 (-0.894849) | 11.369607 / 8.074308 (3.295299) | 9.667350 / 10.191392 (-0.524042) | 0.138689 / 0.680424 (-0.541735) | 0.014416 / 0.534201 (-0.519785) | 0.300685 / 0.579283 (-0.278598) | 0.259709 / 0.434364 (-0.174655) | 0.341271 / 0.540337 (-0.199066) | 0.435609 / 1.386936 (-0.951327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005726 / 0.011353 (-0.005627) | 0.004071 / 0.011008 (-0.006937) | 0.050837 / 0.038508 (0.012329) | 0.047000 / 0.023109 (0.023890) | 0.278543 / 0.275898 (0.002645) | 0.300526 / 0.323480 (-0.022954) | 0.004483 / 0.007986 (-0.003503) | 0.002835 / 0.004328 (-0.001494) | 0.050925 / 0.004250 (0.046675) | 0.041834 / 0.037052 (0.004782) | 0.285059 / 0.258489 (0.026570) | 0.324557 / 0.293841 (0.030716) | 0.038949 / 0.128546 (-0.089597) | 0.012145 / 0.075646 (-0.063501) | 0.061791 / 0.419271 (-0.357481) | 0.034493 / 0.043533 (-0.009040) | 0.274034 / 0.255139 (0.018895) | 0.295886 / 0.283200 (0.012686) | 0.018524 / 0.141683 (-0.123159) | 1.148766 / 1.452155 (-0.303388) | 1.207966 / 1.492716 (-0.284750) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094078 / 0.018006 (0.076071) | 0.307850 / 0.000490 (0.307361) | 0.000224 / 0.000200 (0.000024) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023502 / 0.037411 (-0.013910) | 0.077321 / 0.014526 (0.062795) | 0.091147 / 0.176557 (-0.085410) | 0.131111 / 0.737135 (-0.606025) | 0.090906 / 0.296338 (-0.205432) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290700 / 0.215209 (0.075491) | 2.833655 / 2.077655 (0.756001) | 1.546371 / 1.504120 (0.042251) | 1.415337 / 1.541195 (-0.125858) | 1.445752 / 1.468490 (-0.022738) | 0.737880 / 4.584777 (-3.846897) | 0.961549 / 3.745712 (-2.784164) | 2.844021 / 5.269862 (-2.425841) | 2.023547 / 4.565676 (-2.542130) | 0.079791 / 0.424275 (-0.344484) | 0.005449 / 0.007607 (-0.002158) | 0.356381 / 0.226044 (0.130337) | 3.515555 / 2.268929 (1.246627) | 1.920407 / 55.444624 (-53.524217) | 1.628637 / 6.876477 (-5.247839) | 1.752995 / 2.142072 (-0.389077) | 0.807264 / 4.805227 (-3.997963) | 0.133627 / 6.500664 (-6.367037) | 0.041861 / 0.075469 (-0.033609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.035643 / 1.841788 (-0.806144) | 12.114792 / 8.074308 (4.040484) | 10.185844 / 10.191392 (-0.005548) | 0.142354 / 0.680424 (-0.538070) | 0.015466 / 0.534201 (-0.518734) | 0.304681 / 0.579283 (-0.274603) | 0.124297 / 0.434364 (-0.310067) | 0.339907 / 0.540337 (-0.200430) | 0.436266 / 1.386936 (-0.950670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#856eb84569006ab9389ddbcce8b7141befeab9cc \"CML watermark\")\n"
] | "2024-07-23T07:25:09" | "2024-07-23T14:28:27" | "2024-07-23T14:21:08" | MEMBER | null | Avoid calling `http_head` for non-HTTP URLs, by adding and `else` statement.
Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,...
I discovered this while working in an unrelated issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7062/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7062/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7062",
"merged_at": "2024-07-23T14:21:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7062"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7061/comments | https://api.github.com/repos/huggingface/datasets/issues/7061/events | https://github.com/huggingface/datasets/issues/7061 | 2,423,786,881 | I_kwDODunzps6QeA2B | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4",
"events_url": "https://api.github.com/users/hahmad2008/events{/privacy}",
"followers_url": "https://api.github.com/users/hahmad2008/followers",
"following_url": "https://api.github.com/users/hahmad2008/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hahmad2008",
"id": 68266028,
"login": "hahmad2008",
"node_id": "MDQ6VXNlcjY4MjY2MDI4",
"organizations_url": "https://api.github.com/users/hahmad2008/orgs",
"received_events_url": "https://api.github.com/users/hahmad2008/received_events",
"repos_url": "https://api.github.com/users/hahmad2008/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hahmad2008"
} | [] | open | false | null | [] | null | [] | "2024-07-22T21:18:12" | "2024-07-22T21:18:12" | null | NONE | null | ### Describe the bug
I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution.
```
def _generate_examples(self, filepaths):
errors=[]
id_ = 0
for filepath in filepaths:
try:
with open(filepath, 'r') as f:
for line in f:
json_obj = json.loads(line)
yield id_, json_obj
id_ += 1
except Exception as exc:
logger.error(f"error occur at filepath: {filepath}")
errors.append(error)
```
seems the logger.error is printed but still exception is raised the the run is exit.
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
โ โ
โ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. โ
โ py:1377 in <listcomp> โ
โ โ
โ 1374 โ โ โ โ if all(async_result.ready() for async_result in async_results) and queue โ
โ 1375 โ โ โ โ โ break โ
โ 1376 โ โ # we get the result in case there's an error to raise โ
โ โฑ 1377 โ โ [async_result.get() for async_result in async_results] โ
โ 1378 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ .0 = <list_iterator object at 0x7f2cc1f0ce20> โ โ
โ โ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 โ
โ in get โ
โ โ
โ 768 โ โ if self._success: โ
โ 769 โ โ โ return self._value โ
โ 770 โ โ else: โ
โ โฑ 771 โ โ โ raise self._value โ
โ 772 โ โ
โ 773 โ def _set(self, i, obj): โ
โ 774 โ โ self._success, self._value = obj โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> โ โ
โ โ timeout = None โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
same as above
### Expected behavior
should handle error and continue reading remaining files
### Environment info
python 3.9 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7061/timeline | null | null | null | null | false |
Dataset Card for GitHub Issues (experiment)
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the ๐ค Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
Use the GitHub API for issues.
Uses
Use the data for text classification and sematic search practice
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]
- Downloads last month
- 39