Update README.md

The GitHub Code dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset like this:
```python
from datasets import load_dataset

ds = load_dataset("lvwerra/github-code", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
 'repo_name': 'MirekSz/webpack-es6-ts',
 ...
}
```
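
If you want to look at a handful of samples rather than a single record, you can slice the streaming iterator with the standard library; a minimal sketch, where the `language` and `size` column names are assumptions based on the fields described in the next paragraph:

```python
from itertools import islice

# Peek at the first three streamed samples; islice only pulls what it
# needs from the iterator, so nothing beyond that is downloaded.
for sample in islice(iter(ds), 3):
    print(sample["repo_name"], sample["language"], sample["size"])  # column names assumed
```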

As the sample output above shows, besides the code, repo name, and path, the programming language, license, and file size are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) by passing them as a list. E.g., if your dream is to build a Codex model for Dockerfiles, use the following configuration:

```python
ds = load_dataset("lvwerra/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])

#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise

ENV DEBIAN_FRONTEND="noninteractive" \
...
```
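
The `languages` argument accepts any subset of the 30 supported languages, so you can just as well stream several at once. A small sketch, assuming `Python` and `Java` appear in the language list below:

```python
# Stream files from several languages at once by passing more than one
# entry in the languages list.
ds = load_dataset("lvwerra/github-code", streaming=True, split="train", languages=["Python", "Java"])
print(next(iter(ds))["language"])  # 'language' column name assumed
```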

We also have access to the license of a file's origin repo, so we can filter for licenses in the same way we filtered for languages:

```python
from collections import Counter

ds = load_dataset("lvwerra/github-code", streaming=True, split="train", licenses=["mit", "isc"])

licenses = []
iterable = iter(ds)
for i in range(10_000):
    element = next(iterable)
    licenses.append(element["license"])
print(Counter(licenses))

#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
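
Since both filters are plain configuration parameters, they can presumably be combined; the following sketch assumes that `languages=` and `licenses=` may be passed together, which is not shown explicitly above:

```python
# Sketch: stream only MIT-licensed Dockerfiles by combining both filters
# (assumed to be combinable; see the lead-in above).
ds = load_dataset(
    "lvwerra/github-code",
    streaming=True,
    split="train",
    languages=["Dockerfile"],
    licenses=["mit"],
)
print(next(iter(ds))["license"])
```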

Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data, and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("lvwerra/github-code", split="train")
```
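
If the default Hugging Face cache location does not have ~1TB of free space, you can redirect the download with `load_dataset`'s standard `cache_dir` argument; the path below is only a placeholder:

```python
# Store the download on a disk with enough free space;
# "/mnt/bigdisk/hf_datasets" is a hypothetical path.
ds = load_dataset("lvwerra/github-code", split="train", cache_dir="/mnt/bigdisk/hf_datasets")
```
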
## Languages