Datasets · Modalities: Text · Languages: English · Libraries: Datasets
asahi417 committed on
Commit c41faeb
1 Parent(s): 03dc503

fix readme

Files changed (3)
  1. README.md +254 -37
  2. check_stats.py +0 -77
  3. stats.py +37 -0
README.md CHANGED
@@ -5,14 +5,16 @@ license:
  - other
  multilinguality:
  - monolingual
- pretty_name: t_rex
  ---
 
 # Dataset Card for "relbert/t_rex"
 ## Dataset Description
 - **Repository:** [https://hadyelsahar.github.io/t-rex/](https://hadyelsahar.github.io/t-rex/)
 - **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/)
- - **Dataset:** T-REX
 
 ## Dataset Summary
 This is the T-REX dataset proposed in [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/).
@@ -21,34 +23,250 @@ and the test split contains predicates that is not included in the train/validat
 The train/validation splits are created for each configuration by the ratio of 9:1.
 The number of triples in each split is summarized in the table below.
 
- ***Note:*** To make it consistent with other datasets ([fewshot_link_prediction_](https://huggingface.co/datasets/relbert/fewshot_link_prediction) and [conceptnet](https://huggingface.co/datasets/relbert/conceptnet)), we rename predicate/subject/object as relation/head/tail.
-
- - train/validation split
-
- | data | number of triples (train) | number of triples (validation) | number of triples (all) | number of unique predicates (train) | number of unique predicates (validation) | number of unique predicates (all) | number of unique entities (train) | number of unique entities (validation) | number of unique entities (all) |
- |:---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
- | filter_unified.min_entity_1_max_predicate_100 | 7,075 | 787 | 9,193 | 212 | 166 | 246 | 8,496 | 1,324 | 10,454 |
- | filter_unified.min_entity_1_max_predicate_50 | 4,131 | 459 | 5,304 | 212 | 156 | 246 | 5,111 | 790 | 6,212 |
- | filter_unified.min_entity_1_max_predicate_25 | 2,358 | 262 | 3,034 | 212 | 144 | 246 | 3,079 | 465 | 3,758 |
- | filter_unified.min_entity_1_max_predicate_10 | 1,134 | 127 | 1,465 | 210 | 94 | 246 | 1,587 | 233 | 1,939 |
- | filter_unified.min_entity_2_max_predicate_100 | 4,873 | 542 | 6,490 | 195 | 139 | 229 | 5,386 | 887 | 6,704 |
- | filter_unified.min_entity_2_max_predicate_50 | 3,002 | 334 | 3,930 | 193 | 139 | 229 | 3,457 | 575 | 4,240 |
- | filter_unified.min_entity_2_max_predicate_25 | 1,711 | 191 | 2,251 | 195 | 113 | 229 | 2,112 | 331 | 2,603 |
- | filter_unified.min_entity_2_max_predicate_10 | 858 | 96 | 1,146 | 194 | 81 | 229 | 1,149 | 177 | 1,446 |
- | filter_unified.min_entity_3_max_predicate_100 | 3,659 | 407 | 4,901 | 173 | 116 | 208 | 3,892 | 662 | 4,844 |
- | filter_unified.min_entity_3_max_predicate_50 | 2,336 | 260 | 3,102 | 174 | 115 | 208 | 2,616 | 447 | 3,240 |
- | filter_unified.min_entity_3_max_predicate_25 | 1,390 | 155 | 1,851 | 173 | 94 | 208 | 1,664 | 272 | 2,073 |
- | filter_unified.min_entity_3_max_predicate_10 | 689 | 77 | 937 | 171 | 59 | 208 | 922 | 135 | 1,159 |
- | filter_unified.min_entity_4_max_predicate_100 | 2,995 | 333 | 4,056 | 158 | 105 | 193 | 3,104 | 563 | 3,917 |
- | filter_unified.min_entity_4_max_predicate_50 | 1,989 | 222 | 2,645 | 157 | 102 | 193 | 2,225 | 375 | 2,734 |
- | filter_unified.min_entity_4_max_predicate_25 | 1,221 | 136 | 1,632 | 158 | 76 | 193 | 1,458 | 237 | 1,793 |
- | filter_unified.min_entity_4_max_predicate_10 | 603 | 68 | 829 | 157 | 52 | 193 | 797 | 126 | 1,018 |
-
- - test split
-
- | number of triples (test) | number of unique predicates (test) | number of unique entities (test) |
- |---:|---:|---:|
- | 122 | 34 | 188 |
 
 ### Filtering to Remove Noise
 We apply filtering to keep only triples whose subject and object are alpha-numeric, and where at least one of the subject and object is a named entity.
@@ -77,15 +295,14 @@ we choose top-`max predicate` triples based on the frequency of the subject and
 
 
 ## Dataset Structure
- ### Data Instances
 An example looks as follows.
- ```
 {
- "tail": "Persian",
- "head": "Tajik",
- "title": "Tandoor bread",
- "text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. These tandoor-prepared naans are known as tandoori naan.",
- "relation": "[Artifact] is a type of [Type]"
 }
 ```
 
  - other
  multilinguality:
  - monolingual
+ size_categories:
+ - n<1K
+ pretty_name: relbert/t_rex
  ---
 
 # Dataset Card for "relbert/t_rex"
 ## Dataset Description
 - **Repository:** [https://hadyelsahar.github.io/t-rex/](https://hadyelsahar.github.io/t-rex/)
 - **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/)
+ - **Dataset:** Cleaned T-REX for link prediction.
 
 The train/validation splits are created for each configuration at a ratio of 9:1.
 The number of triples in each split is summarized in the table below.
 
+ ***Note:*** To make it consistent with other datasets ([nell](https://huggingface.co/datasets/relbert/nell) and [conceptnet](https://huggingface.co/datasets/relbert/conceptnet)), we rename predicate/subject/object as relation/head/tail.
+
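The renaming above amounts to a simple key mapping; a minimal sketch (the field values below are made up for illustration, and the actual conversion in the dataset build script is not shown here):

```python
# Rename raw T-REX fields to the relbert convention:
# predicate/subject/object become relation/head/tail.
mapping = {"predicate": "relation", "subject": "head", "object": "tail"}

raw_triple = {"predicate": "is a type of", "subject": "Tandoor bread", "object": "bread"}
renamed = {mapping.get(key, key): value for key, value in raw_triple.items()}
print(renamed)  # keys are now relation/head/tail
```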
+ - Number of instances (`filter_unified.min_entity_4_max_predicate_10`)
+
+ | | train | validation | test |
+ |:--------------------------------|--------:|-------------:|-------:|
+ | number of pairs | 603 | 68 | 122 |
+ | number of unique relation types | 157 | 52 | 34 |
+
+ - Number of pairs in each relation type (`filter_unified.min_entity_4_max_predicate_10`)
+
+ | | number of pairs (train) | number of pairs (validation) | number of pairs (test) |
+ |:---|---:|---:|---:|
+ | [Academic Subject] studies [Topic] | 3 | 0 | 0 |
+ | [Airline] is in [Airline Alliance] | 3 | 2 | 0 |
+ | [Army] has [Fleet] | 9 | 1 | 0 |
+ | [Art Work] follows after [Art Work] | 2 | 1 | 0 |
+ | [Art Work] is a translation of [Art Work] | 2 | 1 | 0 |
+ | [Art Work] is painted by [Person] | 1 | 2 | 0 |
+ | [Art Work] is sculpted by [Person] | 4 | 0 | 0 |
+ | [Art Work] is written by [Person] | 1 | 0 | 0 |
+ | [Artifact] has a shape of [Shape] | 1 | 0 | 0 |
+ | [Artifact] is a patron saint of [Country] | 4 | 0 | 0 |
+ | [Artifact] is a type of [Type] | 3 | 0 | 0 |
+ | [Artifact] is built on [Date] | 5 | 0 | 0 |
+ | [Artifact] is discovered by [Person] | 4 | 0 | 0 |
+ | [Artifact] is formation of [Army] | 1 | 1 | 0 |
+ | [Artifact] is formed from [Artifact] | 9 | 0 | 0 |
+ | [Artifact] is influenced by [Artifact] | 6 | 1 | 0 |
+ | [Artifact] is maintained by [Company] | 6 | 2 | 0 |
+ | [Artifact] is name of [Artifact] | 10 | 0 | 0 |
+ | [Artifact] is named after [Person] | 5 | 0 | 0 |
+ | [Artifact] is the OS of [Software] | 3 | 0 | 0 |
+ | [Artifact] is the platform of [Game] | 4 | 0 | 0 |
+ | [Artifact] is used for its namesake of [Artifact] | 3 | 0 | 0 |
+ | [Artists] leads [Movement] | 7 | 0 | 0 |
+ | [Award] is presented by [Company] | 1 | 0 | 0 |
+ | [Bank] is the central bank of [Country] | 1 | 0 | 0 |
+ | [Bridge] crosses [Artifact] | 3 | 1 | 0 |
+ | [Bridge] crosses [River] | 1 | 0 | 0 |
+ | [Building] has an architectural style of [Person] | 5 | 0 | 0 |
+ | [City] is a twin city of [City] | 6 | 2 | 0 |
+ | [City] is in [Country] | 1 | 0 | 0 |
+ | [City] is the capital of [Country] | 2 | 0 | 0 |
+ | [Company] is a subsidiary of [Company] | 3 | 1 | 0 |
+ | [Company] is in a sector of [Sector] | 2 | 0 | 0 |
+ | [Company] operates [Vehicle] | 1 | 0 | 0 |
+ | [Company] owns [Product] | 3 | 1 | 0 |
+ | [Company] publishes [Art Work] | 7 | 0 | 0 |
+ | [Competition] is a league of [Sport] | 2 | 2 | 0 |
+ | [Council] is the council of [Country] | 6 | 0 | 0 |
+ | [Country] has [History] | 2 | 0 | 0 |
+ | [Country] is [Political Party] assembly | 4 | 1 | 0 |
+ | [Country] is enclaved by [Country] | 4 | 0 | 0 |
+ | [Country] is in [Continent] | 2 | 0 | 0 |
+ | [Country] joins [War] | 2 | 1 | 0 |
+ | [Country]'s county seat is [Location] | 6 | 0 | 0 |
+ | [Country]'s flag is [Artifact] | 1 | 0 | 0 |
+ | [Culture] is originated in [Country] | 2 | 0 | 0 |
+ | [Currency] is used in [Country] | 3 | 2 | 0 |
+ | [Disease] is caused by [Virus] | 4 | 0 | 0 |
+ | [Event] is since [Date] | 3 | 0 | 0 |
+ | [Event] takes place at [Location] | 4 | 0 | 0 |
+ | [Fictional Character] is a mascot of [Sport Team] | 2 | 1 | 0 |
+ | [Fictional Character] is from [Art Work] | 6 | 1 | 0 |
+ | [Food] is made from [Ingredient] | 6 | 0 | 0 |
+ | [Government] is the government of [Country] | 4 | 0 | 0 |
+ | [Government] is the jurisdiction of [City] | 1 | 0 | 0 |
+ | [Group] has a section of [Group] | 6 | 1 | 0 |
+ | [Group] is a predecessor of [Group] | 5 | 0 | 0 |
+ | [Group] is a religious order of [Group] | 1 | 0 | 0 |
+ | [Group] is created on [Date] | 6 | 0 | 0 |
+ | [Group] is founded at [Location] | 9 | 0 | 0 |
+ | [Group] is founded by [Person] | 6 | 0 | 0 |
+ | [Group] is founded on [Date] | 2 | 0 | 0 |
+ | [Group] is legislature of [Country] | 2 | 0 | 0 |
+ | [Group] is the parliament of [Country] | 2 | 0 | 0 |
+ | [Group]'s leader is [Person] | 3 | 0 | 0 |
+ | [Head of Government] is appointed by [Head of Government] | 1 | 0 | 0 |
+ | [Island] is [Country] | 1 | 0 | 0 |
+ | [Job] is the head of state in [Location] | 1 | 0 | 0 |
+ | [Land] is [Country] | 1 | 1 | 0 |
+ | [Language] consists of [Alphabet] | 4 | 0 | 0 |
+ | [Language] is a dialect of [Language] | 5 | 0 | 0 |
+ | [Location] is a sovereign state of [Location] | 7 | 1 | 0 |
+ | [Location] is an Indian reservation in [Country] | 9 | 0 | 0 |
+ | [Location] is an administrative center of [Location] | 6 | 0 | 0 |
+ | [Location] is exclave of [Country] | 4 | 0 | 0 |
+ | [Location] is in [Planet] | 1 | 0 | 0 |
+ | [Location] is next to [Location] | 1 | 0 | 0 |
+ | [Location] is on the coast of [Ocean] | 5 | 0 | 0 |
+ | [Location] is split from [Location] | 2 | 0 | 0 |
+ | [Location] is the highest peak in [Country] | 2 | 0 | 0 |
+ | [Medication] is for [Disease] | 4 | 0 | 0 |
+ | [Movie] is [Genre] | 9 | 1 | 0 |
+ | [Movie] is a libretto by [Person] | 6 | 0 | 0 |
+ | [Movie] is a spinoff of [Movie] | 1 | 0 | 0 |
+ | [Movie] is in the universe of [Art Work] | 1 | 0 | 0 |
+ | [Movie] is produced by [Company] | 6 | 1 | 0 |
+ | [Music Artist] is [Genre] | 7 | 0 | 0 |
+ | [Music] is made by [Artist] | 1 | 1 | 0 |
+ | [Music] is released on [Date] | 4 | 0 | 0 |
+ | [Organization]'s ideology is [Ideology] | 5 | 0 | 0 |
+ | [PC]'s cpu is [CPU] | 8 | 1 | 0 |
+ | [Person] and [Person] are married | 5 | 0 | 0 |
+ | [Person] belongs to [Record Label] | 1 | 1 | 0 |
+ | [Person] built [Artifact] | 5 | 0 | 0 |
+ | [Person] causes [War] | 2 | 0 | 0 |
+ | [Person] creates [Work] | 4 | 0 | 0 |
+ | [Person] dies at [Location] | 5 | 2 | 0 |
+ | [Person] dies on [Date] | 5 | 1 | 0 |
+ | [Person] has a house in [Location] | 9 | 0 | 0 |
+ | [Person] is [Occupation] | 3 | 0 | 0 |
+ | [Person] is [Sex] | 2 | 1 | 0 |
+ | [Person] is a candidate of [Election] | 3 | 1 | 0 |
+ | [Person] is a chancellor of [Country] | 4 | 0 | 0 |
+ | [Person] is a coach of [Sport Team] | 3 | 0 | 0 |
+ | [Person] is a concubine of [Person] | 4 | 0 | 0 |
+ | [Person] is a consort of [Person] | 3 | 1 | 0 |
+ | [Person] is a husband of [Person] | 2 | 0 | 0 |
+ | [Person] is a manager of [Sport Team] | 3 | 0 | 0 |
+ | [Person] is a member of [Music Group] | 6 | 1 | 0 |
+ | [Person] is a mistress of [Person] | 4 | 0 | 0 |
+ | [Person] is a premier of [Group] | 3 | 0 | 0 |
+ | [Person] is a presenter of [TV show] | 2 | 0 | 0 |
+ | [Person] is a student of [Person] | 1 | 0 | 0 |
+ | [Person] is active in [Location] | 2 | 1 | 0 |
+ | [Person] is awarded by [Award] | 5 | 0 | 0 |
+ | [Person] is born in [Country] | 5 | 1 | 0 |
+ | [Person] is born in [Location] | 4 | 1 | 0 |
+ | [Person] is born on [Date] | 5 | 2 | 0 |
+ | [Person] is buried at [Tomb] | 5 | 0 | 0 |
+ | [Person] is drafted by [Group] | 3 | 0 | 0 |
+ | [Person] is from [Era] | 6 | 0 | 0 |
+ | [Person] is in [Prison] | 3 | 0 | 0 |
+ | [Person] is killed by [Person] | 2 | 1 | 0 |
+ | [Person] is played at [Group] | 7 | 3 | 0 |
+ | [Person] is the chair of [Group] | 1 | 0 | 0 |
+ | [Person] is the emperor of [Country] | 6 | 0 | 0 |
+ | [Person] is the leader of [Dynasty] | 4 | 1 | 0 |
+ | [Person] is the mayor of [City] | 2 | 0 | 0 |
+ | [Person] is the monarch of [Country] | 4 | 0 | 0 |
+ | [Person] is the president of [Country] | 3 | 1 | 0 |
+ | [Person] is the prime minister of [Country] | 3 | 1 | 0 |
+ | [Person] is the queen of [Country] | 5 | 0 | 0 |
+ | [Person] live in [Location] | 8 | 1 | 0 |
+ | [Person] plays in [Movie] | 3 | 0 | 0 |
+ | [Person] plays in [Sport Team] | 8 | 0 | 0 |
+ | [Person] studies [Academic Subject] | 6 | 1 | 0 |
+ | [Person] studies at [School] | 9 | 1 | 0 |
+ | [Pet] is a pet of [Person] | 1 | 0 | 0 |
+ | [Planet] is in the orbit of [Orbit] | 1 | 0 | 0 |
+ | [Play] is performed by [Person] | 7 | 2 | 0 |
+ | [Railway] is in [Location] | 4 | 0 | 0 |
+ | [Religion] is a denomination by [Artifact] | 2 | 2 | 0 |
+ | [River] drains [Location] | 5 | 0 | 0 |
+ | [River] is a tributary of [River] | 4 | 0 | 0 |
+ | [River] outflows to [Location] | 3 | 0 | 0 |
+ | [Software] is under license of [License] | 5 | 3 | 0 |
+ | [Software] is used for [Purpose] | 3 | 0 | 0 |
+ | [Software] is written in [Programming Language] | 6 | 1 | 0 |
+ | [Sport Team] is an affiliate of [Sport Team] | 2 | 1 | 0 |
+ | [Sport Team] plays at [Competition] | 5 | 0 | 0 |
+ | [Sport Team] plays in [Competition] | 8 | 0 | 0 |
+ | [Sport Team] wins [Competition] | 5 | 0 | 0 |
+ | [Sport Team]'s home field is [Location] | 4 | 0 | 0 |
+ | [Star] is a [Constellation] | 1 | 0 | 0 |
+ | [State] is a state of [Country] | 1 | 0 | 0 |
+ | [System] is a system in [Artifact] | 7 | 2 | 0 |
+ | [Town] is in [Location] | 1 | 0 | 0 |
+ | [Art Work] is directed by [Person] | 0 | 1 | 0 |
+ | [Planet] is a satellite of [Planet] | 0 | 2 | 0 |
+ | [Act] is signed by [Person] | 0 | 0 | 2 |
+ | [Airline] has a hub in [Location] | 0 | 0 | 7 |
+ | [Artifact] is a coat of arms of [Group] | 0 | 0 | 5 |
+ | [Artifact] is a result of [Artifact] | 0 | 0 | 1 |
+ | [Artifact] is found in [Artifact] | 0 | 0 | 4 |
+ | [Artifact] is in [Color] | 0 | 0 | 4 |
+ | [Artifact] is manufactured by [Company] | 0 | 0 | 5 |
+ | [Artist] is produced by [Person] | 0 | 0 | 1 |
+ | [City] is a capital town of [Country] | 0 | 0 | 6 |
+ | [Country] claims [City] | 0 | 0 | 1 |
+ | [Country] is a member of [Group] | 0 | 0 | 4 |
+ | [Event] starts on [Date] | 0 | 0 | 2 |
+ | [Group] is [Religion] | 0 | 0 | 6 |
+ | [Group] is legislative body of [Country] | 0 | 0 | 8 |
+ | [Group] is merged into [Group] | 0 | 0 | 4 |
+ | [Group] speaks [Language] | 0 | 0 | 5 |
+ | [License] is approved by [Organization] | 0 | 0 | 3 |
+ | [Location] is a ballpark of [Sport Team] | 0 | 0 | 3 |
+ | [Location] is a river mouth of [Bay] | 0 | 0 | 1 |
+ | [Movie] is screenplayed by [Person] | 0 | 0 | 5 |
+ | [Movie] stars [Actor] | 0 | 0 | 2 |
+ | [Music] is an anthem of [Country] | 0 | 0 | 1 |
+ | [Occupation] lives in [Location] | 0 | 0 | 1 |
+ | [Person] belongs to [Political Party] | 0 | 0 | 10 |
+ | [Person] is a chief executive of [Company] | 0 | 0 | 1 |
+ | [Person] is a child of [Person] | 0 | 0 | 2 |
+ | [Person] is the king of [Country] | 0 | 0 | 4 |
+ | [Person] plays [Instrument] | 0 | 0 | 6 |
+ | [Person] speaks [Language] | 0 | 0 | 1 |
+ | [Radio Program] is broadcasted on [Radio Channel] | 0 | 0 | 3 |
+ | [Software] is developed by [Company] | 0 | 0 | 1 |
+ | [Station] is the terminus of [Railway] | 0 | 0 | 3 |
+ | [TV Series] is broadcasted on [TV Channel] | 0 | 0 | 1 |
+ | [Timezone] is a timezon in [Country] | 0 | 0 | 9 |
+
+ ### Other Statistics
+
+ | | number of pairs | number of unique relation types |
+ |:---|---:|---:|
+ | min_entity_1_max_predicate_100 (train) | 7075 | 212 |
+ | min_entity_1_max_predicate_100 (validation) | 787 | 166 |
+ | min_entity_1_max_predicate_50 (train) | 4131 | 212 |
+ | min_entity_1_max_predicate_50 (validation) | 459 | 156 |
+ | min_entity_1_max_predicate_25 (train) | 2358 | 212 |
+ | min_entity_1_max_predicate_25 (validation) | 262 | 144 |
+ | min_entity_1_max_predicate_10 (train) | 1134 | 210 |
+ | min_entity_1_max_predicate_10 (validation) | 127 | 94 |
+ | min_entity_2_max_predicate_100 (train) | 4873 | 195 |
+ | min_entity_2_max_predicate_100 (validation) | 542 | 139 |
+ | min_entity_2_max_predicate_50 (train) | 3002 | 193 |
+ | min_entity_2_max_predicate_50 (validation) | 334 | 139 |
+ | min_entity_2_max_predicate_25 (train) | 1711 | 195 |
+ | min_entity_2_max_predicate_25 (validation) | 191 | 113 |
+ | min_entity_2_max_predicate_10 (train) | 858 | 194 |
+ | min_entity_2_max_predicate_10 (validation) | 96 | 81 |
+ | min_entity_3_max_predicate_100 (train) | 3659 | 173 |
+ | min_entity_3_max_predicate_100 (validation) | 407 | 116 |
+ | min_entity_3_max_predicate_50 (train) | 2336 | 174 |
+ | min_entity_3_max_predicate_50 (validation) | 260 | 115 |
+ | min_entity_3_max_predicate_25 (train) | 1390 | 173 |
+ | min_entity_3_max_predicate_25 (validation) | 155 | 94 |
+ | min_entity_3_max_predicate_10 (train) | 689 | 171 |
+ | min_entity_3_max_predicate_10 (validation) | 77 | 59 |
+ | min_entity_4_max_predicate_100 (train) | 2995 | 158 |
+ | min_entity_4_max_predicate_100 (validation) | 333 | 105 |
+ | min_entity_4_max_predicate_50 (train) | 1989 | 157 |
+ | min_entity_4_max_predicate_50 (validation) | 222 | 102 |
+ | min_entity_4_max_predicate_25 (train) | 1221 | 158 |
+ | min_entity_4_max_predicate_25 (validation) | 136 | 76 |
+ | min_entity_4_max_predicate_10 (train) | 603 | 157 |
+ | min_entity_4_max_predicate_10 (validation) | 68 | 52 |
+
 ### Filtering to Remove Noise
 We apply filtering to keep only triples whose subject and object are alpha-numeric, and where at least one of the subject and object is a named entity.
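A rough sketch of the alpha-numeric part of this filter (the exact rule is an assumption, and the named-entity check requires an NER tagger that is not reproduced here):

```python
def is_clean_surface(text: str) -> bool:
    # Keep surface forms made only of ASCII alpha-numeric characters and
    # spaces, approximating the alpha-numeric constraint described above.
    stripped = text.replace(" ", "")
    return len(stripped) > 0 and stripped.isascii() and stripped.isalnum()

print(is_clean_surface("Tandoor bread"))  # True
print(is_clean_surface("خبز تنور"))       # False: non-ASCII script
```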
 
 
 
 ## Dataset Structure
 An example looks as follows.
+ ```json
 {
+ "tail": "Persian",
+ "head": "Tajik",
+ "title": "Tandoor bread",
+ "text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. These tandoor-prepared naans are known as tandoori naan.",
+ "relation": "[Artifact] is a type of [Type]"
 }
 ```
 
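The splits are distributed as JSON Lines files (the stats scripts below read `*.jsonl`); a minimal parsing sketch, using a shortened, hypothetical line in the same shape as the example above:

```python
import json

# One shortened JSONL record; the real "text" field is a full paragraph.
line = ('{"tail": "Persian", "head": "Tajik", "title": "Tandoor bread", '
        '"text": "Tandoor bread ...", "relation": "[Artifact] is a type of [Type]"}')
record = json.loads(line)
print(record["relation"])  # [Artifact] is a type of [Type]
```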
check_stats.py DELETED
@@ -1,77 +0,0 @@
- import json
- from itertools import product
-
- import pandas as pd
-
-
- parameters_min_e_freq = [1, 2, 3, 4]
- parameters_max_p_freq = [100, 50, 25, 10]
-
- stats = []
- predicate = {}
-
- for min_e_freq, max_p_freq in product(parameters_min_e_freq, parameters_max_p_freq):
-
-     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.train.jsonl") as f:
-         train = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
-     df_train = pd.DataFrame(train)
-
-     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.validation.jsonl") as f:
-         validation = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
-     df_validation = pd.DataFrame(validation)
-
-     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.jsonl") as f:
-         full = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
-     df_full = pd.DataFrame(full)
-     predicate[f"min_entity_{min_e_freq}_max_predicate_{max_p_freq}"] = df_full['predicate'].unique().tolist()
-
-     stats.append({
-         "data": f"filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}",
-         "number of triples (train)": len(train),
-         "number of triples (validation)": len(validation),
-         "number of triples (all)": len(full),
-         "number of unique predicates (train)": len(df_train['predicate'].unique()),
-         "number of unique predicates (validation)": len(df_validation['predicate'].unique()),
-         "number of unique predicates (all)": len(df_full['predicate'].unique()),
-         "number of unique entities (train)": len(
-             list(set(df_train['object'].unique().tolist() + df_train['subject'].unique().tolist()))),
-         "number of unique entities (validation)": len(
-             list(set(df_validation['object'].unique().tolist() + df_validation['subject'].unique().tolist()))),
-         "number of unique entities (all)": len(
-             list(set(df_full['object'].unique().tolist() + df_full['subject'].unique().tolist())))
-     })
-
- df = pd.DataFrame(stats)
- df.index = df.pop("data")
- for c in df.columns:
-     df.loc[:, c] = df[c].map('{:,d}'.format)
-
- print(df.to_markdown())
-
- with open(f"data/t_rex.filter_unified.test.jsonl") as f:
-     test = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
- df_test = pd.DataFrame(test)
- predicate["test"] = df_test['predicate'].unique().tolist()
- df_test = pd.DataFrame([{
-     "number of triples (test)": len(df_test),
-     "number of unique predicates (test)": len(df_test['predicate'].unique()),
-     "number of unique entities (test)": len(
-         list(set(df_test['object'].unique().tolist() + df_test['subject'].unique().tolist()))
-     )
- }])
-
- for c in df_test.columns:
-     df_test.loc[:, c] = df_test[c].map('{:,d}'.format)
- print(df_test.to_markdown(index=False))
- df["number of triples (test)"] = df_test["number of triples (test)"].values[0]
- df["number of unique predicates (test)"] = df_test["number of unique predicates (test)"].values[0]
- df["number of unique entities (test)"] = df_test["number of unique entities (test)"].values[0]
- df.pop("number of triples (all)")
- df.pop("number of unique predicates (all)")
- df.pop("number of unique entities (all)")
- df = df[sorted(df.columns)]
- df.to_csv("data/stats.csv")
-
- # predicates in training vs test
- with open("data/predicates.json", "w") as f:
-     json.dump(predicate, f)
stats.py ADDED
@@ -0,0 +1,37 @@
+ from itertools import product
+ import pandas as pd
+ from datasets import load_dataset
+
+
+ def get_stats(name):
+     relation = []
+     size = []
+     data = load_dataset("relbert/t_rex", name)
+     splits = data.keys()
+     for split in splits:
+         df = data[split].to_pandas()
+         size.append({
+             "number of pairs": len(df),
+             "number of unique relation types": len(df["relation"].unique())
+         })
+         relation.append(df.groupby('relation')['head'].count().to_dict())
+     relation = pd.DataFrame(relation, index=[f"number of pairs ({s})" for s in splits]).T
+     relation = relation.fillna(0).astype(int)
+     size = pd.DataFrame(size, index=splits).T
+     return relation, size
+
+
+ df_relation, df_size = get_stats("filter_unified.min_entity_4_max_predicate_10")
+ print(f"\n- Number of instances (`filter_unified.min_entity_4_max_predicate_10`) \n\n {df_size.to_markdown()}")
+ print(f"\n- Number of pairs in each relation type (`filter_unified.min_entity_4_max_predicate_10`) \n\n {df_relation.to_markdown()}")
+
+
+ parameters_min_e_freq = [1, 2, 3, 4]
+ parameters_max_p_freq = [100, 50, 25, 10]
+ df_size_list = []
+ for e, p in product(parameters_min_e_freq, parameters_max_p_freq):
+     _, df_size = get_stats(f"filter_unified.min_entity_{e}_max_predicate_{p}")
+     df_size.pop("test")
+     df_size.columns = [f"min_entity_{e}_max_predicate_{p} ({c})" for c in df_size.columns]
+     df_size_list.append(df_size)
+ df_size_list = pd.concat([i.T for i in df_size_list])
+ print(df_size_list.to_markdown())
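The per-relation pair counts that stats.py derives with a pandas groupby can be sketched with the standard library alone (toy triples with made-up values, in the renamed relation/head/tail format):

```python
from collections import Counter

# Toy triples; each relation template is tallied once per pair, mirroring
# the groupby('relation').count() step in stats.py.
triples = [
    {"relation": "[City] is in [Country]", "head": "Paris", "tail": "France"},
    {"relation": "[City] is in [Country]", "head": "Rome", "tail": "Italy"},
    {"relation": "[River] drains [Location]", "head": "Nile", "tail": "Egypt"},
]
counts = Counter(t["relation"] for t in triples)
print(counts.most_common())
```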