mch-dd committed
Commit 8091e47
1 Parent(s): c3587dc

Update README.md

Files changed (1): README.md (+31 -31)
README.md CHANGED

# Spanish Public Domain Books

**Spanish-Public Domain-Books** or **Spanish-PD-Books** is a large collection aiming to aggregate all Spanish monographs in the public domain. As of March 2024, together with Spanish-PD-Newspapers, it is the largest open Spanish corpus.

## Dataset summary
The collection contains 247,491 individual texts making up 2,697,414,811 words, recovered from multiple sources including Spain's leading cultural heritage institution, the Biblioteca Digital Hispánica (BDH), and the Internet Archive. Each parquet file contains the full text of 2,000 books selected at random.
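
The shards can be loaded with standard parquet tooling or with the `datasets` library. The snippet below is a minimal sketch only: the Hub repository id (`PleIAs/Spanish-PD-Books`) and the record fields are assumptions and should be checked against the dataset page.

```python
# Minimal loading sketch. Assumption (not stated in this card): the dataset is
# hosted on the Hugging Face Hub under an id such as "PleIAs/Spanish-PD-Books".
from datasets import load_dataset

# Streaming avoids downloading the whole ~2.7B-word corpus at once; each parquet
# shard contains the full text of 2,000 randomly selected books.
books = load_dataset("PleIAs/Spanish-PD-Books", split="train", streaming=True)

first = next(iter(books))
print(list(first.keys()))   # inspect which fields are available
print(str(first)[:300])     # preview the beginning of the first record
```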

## Curation method
The composition of the dataset adheres to the criteria for public domain works in the EU and, consequently, in all Berne Convention countries for EU authors: any publication whose author has been dead for more than 70 years. Additionally, the initial consolidation of public domain status for cultural heritage operates in the EU under the 2019 Copyright Directive (art. 14).
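
As a toy illustration of this cut-off (and only that: it ignores special regimes such as anonymous, posthumous, or wartime-extended works, and it is not legal advice), the rule can be written as a one-line check; the function name and the 1953 example are ours, not part of the dataset's tooling.

```python
# Simplified sketch of the EU "life + 70 years" term: protection runs until the end
# of the 70th full calendar year after the author's death, so the works enter the
# public domain on 1 January of (death_year + 71). Special regimes are ignored.
def eu_public_domain_year(death_year: int) -> int:
    """Year on whose 1 January the author's works enter the EU public domain."""
    return death_year + 71

# Example: an author who died in 1953 entered the public domain on 1 January 2024,
# in time for the March 2024 snapshot described above.
assert eu_public_domain_year(1953) == 2024
```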

## Uses
The collection aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.

The rationales for the creation of this collection are manifold:
* **Scientific**: The closure of training corpora represents a major barrier to AI research; large language models face a real crisis of reproducibility.
* **Legal**: With the adoption of the AI Act and its obligations regarding copyright compliance for pretraining corpora, the European AI ecosystem will have to change its provenance practices.
* **Cultural**: The linguistic diversity of the European Union is currently underrepresented. Unlike web archives, open, heritage, administrative, or scientific texts are often of high quality: they are long, multilingual, and editorialized publications.
* **Economic**: Today, value capture is concentrated on players whose financial resources are already considerable, allowing them to collect or purchase data at a high price. Making a royalty-free corpus available to as many people as possible frees innovation in uses and minimizes economic dependencies on dominant actors.

## License
The entire collection is in the public domain in all regions. This means that the patrimonial rights of each individual or collective rights holder have expired.

There has been a debate for years in Europe over the definition of the public domain and the possibility of restricting its use. Since 2019, the EU Copyright Directive states that "Member States shall provide that, when the term of protection of a work of visual art has expired, any material resulting from an act of reproduction of that work is not subject to copyright or related rights, unless the material resulting from that act of reproduction is original in the sense that it is the author's own intellectual creation" (art. 14).

## Future work
This dataset is not a one-time work but will continue to evolve significantly in three directions:

* Expansion of the dataset to late 19th- and early 20th-century works, and further enhancement with currently unexploited collections from European patrimonial data repositories.
* Correction of computer-generated errors in the text. All the texts have been transcribed automatically with Optical Character Recognition (OCR) software. The original files have been digitized over a long period (since the mid-2000s), and some documents would benefit from renewed processing. Future versions will strive either to re-OCR the original text or to use experimental LLMs for partial OCR correction.
* Enhancement of the structure and editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (headers, page numbers…). Additionally, advanced document structures like tables or multi-column layouts are unlikely to be well formatted.

## Acknowledgements
The corpus was stored and processed with the generous support of Scaleway. It was built with the support and concerted efforts of the state start-up LANGU:IA (start-up d’État), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language Technologies EDIC (ALT-EDIC).