Missing important parts of text (at least in 20231101.fr)

#51
by Jeronymous - opened

Hello,
And first of all, thanks for that great work, it's awesome to be able to have recent wikipedia dump data in HuggingFace :)

Concerning the quality of the data, I was interested in the French part, and I noticed that a lot of "wikicodes" are ignored, leading to missing parts in the text, in places where there are important pieces of information from a knowledge-base point of view.

Here is an illustration:

  1. Get the text that corresponds to this page: https://fr.wikipedia.org/wiki/Saint-Valentin
from datasets import load_dataset
ds = load_dataset("wikimedia/wikipedia", "20231101.fr")
for sample in ds["train"]:
   if sample['id'] == "10034":
      text = sample["text"]
      break
  2. Print a part:
start = text.index("Au début du")
end = text.index("Reliques des saints Valentin") + 30
print(text[start:end])

This prints:

Au début du , Charles d’Orléans fit connaître l'œuvre d'Othon à la cour de France. Il écrivit lui-même plusieurs poèmes
dédiés à la Saint-Valentin. Par la suite, cette tradition se perdit dans le monde latin et .

Reliques des saints Valentin 

which corresponds to the rendered page (screenshot: Screenshot from 2023-12-04 18-59-03.png).

We can see that "XVe siècle" is missing in the beginning ("[Au début du ] XVe siècle").
As well as the end of the third sentence ("[le monde latin et ]ne fut réactualisée qu'au XIXe siècle[.]").

In the original Wikipedia dumps, that part looks like this:

Au début du {{s-|XV}}, [[Charles d'Orléans (1394-1465)|Charles d’Orléans]] fit connaître l'œuvre d'[[Othon III de Grandson|Othon]] à la cour de France. Il écrivit lui-même plusieurs poèmes
dédiés à la Saint-Valentin. Par la suite, cette tradition se perdit dans le monde latin et {{référence souhaitée|ne fut réactualisée qu'au {{s-|XIX}}}}.

So I guess everything between curly brackets "{{...}}" in the dump is dropped.
(This is just a single example among many possible cases)

Is it a known problem?
Is there a solution identified for that?

Actually, I started the work to retro-engineer the wikicode (in python) to recover most of the information (for French pages, because I am focusing on this).
But re-inventing the wheel like this is a never ending task...
It also seems impossible to be complete and future-proof, given that, for example, some things in curly brackets refer directly to an up-to-date version of the Wikipedia knowledge base (for instance, the number of inhabitants of cities/countries...). And I don't know how to deal with this.

In short, I would like to help, but I also need help...

(Side help that I might need: I am currently working with text files, generating one text file per Wikipedia page, and I am not sure yet how to package this correctly and efficiently for HF datasets).
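Something like the following might be enough? A rough sketch, assuming one text file per page with the page id and title encoded in the file name (that naming scheme is just my own convention):

from pathlib import Path

from datasets import Dataset


def gen_pages(root="wiki_pages"):
    # assumes files named "<page_id>__<title>.txt"; adapt to the actual layout
    for path in sorted(Path(root).glob("*.txt")):
        page_id, _, title = path.stem.partition("__")
        yield {"id": page_id, "title": title, "text": path.read_text(encoding="utf-8")}


ds = Dataset.from_generator(gen_pages)
ds.to_parquet("wikipedia_fr_clean.parquet")  # Parquet files can then be uploaded to the Hub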

Best regards
-- Jérôme

Hi, @Jeronymous .

Thanks a lot for reporting. Yes, we are aware of this issue, it is an open problem and quite complex to resolve.

The curly brackets in Wikipedia correspond to what they call "templates" (https://meta.wikimedia.org/wiki/Help:Template), and these can be seen as functions: they take the template name and input parameters, and they produce a text output. These functions are language-dependent: each language edition of Wikipedia has its own templates, with different names and functionalities.

So, as a first step, we decided to remove all of them, because they are difficult to parse.
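To illustrate (this is not necessarily the exact cleaning code used here, just a minimal example with mwparserfromhell): stripping the wikicode drops the templates entirely, which is exactly how "XVe siècle" disappears from the example above.

import mwparserfromhell

wikitext = (
    "Au début du {{s-|XV}}, [[Charles d'Orléans (1394-1465)|Charles d’Orléans]] "
    "fit connaître l'œuvre d'[[Othon III de Grandson|Othon]] à la cour de France."
)

parsed = mwparserfromhell.parse(wikitext)
# strip_code() keeps the display text of wikilinks but has no rendering for
# templates, so {{s-|XV}} simply vanishes from the sentence.
print(parsed.strip_code())
# roughly: "Au début du , Charles d’Orléans fit connaître l'œuvre d'Othon à la cour de France."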

But I agree that we miss important information. It would be great if we could improve this and release a better parsing script in future versions. We are open to suggestions. Ideally, we would like to have all templates "transcluded" in the final document.

Maybe the team from Wikimedia can give some ideas, for example @isaacj, who proposed many of the improvements to the cleaning code.

Wikimedia org

Yes, thanks for raising this issue. First, some context on the design of these filtering rules: my goal is high-quality natural-language text that could be used to train language models, while also maintaining a relatively simple and consistent pipeline. For that reason, I personally lean towards choices that overall reduce weird artifacts even if it sometimes means that useful content is cut out (preference for precision over recall). Most templates don't contain natural language -- e.g., infoboxes that contain essentially key-value pairs; citations; templates indicating various issues with a page -- so they're removed by default. If I wanted to use this text in a user-facing application, I might make different choices. That said, some ideas:

  • One idea that folks have floated is having an allow-list of templates. The universe of templates that contain text important to the understanding of a sentence is relatively limited. In my experience, they are mainly templates that auto-format measurements/dates or flag content as being in another language. The challenge specific to this is generating that list and keeping it up-to-date. Templates are language-specific and not well-categorized, so it would probably be difficult to achieve good coverage, and it would rely on folks coming across issues like this and reporting them. It's also not always straightforward to extract the desired text from the template. So it's an incomplete solution at best and it also introduces overhead to evaluate potential additions to the list, so I hesitate to take this route.
  • Another idea is to use a more rule-based approach to deciding which templates to include or not. This is more complicated from a code standpoint but is a one-time patch that should apply to all languages. Here you might do something like maintain some context about where a template appears, and if there are plaintext characters on either side (as opposed to newlines), you would keep it (otherwise skip over it). You would have to do some testing to see whether, e.g., you want plaintext characters on both sides, whether just one side is sufficient, or whether you require a full stop to appear after it, etc., but this would hopefully find templates that are adding content mid-sentence with minimal false positives. I lean towards this rule-based approach as the one to try if someone is interested in taking it on (a rough sketch follows after this list).
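A rough, untested sketch of that rule (a naive regex that ignores nested templates, just to show the shape of the check):

import re

# naive pattern: matches innermost {{...}} only; a real patch would need to
# handle nesting (e.g. via a proper wikitext parser)
TEMPLATE = re.compile(r"\{\{[^{}]*\}\}")


def templates_to_keep(wikitext):
    keep = []
    for line in wikitext.splitlines():
        for match in TEMPLATE.finditer(line):
            before = line[: match.start()].strip()
            after = line[match.end() :].strip()
            if before or after:  # plaintext on either side -> likely mid-sentence
                keep.append(match.group())
    return keep


sentence = "Par la suite, cette tradition se perdit dans le monde latin et {{référence souhaitée|ne fut réactualisée qu'au {{s-|XIX}}}}."
print(templates_to_keep(sentence))  # only sees the innermost template here: ['{{s-|XIX}}']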

The problem with either of these two approaches is that they're still trying to extract natural language from wikitext templates, which is not always obvious or possible without additional context about the template. For instance, your example is {{référence souhaitée|ne fut réactualisée qu'au {{s-|XIX}}}}, which should be ne fut réactualisée qu'au XIXe siècle. While you might start with extracting the text from the first template parameter, this still doesn't help with turning XIX into XIXe siècle. There are more examples at the end of this post of templates whose multiple parameters need to be joined or interpreted, along with connector words, which are even messier.

The long-term fix, if this is something we want to address, is probably to consider turning to the HTML dumps. These (newish) dumps contain the parsed HTML for articles, which means we no longer have to try to parse these templates and can instead just decide whether to include their output or not. Here maybe it's sufficient to include any text nested within a <p> element that isn't superscript (to ignore footnotes), even if it comes from a template? The main problem here is that we don't have a nice ecosystem of Python libraries to help with processing these dumps and providing the necessary semantics to do the filtering (there are plenty of general-purpose HTML libraries, but they don't do a great job with plaintext extraction from Wikipedia HTML). We've taken some small steps in this direction but aren't quite there yet (though we welcome contributions): https://techblog.wikimedia.org/2023/02/24/from-hell-to-html/
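A minimal sketch of that <p>-based heuristic, using BeautifulSoup as a generic stand-in (assuming the article HTML is already in a string html):

from bs4 import BeautifulSoup


def paragraphs_from_html(html):
    soup = BeautifulSoup(html, "html.parser")
    texts = []
    for p in soup.find_all("p"):
        for sup in p.find_all("sup"):
            sup.decompose()  # drop superscripts, e.g. footnote markers like [1]
        text = p.get_text(" ", strip=True)
        if text:
            texts.append(text)
    return "\n".join(texts)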

Just for documentation purposes, some more examples of this template issue:

  • https://en.wikipedia.org/w/index.php?title=Cabbage&oldid=1187362009: A cabbage generally weighs between {{convert|500|and|1000|g|lbs|sigfig=1}} should read A cabbage generally weighs between 500 and 1,000 grams (1 and 2 lb).
  • https://en.wikipedia.org/w/index.php?title=Aardvark&oldid=1182219814#Name: The name "aardvark" is [[Afrikaans]] ({{IPA|af|ˈɑːrtfark}}), comes from earlier Afrikaans {{Lang|af|erdvark}}<ref name=Colliers/> and means "earth [[pig]]" or "ground pig" (''aarde'': "earth", ''vark'': "pig", or "young pig"/child), because of its burrowing habits. should read The name "aardvark" is Afrikaans (Afrikaans pronunciation: [ˈɑːrtfark]), comes from earlier Afrikaans erdvark and means "earth pig" or "ground pig" (aarde: "earth", vark: "pig", or "young pig"/child), because of its burrowing habits. And actually you probably don't want to extract the ({{IPA|af|ˈɑːrtfark}}) into (Afrikaans pronunciation: [ˈɑːrtfark]).

And if someone wants to play with approaches, you can use the PAWS infrastructure (Jupyter notebooks hosted by the Wikimedia Foundation that can be used for Wikimedia-specific analyses etc. and that have local access to things like the dumps so it's a lot easier to prototype stuff without downloading large files). Example notebooks: https://public-paws.wmcloud.org/User:Isaac%20(WMF)/wikitext-to-plaintext.ipynb and https://public-paws.wmcloud.org/User:Isaac%20(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb and some basic details on how to get started. You can also see an example with the HTML dumps here: https://public-paws.wmcloud.org/User:Isaac%20(WMF)/Outreachy%20Summer%202022/mwparserfromhtml_demo.ipynb

Thank you both a lot for your answers, for clarifying and pointing to relevant people and resources.

Like you, @isaacj, I also want to extract textual content from Wikipedia for language modeling (training LLMs / performing Retrieval Augmented Generation).
I understand the argument of "precision more important than recall". But honestly, from what I see on French Wikipedia, ignoring all the templates produces a high recall in terms of the amount of knowledge thrown away: all citations, numbers, dates, quantities, ... and many linked or annotated things just disappear.
I also understand that just taking the text inside all templates is not an option.
That is why I started to code functions to parse French templates (because I just want to focus on this language for now), where the easiest (and boring) part is listing all the templates to throw away, and the harder and never-ending part is correctly interpreting all the templates to keep. Because the templates can always evolve with new contributions, it seems impossible to maintain this while ignoring the code that actually runs to generate the Wikipedia HTML pages.

Given that all the templates are defined somewhere (e.g. https://en.wikipedia.org/wiki/Template:Date), I genuinely thought it would be possible to run some code that transforms each template into HTML, and then the HTML into plain text (maybe it's easier to process HTML on the small parts we want to keep, rather than the HTML of a full page?).
But I have no idea where to find / how to run the code that parses the templates in practice.
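The only thing I can think of for spot-checking is calling the MediaWiki API and letting the servers do the expansion, e.g. with its expandtemplates module (a rough sketch, clearly not something to run over a whole dump):

import requests

resp = requests.get(
    "https://fr.wikipedia.org/w/api.php",
    params={
        "action": "expandtemplates",
        "text": "Au début du {{s-|XV}}",
        "prop": "wikitext",
        "format": "json",
    },
    timeout=30,
)
print(resp.json())  # the expanded wikitext, with {{s-|XV}} rendered, is under "expandtemplates"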

Another option is to start from the HTML dumps.
Thank you for pointing the blog post "From hell to HTML" :)
I am just not reassured when you write "some small steps in this direction but aren't quite there yet" (I feel that I have just opened a can of worms...)
Concerning mwparserfromhtml: it seems to convert wikicode to HTML.
Or does this project also convert the HTML of Wikipedia pages to "meaningful" plain text (ignoring things that appear at the bottom of each page, for instance)?

Thanks a lot @isaacj for your insightful explanation.

It is clear that additional work is needed to improve the quality of the resulting text: either implementing some logic for template parsing, or trying the HTML dumps instead. I could start testing them and check their performance/suitability.

Just as a side note: I wonder if Wikipedia templates will eventually be replaced by Wikifunctions (https://www.wikifunctions.org) and if these could help our purpose.

I confirm that starting from the HTML requires much less effort to get better results.
No language-specific tricks are needed. Only some heuristics that concern the layout of Wikipedia pages.
I am quite satisfied with the result after 2 days of work (on 70 sample pages I took, with various topics: history, math, chemistry...).
It can even produce subscript and superscript: "m²", "[CoF₂NH₂CH₂CH₂NH₂(NH₃)₂]⁺", ...
I also have a better grasp of the structure of each page (it is easy to associate each paragraph with the corresponding headings for instance).
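For instance, here is a simplified sketch of how sub/superscript digits can be mapped to Unicode characters (illustrative only, not my exact code):

SUB = str.maketrans("0123456789+-", "₀₁₂₃₄₅₆₇₈₉₊₋")
SUP = str.maketrans("0123456789+-", "⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻")


def render_sub(text):  # e.g. the text content of a <sub> element
    return text.translate(SUB)


def render_sup(text):  # e.g. the text content of a <sup> element
    return text.translate(SUP)


print("CoF" + render_sub("2") + "NH" + render_sub("2"))  # CoF₂NH₂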

My main trouble is that it takes significantly longer to process each page (which is strange; I have to figure out where the bottleneck is, maybe in bs4...).
Downloading the HTML dumps also takes longer.
It is going to take some time to process the last dump from 20231201.
But I have hope. And will probably make the dataset and the code public.

Thanks again to both of you for those valuable insights!

Wikimedia org

But I have hope. And will probably make the dataset and the code public.

Great to hear! Looking forward to seeing the code. Hopefully there are some pieces I can incorporate back into the mwparserfromhtml library to further improve its HTML-to-plaintext functionality. You can see the existing functionality in this little UI, but you'll notice the missing measurement info in the output plaintext for that example, because I haven't resolved the template issue there either yet.

My main trouble is that it takes significantly longer to process each page (which is strange; I have to figure out where the bottleneck is, maybe in bs4...).

It could be bs4, but also the HTML output is just substantially larger because it's actually our intermediate Parsoid output (as opposed to, e.g., what you'd see when inspecting the HTML on Wikipedia), which is designed to be backwards compatible with the wikitext (so it includes a bunch of things that help with linking the two). The nice part is that even though the average article will take longer to process, worst-case examples should be a lot better because Parsoid has already handled things like unclosed tags that can appear in the wikitext. All to say, hopefully parallelization mostly makes the slower processing less of an issue.
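A minimal sketch of that parallel set-up (the dump directory and the per-file function are hypothetical placeholders):

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def process_dump_file(path):
    # process one dump file here (parse its articles and extract plaintext)
    ...


if __name__ == "__main__":
    files = sorted(Path("frwiki-html-dump").glob("*.json"))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_dump_file, files))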

Just as a side note: I wonder if Wikipedia templates will eventually be replaced by Wikifunctions (https://www.wikifunctions.org) and if these could help our purpose.

Possibly, but that's likely a ways off still. And in the meantime, I suspect you'll see a state like what happens with existing Wikidata content that gets incorporated into infoboxes etc., which is that you still have a template but rather than hard-coding things, you have some Lua code that the template is calling that's hitting relevant APIs, doing all the processing, etc. All to say, templates likely will remain a wrapper for these sorts of things for a while to come. One step in the right direction would likely be this vision for global templates, which would at least make it easier to know how to handle templates across different languages of Wikipedia.

Wikimedia org

As for Wikifunctions, it is still at an early stage; it will be years at least before it's usable on other Wikimedia projects, and even then, I guess the functions from Wikifunctions will be called inside templates. The wikicode of Wikipedia (and other Wikimedia projects) will stay the same.

Hello, I am planning to process a new dump, but first I would like to address the issue of missing information...

Any suggestion? Thanks! 🤗

CC: @isaacj

Wikimedia org

Thanks for checking in @albertvillanova ! Update:

  • I have built the functionality for converting HTML -> plaintext into the mwparserfromhtml library. You can see an example implementation here. Long-term, this is the direction to go in, as the wikitext is inherently incomplete, as has been documented in this discussion (and the trend is towards more transclusion of external elements, so this "problem" will only grow in size). I've been holding off on making this suggestion while some improvements were being made to the HTML dumps, but those seem to be moving forward, so this is good timing for this conversation. I'm going to open up a separate discussion with you about how to get those HTML dumps for processing, so the rest below assumes that has been taken care of.

Here's what we'd need to figure out to switch to the HTML as opposed to wikitext as source:

  • The HTML dumps' file format is different from the current wikitext dumps'. It's a single tar-compressed file for each language edition that, when uncompressed, contains 2GB partitions of JSONlines files. This can make the parallelization a little more work, but you can see an example of how we've done it on our cluster, and hopefully most of that would transfer to your infrastructure. General steps:
    • Unpack each Wikipedia tar file into the individual JSON files. For English Wikipedia, this results in ~360 2GB JSON files.
    • Set up an operator to load in those JSON files (we're using PySpark, so just spark.read.json()), and process the article_body.html string from each JSON record to produce the plaintext. The id, URL, and title metadata can be accessed via the identifier, url, and name keys in the JSON, respectively. (A rough sketch of this per-partition step is given after this list.)
  • The example implementation code makes some choices about what types of HTML text to include or exclude. I made some conservative choices but happy to discuss what would be best here. For example:
    • I skip any text from infoboxes, tables, or lists because, while most of that might be useful data in a RAG sense, these aren't well-formed sentences that would be useful for pre-training models.
    • I skip any paragraphs that aren't at least 15 characters and any documents that aren't at least 20 characters. This is mainly to filter out any remaining edge-cases.
  • The other benefit beyond more well-formed text is that the HTML itself is also well-formed, whereas the wikitext is not forced to adhere to any standards. The HTML is more verbose, so it generally takes a bit longer to process any given article, but you avoid the edge cases where, e.g., the wikitext has some unclosed tags that force a parser into a bunch of loops trying to figure out where the tag closes and consume a bunch of memory/time. So the HTML is probably slower on average to process, but the edge cases are far less bad, so I've been quite happy with the parallel processing for it and spend less time chasing down issues.
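A rough sketch of the per-partition step described above (the JSON keys are as documented, but the plaintext extraction is a crude stand-in for what mwparserfromhtml or a custom extractor would do):

import json

from bs4 import BeautifulSoup


def html_to_plaintext(html):
    # crude stand-in: one line of text per <p> element
    soup = BeautifulSoup(html, "html.parser")
    return "\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))


def iter_articles(jsonl_path, min_paragraph_chars=15, min_doc_chars=20):
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            paragraphs = [
                p
                for p in html_to_plaintext(record["article_body"]["html"]).split("\n")
                if len(p) >= min_paragraph_chars
            ]
            text = "\n".join(paragraphs)
            if len(text) < min_doc_chars:
                continue  # filter out remaining edge cases, as described above
            yield {
                "id": record["identifier"],
                "url": record["url"],
                "title": record["name"],
                "text": text,
            }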
Wikimedia org

Awesome!! Great news!!!

I am going through your code and starting to implement it in the script here: https://huggingface.co/datasets/wikimedia/wikipedia/tree/script

How about using data from this link? https://dumps.wikimedia.org/other/cirrussearch/ It seems that all the templates have already been rendered. Is it the complete Wikipedia dump?

Wikimedia org

Hi @jordane95 ,

Please note that that link does not contain Wikipedia content data, but search indexes dumped in Elasticsearch bulk-insert format.
