🚩 Report – Legal Issue: Illegal Content & Child Pornography
For the record, this dataset appears to have originally contained illegal child pornography (one match in 100 samples shown in the preview). I don't think the amount of illegal content reaches 1% in the entire dataset, but obviously it's not 0% either. There's a formal investigation underway.
Context: from the day it was uploaded, the dataset preview of laion2B-multi contained a link to a porn website with a description that matches child abuse. This was first reported in another thread here dating back to last August. [4]
Upon discovering it (independently of the first report), and since handling this kind of content is best done by professionals (not by an organization without ethical oversight), I immediately informed the FBI, an NGO funded by the DOJ, and Interpol [1]. Furthermore:
- I informed a HuggingFace employee directly via Twitter. [2]
- I also wrote to HF's privacy@ email address about it. [3]
- It was then removed by LAION 7 (seven!) months after the original report. [4]
Note that the link and description containing the child abuse were removed from the dataset preview only after my efforts to draw attention to this. Only that specific sample_id was removed from the dataset; it's still in the repository history.
Due to the importance of the matter, I'd like to raise the following questions:
The majority of the filtering code is English-language and the underlying models are Caucasian-biased. The laion2B-multi dataset is thus much more likely to contain illegal content that was not correctly filtered. How was the removed sample flagged "UNLIKELY"? (A sketch of how that flag can be inspected follows these questions.)
What is HuggingFace doing as the platform hosting this? It hosted a dataset with descriptions of child porn and pedophilia for 12 months, and a report was outstanding for 7 months without any action being taken. What changes are going to be made?
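For reference, here is a minimal sketch of how the detector's verdict can be inspected per sample. It assumes the repository is still accessible and that the released metadata keeps LAION's published columns (SAMPLE_ID, URL, TEXT, LANGUAGE, NSFW, where NSFW is the CLIP-based detector's verdict of "UNLIKELY", "UNSURE" or "NSFW"); adjust the names if the schema has changed.

```python
# Minimal sketch: stream the metadata and print the NSFW verdict per sample.
# Assumes the repo is still accessible and keeps LAION's published columns
# (SAMPLE_ID, URL, TEXT, LANGUAGE, NSFW); adjust if the schema has changed.
from datasets import load_dataset

ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
for row in ds.take(100):
    # NSFW is the detector's verdict: "UNLIKELY", "UNSURE" or "NSFW".
    print(row["SAMPLE_ID"], row["LANGUAGE"], row["NSFW"])
```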
I appreciate that action is being taken now – even if it's 7 to 12 months delayed. I hope, for the sake of AI Ethics and the example that you're setting here, to find answers that are beneficial to the community sooner rather than later.
Sincerely,
[1] https://twitter.com/alexjc/status/1638142287728328704
[2] https://twitter.com/alexjc/status/1637874663006019594
[3] Hash of email thread: fcc50d52c0d35c2d4c4b3e5256110c43eeb47d68509425f03b909d4121f04c4d (sha3-256; a verification sketch follows these references)
[4] https://huggingface.co/datasets/laion/laion2B-multi/discussions/2
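For anyone who later needs to check the hash in [3], this is how it could be reproduced, assuming the email thread is exported verbatim to a file (the filename below is hypothetical):

```python
# Minimal sketch: recompute the SHA3-256 digest of the exported email thread
# and compare it to the hash published in [3]. The filename is hypothetical.
import hashlib

with open("email_thread.eml", "rb") as f:
    digest = hashlib.sha3_256(f.read()).hexdigest()

print(digest == "fcc50d52c0d35c2d4c4b3e5256110c43eeb47d68509425f03b909d4121f04c4d")
```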
Since there have been discussions about this on Social Media, it's important to post an update on the situation.
1) ALLEGATIONS
In multiple Mastodon posts [1] [2], @mmitchell wrote the following:
- "some people at HF are being attacked"
- "as if pedophiles"
- "inappropriate cruelty"
- "bullying the people who are helping"
If these are meant as accusations, no such thing has happened in this context as far as I have seen. The only entity in question here is HuggingFace, in its responsibilities as an organization. Such comments on Mastodon can only be understood as playing the victim card in order to save face (professionally) because they feel responsible, and/or to reduce the liability of HuggingFace as a platform hosting problematic data with little to no oversight.
I'd appreciate it if HF Staff, in particular the C-suite, did not make disparaging comments at my expense, and instead spent more time fostering a community where people feel able to report important problems such as this one and be heard without being intentionally misrepresented. (If this continues, I would be left with no option but to consider this Bad Faith on behalf of HuggingFace – in light of the company avoiding all recent Legal Issues posted in its own forums.)
2) CLAIMS OF FACT
Furthermore, the same social media comments [1] [2] made the following claims:
- "a lot of time and energy spent on trying to find CSAM"
- "none has been found"
- pointing out that the image link is now dead
- pointing out that the domain is now parked
These statements seem to conceal the following facts:
- There was a link with a pedophile caption in the dataset preview for 12 months without action.
- There was a report for this link on HuggingFace for 7 months without action.
- While the link is dead now, it was not dead at creation time, since embeddings were computed for every single image.
- The owners of the repository acknowledged the description was inappropriate – thus the content could have been too.
As discussed in the original post above, please clarify exactly what was done when the dataset was originally uploaded (now 13 months ago), what was done when the report was filed (now 8 months ago), and what finally happened for the removal to be processed (now ~1 month ago).
3) CSAM CONTENT
IMPORTANT: There is more CSAM content in the dataset, and an international investigation is underway; it is being handled externally, not least because of HuggingFace's approach to the situation so far.
In a subsequent post [3], @mmitchell implies that HuggingFace itself is not set up as "a startup that reviews and approves or rejects datasets for sharing" (and that people who complain should instead go build such a startup). Thus: the possibility of CSAM is known to HuggingFace; problematic links with pedophile descriptions slipped through the filters for 12 months; the risk of hosting links to CSAM is being taken intentionally by HF; and yet there are still huge problems with accountability and the processing of reports – delayed for months, to the point that the links expire.
What's the strategy when such content is officially brought up? What's the way to report such content, when it's suspected, so that it's taken seriously? I estimate there are around 5k samples in this dataset that should be investigated as CSAM (between 1k and 20k actual CSAM); a back-of-envelope conversion into rates follows this paragraph. I see no evidence this is being handled competently by the HF organization, and it appears to be intentional avoidance of known problems – hiding behind a vanishing safe harbor defense. Worse, since reports are ignored for 7 months and people who raise concerns are publicly humiliated and disparaged, how exactly does HuggingFace expect to be told about problematic content it is intentionally hosting?
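To put those numbers in perspective, here is a back-of-envelope conversion of the estimates above into rates, assuming laion2B-multi's published size of roughly 2.26 billion samples (the counts are my estimates, not measured figures):

```python
# Back-of-envelope: express the estimated CSAM counts as a fraction of the
# dataset, assuming laion2B-multi's published size of ~2.26 billion samples.
DATASET_SIZE = 2_260_000_000
for count in (1_000, 5_000, 20_000):
    print(f"{count:>6} samples -> {count / DATASET_SIZE:.6%} of the dataset")
```

Even at the upper bound this is a tiny fraction of the dataset – but, as noted above, it is obviously not 0% either.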
Who at HuggingFace is allowed to download, store, and process illegal CSAM links and content? How are users able to report these things if it's illegal to click those links? Thousands of AI/ML organizations that download this dataset are taking on legal liability as a consequence of it being hosted here and (naively) branded as safe.
Regards,
[1] https://mastodon.social/@mmitchell_ai/110272585737333657
> @alex@dair-community.social , because your RT'ed this, I just wanted to pop in to say that there has been a lot of time and energy spent on trying to find CSAM, and none has been found. Some people at HF are being attacked as if pedophiles but it's just...inappropriate cruelty.
[2] https://mastodon.social/@mmitchell_ai/110272619934747538
> @alex@dair-community.social Here is the link they are saying is CSAM (and apparently reporting HF to the FBI for?)
> https://static4.gbporno.com/0a/f8/cb/0af8cb3bb8152da12fcc26d273eee1bd/0af8cb3bb8152da12fcc26d273eee1bd.1.jpg
> You can just check gbporno.com, though, to see what's actually happening without clicking into a jpeg.
> There are SERIOUS ISSUES with data and LAION, and it's frustrating when the energy is being funneled into just bullying the people who are helping.
[3] https://mastodon.social/@mmitchell_ai/110277361693407720
> @adam_harvey sounds like you're interested in making a startup that reviews and approves or rejects datasets for sharing. Go for it!