Where is the missing data?
Hi, I see everywhere, from papers to this dataset's README, that there are 161,443 messages. However, when I load the dataset, I see that there are only 85k messages... Where is the missing data? Is there any other way to access Open Assistant data that is not through HF Datasets? (In case that maybe only a portion of the data was uploaded to the Hub).
Thank you so much for this great initiative :)
You can find all messages in the file 2023-04-12_oasst_all.messages.jsonl.gz. Please also see the README section "Using the Huggingface Datasets".
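If it helps, here is a minimal sketch of reading that gzip-compressed JSONL dump with only the Python standard library. The real file name comes from the README; the demo below writes a tiny synthetic file so the snippet runs on its own, and the field names in the sample records are just placeholders.

```python
import gzip
import json

def load_jsonl_gz(path):
    """Load one JSON object per line from a gzip-compressed JSONL file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# With the full dump (file name from the README) you would call:
# messages = load_jsonl_gz("2023-04-12_oasst_all.messages.jsonl.gz")

# Self-contained demo with a tiny synthetic file (placeholder fields):
sample = [{"message_id": "a", "text": "hello"}, {"message_id": "b", "text": "world"}]
with gzip.open("demo.messages.jsonl.gz", "wt", encoding="utf-8") as f:
    for m in sample:
        f.write(json.dumps(m) + "\n")

messages = load_jsonl_gz("demo.messages.jsonl.gz")
print(len(messages))  # → 2
```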
Why isn't the default "all messages"? Is there a reason for that?
It is an arbitrary decision that was made before release.
Our default set contains only trees in ready-for-export state, which excludes, for example, deleted and spam messages as well as trees in prompt-lottery-waiting and growing state. For supervised training of an assistant model, trees in ready-for-export state are the most relevant, since most of the other messages are only prompts (or bad messages). Prompts can be useful for RL tuning, though. In general, the Hugging Face datasets library is not ideal for our tree structure.
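To reproduce the default selection from the full dump, one could filter trees on their state. This is only a sketch: it assumes each tree record carries a "tree_state" field with values like "ready_for_export", "growing", and "prompt_lottery_waiting", as in the tree dumps; the demo records are synthetic.

```python
def ready_trees(trees):
    """Keep only trees in ready-for-export state (assumed 'tree_state' field)."""
    return [t for t in trees if t.get("tree_state") == "ready_for_export"]

# Synthetic demo records; real ones come from the trees jsonl.gz dump.
demo = [
    {"message_tree_id": "t1", "tree_state": "ready_for_export"},
    {"message_tree_id": "t2", "tree_state": "growing"},
    {"message_tree_id": "t3", "tree_state": "prompt_lottery_waiting"},
]
print([t["message_tree_id"] for t in ready_trees(demo)])  # → ['t1']
```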
Thank you for your reply @andreaskoepf .
Hi @andreaskoepf, just following up on this: it seems that even for the ready-for-export trees, there are some messages that are deleted or whose review_result isn't True. Is this intended? Do you recommend using these messages or discarding them?
In particular, some review_results are neither True nor False, but None. What does that mean and what's the recommendation for these messages?
None means the message never gathered sufficient reviews to pass or fail the review stage. What you do with that information is up to you.
@ZhaofengWu
The simplest approach is to ignore messages that are either deleted or don't have a positive review result. Potentially, you could use deleted messages or messages with a negative review result as additional bad samples for training an RM, or as counter-examples for a DPO approach.
The messages without a completed review (i.e. those with None) should probably be ignored (I hope this is only a very tiny fraction of the data).
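The filtering described above could be sketched like this. It assumes each message record has a boolean "deleted" field and a "review_result" field that is True, False, or None; the demo records are synthetic, not taken from the dataset.

```python
def keep_message(m):
    """Keep messages that are not deleted and passed review (review_result is True).

    Messages with review_result None (review never completed) or False
    (failed review) are dropped, matching the recommendation above.
    """
    return not m.get("deleted", False) and m.get("review_result") is True

# Synthetic demo records covering the four cases discussed:
demo = [
    {"message_id": "a", "deleted": False, "review_result": True},
    {"message_id": "b", "deleted": True,  "review_result": True},
    {"message_id": "c", "deleted": False, "review_result": False},
    {"message_id": "d", "deleted": False, "review_result": None},
]
kept = [m["message_id"] for m in demo if keep_message(m)]
print(kept)  # → ['a']
```

Deleted or negatively reviewed messages could instead be routed into a separate "bad samples" pool for RM or DPO training, as suggested above.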