Is the sorting method considered during data collection?
Thanks for sharing the dataset! It's great that the authors consider the creation time of comments when collecting scores (preferences), i.e., the preferred comment needs to have been created later than the non-preferred comment. I have an additional question about the effect of the sorting method on the scores, similar to creation time. By default, Reddit uses the "best at top" method to rank comments, where quality is defined as the ratio of upvotes to downvotes. When there are many comments, Reddit folds them, and I think top comments are more likely to get votes than comments folded far down at the bottom. How is this treated here, or is it not much of an issue?
Great question! We mention this in the limitations section: a higher-scoring comment is more likely to get yet another vote, both because of its greater visibility due to Reddit's sorting and because of a "herding effect" where users are influenced by the existing high score (this effect can be as large as 25%-32%, according to Muchnik et al.).
That's why we filtered by creation time -- it means the higher-scoring comment overcame these biases at some point, even if it later benefited from the same biases. Because of these biases, however, the comment score is not an unbiased estimate of how much Reddit users actually like the comment. While the direction of the collective preference is valid, not all voters' opinions are equally impactful, since earlier voters determine which comment benefits from these biases.
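For concreteness, here's a minimal sketch of what that creation-time check looks like using the `datasets` library. The released SHP data is already filtered this way, so this is purely illustrative, and the column names (`labels`, `created_at_utc_A`, `created_at_utc_B`) are assumptions based on the dataset card -- double-check them against the actual schema.

```python
from datasets import load_dataset

# Load the SHP training split from the Hugging Face Hub.
shp = load_dataset("stanfordnlp/SHP", split="train")

def preferred_created_later(example):
    # labels == 1 means comment A is preferred; labels == 0 means comment B is.
    if example["labels"] == 1:
        return example["created_at_utc_A"] > example["created_at_utc_B"]
    return example["created_at_utc_B"] > example["created_at_utc_A"]

# Keep only pairs where the preferred comment was posted after the other one,
# i.e., it overcame the visibility and herding biases at some point.
later_only = shp.filter(preferred_created_later)
```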
or is it not much of an issue?
In practice, it's not much of an issue once you consider creation time. SHP and Anthropic's HH-RLHF datasets have roughly the same label distribution (50% label A, 50% label B), and FLAN-T5 models trained and tested on each separately achieve similar accuracies. Note, however, that we recommend further filtering of the SHP data in the evaluation section: finetuning on only the examples with a score ratio > 2 yields better test performance on all the data, even though we're finetuning on less data. This is because the greater the score gap, the less likely it is that noise during the voting process flipped the direction of the preference.
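If you want to apply that recommended extra filtering yourself, a sketch along these lines should work; the `score_ratio` column name is an assumption based on the dataset card, so verify it before relying on it.

```python
from datasets import load_dataset

shp_train = load_dataset("stanfordnlp/SHP", split="train")

# Keep only pairs where the preferred comment's score is more than 2x the
# non-preferred comment's score (a larger gap means a less noisy label).
filtered = shp_train.filter(lambda ex: ex["score_ratio"] > 2.0)

print(f"Kept {len(filtered)} of {len(shp_train)} training examples")
```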
Surprisingly, the filtered SHP data has more usable information about the preference label than even the HH-RLHF data, meaning that the filtered collective preference labels in SHP are more learnable than the individual preference labels in HH-RLHF!