Llama vs. OLMo token counts

#43 by WillHeld

Hi Dolma team!

Thanks for the fantastic work! I've been ingesting Dolma recently and have been assuming that the OLMo and Llama token counts should be roughly similar (e.g., under a 10% difference, especially since most of Dolma is English).

So far this has held, except for Open Web Math, where the Llama token count I'm getting is ~5B rather than the 12.6B reported in the README.

Do you all happen to have Llama token counts as well to cross-verify this? I'm trying to figure out whether the discrepancy is that Llama tokenizes that dataset in particular unusually efficiently, or whether the hosted files differ from those the token count was computed from.
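
For concreteness, the counting I'm describing is roughly this shape (a minimal sketch, assuming Dolma-style gzipped JSONL shards with a `text` field; the directory and checkpoint names are placeholders, not the exact ones I used):

```python
# Minimal sketch: count Llama tokens over Dolma-style gzipped JSONL shards.
# "open-web-math" and the checkpoint name are placeholders.
import gzip
import json
from pathlib import Path

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

total = 0
for shard in sorted(Path("open-web-math").glob("*.json.gz")):
    with gzip.open(shard, "rt", encoding="utf-8") as f:
        for line in f:
            text = json.loads(line)["text"]
            # add_special_tokens=False so BOS/EOS markers don't inflate the count
            total += len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(f"{total:,} tokens")
```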

Thanks again!

Yeah, you’re right—the OLMo and LLaMA token counts usually stay within the same ballpark, especially with English-heavy datasets. The difference with Open Web Math might come down to how LLaMA tokenizes math-heavy content, which can sometimes throw off the count.
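
One quick way to test that hypothesis is to tokenize the same math-heavy snippets with both tokenizers and compare the lengths. A minimal sketch; both checkpoint names below are assumptions, so substitute whatever you actually use:

```python
# Minimal sketch: compare how two tokenizers compress the same strings.
# Both checkpoint names are assumptions -- swap in the ones you actually use.
from transformers import AutoTokenizer

samples = {
    "prose": "The integral of a Gaussian over the real line has a closed form.",
    "math": r"$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$",
}

for name in ["meta-llama/Llama-2-7b-hf", "allenai/gpt-neox-olmo-dolma-v1_5"]:
    tok = AutoTokenizer.from_pretrained(name)
    counts = {
        label: len(tok(s, add_special_tokens=False)["input_ids"])
        for label, s in samples.items()
    }
    print(name, counts)
```

If the Llama counts on math-heavy text come out consistently well below the OLMo counts, tokenizer compression alone could explain much of the gap.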

We don’t have a direct LLaMA token count for every subset, but checking against the latest files might help clarify things. If the numbers still feel way off, let us know, and we’ll look into it!

Hi @amanrangapur! Thanks for the reply!

My version of the files was downloaded in August when I made the post. I don't see any new commits in the repo, so I want to double-check whether those would count as the "latest files"!
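
For reference, checking the repo history programmatically looks roughly like this (a minimal `huggingface_hub` sketch; the repo id `allenai/dolma` is an assumption):

```python
# Minimal sketch: list recent commits on the hosted dataset repo to confirm
# local files match the latest revision. The repo id is an assumption.
from huggingface_hub import HfApi

api = HfApi()
commits = api.list_repo_commits("allenai/dolma", repo_type="dataset")
for commit in commits[:3]:
    print(commit.commit_id[:8], commit.created_at, commit.title)
```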

Yeah @WillHeld, v1.7 is the latest. Give me some time; I will investigate and get back to you.

No rush! I ended up running all my experiments as-is, since I did a fair amount of sanity checking to make sure I had downloaded what was available. It seems entirely plausible that the Llama tokenizer just compresses math-heavy text more aggressively.
