Pad token

#1
by ECaleb - opened

How can I set the pad_token_id to stop open-end generation?

There are numerous ways to achieve this.

Sometimes the value can be inferred automatically via AutoTokenizer; other times you will need to define the ID explicitly by inspecting the tokenizer files. There may be other ways to do it, but this per-model method leverages the tokenizer files that ship alongside the OpenVINO IR for your model.

Review "tokenizer_config.json" and look for pad_token (or something similar); I opened up a converted Qwen2.5-32B -Coder which uses the Qwen2Tokenizer. Here we see

 "pad_token": "<|endoftext|>"

or something similar. Then open "tokenizer.json" and look for the "<|endoftext|>" token object; in this example its token ID is 151643. You can set that value explicitly, and for models that share a tokenizer you won't have an issue with something like

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/converted/model")  # local IR folder or HF repo id
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token  # or any other appropriate token
pad_token_id = tokenizer.pad_token_id
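
If you prefer to resolve the ID programmatically instead of digging through tokenizer.json by hand, here is a minimal sketch that does the same lookup and then passes the ID straight to generate(). The directory name "Qwen2.5-32B-Coder-ov" and the prompt are placeholders, and OVModelForCausalLM from optimum-intel is assumed since we are working with an OpenVINO IR export.

import json
from pathlib import Path

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM  # assumes optimum-intel is installed

model_dir = Path("Qwen2.5-32B-Coder-ov")  # placeholder: folder holding the converted IR + tokenizer files

# 1) Read the pad token string from tokenizer_config.json
#    (here it is a plain string; some configs store it as a dict with a "content" key)
with open(model_dir / "tokenizer_config.json") as f:
    pad_token = json.load(f)["pad_token"]  # "<|endoftext|>" for the Qwen2Tokenizer

# 2) Resolve the string to its ID instead of searching tokenizer.json manually
tokenizer = AutoTokenizer.from_pretrained(model_dir)
pad_token_id = tokenizer.convert_tokens_to_ids(pad_token)  # 151643 in this example

# 3) Pass the ID explicitly so generate() never has to guess
model = OVModelForCausalLM.from_pretrained(model_dir)
inputs = tokenizer("Write a quicksort in Python.", return_tensors="pt")
outputs = model.generate(**inputs, pad_token_id=pad_token_id, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Either route, setting the ID on the tokenizer (as in the snippet above) or passing it to generate(), avoids the "Setting pad_token_id to eos_token_id for open-end generation" warning; which you pick mostly depends on whether you want the value baked into the tokenizer or supplied per call.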

Hope this helps.

Also, check out my repo for more converted models; the official Intel repos host a lot of outdated/"vanilla" models. I will soon be hosting a Space with a conversion tool that makes it much easier to build the Optimum CLI commands, which can be tricky to configure when you need more advanced quantization strategies or special cases.
