Unable to deploy Meta-Llama-3.1-8B-Instruct model on Sagemaker

#58
by axs531622 - opened

I am trying to deploy meta-llama/Meta-Llama-3.1-8B-Instruct on SageMaker. It's giving this error:

"The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.

The tokenizer class you load from this checkpoint is 'PreTrainedTokenizerFast'.
The class this function is called from is 'LlamaTokenizer'."

Any idea if there is a quick fix or workaround?

Thanks
Atish

I have read somewhere that it might have to do with the transformers version used. On SageMaker, the image_uri you use for the HF DLC does not include the latest transformers version.
One way to solve this issue might be the following:

  • pull the latest image: docker pull 763104351884.dkr.ecr.<region>.amazonaws.com/huggingface-pytorch-tgi-inference:2.3.0-tgi2.0.2-gpu-py310-cu121-ubuntu22.04
  • Create a Dockerfile containing:
    FROM image_you_pulled
    RUN pip install -U transformers
  • docker build -t huggingface-pytorch-tgi-inference:2.3.0-tgi2.0.2-gpu-py310-cu121-ubuntu22.04 .
  • Tag and push the image to your own ECR repository, then use its full URI as the image_uri, e.g. image_uri="<account_id>.dkr.ecr.<region>.amazonaws.com/huggingface-pytorch-tgi-inference:2.3.0-tgi2.0.2-gpu-py310-cu121-ubuntu22.04" (see the Python sketch after this list).
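
For reference, here is a minimal deployment sketch using the SageMaker Python SDK with the image pushed above. The account ID, region, IAM role, instance type and Hugging Face token are placeholders I am assuming, not values from this thread:

    # Minimal sketch: deploy the patched image from your own ECR repository.
    # <account_id>, <region>, the instance type and the token are placeholders.
    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel

    role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

    image_uri = (
        "<account_id>.dkr.ecr.<region>.amazonaws.com/"
        "huggingface-pytorch-tgi-inference:2.3.0-tgi2.0.2-gpu-py310-cu121-ubuntu22.04"
    )

    model = HuggingFaceModel(
        image_uri=image_uri,
        role=role,
        env={
            "HF_MODEL_ID": "meta-llama/Meta-Llama-3.1-8B-Instruct",
            "HUGGING_FACE_HUB_TOKEN": "<your_hf_token>",  # the model repo is gated
            "SM_NUM_GPUS": "1",
        },
    )

    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",  # adjust to your quota and latency needs
    )

    print(predictor.predict({"inputs": "Hello, Llama 3.1!"}))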

Please let me know if that works.

Actually, even better: use 763104351884.dkr.ecr.<region>.amazonaws.com/huggingface-pytorch-tgi-inference:2.3.0-tgi2.2.0-gpu-py310-cu121-ubuntu22.04-v2.0 as the image_uri.
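
If it helps, a hedged sketch of plugging that image into the same HuggingFaceModel call as in the example above (the region is an assumption; the other deployment arguments stay the same):

    # Sketch only: swap the image_uri in the deployment example above for the
    # newer public DLC image. Replace the region with your own.
    region = "us-east-1"  # assumption -- use the region of your SageMaker session
    image_uri = (
        f"763104351884.dkr.ecr.{region}.amazonaws.com/"
        "huggingface-pytorch-tgi-inference:2.3.0-tgi2.2.0-gpu-py310-cu121-ubuntu22.04-v2.0"
    )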

This also solves the Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3 error on SageMaker, which is related to recent upgrades of the tokenizers library (as discussed in https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/discussions/229).
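
As a quick local sanity check (not SageMaker-specific), you can verify that an upgraded transformers/tokenizers stack loads the Llama 3.1 tokenizer without the class mismatch; AutoTokenizer picks the tokenizer class recorded in the checkpoint. The token below is a placeholder and access to the gated repo is assumed:

    # Quick local check, assuming recent transformers/tokenizers and access to
    # the gated meta-llama repo. AutoTokenizer resolves to the class stored in
    # the checkpoint, so the PreTrainedTokenizerFast/LlamaTokenizer warning
    # goes away.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained(
        "meta-llama/Meta-Llama-3.1-8B-Instruct",
        token="<your_hf_token>",  # placeholder
    )
    print(type(tok).__name__)                      # PreTrainedTokenizerFast
    print(tok("Hello, Llama 3.1!").input_ids[:8])  # sanity-check the encoding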
