*** OSError: meta-llama/Meta-Llama-3-8B-Instruct does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

#131
by akjagadish - opened

Running

    self.model = AutoModel.from_pretrained(engine, device_map="auto", torch_dtype=torch.bfloat16, use_auth_token=hf_key)

raises:

OSError: meta-llama/Meta-Llama-3-8B-Instruct does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.


I had the same error message with Meta-Llama-3-8B-Instruct. After I pinned torch==2.1.0 and transformers==4.40.0, it worked for me.
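For reference, here is a minimal sketch of that working setup. The hf_key value is a placeholder for your own access token, and I use AutoModelForCausalLM rather than AutoModel since the Instruct model is a causal LM; adjust as needed:

    # pip install torch==2.1.0 transformers==4.40.0 accelerate
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    engine = "meta-llama/Meta-Llama-3-8B-Instruct"
    hf_key = "hf_..."  # placeholder: your Hugging Face access token

    tokenizer = AutoTokenizer.from_pretrained(engine, token=hf_key)
    model = AutoModelForCausalLM.from_pretrained(
        engine,
        device_map="auto",           # requires the accelerate package
        torch_dtype=torch.bfloat16,
        token=hf_key,                # use_auth_token= is deprecated in newer releases
    )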

This bothered me for quite a while as well. You have probably solved it by now, but I hope this saves others who find this thread from the same tedious process I went through.

You are seeing this because the original downloaded weights need to be converted to the Hugging Face ('hf') format (or because the conversion did not complete successfully); see the conversion script here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py

Remember three things:

  • specify the Llama version
  • make sure you have enough RAM (more than 16 GB); if the process is shown as 'Killed' while running, that is why
  • if there are dependency issues, install transformers from source: pip install git+https://github.com/huggingface/transformers

Then it should work; a sample invocation is sketched below.
And if something goes wrong when loading the tokenizer, use 'AutoTokenizer'.
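For example (the local paths here are placeholders, and the exact flag names can differ between transformers versions, so check the script's --help first):

    python src/transformers/models/llama/convert_llama_weights_to_hf.py \
        --input_dir ./Meta-Llama-3-8B-Instruct \
        --model_size 8B \
        --llama_version 3 \
        --output_dir ./Meta-Llama-3-8B-Instruct-hf

After that, loading the converted checkpoint from the output directory should work:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("./Meta-Llama-3-8B-Instruct-hf")
    model = AutoModelForCausalLM.from_pretrained("./Meta-Llama-3-8B-Instruct-hf")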
