Not able to run hello world example, bigcode/starcoder is not a valid model identifier

#11
by rameshn - opened

Hi,
I tried running the following and got an error. Is the model path correct?
Thanks.

--
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_ckpt = "bigcode/starcoder"
#model_ckpt = "https://huggingface.co/bigcode/starcoder"

model = AutoModelForCausalLM.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device="cuda:2")
print(pipe("def hello():"))

--
...
OSError: bigcode/starcoder is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'

You have to log in using huggingface_hub. Take a look at the Quickstart guide.
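
One way to confirm that the error comes from gating rather than a typo in the repo id is to query the Hub directly. A sketch, assuming a reasonably recent huggingface_hub (the exception raised for an anonymous request to a gated repo can vary by version, so both are handled):

from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    model_info("bigcode/starcoder")
    print("Repo is visible to you; the id is correct.")
except GatedRepoError:
    print("Gated repo: accept the agreement on the model page and log in.")
except RepositoryNotFoundError:
    print("Wrong repo id, or you are not logged in with access.")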

BigCode org

Hi. You should go to hf.co/bigcode/starcoder and accept the agreement if you want to be able to use the model. You will also want to log in using huggingface-cli.

You may also need to install some requirements:

https://github.com/bigcode-project/starcoder

Pass your personal access token as the use_auth_token parameter when you call from_pretrained.
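
For example, a minimal sketch (the token string is a placeholder for your own token from hf.co/settings/tokens):

from transformers import AutoModelForCausalLM, AutoTokenizer

hf_token = "hf_..."  # placeholder: your personal access token

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder", use_auth_token=hf_token)
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", use_auth_token=hf_token)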

I have the same issue. I tried logging in with huggingface_hub and installing all the requirements; only @cactusthecoder8's recommendation helped, i.e. passing the use_auth_token parameter to download the tokenizer, but then it fails on the next line
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)
with the error:

Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
BigCode org

Can you try adding use_auth_token to the model loading too? (By the way, you don't need trust_remote_code=True.) Overall, if you accept the agreement on the model page and follow these steps, it should work (assuming you have enough memory):

!pip install transformers==4.28.1
!huggingface-cli login
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
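
If the weights download successfully, a minimal completion sketch along the lines of the original example (max_new_tokens is an arbitrary choice here):

import torch

inputs = tokenizer("def hello():", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))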

What is the recommended memory?

RAM: there are 7 checkpoint shards of approximately 10GB each.
GPU memory: definitely more than 16GB, but 32GB is enough.
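
If that is more GPU memory than you have, one common workaround (not specific to this thread, and assuming accelerate is installed) is to load in half precision with automatic device placement:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder",
    torch_dtype=torch.float16,   # half precision: roughly halves weight memory
    device_map="auto",           # needs `pip install accelerate`; places/offloads layers
    use_auth_token=True,         # assumes you are already logged in
)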

Oh :( I suppose that is why the application gets killed when I try to run the example... thanks mate!

Summarizing the suggestions that fixed the problem for me (these are in addition to pip-installing requirements.txt):
a) accept the license agreement on https://huggingface.co/bigcode/starcoder
b) get an access token from https://huggingface.co/settings/tokens
c) run huggingface-cli login and paste in the token from (b)
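
If you prefer to do step (c) from Python instead of the shell, huggingface_hub exposes the same login flow (a sketch; the token string below is a placeholder):

from huggingface_hub import login

login()                  # interactive: prompts for the token from step (b)
# login(token="hf_...")  # non-interactive alternative (placeholder token string)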

@rameshn how do I accept the license agreement?


(screenshot: image.png, showing the agreement prompt on the model page)

Wait, silly me, do I need 10GB or 70GB of RAM?

70GB is for storage (the 7 checkpoint shards of ~10GB each on disk).
