{"text": "When you use your [fine-tuned model](https://platform.openai.com/docs/guides/fine-tuning) for the first time in a while, it might take a little while for it to load. This sometimes causes the first few requests to fail with a 429 code and an error message that reads \"the model is still being loaded\".\n\n\n\nThe amount of time it takes to load a model will depend on the shared traffic and the size of the model. A larger model like `gpt-4`, for example, might take up to a few minutes to load, while smaller models might load much faster.\n\n\n\nOnce the model is loaded, ChatCompletion requests should be much faster and you're less likely to experience timeouts. \n\n\n\nWe recommend handling these errors programmatically and implementing retry logic. The first few calls may fail while the model loads. Retry the first call with exponential backoff until it succeeds, then continue as normal (see the \"Retrying with exponential backoff\" section of this [notebook](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb) for examples).\n\n", "title": "What is the \"model is still being loaded\" error?", "article_id": "6643004", "url": "https://help.openai.com/en/articles/6643004-what-is-the-model-is-still-being-loaded-error"}