{"text": "If the [`temperature`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature) parameter is set above 0, the model will likely produce different results each time - this is expected behavior. If you're seeing unexpected differences in the quality completions you receive from [Playground](https://platform.openai.com/playground) vs. the API with `temperature` set to 0, there are a few potential causes to consider. \n\n\n\nFirst, check that your prompt is exactly the same. Even slight differences, such as an extra space or newline character, can lead to different outputs. \n\n\n\nNext, ensure you're using the same parameters in both cases. For example, the `model` parameter set to `gpt-3.5-turbo` and `gpt-4` will produce different completions even with the same prompt, because `gpt-4` is a newer and more capable instruction-following [model](https://platform.openai.com/docs/models).\n\n\n\nIf you've double-checked all of these things and are still seeing discrepancies, ask for help on the [Community Forum](https://community.openai.com/), where users may have experienced similar issues or may be able to assist in troubleshooting your specific case.\n\n", "title": "Why am I getting different completions on Playground vs. the API?", "article_id": "6643200", "url": "https://help.openai.com/en/articles/6643200-why-am-i-getting-different-completions-on-playground-vs-the-api"}