This is a legacy endpoint. We recommend using the Chat Completions API for new projects, which provides more capabilities and better performance.
Create a completion
Creates a completion for the provided prompt and parameters.

Request body

model: ID of the model to use. You can use the List models API to see available models.
prompt: The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
max_tokens: The maximum number of tokens that can be generated in the completion.
temperature: Sampling temperature between 0 and 2. Higher values make output more random; lower values make it more focused.
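The effect of temperature can be sketched as dividing the raw logits by the temperature before the softmax; this is an illustrative sketch of the standard technique, not the service's actual implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable
    softmax. Lower temperatures sharpen the distribution toward the
    highest-logit token; higher temperatures flatten it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

For the same logits, a temperature of 0.5 concentrates more probability on the top token than a temperature of 2.0 does.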
top_p: An alternative to sampling with temperature, called nucleus sampling. The model considers only the tokens comprising the top_p probability mass.
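Nucleus sampling keeps the smallest set of tokens whose cumulative probability reaches top_p, then renormalizes over that set. A minimal sketch of the technique (not the service's actual code):

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of token indices whose cumulative
    probability reaches top_p, then renormalize over that set.
    Returns a dict mapping kept token index -> renormalized probability."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break  # the nucleus now covers top_p probability mass
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

With probabilities [0.5, 0.3, 0.15, 0.05] and top_p=0.9, the first three tokens form the nucleus and the last is excluded from sampling.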
n: How many completions to generate for each prompt.
stream: Whether to stream back partial progress. For streaming, use the create_streaming method instead.

logprobs: Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens.

echo: Echo back the prompt in addition to the completion.
stop: Up to 4 sequences where the API will stop generating further tokens.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text.
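The two penalties combine as a per-token logit adjustment: the frequency penalty scales with how many times a token has already appeared, while the presence penalty applies once if it has appeared at all. A sketch of that formula (illustrative, not the service's actual code):

```python
def apply_penalties(logits, counts, presence_penalty=0.0, frequency_penalty=0.0):
    """Adjust each token's logit based on its prior occurrences:
    logit -= count * frequency_penalty
             + (presence_penalty if count > 0 else 0).
    counts maps token index -> number of times it appeared so far."""
    adjusted = []
    for tok, logit in enumerate(logits):
        c = counts.get(tok, 0)
        penalty = c * frequency_penalty + (presence_penalty if c > 0 else 0.0)
        adjusted.append(logit - penalty)
    return adjusted
```

A token seen three times with frequency_penalty=0.1 and presence_penalty=0.5 loses 0.8 from its logit; unseen tokens are untouched.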
best_of: Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).

logit_bias: Modify the likelihood of specified tokens appearing in the completion. Maps token IDs to bias values from -100 to 100.
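Conceptually, logit_bias is added to the raw logits before sampling, so -100 effectively bans a token and +100 effectively forces it. A minimal sketch of that idea:

```python
def apply_logit_bias(logits, logit_bias):
    """Add per-token bias values (from -100 to 100) to the raw logits.
    logit_bias maps token index -> bias; unlisted tokens are unchanged.
    At the extremes, -100 effectively bans a token and +100 effectively
    guarantees its selection."""
    return [logit + logit_bias.get(tok, 0.0) for tok, logit in enumerate(logits)]
```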
suffix: The suffix that comes after a completion of inserted text.
seed: If specified, the system will make a best effort to sample deterministically for improved reproducibility.
user: A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
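Taken together, a JSON request body combining the parameters above might look like the following sketch (the model name and all values are illustrative; the exact serialization depends on your client):

```python
import json

# Hypothetical request body using the parameters documented above.
request_body = {
    "model": "gpt-3.5-turbo-instruct",  # illustrative model name
    "prompt": "Say this is a test",
    "max_tokens": 16,
    "temperature": 0.7,
    "top_p": 1.0,
    "n": 1,
    "stop": ["\n"],
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "user": "user-1234",  # illustrative end-user identifier
}
payload = json.dumps(request_body)  # serialized body for the HTTP request
```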
Response
Returns a Completion object.
id: Unique identifier for the completion.
object: The object type, always text_completion.

created: Unix timestamp of when the completion was created.
model: The model used for the completion.
choices: A list of completion choices.
usage: Token usage statistics for the request.
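A minimal sketch of reading these fields from a decoded response (the sample values below are illustrative, not real output):

```python
import json

# Illustrative Completion object, matching the fields documented above.
sample = json.loads("""{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1700000000,
  "model": "gpt-3.5-turbo-instruct",
  "choices": [{"text": "This is a test.", "index": 0, "finish_reason": "length"}],
  "usage": {"prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10}
}""")

# The generated text lives in choices; usage totals the token counts.
completion_text = sample["choices"][0]["text"]
total_tokens = sample["usage"]["total_tokens"]
```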
Examples
Basic completion
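A sketch of the request/response flow. FakeCompletionsClient is a hypothetical stand-in invented for this example so it runs without a network call; a real client library would issue an HTTP request with the same parameters:

```python
class FakeCompletionsClient:
    """Hypothetical stand-in for a real API client. Instead of making
    an HTTP request, it returns a canned Completion-shaped dict."""

    def create(self, model, prompt, max_tokens=16, **params):
        return {
            "id": "cmpl-abc123",
            "object": "text_completion",
            "model": model,
            "choices": [{"text": " a canned reply", "index": 0,
                         "finish_reason": "length"}],
        }

client = FakeCompletionsClient()
completion = client.create(
    model="gpt-3.5-turbo-instruct",  # illustrative model name
    prompt="Say this is a test",
    max_tokens=7,
)
print(completion["choices"][0]["text"])
```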
Streaming completion
Use create_streaming for Server-Sent Events streaming:
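The exact signature of create_streaming depends on the client library, but a Server-Sent Events stream is straightforward to consume: each event arrives on a `data:` line, and a `data: [DONE]` sentinel marks the end. A minimal parser sketch (the sample chunks are illustrative):

```python
import json

def iter_sse_data(lines):
    """Yield the decoded JSON payload of each 'data:' line in a
    Server-Sent Events stream, stopping at the [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # end of stream
        yield json.loads(payload)

# Illustrative stream: each chunk carries one partial completion.
stream = [
    'data: {"choices": [{"text": "Hel"}]}',
    'data: {"choices": [{"text": "lo"}]}',
    'data: [DONE]',
]
streamed_text = "".join(chunk["choices"][0]["text"] for chunk in iter_sse_data(stream))
```

Concatenating the `text` field of each chunk reconstructs the full completion as it streams in.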