LowCodeAPI
OpenAI: Chat Completions
POST
https://api.lowcodeapi.com/openai/v1/chat/completions
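
A minimal request sketch in Python. The bearer-token Authorization header and the OpenAI-shaped response body are assumptions, not confirmed by this page; check LowCodeAPI's authentication docs for the actual scheme.

    import requests

    URL = "https://api.lowcodeapi.com/openai/v1/chat/completions"
    API_KEY = "your-lowcodeapi-key"  # hypothetical placeholder; see LowCodeAPI's auth docs

    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello in one sentence."},
        ],
    }

    # The payload is sent as application/json in the request body.
    resp = requests.post(
        URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},  # header name is an assumption
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes the proxy returns OpenAI's standard response shape.
    print(resp.json()["choices"][0]["message"]["content"])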
Request Payload

The payload is sent as application/json in the request body.

Payload
messages (array, required)
The messages to generate chat completions for, in the chat format.
model (string, required)
ID of the model to use. Currently only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported.
frequency_penalty (number)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
logit_bias (object)
Modify the likelihood of specified tokens appearing in the completion. Maps token IDs to a bias value from -100 to 100.
logprobs (boolean)
Whether to return log probabilities of the output tokens or not.
top_logprobs (number)
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
max_tokens (number)
The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
n (number)
How many chat completion choices to generate for each input message.
presence_penalty (number)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
response_format (object)
An object specifying the format that the model must output.
seed (integer)
If specified, the system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters should return the same result.
stop (string or array)
Up to 4 sequences where the API will stop generating further tokens.
stream (boolean)
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message (see the streaming sketch after this list).
stream_options (object)
Options for the streaming response.
temperature (number)
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
top_p (number)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
tools (array)
A list of tools the model may call.
tool_choice (object)
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message; auto means the model can pick between generating a message or calling one or more tools.
parallel_tool_calls (boolean)
Whether to enable parallel function calling during tool use.
user (string)
A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
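
Since the endpoint supports stream, here is a sketch of consuming the resulting server-sent events, under the same assumptions as above (bearer-token auth and OpenAI-shaped chunks, neither confirmed by this page):

    import json
    import requests

    URL = "https://api.lowcodeapi.com/openai/v1/chat/completions"
    API_KEY = "your-lowcodeapi-key"  # hypothetical placeholder

    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Stream a short poem."}],
        "stream": True,  # deltas arrive as data-only server-sent events
    }

    with requests.post(URL, json=payload,
                       headers={"Authorization": f"Bearer {API_KEY}"},  # assumed header
                       stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            # SSE frames look like: data: {...json chunk...}
            if not line or not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":  # the stream is terminated by a data: [DONE] message
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0]["delta"].get("content")
            if delta:
                print(delta, end="", flush=True)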