Class CreateChatCompletionRequest

java.lang.Object
    com.sap.ai.sdk.foundationmodels.openai.generated.model.CreateChatCompletionRequest

Constructor Summary

CreateChatCompletionRequest()

Method Summary
Fluent setters (each sets a property of this CreateChatCompletionRequest instance and returns the same instance for chaining):
    messages, temperature, topP, n, stream, streamOptions, stop, maxTokens, maxCompletionTokens, presencePenalty, frequencyPenalty, logitBias, logprobs, topLogprobs, parallelToolCalls, responseFormat, seed, user, tools, toolChoice, functions, functionCall

Collection helpers (each returns the same instance):
    addMessagesItem(ChatCompletionRequestMessage messagesItem) - Add one messages instance to this CreateChatCompletionRequest.
    addToolsItem(ChatCompletionTool toolsItem) - Add one tools instance to this CreateChatCompletionRequest.
    addFunctionsItem(ChatCompletionFunctions functionsItem) - Add one functions instance to this CreateChatCompletionRequest.
    putlogitBiasItem(String key, Integer logitBiasItem) - Put one logitBias entry under the given key.

Plain setters (void):
    setMessages, setTemperature, setTopP, setN, setStream, setStreamOptions, setStop, setMaxTokens, setMaxCompletionTokens, setPresencePenalty, setFrequencyPenalty, setLogitBias, setLogprobs, setTopLogprobs, setParallelToolCalls, setResponseFormat, setSeed, setUser, setTools, setToolChoice, setFunctions, setFunctionCall, setCustomField(String customFieldName, Object customFieldValue)

Getters:
    getMessages, getTemperature, getTopP, getN, isStream, getStreamOptions, getStop, getMaxTokens, getMaxCompletionTokens, getPresencePenalty, getFrequencyPenalty, getLogitBias, isLogprobs, getTopLogprobs, isParallelToolCalls, getResponseFormat, getSeed, getUser, getTools, getToolChoice, getFunctions (deprecated), getFunctionCall (deprecated), getCustomFieldNames, getCustomField (deprecated; use toMap() instead), toMap

Object overrides:
    equals, hashCode, toString

Full parameter descriptions appear under Method Details below.
Constructor Details

CreateChatCompletionRequest

public CreateChatCompletionRequest()
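Every property setter documented below returns the same instance, so requests can be composed by chaining. The following self-contained sketch mimics that fluent pattern with a tiny stand-in class (not the real SDK type, whose dependencies are omitted here); the real CreateChatCompletionRequest chains the same way:

```java
import java.math.BigDecimal;

// Stand-in mirroring the fluent-setter pattern documented below;
// it is NOT the generated SDK class, only an illustration of its style.
class RequestSketch {
    private BigDecimal temperature;
    private Integer n;
    private Boolean stream;

    RequestSketch temperature(BigDecimal temperature) { this.temperature = temperature; return this; }
    RequestSketch n(Integer n) { this.n = n; return this; }
    RequestSketch stream(Boolean stream) { this.stream = stream; return this; }

    @Override public String toString() {
        return "temperature=" + temperature + ", n=" + n + ", stream=" + stream;
    }
}

public class FluentPatternExample {
    public static void main(String[] args) {
        // Each setter returns the same instance, so calls chain left to right.
        RequestSketch request = new RequestSketch()
                .temperature(BigDecimal.valueOf(0.2))
                .n(1)
                .stream(false);
        System.out.println(request);
    }
}
```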
Method Details

temperature

Set the temperature of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.

Returns:
The same instance of this CreateChatCompletionRequest class

getTemperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.

Returns:
temperature - The temperature of this CreateChatCompletionRequest instance.

setTemperature

Set the temperature of this CreateChatCompletionRequest instance.

Parameters:
temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.
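The "focused and deterministic" effect of a low temperature can be seen by scaling logits before a softmax. A minimal sketch with made-up logits (not real model output):

```java
public class TemperatureEffect {
    // Softmax over logits divided by temperature; lower temperature
    // concentrates probability mass on the highest-logit token.
    static double topProbability(double[] logits, double temperature) {
        double sum = 0.0;
        double[] exps = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            exps[i] = Math.exp(logits[i] / temperature);
            sum += exps[i];
        }
        double max = 0.0;
        for (double e : exps) max = Math.max(max, e / sum);
        return max;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.0}; // illustrative values only
        double focused = topProbability(logits, 0.2); // near-deterministic
        double random  = topProbability(logits, 1.8); // much flatter
        // The low-temperature distribution puts more mass on the top token.
        System.out.println(focused > random);
    }
}
```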
topP

Set the topP of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.

Returns:
The same instance of this CreateChatCompletionRequest class

getTopP

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.

Returns:
topP - The topP of this CreateChatCompletionRequest instance.

setTopP

Set the topP of this CreateChatCompletionRequest instance.

Parameters:
topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.
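Nucleus sampling keeps the smallest prefix of tokens, sorted by probability, whose cumulative mass reaches top_p. A sketch with an invented, already-sorted distribution:

```java
public class NucleusSampling {
    // Keep adding tokens (sorted by probability, descending) until their
    // cumulative probability reaches topP; everything after is discarded.
    static int nucleusSize(double[] sortedProbs, double topP) {
        double cumulative = 0.0;
        int kept = 0;
        for (double p : sortedProbs) {
            cumulative += p;
            kept++;
            if (cumulative >= topP) break;
        }
        return kept;
    }

    public static void main(String[] args) {
        double[] probs = {0.5, 0.3, 0.15, 0.05}; // illustrative, sorted descending
        // topP = 0.1 keeps only the single most likely token.
        System.out.println(nucleusSize(probs, 0.1));
        // topP = 0.9 admits the top three (0.5 + 0.3 + 0.15 = 0.95 >= 0.9).
        System.out.println(nucleusSize(probs, 0.9));
    }
}
```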
stream

Set the stream of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

Returns:
The same instance of this CreateChatCompletionRequest class

isStream

If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

Returns:
stream - The stream of this CreateChatCompletionRequest instance.

setStream

Set the stream of this CreateChatCompletionRequest instance.

Parameters:
stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
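A client consuming such a stream reads `data:` lines until the terminating `data: [DONE]` message. A minimal parsing sketch over a hard-coded event stream (payload contents are invented):

```java
import java.util.ArrayList;
import java.util.List;

public class SsePayloadParser {
    // Collect the payloads of data-only server-sent events, stopping at
    // the terminating "data: [DONE]" message described above.
    static List<String> collectPayloads(String[] lines) {
        List<String> payloads = new ArrayList<>();
        for (String line : lines) {
            if (!line.startsWith("data: ")) continue; // skip blank separator lines
            String payload = line.substring("data: ".length());
            if (payload.equals("[DONE]")) break;      // end of stream
            payloads.add(payload);
        }
        return payloads;
    }

    public static void main(String[] args) {
        String[] stream = {
            "data: {\"delta\":\"Hel\"}",
            "",
            "data: {\"delta\":\"lo\"}",
            "",
            "data: [DONE]"
        };
        System.out.println(collectPayloads(stream).size());
    }
}
```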
stop

@Nonnull
public CreateChatCompletionRequest stop(@Nullable CreateChatCompletionRequestAllOfStop stop)

Set the stop of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
stop - The stop of this CreateChatCompletionRequest

Returns:
The same instance of this CreateChatCompletionRequest class

getStop

Get stop

Returns:
stop - The stop of this CreateChatCompletionRequest instance.

setStop

Set the stop of this CreateChatCompletionRequest instance.

Parameters:
stop - The stop of this CreateChatCompletionRequest
maxTokens

Set the maxTokens of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
maxTokens - The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

Returns:
The same instance of this CreateChatCompletionRequest class

getMaxTokens

The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.

Returns:
maxTokens - The maxTokens of this CreateChatCompletionRequest instance.

setMaxTokens

Set the maxTokens of this CreateChatCompletionRequest instance.

Parameters:
maxTokens - The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
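Because input and generated tokens share the context length, the effective generation budget is the smaller of maxTokens and the room the prompt leaves. A sketch with illustrative numbers (real context lengths depend on the model):

```java
public class TokenBudget {
    // The generated-token budget is the smaller of maxTokens and whatever
    // room the model's context window leaves after the prompt.
    static int effectiveBudget(int contextLength, int promptTokens, int maxTokens) {
        return Math.min(maxTokens, contextLength - promptTokens);
    }

    public static void main(String[] args) {
        // 8192-token window, 7000-token prompt: only 1192 tokens remain,
        // even though maxTokens asked for 2048.
        System.out.println(effectiveBudget(8192, 7000, 2048));
    }
}
```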
maxCompletionTokens

@Nonnull
public CreateChatCompletionRequest maxCompletionTokens(@Nullable Integer maxCompletionTokens)

Set the maxCompletionTokens of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

Returns:
The same instance of this CreateChatCompletionRequest class

getMaxCompletionTokens

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

Returns:
maxCompletionTokens - The maxCompletionTokens of this CreateChatCompletionRequest instance.

setMaxCompletionTokens

Set the maxCompletionTokens of this CreateChatCompletionRequest instance.

Parameters:
maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
presencePenalty

Set the presencePenalty of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.

Returns:
The same instance of this CreateChatCompletionRequest class

getPresencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.

Returns:
presencePenalty - The presencePenalty of this CreateChatCompletionRequest instance.

setPresencePenalty

Set the presencePenalty of this CreateChatCompletionRequest instance.

Parameters:
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.
frequencyPenalty

Set the frequencyPenalty of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.

Returns:
The same instance of this CreateChatCompletionRequest class

getFrequencyPenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.

Returns:
frequencyPenalty - The frequencyPenalty of this CreateChatCompletionRequest instance.

setFrequencyPenalty

Set the frequencyPenalty of this CreateChatCompletionRequest instance.

Parameters:
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.
logitBias

Set the logitBias of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

Returns:
The same instance of this CreateChatCompletionRequest class

putlogitBiasItem

@Nonnull
public CreateChatCompletionRequest putlogitBiasItem(@Nonnull String key, @Nonnull Integer logitBiasItem)

Put one logitBias instance to this CreateChatCompletionRequest instance.

Parameters:
key - The String key of this logitBias instance
logitBiasItem - The logitBias that should be added under the given key

Returns:
The same instance of type CreateChatCompletionRequest

getLogitBias

Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

Returns:
logitBias - The logitBias of this CreateChatCompletionRequest instance.

setLogitBias

Set the logitBias of this CreateChatCompletionRequest instance.

Parameters:
logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
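The logit bias is an additive adjustment, keyed by token ID. A self-contained sketch (the token IDs below are made up; real IDs come from the model's tokenizer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogitBiasExample {
    public static void main(String[] args) {
        // Keys are token IDs as strings (invented here); values from -100
        // to 100 are added to the model's logits before sampling.
        Map<String, Integer> logitBias = new LinkedHashMap<>();
        logitBias.put("50256", -100); // effectively ban this token
        logitBias.put("15496", 5);    // nudge this token upward

        double rawLogit = 1.3;                              // hypothetical model logit
        double biased = rawLogit + logitBias.get("15496");  // bias is simply added
        System.out.println(biased);
    }
}
```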
user

Set the user of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
user - A unique identifier representing your end-user, which can help to monitor and detect abuse.

Returns:
The same instance of this CreateChatCompletionRequest class

getUser

A unique identifier representing your end-user, which can help to monitor and detect abuse.

Returns:
user - The user of this CreateChatCompletionRequest instance.

setUser

Set the user of this CreateChatCompletionRequest instance.

Parameters:
user - A unique identifier representing your end-user, which can help to monitor and detect abuse.
messages

@Nonnull
public CreateChatCompletionRequest messages(@Nonnull List<ChatCompletionRequestMessage> messages)

Set the messages of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
messages - A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).

Returns:
The same instance of this CreateChatCompletionRequest class

addMessagesItem

@Nonnull
public CreateChatCompletionRequest addMessagesItem(@Nonnull ChatCompletionRequestMessage messagesItem)

Add one messages instance to this CreateChatCompletionRequest.

Parameters:
messagesItem - The messages that should be added

Returns:
The same instance of type CreateChatCompletionRequest

getMessages

A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).

Returns:
messages - The messages of this CreateChatCompletionRequest instance.

setMessages

Set the messages of this CreateChatCompletionRequest instance.

Parameters:
messages - A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
logprobs

Set the logprobs of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
logprobs - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

Returns:
The same instance of this CreateChatCompletionRequest class

isLogprobs

Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

Returns:
logprobs - The logprobs of this CreateChatCompletionRequest instance.

setLogprobs

Set the logprobs of this CreateChatCompletionRequest instance.

Parameters:
logprobs - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
topLogprobs

Set the topLogprobs of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
topLogprobs - An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0. Maximum: 20.

Returns:
The same instance of this CreateChatCompletionRequest class

getTopLogprobs

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0. Maximum: 20.

Returns:
topLogprobs - The topLogprobs of this CreateChatCompletionRequest instance.

setTopLogprobs

Set the topLogprobs of this CreateChatCompletionRequest instance.

Parameters:
topLogprobs - An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0. Maximum: 20.
n

Set the n of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
n - How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1. Maximum: 128.

Returns:
The same instance of this CreateChatCompletionRequest class

getN

How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1. Maximum: 128.

Returns:
n - The n of this CreateChatCompletionRequest instance.

setN

Set the n of this CreateChatCompletionRequest instance.

Parameters:
n - How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1. Maximum: 128.
parallelToolCalls

Set the parallelToolCalls of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
parallelToolCalls - Whether to enable parallel function calling during tool use.

Returns:
The same instance of this CreateChatCompletionRequest class

isParallelToolCalls

Whether to enable parallel function calling during tool use.

Returns:
parallelToolCalls - The parallelToolCalls of this CreateChatCompletionRequest instance.

setParallelToolCalls

Set the parallelToolCalls of this CreateChatCompletionRequest instance.

Parameters:
parallelToolCalls - Whether to enable parallel function calling during tool use.
responseFormat

@Nonnull
public CreateChatCompletionRequest responseFormat(@Nullable CreateChatCompletionRequestAllOfResponseFormat responseFormat)

Set the responseFormat of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
responseFormat - The responseFormat of this CreateChatCompletionRequest

Returns:
The same instance of this CreateChatCompletionRequest class

getResponseFormat

Get responseFormat

Returns:
responseFormat - The responseFormat of this CreateChatCompletionRequest instance.

setResponseFormat

public void setResponseFormat(@Nullable CreateChatCompletionRequestAllOfResponseFormat responseFormat)

Set the responseFormat of this CreateChatCompletionRequest instance.

Parameters:
responseFormat - The responseFormat of this CreateChatCompletionRequest
seed

Set the seed of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
seed - This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808. Maximum: 9223372036854775807.

Returns:
The same instance of this CreateChatCompletionRequest class

getSeed

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808. Maximum: 9223372036854775807.

Returns:
seed - The seed of this CreateChatCompletionRequest instance.

setSeed

Set the seed of this CreateChatCompletionRequest instance.

Parameters:
seed - This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808. Maximum: 9223372036854775807.
streamOptions

@Nonnull
public CreateChatCompletionRequest streamOptions(@Nullable ChatCompletionStreamOptions streamOptions)

Set the streamOptions of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
streamOptions - The streamOptions of this CreateChatCompletionRequest

Returns:
The same instance of this CreateChatCompletionRequest class

getStreamOptions

Get streamOptions

Returns:
streamOptions - The streamOptions of this CreateChatCompletionRequest instance.

setStreamOptions

Set the streamOptions of this CreateChatCompletionRequest instance.

Parameters:
streamOptions - The streamOptions of this CreateChatCompletionRequest
tools

Set the tools of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
tools - A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

Returns:
The same instance of this CreateChatCompletionRequest class

addToolsItem

Add one tools instance to this CreateChatCompletionRequest.

Parameters:
toolsItem - The tools that should be added

Returns:
The same instance of type CreateChatCompletionRequest

getTools

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

Returns:
tools - The tools of this CreateChatCompletionRequest instance.

setTools

Set the tools of this CreateChatCompletionRequest instance.

Parameters:
tools - A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
toolChoice

@Nonnull
public CreateChatCompletionRequest toolChoice(@Nullable ChatCompletionToolChoiceOption toolChoice)

Set the toolChoice of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
toolChoice - The toolChoice of this CreateChatCompletionRequest

Returns:
The same instance of this CreateChatCompletionRequest class

getToolChoice

Get toolChoice

Returns:
toolChoice - The toolChoice of this CreateChatCompletionRequest instance.

setToolChoice

Set the toolChoice of this CreateChatCompletionRequest instance.

Parameters:
toolChoice - The toolChoice of this CreateChatCompletionRequest
functionCall

@Nonnull
public CreateChatCompletionRequest functionCall(@Nullable CreateChatCompletionRequestAllOfFunctionCall functionCall)

Set the functionCall of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
functionCall - The functionCall of this CreateChatCompletionRequest

Returns:
The same instance of this CreateChatCompletionRequest class

getFunctionCall

Deprecated.

Get functionCall

Returns:
functionCall - The functionCall of this CreateChatCompletionRequest instance.

setFunctionCall

Set the functionCall of this CreateChatCompletionRequest instance.

Parameters:
functionCall - The functionCall of this CreateChatCompletionRequest
functions

@Nonnull
public CreateChatCompletionRequest functions(@Nullable List<ChatCompletionFunctions> functions)

Set the functions of this CreateChatCompletionRequest instance and return the same instance.

Parameters:
functions - Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.

Returns:
The same instance of this CreateChatCompletionRequest class

addFunctionsItem

@Nonnull
public CreateChatCompletionRequest addFunctionsItem(@Nonnull ChatCompletionFunctions functionsItem)

Add one functions instance to this CreateChatCompletionRequest.

Parameters:
functionsItem - The functions that should be added

Returns:
The same instance of type CreateChatCompletionRequest

getFunctions

Deprecated.

Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.

Returns:
functions - The functions of this CreateChatCompletionRequest instance.

setFunctions

Set the functions of this CreateChatCompletionRequest instance.

Parameters:
functions - Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.
getCustomFieldNames

Get the names of the unrecognizable properties of the CreateChatCompletionRequest.

Returns:
The set of property names

getCustomField

@Nullable
@Deprecated
public Object getCustomField(@Nonnull String name) throws NoSuchElementException

Deprecated. Use toMap() instead.

Get the value of an unrecognizable property of this CreateChatCompletionRequest instance.

Parameters:
name - The name of the property

Returns:
The value of the property

Throws:
NoSuchElementException - If no property with the given name could be found.

toMap

Get the value of all properties of this CreateChatCompletionRequest instance including unrecognized properties.

Returns:
The map of all properties

setCustomField

Set an unrecognizable property of this CreateChatCompletionRequest instance. If the map previously contained a mapping for the key, the old value is replaced by the specified value.

Parameters:
customFieldName - The name of the property
customFieldValue - The value of the property

equals

hashCode

public int hashCode()

toString