Class CreateChatCompletionRequest

java.lang.Object
com.sap.ai.sdk.foundationmodels.openai.generated.model.CreateChatCompletionRequest

public class CreateChatCompletionRequest extends Object
CreateChatCompletionRequest
  • Constructor Details

    • CreateChatCompletionRequest

      public CreateChatCompletionRequest()
  • Method Details

    • temperature

      @Nonnull public CreateChatCompletionRequest temperature(@Nullable BigDecimal temperature)
      Set the temperature of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getTemperature

      @Nullable public BigDecimal getTemperature()
      What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
      Returns:
      temperature The temperature of this CreateChatCompletionRequest instance.
    • setTemperature

      public void setTemperature(@Nullable BigDecimal temperature)
      Set the temperature of this CreateChatCompletionRequest instance.
      Parameters:
      temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
    • topP

      @Nonnull public CreateChatCompletionRequest topP(@Nullable BigDecimal topP)
      Set the topP of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getTopP

      @Nullable public BigDecimal getTopP()
      An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
      Returns:
      topP The topP of this CreateChatCompletionRequest instance.
    • setTopP

      public void setTopP(@Nullable BigDecimal topP)
      Set the topP of this CreateChatCompletionRequest instance.
      Parameters:
      topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
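The documentation above constrains `temperature` to [0, 2] and `topP` to [0, 1], and recommends setting only one of the two. The helper below is a hypothetical sketch of those rules, not part of the SDK:

```java
import java.math.BigDecimal;

// Hypothetical helper mirroring the documented constraints on the two
// sampling parameters: temperature in [0, 2], top_p in [0, 1], and the
// recommendation to customize at most one of them.
class SamplingParams {
    static BigDecimal clamp(BigDecimal value, double min, double max) {
        BigDecimal lo = BigDecimal.valueOf(min);
        BigDecimal hi = BigDecimal.valueOf(max);
        if (value.compareTo(lo) < 0) return lo;
        if (value.compareTo(hi) > 0) return hi;
        return value;
    }

    // True when at most one of the two parameters is customized.
    static boolean followsRecommendation(BigDecimal temperature, BigDecimal topP) {
        return temperature == null || topP == null;
    }
}
```

A caller would clamp a user-supplied value before passing it to `temperature(...)` or `topP(...)`; both setters accept `null` to fall back to the service default.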
    • stream

      @Nonnull public CreateChatCompletionRequest stream(@Nullable Boolean stream)
      Set the stream of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • isStream

      @Nullable public Boolean isStream()
      If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
      Returns:
      stream The stream of this CreateChatCompletionRequest instance.
    • setStream

      public void setStream(@Nullable Boolean stream)
      Set the stream of this CreateChatCompletionRequest instance.
      Parameters:
      stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
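When `stream` is enabled, the response arrives as the server-sent events described above: each event is a `data: <payload>` line, and a literal `data: [DONE]` terminates the stream. A minimal, SDK-independent sketch of consuming that format:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the server-sent event framing used by streamed completions:
// collect each "data: <json>" payload until the "data: [DONE]" terminator.
class SseSketch {
    static List<String> collectDeltas(List<String> rawLines) {
        List<String> deltas = new ArrayList<>();
        for (String line : rawLines) {
            if (!line.startsWith("data: ")) continue;   // skip comments/keep-alives
            String payload = line.substring("data: ".length());
            if ("[DONE]".equals(payload)) break;        // terminator, not JSON
            deltas.add(payload);                        // JSON chunk with a message delta
        }
        return deltas;
    }
}
```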
    • stop

      @Nonnull public CreateChatCompletionRequest stop(@Nullable CreateChatCompletionRequestAllOfStop stop)
      Set the stop of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      stop - The stop of this CreateChatCompletionRequest
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getStop

      @Nonnull public CreateChatCompletionRequestAllOfStop getStop()
      Get stop
      Returns:
      stop The stop of this CreateChatCompletionRequest instance.
    • setStop

      public void setStop(@Nullable CreateChatCompletionRequestAllOfStop stop)
      Set the stop of this CreateChatCompletionRequest instance.
      Parameters:
      stop - The stop of this CreateChatCompletionRequest
    • maxTokens

      @Nonnull public CreateChatCompletionRequest maxTokens(@Nullable Integer maxTokens)
      Set the maxTokens of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      maxTokens - The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getMaxTokens

      @Nullable public Integer getMaxTokens()
      The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
      Returns:
      maxTokens The maxTokens of this CreateChatCompletionRequest instance.
    • setMaxTokens

      public void setMaxTokens(@Nullable Integer maxTokens)
      Set the maxTokens of this CreateChatCompletionRequest instance.
      Parameters:
      maxTokens - The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
    • maxCompletionTokens

      @Nonnull public CreateChatCompletionRequest maxCompletionTokens(@Nullable Integer maxCompletionTokens)
      Set the maxCompletionTokens of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getMaxCompletionTokens

      @Nullable public Integer getMaxCompletionTokens()
      An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
      Returns:
      maxCompletionTokens The maxCompletionTokens of this CreateChatCompletionRequest instance.
    • setMaxCompletionTokens

      public void setMaxCompletionTokens(@Nullable Integer maxCompletionTokens)
      Set the maxCompletionTokens of this CreateChatCompletionRequest instance.
      Parameters:
      maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
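As the descriptions above note, the sum of input tokens and generated tokens must fit within the model's context length, so the largest safe `maxTokens` value is the remaining budget after the prompt. A hypothetical helper (not part of the SDK) capturing that arithmetic:

```java
// Hypothetical budget helper: input tokens plus generated tokens are limited
// by the model's context length, so the remaining budget bounds max_tokens.
class TokenBudget {
    static int maxGeneratable(int contextLength, int promptTokens) {
        return Math.max(0, contextLength - promptTokens); // never negative
    }
}
```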
    • presencePenalty

      @Nonnull public CreateChatCompletionRequest presencePenalty(@Nullable BigDecimal presencePenalty)
      Set the presencePenalty of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getPresencePenalty

      @Nullable public BigDecimal getPresencePenalty()
      Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
      Returns:
      presencePenalty The presencePenalty of this CreateChatCompletionRequest instance.
    • setPresencePenalty

      public void setPresencePenalty(@Nullable BigDecimal presencePenalty)
      Set the presencePenalty of this CreateChatCompletionRequest instance.
      Parameters:
      presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
    • frequencyPenalty

      @Nonnull public CreateChatCompletionRequest frequencyPenalty(@Nullable BigDecimal frequencyPenalty)
      Set the frequencyPenalty of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getFrequencyPenalty

      @Nullable public BigDecimal getFrequencyPenalty()
      Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
      Returns:
      frequencyPenalty The frequencyPenalty of this CreateChatCompletionRequest instance.
    • setFrequencyPenalty

      public void setFrequencyPenalty(@Nullable BigDecimal frequencyPenalty)
      Set the frequencyPenalty of this CreateChatCompletionRequest instance.
      Parameters:
      frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
    • logitBias

      @Nonnull public CreateChatCompletionRequest logitBias(@Nullable Map<String,Integer> logitBias)
      Set the logitBias of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • putlogitBiasItem

      @Nonnull public CreateChatCompletionRequest putlogitBiasItem(@Nonnull String key, @Nonnull Integer logitBiasItem)
      Put one logitBias entry into this CreateChatCompletionRequest instance.
      Parameters:
      key - The String key of this logitBias instance
      logitBiasItem - The logitBias that should be added under the given key
      Returns:
      The same instance of type CreateChatCompletionRequest
    • getLogitBias

      @Nullable public Map<String,Integer> getLogitBias()
      Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
      Returns:
      logitBias The logitBias of this CreateChatCompletionRequest instance.
    • setLogitBias

      public void setLogitBias(@Nullable Map<String,Integer> logitBias)
      Set the logitBias of this CreateChatCompletionRequest instance.
      Parameters:
      logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
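The `logitBias` map keys tokens by their token ID (as a string) and values by a bias in [-100, 100]. The sketch below builds such a map with plain `java.util.Map`; the token IDs are illustrative placeholders, not real tokenizer output:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds an illustrative logit_bias map: string token IDs mapped to biases
// in [-100, 100]. -100 effectively bans a token; small values nudge it.
// The IDs "50256" and "1234" are placeholders, not real tokenizer output.
class LogitBiasExample {
    static Map<String, Integer> buildBias() {
        Map<String, Integer> bias = new LinkedHashMap<>();
        bias.put("50256", -100); // effectively ban this token
        bias.put("1234", 5);     // mildly encourage this token
        return bias;
    }
}
```

Such a map could then be passed to `logitBias(...)`, or built incrementally with `putlogitBiasItem(...)`.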
    • user

      @Nonnull public CreateChatCompletionRequest user(@Nullable String user)
      Set the user of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      user - A unique identifier representing your end-user, which can help to monitor and detect abuse.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getUser

      @Nonnull public String getUser()
      A unique identifier representing your end-user, which can help to monitor and detect abuse.
      Returns:
      user The user of this CreateChatCompletionRequest instance.
    • setUser

      public void setUser(@Nullable String user)
      Set the user of this CreateChatCompletionRequest instance.
      Parameters:
      user - A unique identifier representing your end-user, which can help to monitor and detect abuse.
    • messages

      @Nonnull public CreateChatCompletionRequest messages(@Nonnull List<ChatCompletionRequestMessage> messages)
      Set the messages of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      messages - A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • addMessagesItem

      @Nonnull public CreateChatCompletionRequest addMessagesItem(@Nonnull ChatCompletionRequestMessage messagesItem)
      Add one messages instance to this CreateChatCompletionRequest.
      Parameters:
      messagesItem - The messages that should be added
      Returns:
      The same instance of type CreateChatCompletionRequest
    • getMessages

      @Nonnull public List<ChatCompletionRequestMessage> getMessages()
      A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
      Returns:
      messages The messages of this CreateChatCompletionRequest instance.
    • setMessages

      public void setMessages(@Nonnull List<ChatCompletionRequestMessage> messages)
      Set the messages of this CreateChatCompletionRequest instance.
      Parameters:
      messages - A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
    • logprobs

      @Nonnull public CreateChatCompletionRequest logprobs(@Nullable Boolean logprobs)
      Set the logprobs of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      logprobs - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • isLogprobs

      @Nullable public Boolean isLogprobs()
      Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
      Returns:
      logprobs The logprobs of this CreateChatCompletionRequest instance.
    • setLogprobs

      public void setLogprobs(@Nullable Boolean logprobs)
      Set the logprobs of this CreateChatCompletionRequest instance.
      Parameters:
      logprobs - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
    • topLogprobs

      @Nonnull public CreateChatCompletionRequest topLogprobs(@Nullable Integer topLogprobs)
      Set the topLogprobs of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      topLogprobs - An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0 Maximum: 20
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getTopLogprobs

      @Nullable public Integer getTopLogprobs()
      An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0 Maximum: 20
      Returns:
      topLogprobs The topLogprobs of this CreateChatCompletionRequest instance.
    • setTopLogprobs

      public void setTopLogprobs(@Nullable Integer topLogprobs)
      Set the topLogprobs of this CreateChatCompletionRequest instance.
      Parameters:
      topLogprobs - An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. Minimum: 0 Maximum: 20
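The description above ties the two fields together: `topLogprobs` is only valid when `logprobs` is `true`, and must lie in [0, 20]. A hypothetical pre-flight check (not part of the SDK) mirroring that rule:

```java
// Hypothetical validation mirroring the documented rule: top_logprobs
// requires logprobs=true and must be within 0..20.
class LogprobsCheck {
    static boolean isValid(Boolean logprobs, Integer topLogprobs) {
        if (topLogprobs == null) return true;               // nothing to validate
        if (!Boolean.TRUE.equals(logprobs)) return false;   // logprobs must be true
        return topLogprobs >= 0 && topLogprobs <= 20;       // documented range
    }
}
```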
    • n

      @Nonnull public CreateChatCompletionRequest n(@Nullable Integer n)
      Set the n of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      n - How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1 Maximum: 128
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getN

      @Nullable public Integer getN()
      How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1 Maximum: 128
      Returns:
      n The n of this CreateChatCompletionRequest instance.
    • setN

      public void setN(@Nullable Integer n)
      Set the n of this CreateChatCompletionRequest instance.
      Parameters:
      n - How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. Minimum: 1 Maximum: 128
    • parallelToolCalls

      @Nonnull public CreateChatCompletionRequest parallelToolCalls(@Nullable Boolean parallelToolCalls)
      Set the parallelToolCalls of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      parallelToolCalls - Whether to enable parallel function calling during tool use.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • isParallelToolCalls

      @Nonnull public Boolean isParallelToolCalls()
      Whether to enable parallel function calling during tool use.
      Returns:
      parallelToolCalls The parallelToolCalls of this CreateChatCompletionRequest instance.
    • setParallelToolCalls

      public void setParallelToolCalls(@Nullable Boolean parallelToolCalls)
      Set the parallelToolCalls of this CreateChatCompletionRequest instance.
      Parameters:
      parallelToolCalls - Whether to enable parallel function calling during tool use.
    • responseFormat

      @Nonnull public CreateChatCompletionRequest responseFormat(@Nullable CreateChatCompletionRequestAllOfResponseFormat responseFormat)
      Set the responseFormat of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      responseFormat - The responseFormat of this CreateChatCompletionRequest
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getResponseFormat

      @Nonnull public CreateChatCompletionRequestAllOfResponseFormat getResponseFormat()
      Get responseFormat
      Returns:
      responseFormat The responseFormat of this CreateChatCompletionRequest instance.
    • setResponseFormat

      public void setResponseFormat(@Nullable CreateChatCompletionRequestAllOfResponseFormat responseFormat)
      Set the responseFormat of this CreateChatCompletionRequest instance.
      Parameters:
      responseFormat - The responseFormat of this CreateChatCompletionRequest
    • seed

      @Nonnull public CreateChatCompletionRequest seed(@Nullable Integer seed)
      Set the seed of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      seed - This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808 Maximum: 9223372036854775807
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getSeed

      @Nullable public Integer getSeed()
      This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808 Maximum: 9223372036854775807
      Returns:
      seed The seed of this CreateChatCompletionRequest instance.
    • setSeed

      public void setSeed(@Nullable Integer seed)
      Set the seed of this CreateChatCompletionRequest instance.
      Parameters:
      seed - This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. Minimum: -9223372036854775808 Maximum: 9223372036854775807
    • streamOptions

      @Nonnull public CreateChatCompletionRequest streamOptions(@Nullable ChatCompletionStreamOptions streamOptions)
      Set the streamOptions of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      streamOptions - The streamOptions of this CreateChatCompletionRequest
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getStreamOptions

      @Nullable public ChatCompletionStreamOptions getStreamOptions()
      Get streamOptions
      Returns:
      streamOptions The streamOptions of this CreateChatCompletionRequest instance.
    • setStreamOptions

      public void setStreamOptions(@Nullable ChatCompletionStreamOptions streamOptions)
      Set the streamOptions of this CreateChatCompletionRequest instance.
      Parameters:
      streamOptions - The streamOptions of this CreateChatCompletionRequest
    • tools

      @Nonnull public CreateChatCompletionRequest tools(@Nullable List<ChatCompletionTool> tools)
      Set the tools of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      tools - A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • addToolsItem

      @Nonnull public CreateChatCompletionRequest addToolsItem(@Nonnull ChatCompletionTool toolsItem)
      Add one tools instance to this CreateChatCompletionRequest.
      Parameters:
      toolsItem - The tools that should be added
      Returns:
      The same instance of type CreateChatCompletionRequest
    • getTools

      @Nonnull public List<ChatCompletionTool> getTools()
      A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
      Returns:
      tools The tools of this CreateChatCompletionRequest instance.
    • setTools

      public void setTools(@Nullable List<ChatCompletionTool> tools)
      Set the tools of this CreateChatCompletionRequest instance.
      Parameters:
      tools - A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
    • toolChoice

      @Nonnull public CreateChatCompletionRequest toolChoice(@Nullable ChatCompletionToolChoiceOption toolChoice)
      Set the toolChoice of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      toolChoice - The toolChoice of this CreateChatCompletionRequest
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getToolChoice

      @Nonnull public ChatCompletionToolChoiceOption getToolChoice()
      Get toolChoice
      Returns:
      toolChoice The toolChoice of this CreateChatCompletionRequest instance.
    • setToolChoice

      public void setToolChoice(@Nullable ChatCompletionToolChoiceOption toolChoice)
      Set the toolChoice of this CreateChatCompletionRequest instance.
      Parameters:
      toolChoice - The toolChoice of this CreateChatCompletionRequest
    • functionCall

      @Nonnull public CreateChatCompletionRequest functionCall(@Nullable CreateChatCompletionRequestAllOfFunctionCall functionCall)
      Set the functionCall of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      functionCall - The functionCall of this CreateChatCompletionRequest
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • getFunctionCall

      @Deprecated @Nonnull public CreateChatCompletionRequestAllOfFunctionCall getFunctionCall()
      Deprecated.
      Get functionCall
      Returns:
      functionCall The functionCall of this CreateChatCompletionRequest instance.
    • setFunctionCall

      public void setFunctionCall(@Nullable CreateChatCompletionRequestAllOfFunctionCall functionCall)
      Set the functionCall of this CreateChatCompletionRequest instance.
      Parameters:
      functionCall - The functionCall of this CreateChatCompletionRequest
    • functions

      @Nonnull public CreateChatCompletionRequest functions(@Nullable List<ChatCompletionFunctions> functions)
      Set the functions of this CreateChatCompletionRequest instance and return the same instance.
      Parameters:
      functions - Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.
      Returns:
      The same instance of this CreateChatCompletionRequest class
    • addFunctionsItem

      @Nonnull public CreateChatCompletionRequest addFunctionsItem(@Nonnull ChatCompletionFunctions functionsItem)
      Add one functions instance to this CreateChatCompletionRequest.
      Parameters:
      functionsItem - The functions that should be added
      Returns:
      The same instance of type CreateChatCompletionRequest
    • getFunctions

      @Deprecated @Nonnull public List<ChatCompletionFunctions> getFunctions()
      Deprecated.
      Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.
      Returns:
      functions The functions of this CreateChatCompletionRequest instance.
    • setFunctions

      public void setFunctions(@Nullable List<ChatCompletionFunctions> functions)
      Set the functions of this CreateChatCompletionRequest instance.
      Parameters:
      functions - Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for.
    • getCustomFieldNames

      @Nonnull public Set<String> getCustomFieldNames()
      Get the names of the unrecognizable properties of the CreateChatCompletionRequest.
      Returns:
      The set of properties names
    • getCustomField

      @Nullable @Deprecated public Object getCustomField(@Nonnull String name) throws NoSuchElementException
      Deprecated.
      Use toMap() instead.
      Get the value of an unrecognizable property of this CreateChatCompletionRequest instance.
      Parameters:
      name - The name of the property
      Returns:
      The value of the property
      Throws:
      NoSuchElementException - If no property with the given name could be found.
    • toMap

      @Nonnull public Map<String,Object> toMap()
      Get the value of all properties of this CreateChatCompletionRequest instance including unrecognized properties.
      Returns:
      The map of all properties
    • setCustomField

      public void setCustomField(@Nonnull String customFieldName, @Nullable Object customFieldValue)
      Set an unrecognizable property of this CreateChatCompletionRequest instance. If the map previously contained a mapping for the key, the old value is replaced by the specified value.
      Parameters:
      customFieldName - The name of the property
      customFieldValue - The value of the property
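The replace-on-existing-key behavior described for `setCustomField`, together with `getCustomField` throwing `NoSuchElementException` for unknown names, follows ordinary `java.util.Map` semantics. A self-contained sketch of that contract, with a plain map standing in for the instance's custom-field storage:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Sketch of the custom-field contract: putting a value under an existing
// key replaces the old value, and reading an unknown key throws.
class CustomFieldSketch {
    private final Map<String, Object> customFields = new LinkedHashMap<>();

    void setCustomField(String name, Object value) {
        customFields.put(name, value); // old mapping, if any, is replaced
    }

    Object getCustomField(String name) {
        if (!customFields.containsKey(name)) {
            throw new NoSuchElementException("No property: " + name);
        }
        return customFields.get(name);
    }
}
```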
    • equals

      public boolean equals(@Nullable Object o)
      Overrides:
      equals in class Object
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class Object
    • toString

      @Nonnull public String toString()
      Overrides:
      toString in class Object