Class ChatCompletionsRequestCommon

java.lang.Object
com.sap.ai.sdk.foundationmodels.openai.generated.model.ChatCompletionsRequestCommon

public class ChatCompletionsRequestCommon extends Object
  • Constructor Details

    • ChatCompletionsRequestCommon

      public ChatCompletionsRequestCommon()
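      A minimal usage sketch, assuming only this generated model class: the request is
      constructed directly and configured through the fluent setters documented below,
      each of which returns the same instance (requires java.math.BigDecimal).

        ChatCompletionsRequestCommon request =
            new ChatCompletionsRequestCommon()
                .temperature(BigDecimal.valueOf(0.2))
                .maxCompletionTokens(512)
                .user("customer-4711"); // hypothetical end-user identifier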
  • Method Details

    • temperature

      @Nonnull public ChatCompletionsRequestCommon temperature(@Nullable BigDecimal temperature)
      Set the temperature of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getTemperature

      @Nullable public BigDecimal getTemperature()
      What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
      Returns:
      temperature The temperature of this ChatCompletionsRequestCommon instance.
    • setTemperature

      public void setTemperature(@Nullable BigDecimal temperature)
      Set the temperature of this ChatCompletionsRequestCommon instance.
      Parameters:
      temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0 Maximum: 2
    • topP

      @Nonnull public ChatCompletionsRequestCommon topP(@Nullable BigDecimal topP)
      Set the topP of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getTopP

      @Nullable public BigDecimal getTopP()
      An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
      Returns:
      topP The topP of this ChatCompletionsRequestCommon instance.
    • setTopP

      public void setTopP(@Nullable BigDecimal topP)
      Set the topP of this ChatCompletionsRequestCommon instance.
      Parameters:
      topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0 Maximum: 1
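      As both descriptions note, altering temperature or topP but not both is generally
      recommended. A short sketch of the two alternatives:

        // Option A: lower temperature for more focused, deterministic output.
        ChatCompletionsRequestCommon byTemperature =
            new ChatCompletionsRequestCommon().temperature(new BigDecimal("0.2"));

        // Option B: nucleus sampling; only the top 10% probability mass is considered.
        ChatCompletionsRequestCommon byTopP =
            new ChatCompletionsRequestCommon().topP(new BigDecimal("0.1"));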
    • stream

      @Nonnull public ChatCompletionsRequestCommon stream(@Nullable Boolean stream)
      Set the stream of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • isStream

      @Nullable public Boolean isStream()
      If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.
      Returns:
      stream The stream of this ChatCompletionsRequestCommon instance.
    • setStream

      public void setStream(@Nullable Boolean stream)
      Set the stream of this ChatCompletionsRequestCommon instance.
      Parameters:
      stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.
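      A sketch of enabling streaming; consuming the resulting data-only server-sent
      events (terminated by `data: [DONE]`) is the job of whichever client sends the
      request, not of this model class:

        ChatCompletionsRequestCommon streaming =
            new ChatCompletionsRequestCommon().stream(true);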
    • stop

      @Nonnull public ChatCompletionsRequestCommon stop(@Nullable ChatCompletionsRequestCommonStop stop)
      Set the stop of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      stop - The stop of this ChatCompletionsRequestCommon
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getStop

      @Nonnull public ChatCompletionsRequestCommonStop getStop()
      Get the stop of this ChatCompletionsRequestCommon instance.
      Returns:
      stop The stop of this ChatCompletionsRequestCommon instance.
    • setStop

      public void setStop(@Nullable ChatCompletionsRequestCommonStop stop)
      Set the stop of this ChatCompletionsRequestCommon instance.
      Parameters:
      stop - The stop of this ChatCompletionsRequestCommon
    • maxTokens

      @Nonnull public ChatCompletionsRequestCommon maxTokens(@Nullable Integer maxTokens)
      Set the maxTokens of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      maxTokens - The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getMaxTokens

      @Nonnull public Integer getMaxTokens()
      The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.
      Returns:
      maxTokens The maxTokens of this ChatCompletionsRequestCommon instance.
    • setMaxTokens

      public void setMaxTokens(@Nullable Integer maxTokens)
      Set the maxTokens of this ChatCompletionsRequestCommon instance.
      Parameters:
      maxTokens - The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.
    • maxCompletionTokens

      @Nonnull public ChatCompletionsRequestCommon maxCompletionTokens(@Nullable Integer maxCompletionTokens)
      Set the maxCompletionTokens of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getMaxCompletionTokens

      @Nullable public Integer getMaxCompletionTokens()
      An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
      Returns:
      maxCompletionTokens The maxCompletionTokens of this ChatCompletionsRequestCommon instance.
    • setMaxCompletionTokens

      public void setMaxCompletionTokens(@Nullable Integer maxCompletionTokens)
      Set the maxCompletionTokens of this ChatCompletionsRequestCommon instance.
      Parameters:
      maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
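      Because maxTokens is deprecated in favor of `max_completion_tokens` and is not
      compatible with o1 series models, a sketch preferring the newer field:

        // Bounds visible output plus reasoning tokens for the completion.
        ChatCompletionsRequestCommon bounded =
            new ChatCompletionsRequestCommon().maxCompletionTokens(1024);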
    • presencePenalty

      @Nonnull public ChatCompletionsRequestCommon presencePenalty(@Nullable BigDecimal presencePenalty)
      Set the presencePenalty of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getPresencePenalty

      @Nonnull public BigDecimal getPresencePenalty()
      Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
      Returns:
      presencePenalty The presencePenalty of this ChatCompletionsRequestCommon instance.
    • setPresencePenalty

      public void setPresencePenalty(@Nullable BigDecimal presencePenalty)
      Set the presencePenalty of this ChatCompletionsRequestCommon instance.
      Parameters:
      presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2 Maximum: 2
    • frequencyPenalty

      @Nonnull public ChatCompletionsRequestCommon frequencyPenalty(@Nullable BigDecimal frequencyPenalty)
      Set the frequencyPenalty of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getFrequencyPenalty

      @Nonnull public BigDecimal getFrequencyPenalty()
      Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
      Returns:
      frequencyPenalty The frequencyPenalty of this ChatCompletionsRequestCommon instance.
    • setFrequencyPenalty

      public void setFrequencyPenalty(@Nullable BigDecimal frequencyPenalty)
      Set the frequencyPenalty of this ChatCompletionsRequestCommon instance.
      Parameters:
      frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2 Maximum: 2
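      A sketch combining both penalties within the documented -2 to 2 range; the
      concrete values are illustrative, not recommendations:

        ChatCompletionsRequestCommon penalized =
            new ChatCompletionsRequestCommon()
                .presencePenalty(new BigDecimal("0.6"))   // nudge toward new topics
                .frequencyPenalty(new BigDecimal("0.4")); // discourage verbatim repeats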
    • logitBias

      @Nonnull public ChatCompletionsRequestCommon logitBias(@Nullable Object logitBias)
      Set the logitBias of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getLogitBias

      @Nullable public Object getLogitBias()
      Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
      Returns:
      logitBias The logitBias of this ChatCompletionsRequestCommon instance.
    • setLogitBias

      public void setLogitBias(@Nullable Object logitBias)
      Set the logitBias of this ChatCompletionsRequestCommon instance.
      Parameters:
      logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
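      Since the parameter is typed as Object, a plain Map serializes to the expected
      JSON object (requires java.util.Map). The token ID below is illustrative only,
      not a real tokenizer lookup:

        // Keys are tokenizer token IDs as strings, values are biases in [-100, 100].
        ChatCompletionsRequestCommon biased =
            new ChatCompletionsRequestCommon()
                .logitBias(Map.of("50256", -100)); // -100 effectively bans the token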
    • user

      @Nonnull public ChatCompletionsRequestCommon user(@Nullable String user)
      Set the user of this ChatCompletionsRequestCommon instance and return the same instance.
      Parameters:
      user - A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.
      Returns:
      The same instance of this ChatCompletionsRequestCommon class
    • getUser

      @Nonnull public String getUser()
      A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.
      Returns:
      user The user of this ChatCompletionsRequestCommon instance.
    • setUser

      public void setUser(@Nullable String user)
      Set the user of this ChatCompletionsRequestCommon instance.
      Parameters:
      user - A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.
    • getCustomFieldNames

      @Nonnull public Set<String> getCustomFieldNames()
      Get the names of the unrecognizable properties of the ChatCompletionsRequestCommon.
      Returns:
      The set of property names
    • getCustomField

      @Nullable @Deprecated public Object getCustomField(@Nonnull String name) throws NoSuchElementException
      Deprecated.
      Use toMap() instead.
      Get the value of an unrecognizable property of this ChatCompletionsRequestCommon instance.
      Parameters:
      name - The name of the property
      Returns:
      The value of the property
      Throws:
      NoSuchElementException - If no property with the given name could be found.
    • toMap

      @Nonnull public Map<String,Object> toMap()
      Get the value of all properties of this ChatCompletionsRequestCommon instance including unrecognized properties.
      Returns:
      The map of all properties
    • setCustomField

      public void setCustomField(@Nonnull String customFieldName, @Nullable Object customFieldValue)
      Set an unrecognizable property of this ChatCompletionsRequestCommon instance. If the map previously contained a mapping for the key, the old value is replaced by the specified value.
      Parameters:
      customFieldName - The name of the property
      customFieldValue - The value of the property
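      A sketch of round-tripping a property this generated model does not declare,
      using only the methods documented above (requires java.util.Map and
      java.util.Set; the `seed` field is illustrative):

        ChatCompletionsRequestCommon request = new ChatCompletionsRequestCommon();
        request.setCustomField("seed", 42); // stored alongside the typed fields

        Set<String> names = request.getCustomFieldNames(); // contains "seed"
        Map<String, Object> all = request.toMap(); // typed and custom properties together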
    • equals

      public boolean equals(@Nullable Object o)
      Overrides:
      equals in class Object
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class Object
    • toString

      @Nonnull public String toString()
      Overrides:
      toString in class Object