Class ChatCompletionsRequestCommon

java.lang.Object
  com.sap.ai.sdk.foundationmodels.openai.generated.model.ChatCompletionsRequestCommon

public class ChatCompletionsRequestCommon

Constructor Summary

Constructors:
ChatCompletionsRequestCommon()

Method Summary
Fluent setters (each returns the same ChatCompletionsRequestCommon instance):
temperature(BigDecimal temperature), topP(BigDecimal topP), stream(Boolean stream), stop(...),
maxTokens(Integer maxTokens), maxCompletionTokens(Integer maxCompletionTokens),
presencePenalty(BigDecimal presencePenalty), frequencyPenalty(BigDecimal frequencyPenalty),
logitBias(Object logitBias), user(String user)

Getters:
getTemperature(), getTopP(), isStream(), getStop(), getMaxTokens(), getMaxCompletionTokens(),
getPresencePenalty(), getFrequencyPenalty(), getLogitBias(), getUser()

Plain setters (void):
setTemperature(BigDecimal temperature), setTopP(BigDecimal topP), setStream(Boolean stream),
setStop(...), setMaxTokens(Integer maxTokens), setMaxCompletionTokens(Integer maxCompletionTokens),
setPresencePenalty(BigDecimal presencePenalty), setFrequencyPenalty(BigDecimal frequencyPenalty),
setLogitBias(Object logitBias), setUser(String user)

Custom fields and Object overrides:
getCustomFieldNames(), getCustomField(String name) (Deprecated; use toMap() instead),
setCustomField(String customFieldName, Object customFieldValue), toMap(),
equals(Object o), hashCode(), toString()

Full descriptions follow under Method Details.
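Each fluent setter returns the receiver, so a request body can be assembled as a single chain. A minimal usage sketch, using only methods listed above; the field values are illustrative:

import java.math.BigDecimal;
import com.sap.ai.sdk.foundationmodels.openai.generated.model.ChatCompletionsRequestCommon;

// Build the common request fields via the fluent setters; each call
// returns the same ChatCompletionsRequestCommon instance.
ChatCompletionsRequestCommon request =
    new ChatCompletionsRequestCommon()
        .temperature(BigDecimal.valueOf(0.2)) // more focused, deterministic output
        .maxCompletionTokens(512)             // upper bound on generated tokens
        .user("end-user-1234");               // opaque end-user identifier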
Constructor Details

ChatCompletionsRequestCommon

public ChatCompletionsRequestCommon()

Method Details

temperature
Set the temperature of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getTemperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.

Returns:
The temperature of this ChatCompletionsRequestCommon instance.

setTemperature
Set the temperature of this ChatCompletionsRequestCommon instance.

Parameters:
temperature - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Minimum: 0. Maximum: 2.

topP
Set the topP of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getTopP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.

Returns:
The topP of this ChatCompletionsRequestCommon instance.

setTopP
Set the topP of this ChatCompletionsRequestCommon instance.

Parameters:
topP - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Minimum: 0. Maximum: 1.

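As both parameter descriptions recommend, adjust either temperature or top_p, not both. An illustrative sketch of the two alternatives:

// Option A: tune randomness via temperature, leaving top_p at its default.
ChatCompletionsRequestCommon byTemperature =
    new ChatCompletionsRequestCommon().temperature(BigDecimal.valueOf(0.8));

// Option B: tune via nucleus sampling instead; 0.1 keeps only the tokens
// comprising the top 10% probability mass.
ChatCompletionsRequestCommon byTopP =
    new ChatCompletionsRequestCommon().topP(BigDecimal.valueOf(0.1));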
stream
Set the stream of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.

Returns:
The same instance of this ChatCompletionsRequestCommon class

isStream
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.

Returns:
The stream of this ChatCompletionsRequestCommon instance.

setStream
Set the stream of this ChatCompletionsRequestCommon instance.

Parameters:
stream - If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.

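Enabling stream switches the response to data-only server-sent events terminated by a `data: [DONE]` message; consuming those events is up to the HTTP client and is not covered by this model class. A sketch, assuming the property is a Boolean (as isStream() suggests):

ChatCompletionsRequestCommon request = new ChatCompletionsRequestCommon();
request.setStream(true); // partial message deltas arrive as server-sent events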
stop
Set the stop of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
stop - The stop of this ChatCompletionsRequestCommon

Returns:
The same instance of this ChatCompletionsRequestCommon class

getStop
Get stop.

Returns:
The stop of this ChatCompletionsRequestCommon instance.

setStop
Set the stop of this ChatCompletionsRequestCommon instance.

Parameters:
stop - The stop of this ChatCompletionsRequestCommon

maxTokens
Set the maxTokens of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
maxTokens - The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getMaxTokens
The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.

Returns:
The maxTokens of this ChatCompletionsRequestCommon instance.

setMaxTokens
Set the maxTokens of this ChatCompletionsRequestCommon instance.

Parameters:
maxTokens - The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with o1 series models.

maxCompletionTokens
@Nonnull public ChatCompletionsRequestCommon maxCompletionTokens(@Nullable Integer maxCompletionTokens)

Set the maxCompletionTokens of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getMaxCompletionTokens
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

Returns:
The maxCompletionTokens of this ChatCompletionsRequestCommon instance.

setMaxCompletionTokens
Set the maxCompletionTokens of this ChatCompletionsRequestCommon instance.

Parameters:
maxCompletionTokens - An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

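Because max_tokens is deprecated in favor of max_completion_tokens and is not compatible with o1 series models, new code would typically bound the completion via maxCompletionTokens. A sketch:

// Preferred: caps visible output tokens plus reasoning tokens.
ChatCompletionsRequestCommon request =
    new ChatCompletionsRequestCommon().maxCompletionTokens(1024);

// Deprecated alternative; avoid for o1 series models:
// request.setMaxTokens(1024);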
presencePenalty
Set the presencePenalty of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getPresencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.

Returns:
The presencePenalty of this ChatCompletionsRequestCommon instance.

setPresencePenalty
Set the presencePenalty of this ChatCompletionsRequestCommon instance.

Parameters:
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Minimum: -2. Maximum: 2.

frequencyPenalty
@Nonnull public ChatCompletionsRequestCommon frequencyPenalty(@Nullable BigDecimal frequencyPenalty)

Set the frequencyPenalty of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getFrequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.

Returns:
The frequencyPenalty of this ChatCompletionsRequestCommon instance.

setFrequencyPenalty
Set the frequencyPenalty of this ChatCompletionsRequestCommon instance.

Parameters:
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Minimum: -2. Maximum: 2.

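Both penalties take BigDecimal values between -2.0 and 2.0: a positive presence penalty nudges the model toward new topics, while a positive frequency penalty discourages verbatim repetition. A sketch with illustrative values:

ChatCompletionsRequestCommon request =
    new ChatCompletionsRequestCommon()
        .presencePenalty(BigDecimal.valueOf(0.6))   // favor new topics
        .frequencyPenalty(BigDecimal.valueOf(0.4)); // dampen repeated lines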
logitBias
Set the logitBias of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getLogitBias
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

Returns:
The logitBias of this ChatCompletionsRequestCommon instance.

setLogitBias
Set the logitBias of this ChatCompletionsRequestCommon instance.

Parameters:
logitBias - Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

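The logit bias is typed as a plain Object here; one plausible representation of the JSON object the description refers to is a Map keyed by tokenizer token IDs. The token IDs below are hypothetical, and real IDs depend on the model's tokenizer:

import java.util.Map;

// Hypothetical token IDs mapped to bias values in [-100, 100].
Map<String, Integer> logitBias = Map.of(
    "50256", -100, // values near -100 effectively ban the token
    "1134", 5);    // moderate positive values increase its likelihood
ChatCompletionsRequestCommon request = new ChatCompletionsRequestCommon();
request.setLogitBias(logitBias);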
user
Set the user of this ChatCompletionsRequestCommon instance and return the same instance.

Parameters:
user - A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.

Returns:
The same instance of this ChatCompletionsRequestCommon class

getUser
A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.

Returns:
The user of this ChatCompletionsRequestCommon instance.

setUser
Set the user of this ChatCompletionsRequestCommon instance.

Parameters:
user - A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.

getCustomFieldNames
Get the names of the unrecognized properties of the ChatCompletionsRequestCommon.

Returns:
The set of property names

getCustomField
@Nullable @Deprecated public Object getCustomField(@Nonnull String name) throws NoSuchElementException

Deprecated. Use toMap() instead.

Get the value of an unrecognized property of this ChatCompletionsRequestCommon instance.

Parameters:
name - The name of the property

Returns:
The value of the property

Throws:
NoSuchElementException - If no property with the given name could be found.

toMap
Get the value of all properties of this ChatCompletionsRequestCommon instance, including unrecognized properties.

Returns:
The map of all properties

setCustomField
Set an unrecognized property of this ChatCompletionsRequestCommon instance. If the map previously contained a mapping for the key, the old value is replaced by the specified value.

Parameters:
customFieldName - The name of the property
customFieldValue - The value of the property

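Custom fields carry properties the generated model does not know about; toMap() then returns modeled and unmodeled properties together. A sketch with a hypothetical property name, assuming toMap() keys by property name:

import java.util.Map;
import java.util.Set;

ChatCompletionsRequestCommon request = new ChatCompletionsRequestCommon();
request.setCustomField("seed", 42); // "seed" is not modeled by this class

Set<String> customNames = request.getCustomFieldNames(); // contains "seed"
Map<String, Object> allProperties = request.toMap();     // modeled + custom fields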
equals
public boolean equals(Object o)

hashCode
public int hashCode()

toString
public String toString()