Class OpenAiCompletionParameters
java.lang.Object
com.sap.ai.sdk.foundationmodels.openai.model.OpenAiCompletionParameters
- Direct Known Subclasses:
OpenAiChatCompletionParameters
Deprecated.
OpenAI completion input parameters.
-
Nested Class Summary
Nested Classes
static class: Deprecated. "stream_options": { "include_usage": "true" } -
Constructor Summary
Constructors -
Method Summary
Modifier and Type / Method / Description
protected boolean canEqual Deprecated.
void enableStreaming() Deprecated. Please use OpenAiClient.streamChatCompletionDeltas(OpenAiChatCompletionParameters) instead.
boolean equals Deprecated.
int hashCode() Deprecated.
setFrequencyPenalty(Double frequencyPenalty) Deprecated. Number between -2.0 and 2.0.
setLogitBias(Map<String, Object> logitBias) Deprecated. Modify the likelihood of specified tokens appearing in the completion.
setMaxTokens(Integer maxTokens) Deprecated. The maximum number of tokens that can be generated in the completion.
setN Deprecated. How many completions to generate for each prompt.
setPresencePenalty(Double presencePenalty) Deprecated. Number between -2.0 and 2.0.
setStop Deprecated. Up to four sequences where the API will stop generating further tokens.
setTemperature(Double temperature) Deprecated. What sampling temperature to use, between 0 and 2.
setTopP Deprecated. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
setUser Deprecated. A unique identifier representing your end-user, which can help in monitoring and detecting abuse.
toString() Deprecated.
-
Constructor Details
-
OpenAiCompletionParameters
public OpenAiCompletionParameters() Deprecated.
-
-
Method Details
-
enableStreaming
public void enableStreaming() Deprecated. Please use OpenAiClient.streamChatCompletionDeltas(OpenAiChatCompletionParameters) instead. Enable streaming of the completion. If enabled, partial message deltas will be sent.
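With streaming enabled, partial message deltas arrive incrementally rather than as one final response. The stdlib-only sketch below (not the SDK's actual delta type, which this page does not show) illustrates how a client would typically assemble the full message from such deltas:

```java
import java.util.stream.Stream;

public class StreamingDemo {
    public static void main(String[] args) {
        // Hypothetical deltas as plain strings; a real client would read
        // them from the streaming response and concatenate their content.
        Stream<String> deltas = Stream.of("Hel", "lo, ", "world", "!");
        String full = deltas.reduce("", String::concat);
        System.out.println(full); // Hello, world!
    }
}
```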
-
setStop
Deprecated. Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence.
- Parameters:
values - The stop sequences.
- Returns:
this instance for chaining.
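As a stdlib-only illustration of the documented behavior (the returned text won't contain the stop sequence), the hypothetical helper below truncates generated text at the earliest of the configured stop sequences:

```java
import java.util.List;

public class StopSequenceDemo {
    // Hypothetical helper, not part of the SDK: cut the text at the
    // first occurrence of any stop sequence, mirroring the documented
    // rule that the returned text won't contain the stop sequence.
    static String truncateAtStop(String text, List<String> stops) {
        int cut = text.length();
        for (String stop : stops) {
            int idx = text.indexOf(stop);
            if (idx >= 0 && idx < cut) {
                cut = idx;
            }
        }
        return text.substring(0, cut);
    }

    public static void main(String[] args) {
        String generated = "Answer: 42\nEND\nextra tokens";
        System.out.println(truncateAtStop(generated, List.of("END", "STOP")));
    }
}
```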
-
equals
Deprecated. -
canEqual
Deprecated. -
hashCode
public int hashCode() Deprecated. -
toString
Deprecated. -
setMaxTokens
Deprecated. The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
- Returns:
this.
-
setTemperature
Deprecated. What sampling temperature to use, between 0 and 2. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.
- Returns:
this.
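The effect of temperature can be sketched with plain softmax arithmetic. This stdlib-only example (not SDK code) shows how dividing the logits by the temperature sharpens the sampling distribution at low values and flattens it at high values:

```java
public class TemperatureDemo {
    // Illustrative only: temperature divides the logits before softmax,
    // so temperatures below 1 concentrate probability on the top token
    // and temperatures above 1 spread it out.
    static double[] softmax(double[] logits, double temperature) {
        double[] probs = new double[logits.length];
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l / temperature);
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            // Subtracting the max keeps the exponentials numerically stable.
            probs[i] = Math.exp(logits[i] / temperature - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.1};
        // Low temperature: more probability mass on the top token.
        System.out.println(softmax(logits, 0.5)[0]);
        // High temperature: flatter, riskier distribution.
        System.out.println(softmax(logits, 2.0)[0]);
    }
}
```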
-
setTopP
Deprecated. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- Returns:
this.
-
setLogitBias
Deprecated. Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.
- Returns:
this.
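The documented arithmetic (the bias is added to the logits prior to sampling) can be sketched in plain Java; the token IDs and bias values below are only examples:

```java
import java.util.Map;

public class LogitBiasDemo {
    // Illustrative only: add each configured bias to the logit of the
    // corresponding token ID before sampling. A bias of -100 pushes the
    // logit so far down that the token is effectively banned.
    static double[] applyBias(double[] logits, Map<Integer, Double> bias) {
        double[] out = logits.clone();
        for (Map.Entry<Integer, Double> e : bias.entrySet()) {
            out[e.getKey()] += e.getValue();
        }
        return out;
    }

    public static void main(String[] args) {
        double[] logits = {1.5, 0.3, 2.1};
        // Ban the token with ID 2 (e.g. an end-of-text token).
        double[] biased = applyBias(logits, Map.of(2, -100.0));
        System.out.println(biased[2]); // far below the other logits
    }
}
```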
-
setUser
Deprecated. A unique identifier representing your end-user, which can help in monitoring and detecting abuse.
- Returns:
this.
-
setN
Deprecated. How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
- Returns:
this.
-
setPresencePenalty
Deprecated. Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- Returns:
this.
-
setFrequencyPenalty
Deprecated. Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- Returns:
this.
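Both penalties can be illustrated with a small sketch. The exact effect varies per model, but a common OpenAI-style formulation (an assumption here, not taken from this Javadoc) subtracts the frequency penalty once per prior occurrence of a token and the presence penalty once if the token has appeared at all:

```java
public class PenaltyDemo {
    // Illustrative only: a common formulation is
    //   logit - count * frequencyPenalty - (count > 0 ? presencePenalty : 0)
    // where count is how often the token has already appeared.
    static double penalize(double logit, int count,
                           double presencePenalty, double frequencyPenalty) {
        return logit
                - count * frequencyPenalty
                - (count > 0 ? presencePenalty : 0.0);
    }

    public static void main(String[] args) {
        // A token already emitted 3 times is pushed down; a fresh one is not.
        System.out.println(penalize(1.0, 3, 0.5, 0.5)); // -1.0
        System.out.println(penalize(1.0, 0, 0.5, 0.5)); //  1.0
    }
}
```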
-