Class OpenAiClient
-
Method Summary
Modifier and Type / Method / Description

- OpenAiChatCompletionResponse chatCompletion(OpenAiChatCompletionRequest request) - Generate a completion for the given conversation and request parameters.
- CreateChatCompletionResponse chatCompletion(CreateChatCompletionRequest request) - Generate a completion for the given low-level request object.
- OpenAiChatCompletionOutput chatCompletion(OpenAiChatCompletionParameters parameters) - Deprecated.
- OpenAiChatCompletionOutput chatCompletion(String prompt) - Deprecated. Use chatCompletion(OpenAiChatCompletionRequest) instead.
- EmbeddingsCreate200Response embedding(EmbeddingsCreateRequest request) - Get a vector representation of the given inputs using a low-level request object.
- OpenAiEmbeddingOutput embedding(OpenAiEmbeddingParameters parameters) - Deprecated.
- OpenAiEmbeddingResponse embedding(OpenAiEmbeddingRequest request) - Get a vector representation of the given request, easily consumed by machine learning models and algorithms, using a high-level request object.
- static OpenAiClient forModel(OpenAiModel foundationModel) - Create a new OpenAI client for the given foundation model, using the default resource group.
- Stream<String> streamChatCompletion(String prompt) - Stream a completion for the given string prompt as user.
- Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(OpenAiChatCompletionRequest request) - Stream a completion for the given conversation and request parameters.
- Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(CreateChatCompletionRequest request) - Stream a completion for the given low-level request object.
- Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(OpenAiChatCompletionParameters parameters) - Deprecated.
- OpenAiClient withApiVersion(String apiVersion) - Create a new OpenAI client targeting the specified API version.
- static OpenAiClient withCustomDestination(com.sap.cloud.sdk.cloudplatform.connectivity.Destination destination) - Create a new OpenAI client with a custom destination, allowing for a custom resource group or an otherwise customized destination.
- OpenAiClient withHeader(String key, String value) - Create a new OpenAI client with a custom header added to every call made with this client.
- OpenAiClient withSystemPrompt(String systemPrompt) - Add a system prompt before user prompts.
-
Method Details
-
forModel
@Nonnull public static OpenAiClient forModel(@Nonnull OpenAiModel foundationModel) throws DeploymentResolutionException

Create a new OpenAI client for the given foundation model, using the default resource group.
- Parameters:
  foundationModel - the OpenAI model which is deployed.
- Returns:
  a new OpenAI client.
- Throws:
  DeploymentResolutionException - if no deployment for the given model was found in the default resource group.
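A minimal usage sketch. The model constant `GPT_4O` is illustrative; any `OpenAiModel` with an existing deployment in the default resource group works:

```java
import com.sap.ai.sdk.foundationmodels.openai.OpenAiClient;
import com.sap.ai.sdk.foundationmodels.openai.OpenAiModel;

// Resolves the deployment for the model in the default resource group.
// Throws DeploymentResolutionException if no such deployment exists.
OpenAiClient client = OpenAiClient.forModel(OpenAiModel.GPT_4O);
```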
-
withApiVersion
Create a new OpenAI client targeting the specified API version.
- Parameters:
  apiVersion - the API version to target.
- Returns:
  a new client.
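Because the method returns a new client, it can be chained after `forModel`. The version string below is an illustrative Azure OpenAI API version, not a recommendation:

```java
// Target a specific API version; "2024-02-01" is an example value.
OpenAiClient client = OpenAiClient.forModel(OpenAiModel.GPT_4O)
    .withApiVersion("2024-02-01");
```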
-
withCustomDestination
@Nonnull public static OpenAiClient withCustomDestination(@Nonnull com.sap.cloud.sdk.cloudplatform.connectivity.Destination destination)

Create a new OpenAI client with a custom destination, allowing for a custom resource group or an otherwise customized destination. The destination needs to be configured with a URL pointing to an OpenAI model deployment. Typically, such a destination should be obtained using AiCoreService.getInferenceDestination(String).

Example:

var destination = new AiCoreService().getInferenceDestination("custom-rg").forModel(GPT_4O);
OpenAiClient.withCustomDestination(destination);

- Parameters:
  destination - The specific HttpDestination to use.
- Returns:
  a new OpenAI client.
-
withSystemPrompt
Add a system prompt before user prompts.

Note: The system prompt is ignored on chat completions invoked with OpenAiChatCompletionPrompt.
- Parameters:
  systemPrompt - the system prompt
- Returns:
  the client
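A sketch of setting a system prompt; the prompt text is illustrative:

```java
// Subsequent user prompts are preceded by this system prompt.
OpenAiClient client = OpenAiClient.forModel(OpenAiModel.GPT_4O)
    .withSystemPrompt("You are a helpful assistant. Answer concisely.");
```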
-
withHeader
Create a new OpenAI client with a custom header added to every call made with this client.
- Parameters:
  key - the key of the custom header to add
  value - the value of the custom header to add
- Returns:
  a new client.
- Since:
  1.11.0
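A sketch of attaching a custom header; the header name and value are illustrative:

```java
// Every request made with this client carries the custom header.
OpenAiClient client = OpenAiClient.forModel(OpenAiModel.GPT_4O)
    .withHeader("X-Correlation-Id", "my-correlation-id");
```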
-
chatCompletion
@Nonnull @Deprecated public OpenAiChatCompletionOutput chatCompletion(@Nonnull String prompt) throws OpenAiClientException

Deprecated. Use chatCompletion(OpenAiChatCompletionRequest) instead.

Generate a completion for the given string prompt as user.
- Parameters:
  prompt - a text message.
- Returns:
  the completion output
- Throws:
  OpenAiClientException - if the request fails
-
chatCompletion
@Nonnull public OpenAiChatCompletionResponse chatCompletion(@Nonnull OpenAiChatCompletionRequest request) throws OpenAiClientException

Generate a completion for the given conversation and request parameters.
- Parameters:
  request - the completion request.
- Returns:
  the completion output
- Throws:
  OpenAiClientException - if the request fails
- Since:
  1.4.0
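A sketch of the high-level call, assuming the request can be constructed from a user message string and that the response exposes its text via `getContent()`; verify both against the request and response classes of your SDK version:

```java
// Construction and accessor shown here are assumptions about the high-level API.
var request = new OpenAiChatCompletionRequest("What is the capital of France?");
var response = client.chatCompletion(request);
System.out.println(response.getContent());
```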
-
chatCompletion
@Nonnull public CreateChatCompletionResponse chatCompletion(@Nonnull CreateChatCompletionRequest request) throws OpenAiClientException

Generate a completion for the given low-level request object.
- Parameters:
  request - the completion request.
- Returns:
  the completion output
- Throws:
  OpenAiClientException - if the request fails
- Since:
  1.4.0
-
chatCompletion
@Nonnull @Deprecated public OpenAiChatCompletionOutput chatCompletion(@Nonnull OpenAiChatCompletionParameters parameters) throws OpenAiClientException

Deprecated. Use chatCompletion(OpenAiChatCompletionRequest) instead.

Generate a completion for the given conversation and request parameters.
- Parameters:
  parameters - the completion request.
- Returns:
  the completion output
- Throws:
  OpenAiClientException - if the request fails
-
streamChatCompletion
@Nonnull public Stream<String> streamChatCompletion(@Nonnull String prompt) throws OpenAiClientException

Stream a completion for the given string prompt as user. Returns a lazily populated stream of text chunks. To access more details about the individual chunks, use streamChatCompletionDeltas(OpenAiChatCompletionRequest).

The stream should be consumed using a try-with-resources block to ensure that the underlying HTTP connection is closed.

Example:

try (var stream = client.streamChatCompletion("...")) {
  stream.forEach(System.out::println);
}

Please keep in mind that using a terminal stream operation like Stream.forEach(java.util.function.Consumer<? super T>) will block until all chunks are consumed. Also, for obvious reasons, invoking BaseStream.parallel() on this stream is not supported.

- Parameters:
  prompt - a text message.
- Returns:
  A stream of text chunks
- Throws:
  OpenAiClientException - if the request fails or if the finish reason is content_filter
-
streamChatCompletionDeltas
@Nonnull public Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(@Nonnull OpenAiChatCompletionRequest request) throws OpenAiClientException

Stream a completion for the given conversation and request parameters. Returns a lazily populated stream of delta objects. To simply stream the text chunks, use streamChatCompletion(String).

The stream should be consumed using a try-with-resources block to ensure that the underlying HTTP connection is closed.

Example:

try (var stream = client.streamChatCompletionDeltas(prompt)) {
  stream
      .peek(delta -> System.out.println(delta.getUsage()))
      .map(OpenAiChatCompletionDelta::getDeltaContent)
      .forEach(System.out::println);
}

Please keep in mind that using a terminal stream operation like Stream.forEach(java.util.function.Consumer<? super T>) will block until all chunks are consumed. Also, for obvious reasons, invoking BaseStream.parallel() on this stream is not supported.

- Parameters:
  request - The prompt, including a list of messages.
- Returns:
  A stream of message deltas
- Throws:
  OpenAiClientException - if the request fails or if the finish reason is content_filter
- Since:
  1.4.0
-
streamChatCompletionDeltas
@Nonnull public Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(@Nonnull CreateChatCompletionRequest request) throws OpenAiClientException

Stream a completion for the given low-level request object. Returns a lazily populated stream of delta objects.
- Parameters:
  request - The completion request.
- Returns:
  A stream of message deltas
- Throws:
  OpenAiClientException - if the request fails or if the finish reason is content_filter
- Since:
  1.4.0
-
streamChatCompletionDeltas
@Nonnull @Deprecated public Stream<OpenAiChatCompletionDelta> streamChatCompletionDeltas(@Nonnull OpenAiChatCompletionParameters parameters) throws OpenAiClientException

Deprecated.

Stream a completion for the given conversation and request parameters. Returns a lazily populated stream of delta objects. To simply stream the text chunks, use streamChatCompletion(String).

The stream should be consumed using a try-with-resources block to ensure that the underlying HTTP connection is closed.

Example:

try (var stream = client.streamChatCompletionDeltas(request)) {
  stream
      .peek(delta -> System.out.println(delta.getUsage()))
      .map(com.sap.ai.sdk.foundationmodels.openai.model.OpenAiChatCompletionDelta::getDeltaContent)
      .forEach(System.out::println);
}

Please keep in mind that using a terminal stream operation like Stream.forEach(java.util.function.Consumer<? super T>) will block until all chunks are consumed. Also, for obvious reasons, invoking BaseStream.parallel() on this stream is not supported.

- Parameters:
  parameters - The prompt, including a list of messages.
- Returns:
  A stream of message deltas
- Throws:
  OpenAiClientException - if the request fails or if the finish reason is content_filter
-
embedding
@Nonnull public OpenAiEmbeddingResponse embedding(@Nonnull OpenAiEmbeddingRequest request) throws OpenAiClientException

Get a vector representation of the given input, using a high-level request object. The resulting embedding can be easily consumed by machine learning models and algorithms.
- Parameters:
  request - the request with input text.
- Returns:
  the embedding response convenience object
- Throws:
  OpenAiClientException - if the request fails
- Since:
  1.4.0
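A sketch of the high-level embedding call. The request constructor and the accessor on the response are assumptions about the convenience API; check `OpenAiEmbeddingRequest` and `OpenAiEmbeddingResponse` in your SDK version:

```java
import java.util.List;

// Constructor and accessor shown here are assumptions about the high-level API.
var request = new OpenAiEmbeddingRequest(List.of("The quick brown fox"));
OpenAiEmbeddingResponse response = client.embedding(request);
float[] vector = response.getEmbeddingVectors().get(0);  // accessor name is an assumption
```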
-
embedding
@Nonnull public EmbeddingsCreate200Response embedding(@Nonnull EmbeddingsCreateRequest request) throws OpenAiClientException

Get a vector representation of the given inputs, using a low-level request object.
- Parameters:
  request - the request with input text.
- Returns:
  the embedding output
- Throws:
  OpenAiClientException - if the request fails
- Since:
  1.4.0
-
embedding
@Nonnull @Deprecated public OpenAiEmbeddingOutput embedding(@Nonnull OpenAiEmbeddingParameters parameters) throws OpenAiClientException

Deprecated.

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
- Parameters:
  parameters - the input text.
- Returns:
  the embedding output
- Throws:
  OpenAiClientException - if the request fails
-