Parses the chunk response and returns the choice by index.
The index of the choice to find.
An LLMChoiceStreaming object associated with the index.
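A minimal TypeScript sketch of the lookup described above. The chunk shape (`choices`, `index`, `delta`, `finish_reason`) is assumed from the OpenAI chat completion chunk wire format; the helper itself is hypothetical and not the SDK method.

```typescript
// Hypothetical minimal shape of a raw stream chunk; field names are
// assumed from the OpenAI chunk format, not the SDK's internals.
interface StreamChoice {
  index: number;
  delta: { content?: string };
  finish_reason: string | null;
}

interface StreamChunk {
  choices: StreamChoice[];
}

// Sketch: return the choice whose `index` field matches, or undefined
// when no such choice exists in this chunk.
function findChoiceByIndex(
  chunk: StreamChunk,
  index: number
): StreamChoice | undefined {
  return chunk.choices.find(choice => choice.index === index);
}

const sampleChunk: StreamChunk = {
  choices: [{ index: 0, delta: { content: 'Hello' }, finish_reason: null }]
};

console.log(findChoiceByIndex(sampleChunk, 0)?.delta.content); // Hello
```

Note that the choice's `index` field is matched rather than the array position, since with multiple choices (`n > 1`) chunks may carry the choices in any order.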
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence,
length if the maximum number of tokens specified in the request was reached,
content_filter if content was omitted due to a flag from our content filters,
tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
The index of the choice in the list of choices.
Optional logprobs?: Log probability information for the choice.
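The finish reasons listed above can be handled exhaustively. In this sketch the literal union mirrors the documented values (which come from the OpenAI API); the `describeFinishReason` helper is hypothetical.

```typescript
// The literal union mirrors the finish reasons documented above; the
// values come from the OpenAI API, only this helper is hypothetical.
type FinishReason =
  | 'stop'           // natural stop point or a provided stop sequence
  | 'length'         // maximum number of tokens reached
  | 'content_filter' // content omitted by a content filter
  | 'tool_calls'     // the model called a tool
  | 'function_call'; // deprecated: the model called a function

function describeFinishReason(reason: FinishReason | null): string {
  switch (reason) {
    case 'stop':
      return 'completed normally';
    case 'length':
      return 'truncated at the token limit';
    case 'content_filter':
      return 'content filtered';
    case 'tool_calls':
    case 'function_call':
      return 'requested a tool or function call';
    default:
      return 'still streaming'; // null on intermediate chunks
  }
}

console.log(describeFinishReason('length')); // truncated at the token limit
```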
Parses the chunk response and returns the delta content.
Optional choiceIndex: number. The index of the choice to parse.
The message delta content.
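A sketch of the delta-content extraction described above, assuming the OpenAI chunk wire format; the helper and its default of choice index 0 are illustrative, not the SDK implementation.

```typescript
// Assumed minimal chunk shape (OpenAI chunk format, not SDK internals).
interface StreamChoice {
  index: number;
  delta: { content?: string };
}

interface StreamChunk {
  choices: StreamChoice[];
}

// Sketch: return the delta content for the given choice index
// (defaulting to 0), or undefined when the choice or content is absent.
function getDeltaContent(
  chunk: StreamChunk,
  choiceIndex = 0
): string | undefined {
  return chunk.choices.find(c => c.index === choiceIndex)?.delta.content;
}

const sampleChunk: StreamChunk = {
  choices: [{ index: 0, delta: { content: ' world' } }]
};

console.log(getDeltaContent(sampleChunk)); // ' world'
```

In a streaming loop, the caller typically appends each non-undefined delta to an accumulator to rebuild the full message.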
Gets the delta tool calls for a specific choice index.
Optional choiceIndex: number. The index of the choice to parse.
The delta tool calls for the specified choice index.
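A sketch of the tool-call extraction described above. The fragment shape (`tool_calls`, `id`, `function.name`, `function.arguments`) follows the OpenAI chunk format; the helper itself is hypothetical.

```typescript
// Hypothetical shape of a streamed tool-call fragment; field names
// follow the OpenAI chunk format, not the SDK's own types.
interface DeltaToolCall {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

interface StreamChoice {
  index: number;
  delta: { tool_calls?: DeltaToolCall[] };
}

interface StreamChunk {
  choices: StreamChoice[];
}

// Sketch: return the tool-call fragments for the given choice index
// (default 0). The `arguments` string for one call arrives across
// several chunks and must be concatenated by the caller.
function getDeltaToolCalls(
  chunk: StreamChunk,
  choiceIndex = 0
): DeltaToolCall[] | undefined {
  return chunk.choices.find(c => c.index === choiceIndex)?.delta.tool_calls;
}

const sampleChunk: StreamChunk = {
  choices: [
    {
      index: 0,
      delta: {
        tool_calls: [
          { index: 0, id: 'call_1', function: { name: 'get_weather', arguments: '{"ci' } }
        ]
      }
    }
  ]
};

console.log(getDeltaToolCalls(sampleChunk)?.[0].function?.name); // get_weather
```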
Parses the chunk response and returns the reason the completion stream stopped for a specific choice.
Optional choiceIndex: number. The index of the choice to parse.
The finish reason.
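A sketch of the finish-reason lookup described above, under the same assumed OpenAI chunk shape; the helper is illustrative, not the SDK method.

```typescript
// Assumed minimal chunk shape (OpenAI chunk format, not SDK internals).
interface StreamChoice {
  index: number;
  finish_reason: string | null;
}

interface StreamChunk {
  choices: StreamChoice[];
}

// Sketch: return the finish reason for the given choice index
// (default 0). It is null on intermediate chunks and only set on the
// final chunk of a choice.
function getFinishReason(
  chunk: StreamChunk,
  choiceIndex = 0
): string | null | undefined {
  return chunk.choices.find(c => c.index === choiceIndex)?.finish_reason;
}

const finalChunk: StreamChunk = {
  choices: [{ index: 0, finish_reason: 'stop' }]
};

console.log(getFinishReason(finalChunk)); // stop
```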
Usage of tokens in the chunk response.
Token usage.
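A sketch of reading token usage from a chunk. The `usage` shape is assumed from the OpenAI API; with streaming, usage is typically populated only on the last chunk, so callers should expect undefined on intermediate chunks.

```typescript
// Assumed usage shape from the OpenAI API; the helper is hypothetical.
interface TokenUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface StreamChunk {
  usage?: TokenUsage;
}

// Sketch: surface the usage object when the chunk carries one.
function getTokenUsage(chunk: StreamChunk): TokenUsage | undefined {
  return chunk.usage;
}

const intermediateChunk: StreamChunk = {};
const lastChunk: StreamChunk = {
  usage: { prompt_tokens: 12, completion_tokens: 30, total_tokens: 42 }
};

console.log(getTokenUsage(intermediateChunk)); // undefined
console.log(getTokenUsage(lastChunk)?.total_tokens); // 42
```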
Copyright © 2026 SAP SE or an SAP affiliate company. All rights reserved.
Azure OpenAI chat completion stream chunk response.