Description
When using OpenAIResponseAgent, the ResponseFormat from OpenAIPromptExecutionSettings is ignored. This prevents responses from being emitted in a typed/structured (JSON Schema) shape, even though structured outputs work elsewhere in SK (e.g., Chat Completions / IChatCompletionService and the Assistant samples).

This looks like a gap in the Responses pipeline: ResponseCreationOptionsFactory does not propagate ResponseFormat, while the Assistants path (AssistantRunOptionsFactory) does handle response-formatting options. A rough sketch of the missing step follows the file list below.
- Responses factory: dotnet/src/Agents/OpenAI/Internal/ResponseCreationOptionsFactory.cs
- Assistants factory: dotnet/src/Agents/OpenAI/Internal/AssistantRunOptionsFactory.cs
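For illustration only, a minimal sketch of the kind of mapping that seems to be missing on the Responses side. Everything below except PromptExecutionSettings and OpenAIPromptExecutionSettings is hypothetical and does not name real SK or OpenAI SDK members:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Hypothetical sketch: the Responses options factory would need to read ResponseFormat
// from the effective execution settings and translate it onto the outgoing request,
// mirroring what AssistantRunOptionsFactory already does for the Assistants path.
static void ApplyResponseFormatIfRequested(object responseCreationOptions, PromptExecutionSettings? settings)
{
    if (settings is OpenAIPromptExecutionSettings { ResponseFormat: not null } openAISettings)
    {
        // Map typeof(T), "json_object", or an explicit response-format object onto
        // responseCreationOptions here (placeholder; the actual SDK property is not shown).
    }
}
```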
Expected behavior
OpenAIResponseAgent should honor OpenAIPromptExecutionSettings.ResponseFormat (JSON mode / JSON Schema, including typeof(T)), so the model returns the requested schema, just like the documented structured-outputs flow.
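For comparison, the documented structured-output flow through Chat Completions honors the same setting. A minimal sketch, assuming a Kernel instance (kernel) already configured with an OpenAI chat-completion connector and the Answer record from the repro below:

```csharp
using System.Text.Json;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Same ResponseFormat = typeof(T) pattern, but via the Chat Completions path.
var settings = new OpenAIPromptExecutionSettings { ResponseFormat = typeof(Answer) };

var functionResult = await kernel.InvokePromptAsync(
    "Answer the question and include supporting details.",
    new KernelArguments(settings));

// Here the output conforms to the Answer schema and deserializes cleanly.
var answer = JsonSerializer.Deserialize<Answer>(functionResult.ToString());
```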
Actual behavior
OpenAIResponseAgent returns free-form text; ResponseFormat appears to be ignored by the Responses agent path.
Repro (minimal)
```csharp
// .NET 8, SK 1.61.0
// NuGet: Microsoft.SemanticKernel, Microsoft.SemanticKernel.Agents.OpenAI, Microsoft.SemanticKernel.Connectors.OpenAI
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.OpenAI;
using Microsoft.SemanticKernel.Connectors.OpenAI;

public record Answer(string Response, string Details);

// _client, Name, DefaultInstructions, messages, and agentThread are created elsewhere in the app.
var agent = new OpenAIResponseAgent(_client)
{
    Name = Name,
    Instructions = DefaultInstructions,
    StoreEnabled = true
};

// Request structured output matching the Answer schema.
var executionSettings = new OpenAIPromptExecutionSettings
{
    ResponseFormat = typeof(Answer)
};

var options = new AgentInvokeOptions
{
    KernelArguments = new KernelArguments(executionSettings)
};

var result = "";
await foreach (var response in agent.InvokeAsync(messages, agentThread, options))
{
    result += response.Message; // arrives as free-form text, not the requested JSON schema
    agentThread = response.Thread;
}
```
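With the repro above, the accumulated result comes back as plain prose, so binding it to the requested type fails (illustrative only):

```csharp
using System.Text.Json;

// Throws JsonException: the Responses agent ignored ResponseFormat and returned
// free-form text instead of JSON matching the Answer schema.
var answer = JsonSerializer.Deserialize<Answer>(result);
```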
Environment
- Semantic Kernel: 1.61.0
- .NET: 8
- Model: gpt-4.1