
C#

The C# runtime for Prompty v2 provides first-class .NET support for loading, rendering, and executing .prompty files. It targets .NET 9+ and is distributed as four NuGet packages — a core library plus one package per LLM provider.

The runtime follows the same four-stage pipeline as the Python and TypeScript implementations: Renderer → Parser → Executor → Processor.

All packages are available on NuGet at version 2.0.0-alpha.6.

```bash
# Core — required
dotnet add package Prompty.Core --prerelease

# Pick one or more providers
dotnet add package Prompty.OpenAI --prerelease
dotnet add package Prompty.Foundry --prerelease
dotnet add package Prompty.Anthropic --prerelease
```
| Package | Description |
| --- | --- |
| `Prompty.Core` | Core pipeline, loader, tracing, types |
| `Prompty.OpenAI` | OpenAI provider (executor + processor) |
| `Prompty.Foundry` | Azure OpenAI / Microsoft Foundry provider |
| `Prompty.Anthropic` | Anthropic provider |
```csharp
using Prompty.Core;

// All-in-one execution
var result = await Pipeline.InvokeAsync("greeting.prompty",
    new Dictionary<string, object?> { ["name"] = "Jane" });
Console.WriteLine(result);
```

All pipeline methods are static on the Pipeline class in the Prompty.Core namespace.

```csharp
using Prompty.Core;

// Load a .prompty file into a typed Prompty object
var agent = PromptyLoader.Load("chat.prompty");
Console.WriteLine(agent.Name);         // "chat"
Console.WriteLine(agent.Model.Id);     // "gpt-4o"
Console.WriteLine(agent.Instructions); // the markdown body

// Validate raw inputs against the agent's declared inputs
var inputs = Pipeline.ValidateInputs(agent, rawInputs);

// Render template with inputs → string
var rendered = await Pipeline.RenderAsync(agent, inputs);

// Parse rendered string → List<Message>
var messages = await Pipeline.ParseAsync(agent, rendered);

// Execute LLM call → raw response
var response = await Pipeline.ExecuteAsync(agent, messages);

// Process raw response → clean result
var result = await Pipeline.ProcessAsync(agent, response);

// Convenience: render + parse → List<Message>
var prepared = await Pipeline.PrepareAsync(agent, inputs);

// Convenience: execute + process → final result
var answer = await Pipeline.RunAsync(agent, prepared);

// Execute only (raw response, no processing)
var raw = await Pipeline.RunAsync(agent, prepared, raw: true);

// Full pipeline: load + prepare + execute + process
var invoked = await Pipeline.InvokeAsync("chat.prompty", inputs);

// Or pass a pre-loaded agent
var invokedAgain = await Pipeline.InvokeAsync(agent, inputs);
```

Each provider ships as a separate NuGet package with an executor and processor.

| Package | Provider Key | SDK | Connection |
| --- | --- | --- | --- |
| `Prompty.OpenAI` | `openai` | OpenAI .NET SDK | `ApiKeyConnection` |
| `Prompty.Foundry` | `foundry` | Azure OpenAI / Foundry SDK | `FoundryConnection`, `ApiKeyConnection` |
| `Prompty.Anthropic` | `anthropic` | Anthropic .NET SDK | `ApiKeyConnection` |

Configure the provider in your .prompty frontmatter:

```yaml
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
```

Providers must be registered at startup before calling any pipeline methods. Use PromptyBuilder for idiomatic fluent registration:

```csharp
using Prompty.Core;
using Prompty.OpenAI;

// Registers renderers, parser, and providers in one fluent call
new PromptyBuilder()
    .AddOpenAI();
```

Multiple providers can be chained:

```csharp
using Prompty.Core;
using Prompty.OpenAI;
using Prompty.Foundry;
using Prompty.Anthropic;

new PromptyBuilder()
    .AddOpenAI()
    .AddFoundry()
    .AddAnthropic();
```

Use the ConnectionRegistry to pre-configure SDK clients for production use (e.g., with managed identity or custom HTTP pipelines).

```csharp
using Prompty.Core;

// Register a pre-configured client
ConnectionRegistry.Register("my-openai", openAIClient);

// Retrieve it later (e.g., inside an executor)
var client = ConnectionRegistry.Get("my-openai");

// Cleanup
ConnectionRegistry.Remove("my-openai");
ConnectionRegistry.Clear();
```
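For example, a client that authenticates with Azure managed identity can be built once at startup and handed to the registry. Note that `AzureOpenAIClient` and `DefaultAzureCredential` come from the `Azure.AI.OpenAI` and `Azure.Identity` packages, not from Prompty; this is a sketch, and it assumes the registered object is whatever client type your executor expects:

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Prompty.Core;

// Sketch: build a client that authenticates with managed identity
// instead of an API key (types from Azure.AI.OpenAI / Azure.Identity),
// then hand it to the registry under a name of your choosing.
var client = new AzureOpenAIClient(
    new Uri("https://myresource.openai.azure.com/"),
    new DefaultAzureCredential());

ConnectionRegistry.Register("my-foundry", client);
```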

Connection types available in the model's `connection` block:

| Type | `kind` | Fields |
| --- | --- | --- |
| `ApiKeyConnection` | `key` | `endpoint`, `apiKey` |
| `ReferenceConnection` | `reference` | `name` |
| `RemoteConnection` | `remote` | `target`, `authenticationMode` |
| `AnonymousConnection` | `anonymous` | |
| `FoundryConnection` | `foundry` | `endpoint`, credential-based |
| `OAuthConnection` | `oauth` | OAuth-based auth |
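A reference connection pairs naturally with `ConnectionRegistry`: the frontmatter names a pre-registered client instead of embedding credentials. Assuming the `reference` kind resolves its `name` through the registry (a plausible reading, not stated explicitly above), the frontmatter would look like:

```yaml
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: reference
    name: my-openai
```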

Streaming uses the standard IAsyncEnumerable<T> pattern. The runtime provides two stream types:

| Type | Description |
| --- | --- |
| `PromptyStream` | `IAsyncEnumerable<object>` — raw SDK chunks |
| `ProcessedStream` | `IAsyncEnumerable<string>` — extracted text content |
```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
var messages = await Pipeline.PrepareAsync(agent, inputs);

// Get the raw streaming response
var raw = await Pipeline.RunAsync(agent, messages, raw: true);

if (raw is PromptyStream stream)
{
    await foreach (var chunk in stream)
    {
        Console.Write(chunk);
    }
}
```
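The example above consumes a raw `PromptyStream`. Assuming the processed path surfaces a `ProcessedStream` when the model streams (the table describes the type, but not how it is obtained), the extracted text chunks can be consumed the same way:

```csharp
// Hedged sketch: if a processed RunAsync call yields a streaming
// result, ProcessedStream exposes the extracted text chunk by chunk.
var processed = await Pipeline.RunAsync(agent, messages);
if (processed is ProcessedStream textStream)
{
    await foreach (var text in textStream)
    {
        Console.Write(text);
    }
}
```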

Use TurnAsync to run the agent loop — the runtime calls the LLM, executes any requested tool functions, feeds results back, and repeats until the model produces a final response.

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("agent.prompty");
var inputs = new Dictionary<string, object?> { ["question"] = "Weather in Seattle?" };

var tools = new Dictionary<string, Func<string, Task<string>>>
{
    ["get_weather"] = async (args) =>
    {
        var parsed = ToolDispatch.ParseArguments(args);
        var city = parsed["city"]?.ToString() ?? "unknown";
        return $"72°F and sunny in {city}";
    }
};

var result = await Pipeline.TurnAsync(
    agent, inputs, tools: tools, maxIterations: 10, maxLlmRetries: 3);
Console.WriteLine(result);
```

The agent loop includes built-in resilience:

  • Resilient JSON parsing — `ParseArguments()` recovers from malformed tool arguments (markdown fences, trailing commas)
  • Tool error safety — tool exceptions are caught and fed back to the LLM
  • LLM call retry — transient failures are retried with exponential backoff; `ExecuteError` carries the full conversation for resumption
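To illustrate the first point, `ParseArguments()` accepts the kind of output models actually emit. The exact recovery rules are only outlined above, so treat the input here as illustrative:

```csharp
using Prompty.Core;

// Illustrative input: a fenced JSON block with a trailing comma —
// exactly the malformations ParseArguments() is documented to handle.
var fenced = "```json\n{ \"city\": \"Seattle\", }\n```";
var args = ToolDispatch.ParseArguments(fenced);
Console.WriteLine(args["city"]); // "Seattle", if recovery succeeds as documented
```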

When outputs is defined in the .prompty frontmatter, the executor automatically adds response_format to the API call and the processor JSON-parses the response.

```yaml
outputs:
  strict: true
  properties:
    - name: city
      kind: string
    - name: temperature
      kind: integer
```

```csharp
var result = await Pipeline.InvokeAsync("weather.prompty", inputs);
// result is a parsed JSON object matching the schema
```
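The concrete .NET type of the parsed result is not specified above; a sketch assuming the processor exposes it as a string-keyed dictionary:

```csharp
// Assumption: the processor returns an IDictionary<string, object?>
// for structured outputs — adjust the pattern to the actual result type.
if (result is IDictionary<string, object?> weather)
{
    Console.WriteLine($"{weather["city"]}: {weather["temperature"]}°");
}
```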

Prompty uses a two-layer registry for extensible tool support.

Register individual tool functions by name:

```csharp
using Prompty.Core;

ToolDispatch.RegisterTool("calculator", async (args) => "42");
```

Register a handler for an entire tool kind:

```csharp
ToolDispatch.RegisterToolHandler("mcp", new McpToolHandler());
```

When the agent loop needs to execute a tool call, dispatch follows this order:

  1. userTools dictionary (passed to TurnAsync)
  2. Name-based registry (RegisterTool)
  3. Kind-based handler (RegisterToolHandler)
  4. Wildcard "*" handler (if registered)
  5. Error — tool not found
```csharp
// Parse tool arguments from the JSON string the LLM returns
var args = ToolDispatch.ParseArguments(jsonArgs);

// Dispatch a tool call (follows the order above)
var result = await ToolDispatch.DispatchAsync(agent, toolCall, userTools);
```
| Type | `kind` | Key Fields |
| --- | --- | --- |
| `FunctionTool` | `function` | `Parameters` (`PropertySchema`), `Strict` |
| `McpTool` | `mcp` | `Connection`, `ServerName`, `ApprovalMode` |
| `OpenApiTool` | `openapi` | `Connection`, `Specification` |
| `PromptyTool` | `prompty` | References another `.prompty` file |
| `CustomTool` | `*` | `Connection`, `Options` — wildcard catch-all |

Register tracer backends to capture pipeline execution data.

```csharp
using Prompty.Core.Tracing;

// Console output
Tracer.Add("console", ConsoleTracer.Factory);

// OpenTelemetry spans
OTelTracer.Register();

// JSON file traces
new PromptyTracer(outputDir: "./traces").Register();

var result = await Trace.TraceAsync("operation-name", async (attr) =>
{
    attr("input", data);
    var output = await DoWork();
    attr("output", output);
    return output;
});

var wrappedFn = Trace.Wrap<string, string>("my-fn", async (input) =>
{
    return await Process(input);
});
var wrappedResult = await wrappedFn("hello");
```

The core interfaces for implementing custom invokers:

```csharp
using Prompty.Core;

public interface IRenderer
{
    Task<string> RenderAsync(
        Prompty agent, string template, Dictionary<string, object?> inputs);
}

public interface IParser
{
    Task<List<Message>> ParseAsync(Prompty agent, string rendered);
}

public interface IExecutor
{
    Task<object> ExecuteAsync(Prompty agent, List<Message> messages);

    List<Message> FormatToolMessages(
        object rawResponse, List<ToolCall> toolCalls,
        List<string> toolResults, string? textContent = null);
}

public interface IProcessor
{
    Task<object> ProcessAsync(Prompty agent, object response);
}

// Optional: pre-render hook for custom template preprocessing
public interface IPreRenderable
{
    (string template, Dictionary<string, object?> context) PreRender(string template);
}
```
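As an illustration of the surface area, a minimal custom processor only has to implement `ProcessAsync`. This pass-through example is hypothetical, and wiring it into the pipeline goes through `InvokerRegistry`, whose API is not covered here:

```csharp
using Prompty.Core;

// Hypothetical pass-through processor: returns the provider's
// raw response without any post-processing.
public sealed class PassThroughProcessor : IProcessor
{
    public Task<object> ProcessAsync(Prompty agent, object response)
        => Task.FromResult(response);
}
```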
| Namespace | Contents |
| --- | --- |
| `Prompty.Core` | `Pipeline`, `PromptyLoader`, `ConnectionRegistry`, `ToolDispatch`, `InvokerRegistry`, types |
| `Prompty.Core.Tracing` | `Tracer`, `Trace`, `OTelTracer`, `PromptyTracer`, `ConsoleTracer` |
| `Prompty.OpenAI` | `OpenAIExecutor`, `OpenAIProcessor`, `WireFormat` |
| `Prompty.Foundry` | `FoundryExecutor` |
| `Prompty.Anthropic` | `AnthropicExecutor` |

The ${env:VAR} syntax in .prompty files works the same way as in Python and TypeScript. Set environment variables or use a .env file in your project root:

```bash
OPENAI_API_KEY=sk-your-key-here
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
ANTHROPIC_API_KEY=sk-ant-your-key-here
FOUNDRY_ENDPOINT=https://your-project.services.ai.azure.com
```