# C#
## Overview

The C# runtime for Prompty v2 provides first-class .NET support for loading,
rendering, and executing .prompty files. It targets .NET 9+ and is
distributed as four NuGet packages — a core library plus one package per LLM
provider.
The runtime follows the same four-stage pipeline as the Python and TypeScript implementations: Renderer → Parser → Executor → Processor.
## Installation

All packages are available on NuGet at version `2.0.0-alpha.6`.
```shell
# Core — required
dotnet add package Prompty.Core --prerelease

# Pick one or more providers
dotnet add package Prompty.OpenAI --prerelease
dotnet add package Prompty.Foundry --prerelease
dotnet add package Prompty.Anthropic --prerelease
```

| Package | Description |
|---|---|
| `Prompty.Core` | Core pipeline, loader, tracing, types |
| `Prompty.OpenAI` | OpenAI provider (executor + processor) |
| `Prompty.Foundry` | Azure OpenAI / Microsoft Foundry provider |
| `Prompty.Anthropic` | Anthropic provider |
## Quick Start

```csharp
using Prompty.Core;

// All-in-one execution
var result = await Pipeline.InvokeAsync(
    "greeting.prompty",
    new Dictionary<string, object?> { ["name"] = "Jane" });

Console.WriteLine(result);
```

## Pipeline API

All pipeline methods are static on the `Pipeline` class in the `Prompty.Core`
namespace.
### Loading

```csharp
using Prompty.Core;

// Load a .prompty file into a typed Prompty object
var agent = PromptyLoader.Load("chat.prompty");

Console.WriteLine(agent.Name);         // "chat"
Console.WriteLine(agent.Model.Id);     // "gpt-4o"
Console.WriteLine(agent.Instructions); // the markdown body
```

### Individual Stages
```csharp
// Validate inputs against the agent's declared inputs
var inputs = Pipeline.ValidateInputs(agent, rawInputs);

// Render template with inputs → string
var rendered = await Pipeline.RenderAsync(agent, inputs);

// Parse rendered string → List<Message>
var messages = await Pipeline.ParseAsync(agent, rendered);

// Execute LLM call → raw response
var response = await Pipeline.ExecuteAsync(agent, messages);

// Process raw response → clean result
var result = await Pipeline.ProcessAsync(agent, response);
```

### Compound Operations
```csharp
// Render + parse → List<Message>
var messages = await Pipeline.PrepareAsync(agent, inputs);

// Execute + process → final result
var result = await Pipeline.RunAsync(agent, messages);

// Execute only (raw response, no processing)
var raw = await Pipeline.RunAsync(agent, messages, raw: true);
```

### Top-Level Invocation
```csharp
// Full pipeline: load + prepare + execute + process
var result = await Pipeline.InvokeAsync("chat.prompty", inputs);

// Or pass a pre-loaded agent
var result2 = await Pipeline.InvokeAsync(agent, inputs);
```

## Providers

Each provider ships as a separate NuGet package with an executor and processor.
| Package | Provider Key | SDK | Connection |
|---|---|---|---|
| `Prompty.OpenAI` | `openai` | OpenAI .NET SDK | `ApiKeyConnection` |
| `Prompty.Foundry` | `foundry` | Azure OpenAI / Foundry SDK | `FoundryConnection`, `ApiKeyConnection` |
| `Prompty.Anthropic` | `anthropic` | Anthropic .NET SDK | `ApiKeyConnection` |
Configure the provider in your .prompty frontmatter:
```yaml
# OpenAI
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
```

```yaml
# Azure OpenAI / Foundry
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
```

```yaml
# Anthropic
model:
  id: claude-sonnet-4-20250514
  provider: anthropic
  connection:
    kind: key
    apiKey: ${env:ANTHROPIC_API_KEY}
```

## Invoker Registry
Providers must be registered at startup before calling any pipeline methods.
Use `PromptyBuilder` for idiomatic fluent registration:
```csharp
using Prompty.Core;
using Prompty.OpenAI;

// Registers renderers, parser, and providers in one fluent call
new PromptyBuilder()
    .AddOpenAI();
```

Multiple providers can be chained:

```csharp
using Prompty.Core;
using Prompty.OpenAI;
using Prompty.Foundry;
using Prompty.Anthropic;

new PromptyBuilder()
    .AddOpenAI()
    .AddFoundry()
    .AddAnthropic();
```

## Connections
Use the `ConnectionRegistry` to pre-configure SDK clients for production use
(e.g., with managed identity or custom HTTP pipelines).
```csharp
using Prompty.Core;

// Register a pre-configured client
ConnectionRegistry.Register("my-openai", openAIClient);

// Retrieve it later (e.g., inside an executor)
var client = ConnectionRegistry.Get("my-openai");

// Cleanup
ConnectionRegistry.Remove("my-openai");
ConnectionRegistry.Clear();
```

Connection types available in the model:
| Type | kind | Fields |
|---|---|---|
| `ApiKeyConnection` | `key` | `endpoint`, `apiKey` |
| `ReferenceConnection` | `reference` | `name` |
| `RemoteConnection` | `remote` | `target`, `authenticationMode` |
| `AnonymousConnection` | `anonymous` | — |
| `FoundryConnection` | `foundry` | `endpoint`, credential-based |
| `OAuthConnection` | `oauth` | OAuth-based auth |
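A `reference` connection pairs naturally with the `ConnectionRegistry` shown above: the frontmatter names a client that code registered at startup. A sketch (the connection name `my-openai` is illustrative):

```yaml
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: reference
    name: my-openai   # resolved via ConnectionRegistry at execution time
```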
## Streaming

Streaming uses the standard `IAsyncEnumerable<T>` pattern. The runtime provides
two stream types:
| Type | Description |
|---|---|
| `PromptyStream` | `IAsyncEnumerable<object>` — raw SDK chunks |
| `ProcessedStream` | `IAsyncEnumerable<string>` — extracted text content |
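A `ProcessedStream` is consumed with the same `await foreach` pattern. This is a sketch only: it assumes a non-raw run yields a `ProcessedStream` of text deltas when the model is configured to stream, which the runtime may handle differently.

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
var messages = await Pipeline.PrepareAsync(agent, inputs);

// Assumption: a streaming-configured model returns a ProcessedStream here
var result = await Pipeline.RunAsync(agent, messages);
if (result is ProcessedStream textStream)
{
    await foreach (var text in textStream)
    {
        Console.Write(text); // chunks are already-extracted text
    }
}
```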
```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
var messages = await Pipeline.PrepareAsync(agent, inputs);

// Get the raw streaming response
var raw = await Pipeline.RunAsync(agent, messages, raw: true);

if (raw is PromptyStream stream)
{
    await foreach (var chunk in stream)
    {
        Console.Write(chunk);
    }
}
```

## Agent Mode
Use `TurnAsync` to run the agent loop — the runtime calls the LLM,
executes any requested tool functions, feeds results back, and repeats until
the model produces a final response.
```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("agent.prompty");
var inputs = new Dictionary<string, object?> { ["question"] = "Weather in Seattle?" };

var tools = new Dictionary<string, Func<string, Task<string>>>
{
    ["get_weather"] = async (args) =>
    {
        var parsed = ToolDispatch.ParseArguments(args);
        var city = parsed["city"]?.ToString() ?? "unknown";
        return $"72°F and sunny in {city}";
    }
};

var result = await Pipeline.TurnAsync(
    agent,
    inputs,
    tools: tools,
    maxIterations: 10,
    maxLlmRetries: 3);

Console.WriteLine(result);
```

The agent loop includes built-in resilience:
- Resilient JSON parsing — `ParseArguments()` recovers from malformed tool arguments (markdown fences, trailing commas)
- Tool error safety — tool exceptions are caught and fed back to the LLM
- LLM call retry — transient failures are retried with exponential backoff; `ExecuteError` carries the full conversation for resumption
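To illustrate the first point, `ParseArguments` can be handed the kind of slightly malformed JSON models sometimes emit. The input string below is invented for illustration; the recovery behavior is as documented above.

```csharp
using Prompty.Core;

// Tool arguments wrapped in a markdown fence, with a trailing comma;
// ParseArguments is documented to recover from both.
var malformed = "```json\n{ \"city\": \"Seattle\", }\n```";
var parsed = ToolDispatch.ParseArguments(malformed);
var city = parsed["city"]?.ToString();
```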
## Structured Output

When `outputs` is defined in the `.prompty` frontmatter, the executor
automatically adds `response_format` to the API call and the processor
JSON-parses the response.
```yaml
outputs:
  strict: true
  properties:
    - name: city
      kind: string
    - name: temperature
      kind: integer
```

```csharp
var result = await Pipeline.InvokeAsync("weather.prompty", inputs);
// result is a parsed JSON object matching the schema
```

## Tool Dispatch
Prompty uses a two-layer registry for extensible tool support.
### Layer 1: Name-Based (Specific Tools)

Register individual tool functions by name:

```csharp
using Prompty.Core;

ToolDispatch.RegisterTool("calculator", async (args) => "42");
```

### Layer 2: Kind-Based (Tool Handlers)
Register a handler for an entire tool kind:

```csharp
ToolDispatch.RegisterToolHandler("mcp", new McpToolHandler());
```

### Dispatch Order
When the agent loop needs to execute a tool call, dispatch follows this order:

1. `userTools` dictionary (passed to `TurnAsync`)
2. Name-based registry (`RegisterTool`)
3. Kind-based handler (`RegisterToolHandler`)
4. Wildcard `"*"` handler (if registered)
5. Error — tool not found
```csharp
// Parse tool arguments from the JSON string the LLM returns
var args = ToolDispatch.ParseArguments(jsonArgs);

// Dispatch a tool call (follows the order above)
var result = await ToolDispatch.DispatchAsync(agent, toolCall, userTools);
```

## Tool Types
Section titled “Tool Types”| Type | kind | Key Fields |
|---|---|---|
| `FunctionTool` | `function` | `Parameters` (`PropertySchema`), `Strict` |
| `McpTool` | `mcp` | `Connection`, `ServerName`, `ApprovalMode` |
| `OpenApiTool` | `openapi` | `Connection`, `Specification` |
| `PromptyTool` | `prompty` | References another `.prompty` file |
| `CustomTool` | `*` | `Connection`, `Options` — wildcard catch-all |
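For illustration, a function tool declaration in frontmatter might look like the following. The key names (`tools`, `parameters`, `strict`) are assumptions inferred from the field names in the table, not a confirmed schema:

```yaml
tools:
  - name: get_weather
    kind: function
    description: Look up the current weather for a city
    parameters:
      - name: city
        kind: string
    strict: true
```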
## Tracing

Register tracer backends to capture pipeline execution data.
```csharp
using Prompty.Core.Tracing;

// Console output
Tracer.Add("console", ConsoleTracer.Factory);

// OpenTelemetry spans
OTelTracer.Register();

// JSON file traces
new PromptyTracer(outputDir: "./traces").Register();
```

### Manual Spans
```csharp
var result = await Trace.TraceAsync("operation-name", async (attr) =>
{
    attr("input", data);
    var output = await DoWork();
    attr("output", output);
    return output;
});
```

### Wrapping Functions
```csharp
var wrappedFn = Trace.Wrap<string, string>("my-fn", async (input) =>
{
    return await Process(input);
});

var result = await wrappedFn("hello");
```

## Interfaces
The core interfaces for implementing custom invokers:
```csharp
using Prompty.Core;

public interface IRenderer
{
    Task<string> RenderAsync(
        Prompty agent,
        string template,
        Dictionary<string, object?> inputs);
}

public interface IParser
{
    Task<List<Message>> ParseAsync(Prompty agent, string rendered);
}

public interface IExecutor
{
    Task<object> ExecuteAsync(Prompty agent, List<Message> messages);

    List<Message> FormatToolMessages(
        object rawResponse,
        List<ToolCall> toolCalls,
        List<string> toolResults,
        string? textContent = null);
}

public interface IProcessor
{
    Task<object> ProcessAsync(Prompty agent, object response);
}

// Optional: pre-render hook for custom template preprocessing
public interface IPreRenderable
{
    (string template, Dictionary<string, object?> context) PreRender(string template);
}
```
## Namespaces

| Namespace | Contents |
|---|---|
| `Prompty.Core` | `Pipeline`, `PromptyLoader`, `ConnectionRegistry`, `ToolDispatch`, `InvokerRegistry`, types |
| `Prompty.Core.Tracing` | `Tracer`, `Trace`, `OTelTracer`, `PromptyTracer`, `ConsoleTracer` |
| `Prompty.OpenAI` | `OpenAIExecutor`, `OpenAIProcessor`, `WireFormat` |
| `Prompty.Foundry` | `FoundryExecutor` |
| `Prompty.Anthropic` | `AnthropicExecutor` |
## Environment Variables

The `${env:VAR}` syntax in `.prompty` files works the same way as in Python and
TypeScript. Set environment variables or use a .env file in your project root:
```shell
OPENAI_API_KEY=sk-your-key-here
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
ANTHROPIC_API_KEY=sk-ant-your-key-here
FOUNDRY_ENDPOINT=https://your-project.services.ai.azure.com
```

## Further Reading
- Core Concepts — architecture deep-dives
- How-To Guides — practical recipes
- API Reference — pipeline functions and signatures