# Providers
Prompty uses a provider system to connect to different LLM backends. Each
provider has an executor (sends requests to the API) and a processor
(extracts results from responses). You set the provider in your .prompty file’s
model section.
```yaml
model:
  id: gpt-4o
  provider: openai   # ← provider key
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT}
    apiKey: ${env:OPENAI_API_KEY}
```

## OpenAI

Direct access to the OpenAI API.

Provider key: `openai`
Supported API types: chat, responses, embedding, image
Chat:

```yaml
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
```

Embedding:

```yaml
model:
  id: text-embedding-3-small
  provider: openai
  apiType: embedding
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
```

Image generation:

```yaml
model:
  id: dall-e-3
  provider: openai
  apiType: image
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    additionalProperties:
      size: "1024x1024"
      quality: standard
```

- SDKs: Python — `openai` package · TypeScript — `openai` npm package · C# — `OpenAI` NuGet package
- Supports streaming via `PromptyStream`/`AsyncPromptyStream` (Python), async iterables (TypeScript), and `IAsyncEnumerable` (C#)
- Structured output is supported via `outputs` → `response_format`
- Agent mode is available via `turn()` — it uses `apiType: chat` with an automatic tool-calling loop
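Conceptually, an automatic tool-calling loop keeps invoking the model, runs whatever tool the model requests, feeds the result back as a message, and stops once the model returns plain content. A minimal sketch of that idea in Python — `run_tool_loop`, `toy_model`, and the message shapes here are illustrative stand-ins, not the prompty API:

```python
def run_tool_loop(model, tools, messages, max_turns=5):
    """Call `model` until it answers with content instead of a tool request."""
    for _ in range(max_turns):
        reply = model(messages)
        if "tool" in reply:
            # Model asked for a tool: run it and append the result.
            result = tools[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
            continue
        return reply["content"]
    raise RuntimeError("tool loop did not terminate")

# Toy model: request the weather tool once, then answer using its result.
calls = {"n": 0}

def toy_model(messages):
    if calls["n"] == 0:
        calls["n"] += 1
        return {"tool": "weather", "args": {"city": "Paris"}}
    return {"content": "Last tool said: " + messages[-1]["content"]}

tools = {"weather": lambda city: city + ": 18C"}
answer = run_tool_loop(
    toy_model, tools, [{"role": "user", "content": "Weather in Paris?"}]
)
```

The real `turn()` implementation wires the same loop to the chat API's native tool-call protocol.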
## Microsoft Foundry

Connect to models deployed through Microsoft Foundry (Azure AI Services). This provider covers both Foundry project endpoints (the recommended approach) and classic Azure OpenAI endpoints (legacy).
Provider key: foundry
Supported API types: chat, responses, embedding, image
Foundry project with API key:

```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
  options:
    temperature: 0.7
```

Foundry project with Entra ID:

```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: foundry
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
```

Classic Azure OpenAI (legacy):

```yaml
model:
  id: gpt-4o   # your deployment name
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
```

Embedding:

```yaml
model:
  id: text-embedding-3-small   # your deployment name
  provider: foundry
  apiType: embedding
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
```

### Endpoint Patterns

| Endpoint Pattern | Example | Notes |
|---|---|---|
| Foundry project (recommended) | https://<resource>.services.ai.azure.com/api/projects/<project> | New-style Foundry project endpoint |
| Classic Azure OpenAI (legacy) | https://<resource>.openai.azure.com/ | Still supported via provider: foundry |
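As an illustration of the two patterns (this helper is hypothetical, not part of the prompty API), the endpoint kinds can be told apart by hostname:

```python
from urllib.parse import urlparse

def classify_endpoint(url: str) -> str:
    """Classify an Azure endpoint using the hostname patterns above."""
    host = urlparse(url).hostname or ""
    if host.endswith(".services.ai.azure.com"):
        return "foundry-project"       # new-style Foundry project endpoint
    if host.endswith(".openai.azure.com"):
        return "azure-openai-classic"  # legacy Azure OpenAI endpoint
    return "unknown"
```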
- SDKs: Python — `openai` + `azure-identity` packages · TypeScript — `openai` + `@azure/identity` npm packages · C# — `OpenAI` + `Azure.Identity` NuGet packages
- Supports both API key and Microsoft Entra ID (managed identity / `DefaultAzureCredential`) authentication
- Supports the same features as the OpenAI provider (streaming, structured output, agent mode)
- Model deployments are managed through the Azure AI Foundry portal
## Anthropic

Access Anthropic Claude models directly.
Provider key: anthropic
Supported API types: chat
```yaml
model:
  id: claude-sonnet-4-6
  provider: anthropic
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.anthropic.com
    apiKey: ${env:ANTHROPIC_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
```

- SDKs: Python — `anthropic` package · TypeScript — `@anthropic-ai/sdk` npm package · C# — `Anthropic` NuGet package
- The `endpoint` defaults to `https://api.anthropic.com` and can typically be omitted
- Tool calling is supported through Anthropic’s native tool use API
## Provider Comparison

| Feature | OpenAI | Microsoft Foundry | Anthropic |
|---|---|---|---|
| `chat` | ✅ | ✅ | ✅ |
| `responses` | ✅ | ✅ | ❌ |
| `embedding` | ✅ | ✅ | ❌ |
| `image` | ✅ | ✅ | ❌ |
| Agent (tool loop) | ✅ | ✅ | ❌ |
| Streaming | ✅ | ✅ | ✅ |
| Structured output | ✅ | ✅ | ✅ |
| Entra ID auth | ❌ | ✅ | ❌ |
## Custom Providers

The provider system is extensible. You can create your own provider by implementing the executor and processor interfaces, then registering them with the runtime.
### 1. Implement the Protocols

Python:

```python
from prompty.core.protocols import ExecutorProtocol, ProcessorProtocol
from prompty.core.types import Message

class MyExecutor:
    def execute(self, agent, messages: list[Message], **kwargs):
        # Call your LLM API here
        ...

    async def execute_async(self, agent, messages: list[Message], **kwargs):
        # Async variant
        ...

class MyProcessor:
    def process(self, agent, response, **kwargs):
        # Extract content from the API response
        ...

    async def process_async(self, agent, response, **kwargs):
        ...
```

TypeScript:

```typescript
import { ExecutorProtocol, ProcessorProtocol, Message, Prompty } from "@prompty/core";

export class MyExecutor implements ExecutorProtocol {
  async execute(agent: Prompty, messages: Message[]): Promise<unknown> {
    // Call your LLM API here
  }
}

export class MyProcessor implements ProcessorProtocol {
  async process(agent: Prompty, response: unknown): Promise<string> {
    // Extract content from the API response
  }
}
```

C#:

```csharp
using Prompty.Core;

public class MyExecutor : IExecutor
{
    public async Task<object> ExecuteAsync(
        Prompty agent, IList<Message> messages)
    {
        // Call your LLM API here
    }
}

public class MyProcessor : IProcessor
{
    public async Task<string> ProcessAsync(
        Prompty agent, object response)
    {
        // Extract content from the API response
    }
}
```

Rust:

```rust
use prompty::types::Message;
use async_trait::async_trait;

pub struct MyExecutor;

#[async_trait]
impl prompty::Executor for MyExecutor {
    async fn execute(
        &self,
        agent: &prompty::Prompty,
        messages: &[Message],
    ) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
        // Call your LLM API here
        todo!()
    }
}

pub struct MyProcessor;

#[async_trait]
impl prompty::Processor for MyProcessor {
    async fn process(
        &self,
        agent: &prompty::Prompty,
        response: serde_json::Value,
    ) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
        // Extract content from the API response
        todo!()
    }
}
```

### 2. Register Your Provider

Python — in your package’s `pyproject.toml`:
```toml
[project.entry-points."prompty.executors"]
myprovider = "my_package.executor:MyExecutor"

[project.entry-points."prompty.processors"]
myprovider = "my_package.processor:MyProcessor"
```

After installing your package, the runtime discovers your provider automatically via the entry point system.
TypeScript — register the provider at startup:

```typescript
import { InvokerRegistry } from "@prompty/core";
import { MyExecutor, MyProcessor } from "./my-provider.js";

InvokerRegistry.registerExecutor("myprovider", new MyExecutor());
InvokerRegistry.registerProcessor("myprovider", new MyProcessor());
```

C# — register via the fluent builder or directly:
```csharp
using Prompty.Core;

// Fluent builder
new PromptyBuilder()
    .AddProvider("myprovider", new MyExecutor(), new MyProcessor());

// Or register directly
InvokerRegistry.RegisterExecutor("myprovider", new MyExecutor());
InvokerRegistry.RegisterProcessor("myprovider", new MyProcessor());
```

Rust — register at startup via function calls:
```rust
prompty::register_defaults();
prompty::register_executor("myprovider", MyExecutor);
prompty::register_processor("myprovider", MyProcessor);
```

### 3. Use in .prompty Files
```yaml
model:
  id: my-model
  provider: myprovider
  connection:
    kind: key
    endpoint: ${env:MY_ENDPOINT}
    apiKey: ${env:MY_API_KEY}
```

No changes to the Prompty codebase are needed — the .prompty file format is the same regardless of which runtime you use.
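Put together, the provider mechanism is essentially a keyed registry of executor/processor pairs: the executor sends the request, the processor extracts the result. A toy in-memory version in Python — illustrative only, not the actual `InvokerRegistry` implementation:

```python
class Registry:
    """Toy provider registry: maps a provider key to an executor/processor pair."""

    def __init__(self):
        self._executors = {}
        self._processors = {}

    def register_executor(self, key, executor):
        self._executors[key] = executor

    def register_processor(self, key, processor):
        self._processors[key] = processor

    def run(self, key, messages):
        # Executor sends the request; processor extracts the result.
        raw = self._executors[key].execute(messages)
        return self._processors[key].process(raw)

class EchoExecutor:
    """Stand-in for an API call: echoes the last message back."""
    def execute(self, messages):
        return {"choices": [{"message": {"content": messages[-1]["content"]}}]}

class ChatProcessor:
    """Pulls the assistant text out of a chat-style response."""
    def process(self, response):
        return response["choices"][0]["message"]["content"]

reg = Registry()
reg.register_executor("myprovider", EchoExecutor())
reg.register_processor("myprovider", ChatProcessor())
result = reg.run("myprovider", [{"role": "user", "content": "hi"}])
```

Splitting execution from processing is what lets one `.prompty` file target any backend: only the registered pair changes, never the file.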