
Providers

Prompty uses a provider system to connect to different LLM backends. Each provider has an executor (sends requests to the API) and a processor (extracts results from responses). You set the provider in your .prompty file’s model section.

model:
  id: gpt-4o
  provider: openai # ← provider key
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT}
    apiKey: ${env:OPENAI_API_KEY}
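
The ${env:NAME} placeholders are resolved against environment variables when the file is loaded. As a rough illustration of that substitution (a minimal sketch, not the runtime's actual resolver, which may support additional sources):

```python
import os
import re

# Replace ${env:NAME} placeholders with environment variable values.
# Minimal sketch of load-time substitution; unset variables become "".
def resolve_env(value: str) -> str:
    return re.sub(
        r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["OPENAI_API_KEY"] = "sk-demo"
resolved = resolve_env("${env:OPENAI_API_KEY}")  # "sk-demo"
```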

OpenAI

Direct access to the OpenAI API.

Provider key: openai

Supported API types: chat, responses, embedding, image

model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
  • SDKs: Python — openai package · TypeScript — openai npm package · C# — OpenAI NuGet package
  • Supports streaming via PromptyStream / AsyncPromptyStream (Python), async iterables (TypeScript), and IAsyncEnumerable (C#)
  • Structured output is supported via outputs / response_format
  • Agent mode is available via turn(), which uses apiType: chat with an automatic tool-calling loop
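
The tool-calling loop behind agent mode can be pictured with a self-contained sketch. This is illustrative only: call_model and the tools table are stand-ins for the real API and tool registry, not Prompty names.

```python
# Illustrative sketch of an agent-style tool-calling loop, similar in
# shape to what turn() layers on top of apiType: chat. call_model and
# tools are hypothetical stand-ins, not Prompty APIs.
def run_turn(call_model, tools, messages, max_rounds=5):
    """Call the model repeatedly, executing requested tools,
    until it returns a plain text answer or the round limit is hit."""
    for _ in range(max_rounds):
        reply = call_model(messages)  # {"tool": ..., "args": ...} or {"text": ...}
        if "tool" in reply:
            result = tools[reply["tool"]](**reply["args"])
            messages = messages + [{"role": "tool", "content": str(result)}]
        else:
            return reply["text"]
    raise RuntimeError("tool loop did not converge")

# A model stub that requests one tool call, then answers.
def fake_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"text": "2 + 3 = " + messages[-1]["content"]}
    return {"tool": "add", "args": {"a": 2, "b": 3}}

answer = run_turn(
    fake_model,
    {"add": lambda a, b: a + b},
    [{"role": "user", "content": "What is 2 + 3?"}],
)  # "2 + 3 = 5"
```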

Microsoft Foundry

Connect to models deployed through Microsoft Foundry (Azure AI Services). This provider covers both Foundry project endpoints (the recommended approach) and classic Azure OpenAI endpoints (legacy).

Provider key: foundry

Supported API types: chat, responses, embedding, image

model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
  options:
    temperature: 0.7
| Endpoint pattern | Example | Notes |
| --- | --- | --- |
| Foundry project (recommended) | https://<resource>.services.ai.azure.com/api/projects/<project> | New-style Foundry project endpoint |
| Classic Azure OpenAI (legacy) | https://<resource>.openai.azure.com/ | Still supported via provider: foundry |
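
The two endpoint styles can be told apart by hostname. A hypothetical helper, purely to illustrate the patterns in the table (endpoint_style is not a Prompty API):

```python
from urllib.parse import urlparse

# Hypothetical classifier for the two endpoint styles shown above.
# Illustration only; not part of Prompty.
def endpoint_style(endpoint: str) -> str:
    host = urlparse(endpoint).hostname or ""
    if host.endswith(".services.ai.azure.com"):
        return "foundry-project"
    if host.endswith(".openai.azure.com"):
        return "azure-openai-classic"
    return "unknown"

endpoint_style("https://myres.services.ai.azure.com/api/projects/myproj")
# -> "foundry-project"
```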
  • SDKs: Python — openai + azure-identity packages · TypeScript — openai + @azure/identity npm packages · C# — OpenAI + Azure.Identity NuGet packages
  • Supports both API key and Microsoft Entra ID authentication (managed identity / DefaultAzureCredential)
  • Supports the same features as the OpenAI provider (streaming, structured output, agent mode)
  • Model deployments are managed through the Azure AI Foundry portal

Anthropic

Access Anthropic Claude models directly.

Provider key: anthropic

Supported API types: chat

model:
  id: claude-sonnet-4-6
  provider: anthropic
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.anthropic.com
    apiKey: ${env:ANTHROPIC_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
  • SDKs: Python — anthropic package · TypeScript — @anthropic-ai/sdk npm package · C# — Anthropic NuGet package
  • The endpoint defaults to https://api.anthropic.com and can typically be omitted
  • Tool calling is supported through Anthropic’s native tool use API
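
One job an Anthropic executor has that the OpenAI one does not: Anthropic's Messages API takes the system prompt as a top-level parameter rather than as a message with role system. A simplified sketch of that reshaping (not Prompty's actual code):

```python
# Split OpenAI-style messages into the shape Anthropic's Messages API
# expects: a top-level system string plus user/assistant turns.
# Simplified sketch, not Prompty's implementation.
def to_anthropic(messages):
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    return {"system": "\n".join(system_parts), "messages": turns}

payload = to_anthropic([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
])
```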

Feature comparison

| Feature | OpenAI | Microsoft Foundry | Anthropic |
| --- | --- | --- | --- |
| chat | ✓ | ✓ | ✓ |
| responses | ✓ | ✓ | — |
| embedding | ✓ | ✓ | — |
| image | ✓ | ✓ | — |
| agent (tool loop) | ✓ | ✓ | ✓ |
| Streaming | ✓ | ✓ | — |
| Structured output | ✓ | ✓ | — |
| Entra ID auth | — | ✓ | — |

Custom providers

The provider system is extensible. You can create your own provider by implementing the executor and processor interfaces, then registering them with the runtime.

from prompty.core.protocols import ExecutorProtocol, ProcessorProtocol
from prompty.core.types import Message


class MyExecutor:
    def execute(self, agent, messages: list[Message], **kwargs):
        # Call your LLM API here
        ...

    async def execute_async(self, agent, messages: list[Message], **kwargs):
        # Async variant
        ...


class MyProcessor:
    def process(self, agent, response, **kwargs):
        # Extract content from the API response
        ...

    async def process_async(self, agent, response, **kwargs):
        # Async variant
        ...

In your package’s pyproject.toml:

[project.entry-points."prompty.executors"]
myprovider = "my_package.executor:MyExecutor"

[project.entry-points."prompty.processors"]
myprovider = "my_package.processor:MyProcessor"

After installing your package, the runtime discovers your provider automatically via the entry point system.

model:
  id: my-model
  provider: myprovider
  connection:
    kind: key
    endpoint: ${env:MY_ENDPOINT}
    apiKey: ${env:MY_API_KEY}

No changes to the Prompty codebase are needed — the .prompty file format is the same regardless of which runtime you use.