# Migration Guide (v1 → v2)

## Overview

Prompty v2 is a significant rewrite. Your `.prompty` files will need to be updated to v2 syntax; v2 does not auto-migrate v1 property names. This guide shows every change you need to make.
## Breaking Changes

### Python ≥ 3.11 Required

v1 supported Python 3.9+. v2 requires 3.11+ for modern syntax features (`X | Y` unions, `match`/`case`, etc.).
### Package Manager

v2 recommends uv for Python environment management:

```sh
# v1
pip install prompty

# v2
uv pip install prompty[jinja2,openai]
```

### Extras Are Required

In v1, all providers were bundled. In v2, install only the extras you need:
| Runtime | Install Command |
|---|---|
| Python | `uv pip install prompty[jinja2,openai]` |
| TypeScript | `npm install @prompty/core @prompty/openai` |
| C# | `dotnet add package Prompty.OpenAI` |
| Rust | `cargo add prompty prompty-openai` |

Python extras:

| Extra | What it includes |
|---|---|
| `jinja2` | Jinja2 renderer |
| `mustache` | Mustache renderer |
| `openai` | OpenAI provider |
| `azure` | Azure OpenAI + identity (deprecated alias for `foundry`) |
| `otel` | OpenTelemetry tracing |
| `all` | Everything above |
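Because providers are no longer bundled, a missing extra surfaces as an `ImportError` at run time. A small preflight check can fail fast with a helpful message instead; this is a hypothetical helper, not part of Prompty:

```python
from importlib.util import find_spec

def missing_extras(modules: list[str]) -> list[str]:
    """Return the module names that are not importable in this environment."""
    return [m for m in modules if find_spec(m) is None]

# Before loading a .prompty that needs the Jinja2 renderer and OpenAI provider:
# if missing_extras(["jinja2", "openai"]):
#     raise RuntimeError("install with: uv pip install prompty[jinja2,openai]")
```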
## Agent Mode API

v1 used a special `apiType` in the `.prompty` file and smuggled tool functions via `metadata["tool_functions"]`:

```python
# v1 — tool functions smuggled via metadata
agent.metadata["tool_functions"] = {
    "get_weather": get_weather
}
result = prompty.invoke(agent, messages)

# v2 — tools passed explicitly
result = prompty.turn(
    "agent.prompty",
    inputs={...},
    tools={"get_weather": get_weather},
)
```

```typescript
// v1 — no native agent mode in TypeScript v1

// v2 — tools passed explicitly
import { turn } from "@prompty/core";

const result = await turn("agent.prompty", {
  inputs: { question: "What's the weather?" },
  tools: { get_weather: getWeather },
});
```

```csharp
// v1 — no native agent mode in C# v1

// v2 — tools passed explicitly
var tools = new Dictionary<string, Delegate>
{
    ["get_weather"] = GetWeather
};
var result = await Pipeline.TurnAsync(
    "agent.prompty",
    new() { ["question"] = "What's the weather?" },
    tools: tools
);
```

```rust
// Rust is new in v2 — no v1 equivalent

// v2 — tools passed explicitly
use prompty;
use serde_json::json;

prompty::register_tool_handler("get_weather", |args| {
    Box::pin(async move {
        let city = args["city"].as_str().unwrap_or("unknown");
        Ok(json!(format!("72°F in {city}")))
    })
});

let result = prompty::turn_from_path(
    "agent.prompty",
    Some(&json!({ "question": "What's the weather?" })),
    None,
).await?;
```

## Azure Credentials
v1 silently fell back to `DefaultAzureCredential` when no API key was provided. v2 requires explicit configuration via the connection registry:
```python
# v2 — explicit credential setup
import os

import prompty
from openai import AzureOpenAI
from azure.identity import (
    DefaultAzureCredential,
    get_bearer_token_provider,
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("azure", client=client)
```

```typescript
// v2 — explicit credential setup
import { registerConnection } from "@prompty/core";
import { AzureOpenAI } from "openai";
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";

const credential = new DefaultAzureCredential();
const client = new AzureOpenAI({
  azureADTokenProvider: getBearerTokenProvider(
    credential,
    "https://cognitiveservices.azure.com/.default"
  ),
  endpoint: process.env.AZURE_ENDPOINT!,
});
registerConnection("azure", client);
```

```csharp
// v2 — explicit credential setup
using Azure.Identity;
using Prompty.Core;

var credential = new DefaultAzureCredential();
ConnectionRegistry.Register("azure", new AzureOpenAIConnectionOptions
{
    Endpoint = Environment.GetEnvironmentVariable("AZURE_ENDPOINT")!,
    Credential = credential,
});
```

```rust
// v2 — explicit credential setup
use prompty;
use prompty_openai;

let credential = azure_identity::DefaultAzureCredential::new()?;
let endpoint = std::env::var("AZURE_ENDPOINT")?;

prompty::register_connection("azure", prompty_openai::AzureConnectionOptions {
    endpoint,
    credential: Box::new(credential),
});
```

Then reference it in the `.prompty` file:

```yaml
model:
  connection:
    kind: reference
    name: azure
```

## File Format Changes
v2 uses updated property names. v1 files must be manually updated to v2 syntax; the v2 loader does not auto-convert old formats.
### Model Configuration

```yaml
# v1
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:ENDPOINT}
    api_key: ${env:KEY}
  parameters:
    max_tokens: 500
    temperature: 0.7
```

```yaml
# v2
model:
  apiType: chat
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:ENDPOINT}
    apiKey: ${env:KEY}
  options:
    maxOutputTokens: 500
    temperature: 0.7
```

### Input Schema

```yaml
# v1
inputs:
  name:
    type: string
    sample: World
  question:
    type: string
```

```yaml
# v2
inputs:
  - name: name
    kind: string
    default: World
  - name: question
    kind: string
```

### Template Format

```yaml
# v1
template: jinja2
```

```yaml
# v2 (jinja2 + prompty are defaults — omit entirely, or use structured form)
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
```

```yaml
# For non-default templates:
template:
  format:
    kind: mustache
  parser:
    kind: prompty
```

### Key Renames
| v1 Property | v2 Property |
|---|---|
| `model.api` | `model.apiType` |
| `model.configuration` | `model.connection` |
| `model.configuration.type` | `model.provider` (`azure_openai` → `foundry`, `openai` → `openai`) |
| `model.parameters` | `model.options` |
| `model.parameters.max_tokens` | `model.options.maxOutputTokens` |
| `model.parameters.top_p` | `model.options.topP` |
| `model.parameters.frequency_penalty` | `model.options.frequencyPenalty` |
| `model.parameters.presence_penalty` | `model.options.presencePenalty` |
| `model.parameters.stop` | `model.options.stopSequences` |
| `inputs` (dict) | `inputs` (list of Property) |
| `outputs` | `outputs` |
| `inputs.X.type` | `kind` |
| `inputs.X.sample` | `default` |
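Since the loader does not auto-convert, the renames above can be applied mechanically with a one-off script. The helper below is an illustrative sketch, not part of Prompty: it assumes you have already parsed a v1 file's frontmatter into a plain dict, and it assumes `kind: key` for API-key connections (matching the v2 example above):

```python
# Hypothetical migration helpers mirroring the rename table above.
OPTION_RENAMES = {
    "max_tokens": "maxOutputTokens",
    "top_p": "topP",
    "frequency_penalty": "frequencyPenalty",
    "presence_penalty": "presencePenalty",
    "stop": "stopSequences",
}
PROVIDER_RENAMES = {"azure_openai": "foundry", "openai": "openai"}
CONNECTION_RENAMES = {"azure_endpoint": "endpoint", "api_key": "apiKey"}

def migrate_model(v1_model: dict) -> dict:
    """Translate a v1 `model:` block into v2 shape."""
    conf = v1_model.get("configuration", {})
    params = v1_model.get("parameters", {})
    return {
        "apiType": v1_model.get("api", "chat"),
        "provider": PROVIDER_RENAMES.get(conf.get("type"), conf.get("type")),
        # assumption: key-based connections become `kind: key`
        "connection": {"kind": "key", **{CONNECTION_RENAMES.get(k, k): v
                                         for k, v in conf.items() if k != "type"}},
        "options": {OPTION_RENAMES.get(k, k): v for k, v in params.items()},
    }

def migrate_inputs(v1_inputs: dict) -> list:
    """v1 dict-of-specs -> v2 list of Property (type -> kind, sample -> default)."""
    props = []
    for name, spec in v1_inputs.items():
        prop = {"name": name, "kind": spec.get("type", "string")}
        if "sample" in spec:
            prop["default"] = spec["sample"]
        props.append(prop)
    return props
```

Run the output through a YAML dumper and review the result by hand; edge cases (streaming options, per-provider settings) still need manual attention.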
## New Features in v2

- Connection registry — register pre-configured clients for reuse across prompts
- Agent loop — `turn()` with automatic tool-call execution and error recovery
- Streaming hardening — proper handling of tool calls, refusals, and empty chunks
- Structured output — `outputs` → `response_format` for type-safe JSON responses
- Thread safety — renderer nonces use thread-local storage
- Entry-point discovery (Python) — third-party providers register as plugins
- Multi-runtime support — Python (`prompty`), TypeScript (`@prompty/core`), C# (`Prompty.Core`), and Rust (`prompty`) all share the same `.prompty` file format
- C# runtime — `Prompty.Core`, `Prompty.OpenAI`, `Prompty.Foundry`, `Prompty.Anthropic` NuGet packages