Migration Guide (v1 → v2)

Prompty v2 is a significant rewrite. Your .prompty files will need to be updated to v2 syntax — v2 does not auto-migrate v1 property names. This guide shows every change you need to make.

v1 supported Python 3.9+. v2 requires 3.11+ for modern syntax features (`X | Y` unions, `match`/`case`, etc.).

v2 recommends uv for Python environment management:

```sh
# v1
pip install prompty

# v2
uv pip install prompty[jinja2,openai]
```

In v1, all providers were bundled. In v2, install only the extras you need:

| Runtime | Install Command |
| --- | --- |
| Python | `uv pip install prompty[jinja2,openai]` |
| TypeScript | `npm install @prompty/core @prompty/openai` |
| C# | `dotnet add package Prompty.OpenAI` |
| Rust | `cargo add prompty prompty-openai` |

Python extras:

| Extra | What it includes |
| --- | --- |
| `jinja2` | Jinja2 renderer |
| `mustache` | Mustache renderer |
| `openai` | OpenAI provider |
| `azure` | Azure OpenAI + identity (deprecated alias for `foundry`) |
| `otel` | OpenTelemetry tracing |
| `all` | Everything above |

v1 signaled agent behavior with a special `api` value in the .prompty file and smuggled tool functions in via `metadata["tool_functions"]`:

```python
# v1 — tool functions smuggled via metadata
agent.metadata["tool_functions"] = {
    "get_weather": get_weather,
}
result = prompty.invoke(agent, messages)

# v2 — tools passed explicitly
result = prompty.turn(
    "agent.prompty",
    inputs={...},
    tools={"get_weather": get_weather},
)
```

v1 silently fell back to DefaultAzureCredential when no API key was provided. v2 requires explicit configuration via the connection registry:

```python
# v2 — explicit credential setup
import os

import prompty
from azure.identity import (
    DefaultAzureCredential,
    get_bearer_token_provider,
)
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("azure", client=client)
```

Then reference it in the .prompty file:

```yaml
model:
  connection:
    kind: reference
    name: azure
```

v2 uses updated property names. v1 files must be manually updated to v2 syntax — the v2 loader does not auto-convert old formats.

```yaml
# v1
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:ENDPOINT}
    api_key: ${env:KEY}
  parameters:
    max_tokens: 500
    temperature: 0.7
```

```yaml
# v2
model:
  apiType: chat
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:ENDPOINT}
    apiKey: ${env:KEY}
  options:
    maxOutputTokens: 500
    temperature: 0.7
```
```yaml
# v1
inputs:
  name:
    type: string
    sample: World
  question:
    type: string
```

```yaml
# v2
inputs:
  - name: name
    kind: string
    default: World
  - name: question
    kind: string
```
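This dict-to-list inputs migration is mechanical, so it can be scripted. A minimal sketch, assuming the v1 frontmatter has already been parsed into a Python dict (e.g. with PyYAML); `migrate_inputs` is a hypothetical helper, not part of the Prompty API:

```python
def migrate_inputs(v1_inputs: dict) -> list[dict]:
    """Convert a v1 `inputs` mapping to a v2 list of properties.

    v1:  {name: {type: ..., sample: ...}}
    v2:  [{name: ..., kind: ..., default: ...}]
    """
    migrated = []
    for name, spec in v1_inputs.items():
        prop = {"name": name, "kind": spec.get("type", "string")}
        if "sample" in spec:  # v1 `sample` becomes v2 `default`
            prop["default"] = spec["sample"]
        migrated.append(prop)
    return migrated

migrate_inputs({
    "name": {"type": "string", "sample": "World"},
    "question": {"type": "string"},
})
# → [{'name': 'name', 'kind': 'string', 'default': 'World'},
#    {'name': 'question', 'kind': 'string'}]
```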
```yaml
# v1
template: jinja2
```

```yaml
# v2 (jinja2 + prompty are the defaults — omit entirely, or use the structured form)
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
```

```yaml
# For non-default templates:
template:
  format:
    kind: mustache
  parser:
    kind: prompty
```
| v1 Property | v2 Property |
| --- | --- |
| `model.api` | `model.apiType` |
| `model.configuration` | `model.connection` |
| `model.configuration.type` | `model.provider` (`azure_openai` → `foundry`, `openai` → `openai`) |
| `model.parameters` | `model.options` |
| `model.parameters.max_tokens` | `model.options.maxOutputTokens` |
| `model.parameters.top_p` | `model.options.topP` |
| `model.parameters.frequency_penalty` | `model.options.frequencyPenalty` |
| `model.parameters.presence_penalty` | `model.options.presencePenalty` |
| `model.parameters.stop` | `model.options.stopSequences` |
| `inputs` (dict) | `inputs` (list of Property) |
| `outputs` (dict) | `outputs` (list of Property) |
| `inputs.X.type` | `kind` |
| `inputs.X.sample` | `default` |
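The model-block renames in this table can likewise be applied in code. A sketch under the same assumption (frontmatter already parsed into a dict); `migrate_model` and `OPTION_RENAMES` are hypothetical names, and the connection block is left out because its migration depends on your auth setup:

```python
# v1 → v2 key renames for model.parameters → model.options,
# taken directly from the property table above.
OPTION_RENAMES = {
    "max_tokens": "maxOutputTokens",
    "top_p": "topP",
    "frequency_penalty": "frequencyPenalty",
    "presence_penalty": "presencePenalty",
    "stop": "stopSequences",
}

def migrate_model(v1_model: dict) -> dict:
    """Rewrite a v1 `model` block into v2 shape (hypothetical helper)."""
    config = v1_model.get("configuration", {})
    # azure_openai → foundry; unknown provider types pass through unchanged
    provider = {"azure_openai": "foundry", "openai": "openai"}.get(
        config.get("type"), config.get("type")
    )
    options = {
        OPTION_RENAMES.get(k, k): v
        for k, v in v1_model.get("parameters", {}).items()
    }
    # connection details (kind, endpoint, apiKey) still need per-project handling
    return {"apiType": v1_model.get("api"), "provider": provider, "options": options}

migrate_model({
    "api": "chat",
    "configuration": {"type": "azure_openai"},
    "parameters": {"max_tokens": 500, "temperature": 0.7},
})
# → {'apiType': 'chat', 'provider': 'foundry',
#    'options': {'maxOutputTokens': 500, 'temperature': 0.7}}
```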
Other changes in v2:

- Connection registry — register pre-configured clients for reuse across prompts
- Agent loop — `turn()` with automatic tool-call execution and error recovery
- Streaming hardening — proper handling of tool calls, refusals, and empty chunks
- Structured output — `outputs` → `response_format` for type-safe JSON responses
- Thread safety — renderer nonces use thread-local storage
- Entry-point discovery (Python) — third-party providers register as plugins
- Multi-runtime support — Python (`prompty`), TypeScript (`@prompty/core`), C# (`Prompty.Core`), and Rust (`prompty`) all share the same .prompty file format
- C# runtime — `Prompty.Core`, `Prompty.OpenAI`, `Prompty.Foundry`, `Prompty.Anthropic` NuGet packages