# The .prompty File Format
A `.prompty` file is a plain-text asset that pairs configuration with
prompt instructions in a single, portable file. The top half is YAML
frontmatter; the bottom half is a markdown body that becomes the
`instructions` property on the loaded `Prompty`.
## File Structure Overview

Every .prompty file follows the same two-part layout:
```
---        ← frontmatter start
(YAML)     ← configuration: model, inputs, tools, template …
---        ← frontmatter end
(Markdown) ← body: role markers + template syntax → instructions
```

The loader splits the file at the `---` delimiters, parses the YAML into
typed Prompty schema objects, and assigns the markdown body to
`agent.instructions`.
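The delimiter handling can be sketched in a few lines. This is an illustrative simplification only, not the library's actual loader (which also parses the frontmatter into typed schema objects):

```python
def split_prompty(text: str) -> tuple[str, str]:
    """Split a .prompty file into (frontmatter, body) at the --- delimiters.

    Simplified sketch: assumes the file opens with a frontmatter block
    and that "---" does not occur inside the YAML itself.
    """
    parts = text.split("---", 2)
    if len(parts) < 3:
        raise ValueError("missing frontmatter delimiters")
    _, frontmatter, body = parts
    return frontmatter.strip(), body.strip()


sample = """---
name: hello
model: gpt-4o
---
system:
You are a helpful assistant.
"""

fm, body = split_prompty(sample)
# fm   → "name: hello\nmodel: gpt-4o"
# body → "system:\nYou are a helpful assistant."
```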
```mermaid
%%{init: {"flowchart": {"wrappingWidth": 400}} }%%
flowchart TD
    FM["FRONTMATTER
    name, model, inputs, tools, template"]
    Body["BODY
    system: / user: / assistant:"]
    FM -- "--- delimiter ---" --> Body
    style FM fill:#dbeafe,stroke:#3b82f6,color:#1e293b
    style Body fill:#d1fae5,stroke:#10b981,color:#065f46
```
After loading, each section maps to a typed property on the Prompty object:
| File Section | Prompty Property | Description |
|---|---|---|
| `name`, `description` | `.name`, `.description` | Identity fields |
| `metadata` | `.metadata` | Authors, tags, version, etc. |
| `model` | `.model` | Model ID, provider, connection, options |
| `inputs` | `.inputs` | Input property definitions |
| `outputs` | `.outputs` | Output schema for structured output |
| `tools` | `.tools` | Tool definitions (function, MCP, OpenAPI) |
| `template` | `.template` | Renderer format + parser config |
| Markdown body | `.instructions` | The prompt text with role markers |
## Frontmatter Properties

The YAML frontmatter maps directly to the Prompty schema's `Prompty`
type. Here is a summary — see the Schema Reference for the full
specification of every property.
### Identity

| Property | Type | Description |
|---|---|---|
| `name` | string | Unique name for the prompt |
| `displayName` | string | Human-readable label |
| `description` | string | What this prompt does |
### Metadata

Arbitrary key-value pairs. Common conventions:

```yaml
metadata:
  authors: [alice, bob]
  tags: [customer-support, v2]
  version: "1.0"
```

### Model

Configures the LLM to call. Full form:
```yaml
model:
  id: gpt-4o
  provider: foundry   # or "openai"
  apiType: chat       # chat | responses | embedding | image
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
```

Or the shorthand — just a model name:

```yaml
model: gpt-4o
```

This expands to `{ id: "gpt-4o" }` with `provider` and `connection`
inherited from defaults or environment.
### Input & Output Schema

Define the inputs your template expects and the structure of outputs.
Both `inputs` and `outputs` accept a list of properties or a
name-keyed dictionary — three equivalent forms are supported.
#### Form 1: Named List (recommended)

Each property is an object with an explicit `name` field:

```yaml
inputs:
  - name: question
    kind: string
    description: The user's question
    required: true
  - name: language
    kind: string
    default: English
```

This is the most explicit form and the one used throughout these docs.
#### Form 2: Dictionary

Property names are the YAML keys; no `name` field needed:

```yaml
inputs:
  question:
    kind: string
    description: The user's question
    required: true
  language:
    kind: string
    default: English
```

#### Form 3: Scalar Shorthand

When a property is just a default value, you can write the value directly.
The loader infers `kind` from the scalar type:
```yaml
inputs:
  question: What is the meaning of life?   # → kind: string, default: "What is..."
  maxResults: 10                           # → kind: integer, default: 10
  temperature: 0.7                         # → kind: float, default: 0.7
  verbose: false                           # → kind: boolean, default: false
```

Kind inference rules:
| Scalar Type | Inferred `kind` |
|---|---|
| String | `string` |
| Integer | `integer` |
| Float | `float` |
| Boolean | `boolean` |
| List | `array` |
| Dict/Map | `object` |
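These rules can be sketched as a small helper (a hypothetical illustration, not the library's actual implementation):

```python
def infer_kind(value) -> str:
    """Map a YAML scalar's Python type to a Prompty property kind."""
    # bool must be checked before int: bool is a subclass of int in Python.
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, int):
        return "integer"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "string"
    if isinstance(value, list):
        return "array"
    if isinstance(value, dict):
        return "object"
    raise TypeError(f"cannot infer kind for {type(value).__name__}")


assert infer_kind("What is the meaning of life?") == "string"
assert infer_kind(10) == "integer"
assert infer_kind(0.7) == "float"
assert infer_kind(False) == "boolean"
```

Note the boolean check comes first: in Python (as in YAML parsers built on it), `True` would otherwise match the integer branch.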
#### Outputs

`outputs` follows the same syntax. Define outputs when you want
structured output (the executor converts them to `response_format`):

```yaml
outputs:
  - name: answer
    kind: string
  - name: confidence
    kind: float
```

#### Property Fields

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | — | Property name (required in list form) |
| `kind` | string | — | Type: `string`, `integer`, `float`, `boolean`, `array`, `object`, `thread`, `image`, `file`, `audio` |
| `description` | string | — | Human-readable description |
| `required` | boolean | `false` | Whether the input must be provided |
| `default` | any | — | Default value when input is not provided |
| `example` | any | — | Example value (documentation/tooling only — never used at runtime) |
| `enumValues` | any[] | — | Allowed values (enum constraint) |
### Tools

A list of tool definitions the model can call:

```yaml
# Function tool
tools:
  - name: get_weather
    kind: function
    description: Get the current weather
    parameters:
      - name: city
        kind: string
        required: true
```

```yaml
# MCP tool
tools:
  - name: filesystem
    kind: mcp
    serverName: filesystem-server
    connection:
      kind: reference
```

```yaml
# OpenAPI tool
tools:
  - name: weather_api
    kind: openapi
    specification: ./weather.openapi.json
    connection:
      kind: key
      endpoint: https://api.weather.com
```

```yaml
# Custom provider tool
tools:
  - name: my_tool
    kind: my_provider
    connection:
      kind: key
      endpoint: https://custom.example.com
    options:
      setting: value
```

### Template

Configures the rendering engine and the message parser.
Since Jinja2 and the Prompty chat parser are the defaults, you can often
omit `template` entirely. When you do need to set it, string values
work at every level — `format: jinja2` expands to `format: { kind: jinja2 }`:
```yaml
# Minimal — only needed when changing away from defaults
template:
  format: mustache   # string shorthand for { kind: mustache }
```

```yaml
# parser defaults to prompty — only specify it to override
template:
  format: jinja2
  parser: prompty   # already the default, can be omitted
```

```yaml
# Full explicit form — use when you need extra options
template:
  format:
    kind: jinja2
    strict: true
  parser:
    kind: prompty
    options:
      trimWhitespace: true
```

## The Markdown Body

Everything below the closing `---` is the body. The loader assigns
it to `agent.instructions`. At runtime the body flows through two
stages:
1. **Renderer** — expands template variables (`{{name}}`) using the inputs you provide.
2. **Parser** — splits the rendered text on role markers into a `list[Message]` ready for the LLM.
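Conceptually, the renderer stage is a substitution pass over the body. The sketch below uses a naive regex in place of the real Jinja2 engine, just to show the data flow (real Jinja2 also supports conditionals, loops, and filters):

```python
import re


def render(body: str, inputs: dict) -> str:
    """Stage 1 sketch: replace {{name}} placeholders with input values.

    Stands in for the real Jinja2 renderer; unknown placeholders are
    left untouched here for illustration.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(inputs.get(m.group(1), m.group(0))),
        body,
    )


rendered = render("user:\n{{question}}", {"question": "What is Prompty?"})
# rendered → "user:\nWhat is Prompty?"
```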
### Role Markers

Role markers are keywords on their own line followed by a colon. The parser recognises three roles:
| Marker | Resulting role |
|---|---|
| `system:` | system |
| `user:` | user |
| `assistant:` | assistant |
Everything after a marker (until the next marker or end-of-file) becomes
the content of that message.
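The splitting rule can be sketched as follows. This is an illustrative re-implementation of the idea, not the library's actual parser:

```python
import re

# A role marker is one of the three keywords alone on its own line.
ROLE_MARKER = re.compile(r"^(system|user|assistant):\s*$", re.MULTILINE)


def parse_messages(body: str) -> list[dict]:
    """Stage 2 sketch: split rendered text on role markers into messages."""
    messages = []
    matches = list(ROLE_MARKER.finditer(body))
    for i, m in enumerate(matches):
        # Content runs until the next marker or end-of-file.
        end = matches[i + 1].start() if i + 1 < len(matches) else len(body)
        content = body[m.end():end].strip()
        messages.append({"role": m.group(1), "content": content})
    return messages


body = "system:\nYou are helpful.\n\nuser:\nHello!"
assert parse_messages(body) == [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"},
]
```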
```
system:
You are an AI assistant who helps people find information.

user:
{{question}}

assistant:
Let me help with that.

user:
{{followUp}}
```

### Template Syntax

The default renderer is Jinja2. You can also use Mustache by
setting `template.format.kind: mustache`.
Jinja2:

```
system:
You are helping {{firstName}} {{lastName}}.
{% if context %}Here is some context:
{{ context }}{% endif %}
{% for item in history %}- {{ item }}
{% endfor %}

user:
{{question}}
```

Mustache:

```
system:
You are helping {{firstName}} {{lastName}}.
{{#context}}Here is some context:
{{context}}{{/context}}
{{#history}}- {{.}}
{{/history}}

user:
{{question}}
```

## Variable References in Frontmatter

Frontmatter values can reference external data using `${protocol:value}`
syntax. The loader resolves these at load time before the YAML is parsed
into typed objects.
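A hypothetical resolver for this syntax might look like the sketch below. The regex, helper name, and error behavior are illustrative assumptions, not the library's actual API:

```python
import os
import re

# Matches ${protocol:value} with an optional :default segment.
REF = re.compile(r"\$\{(\w+):([^}:]+)(?::([^}]*))?\}")


def resolve(value: str) -> str:
    """Resolve ${env:NAME} / ${env:NAME:default} references in a string.

    Sketch only; the real loader also supports other protocols
    such as ${file:...}.
    """
    def repl(m):
        protocol, name, default = m.group(1), m.group(2), m.group(3)
        if protocol == "env":
            val = os.environ.get(name, default)
            if val is None:
                raise KeyError(f"environment variable {name} is not set")
            return val
        raise ValueError(f"unsupported protocol: {protocol}")

    return REF.sub(repl, value)


os.environ["AZURE_REGION"] = "westus"
assert resolve("${env:AZURE_REGION:eastus}") == "westus"

del os.environ["AZURE_REGION"]
assert resolve("${env:AZURE_REGION:eastus}") == "eastus"
```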
### Environment Variables

```yaml
# Required — errors if AZURE_OPENAI_ENDPOINT is not set
endpoint: ${env:AZURE_OPENAI_ENDPOINT}

# With a fallback default value
region: ${env:AZURE_REGION:eastus}
```

### File References

```yaml
# Load a JSON file inline (path relative to the .prompty file)
connection: ${file:shared/azure-connection.json}
```

## Shorthand Syntax Reference

Prompty supports compact shorthands across several properties. Every shorthand expands to the full structured form during loading — the two are always equivalent.
### Model Shorthand

A plain string becomes `Model(id: <string>)`:

```yaml
# Shorthand — just the model name
model: gpt-4o
```

```yaml
# Equivalent full form
model:
  id: gpt-4o
```

### Input Shorthand (Scalar → Property)

A scalar value under `inputs` (dict form) becomes a typed `Property` with
`default` set and `kind` inferred:
```yaml
# Shorthand
inputs:
  firstName: Jane
  maxResults: 10
  verbose: true
```

```yaml
# Equivalent full form
inputs:
  - name: firstName
    kind: string
    default: Jane
  - name: maxResults
    kind: integer
    default: 10
  - name: verbose
    kind: boolean
    default: true
```

### Tool Parameters

`FunctionTool` parameters follow the same `Property` list or dict syntax as
`inputs` — there is no extra `properties:` wrapper:
```yaml
# List form
tools:
  - name: get_weather
    kind: function
    parameters:
      - name: city
        kind: string
        required: true
```

```yaml
# Dict form (also valid)
tools:
  - name: get_weather
    kind: function
    parameters:
      city:
        kind: string
        required: true
```

### Template Shorthand

String values work at every level inside `template`:
```yaml
# String shorthand — format and parser accept plain strings
template:
  format: jinja2    # → { kind: jinja2 }
  parser: prompty   # → { kind: prompty } (the default — can be omitted)
```

Since `parser` defaults to `prompty`, the shortest explicit form is:

```yaml
template:
  format: mustache   # only needed to change away from jinja2
```

The loader auto-migrates old v1 files that use the string form.
## Complete Example

Here is a full .prompty file using all the features described above:
```
---
name: customer-support
displayName: Customer Support Agent
description: Answers customer questions using context from their account.
metadata:
  authors: [support-team]
  tags: [production, customer-facing]
  version: "2.1"

model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.3
    maxOutputTokens: 2000

inputs:
  - name: customerName
    kind: string
    description: Full name of the customer
    required: true
  - name: question
    kind: string
    description: The customer's question
    required: true
  - name: orderHistory
    kind: array
    description: Recent orders for context
    default: []

outputs:
  - name: answer
    kind: string
  - name: sentiment
    kind: string
    enumValues: [positive, neutral, negative]

tools:
  - name: lookup_order
    kind: function
    description: Look up an order by ID
    parameters:
      - name: orderId
        kind: string
        required: true

template:
  format: jinja2
---
system:
You are a customer support agent for Contoso. Be helpful, concise,
and empathetic. Always greet the customer by name.

You have access to the following order history:
{% for order in orderHistory %}- Order #{{ order.id }}: {{ order.status }} ({{ order.date }})
{% endfor %}

user:
Hi, my name is {{customerName}}. {{question}}
```

Run it with the Prompty runtime:
```python
import prompty

# Load + render + parse + execute + process in one call
result = prompty.run(
    "customer-support.prompty",
    inputs={
        "customerName": "Jane Doe",
        "question": "Where is my order #12345?",
        "orderHistory": [
            {"id": "12345", "status": "shipped", "date": "2025-01-15"},
            {"id": "12300", "status": "delivered", "date": "2025-01-02"},
        ],
    },
)
```

```typescript
import { invoke } from "@prompty/core";
import "@prompty/foundry"; // registers "azure" provider

// Load + render + parse + execute + process in one call
const result = await invoke("customer-support.prompty", {
  inputs: {
    customerName: "Jane Doe",
    question: "Where is my order #12345?",
    orderHistory: [
      { id: "12345", status: "shipped", date: "2025-01-15" },
      { id: "12300", status: "delivered", date: "2025-01-02" },
    ],
  },
});
```

```csharp
using Prompty.Core;

// Load + render + parse + execute + process in one call
var result = await Pipeline.InvokeAsync("customer-support.prompty", new()
{
    ["customerName"] = "Jane Doe",
    ["question"] = "Where is my order #12345?",
    ["orderHistory"] = new[]
    {
        new { id = "12345", status = "shipped", date = "2025-01-15" },
        new { id = "12300", status = "delivered", date = "2025-01-02" },
    },
});
```

```rust
use serde_json::json;

prompty::register_defaults();
prompty_foundry::register();

// Load + render + parse + execute + process in one call
let inputs = json!({
    "customerName": "Jane Doe",
    "question": "Where is my order #12345?",
    "orderHistory": [
        {"id": "12345", "status": "shipped", "date": "2025-01-15"},
        {"id": "12300", "status": "delivered", "date": "2025-01-02"},
    ],
});
let result = prompty::invoke_from_path(
    "customer-support.prompty",
    Some(&inputs),
).await?;
```