API Reference
Pipeline Functions
load(path) → Prompty

Parse a .prompty file into a typed Prompty object.

Python:

```python
agent = prompty.load("chat.prompty")
print(agent.name)      # "chat"
print(agent.model.id)  # "gpt-4o"
```

TypeScript:

```typescript
import { load } from "@prompty/core";
const agent = await load("chat.prompty");
```

C#:

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
Console.WriteLine(agent.Name);     // "chat"
Console.WriteLine(agent.Model.Id); // "gpt-4o"
```

Rust:

```rust
let agent = prompty::load("chat.prompty")?;
println!("{}", agent.name());       // "chat"
println!("{}", agent.model().id()); // "gpt-4o"
```

render(agent, inputs) → str
Render the template with inputs. Returns the raw rendered string before parsing into messages.

Python:

```python
rendered = prompty.render(agent, inputs={"q": "Hi"})
# "system:\nYou are helpful.\n\nuser:\nHi"
```

TypeScript:

```typescript
import { render } from "@prompty/core";
const rendered = await render(agent, { q: "Hi" });
```

C#:

```csharp
var rendered = await Pipeline.RenderAsync(agent, new() { ["q"] = "Hi" });
// "system:\nYou are helpful.\n\nuser:\nHi"
```

Rust:

```rust
let rendered = prompty::render(&agent, Some(&json!({ "q": "Hi" }))).await?;
// "system:\nYou are helpful.\n\nuser:\nHi"
```

parse(agent, rendered) → list[Message]
Parse a rendered string into structured messages.

Python:

```python
messages = prompty.parse(agent, rendered)
# [Message(role="system", ...), Message(role="user", ...)]
```

TypeScript:

```typescript
import { parse } from "@prompty/core";
const messages = await parse(agent, rendered);
```

C#:

```csharp
var messages = await Pipeline.ParseAsync(agent, rendered);
// List<Message> with system and user messages
```

Rust:

```rust
let messages = prompty::parse(&agent, &rendered).await?;
// Vec<Message> with system and user messages
```

prepare(agent, inputs) → list[Message]
Composite: render + parse + thread expansion. Returns wire-ready messages.

Python:

```python
messages = prompty.prepare(agent, inputs={"q": "Hi"})
```

TypeScript:

```typescript
import { prepare } from "@prompty/core";
const messages = await prepare(agent, { q: "Hi" });
```

C#:

```csharp
var messages = await Pipeline.PrepareAsync(agent, new() { ["q"] = "Hi" });
```

Rust:

```rust
let messages = prompty::prepare(&agent, Some(&json!({ "q": "Hi" }))).await?;
```
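The composition can be sketched with toy stand-ins for render and parse. This is illustrative only, not the library's implementation: the real functions are template-engine aware, and thread expansion is omitted. It only mimics the "role:\ncontent" rendered format shown under render().

```python
# Toy stand-ins for render/parse, to show how prepare composes them.
def render(agent: dict, inputs: dict) -> str:
    return f"system:\n{agent['system']}\n\nuser:\n{inputs['q']}"

def parse(agent: dict, rendered: str) -> list[dict]:
    messages = []
    for block in rendered.split("\n\n"):
        role, content = block.split(":\n", 1)
        messages.append({"role": role, "content": content})
    return messages

def prepare(agent: dict, inputs: dict) -> list[dict]:
    # prepare is roughly parse(render(...)), plus thread expansion (omitted here)
    return parse(agent, render(agent, inputs))

print(prepare({"system": "You are helpful."}, {"q": "Hi"}))
# [{'role': 'system', 'content': 'You are helpful.'}, {'role': 'user', 'content': 'Hi'}]
```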
run(agent, messages) → result

Composite: call the LLM executor, then process the response.

Python:

```python
result = prompty.run(agent, messages)
# "Hello! How can I help you?"

# Pass raw=True to get the raw SDK response
response = prompty.run(agent, messages, raw=True)
```

TypeScript:

```typescript
import { run } from "@prompty/core";
const result = await run(agent, messages);
```

C#:

```csharp
var result = await Pipeline.RunAsync(agent, messages);
// "Hello! How can I help you?"

// Pass raw: true to get the raw SDK response
var response = await Pipeline.RunAsync(agent, messages, raw: true);
```

Rust:

```rust
let result = prompty::run(&agent, &messages).await?;
// "Hello! How can I help you?"

// Use run_raw to get the raw SDK response
let response = prompty::run_raw(&agent, &messages).await?;
```

process(agent, response) → result
Extract clean content from a raw LLM response.

Python:

```python
result = prompty.process(agent, response)
```

TypeScript:

```typescript
import { process } from "@prompty/core";
const result = await process(agent, response);
```

C#:

```csharp
var result = await Pipeline.ProcessAsync(agent, response);
```

Rust:

```rust
let result = prompty::process(&agent, &response).await?;
```

invoke(path_or_agent, inputs) → result
One-shot pipeline: load → prepare → execute → process.

Python:

```python
result = prompty.invoke("chat.prompty", inputs={"q": "Hi"})
```

TypeScript:

```typescript
import { invoke } from "@prompty/core";
const result = await invoke("chat.prompty", { q: "Hi" });
```

C#:

```csharp
var result = await Pipeline.InvokeAsync("chat.prompty", new() { ["q"] = "Hi" });
```

Rust:

```rust
let result = prompty::invoke_from_path("chat.prompty", Some(&json!({ "q": "Hi" }))).await?;
```

validate_inputs(agent, inputs)
Check that all required inputs (those without defaults) are provided. Raises an error if any are missing.

Python:

```python
prompty.validate_inputs(agent, {"name": "Jane"})
# Raises ValueError if required fields are missing
```

TypeScript:

```typescript
import { validateInputs } from "@prompty/core";
validateInputs(agent, { name: "Jane" });
// Throws Error if required fields are missing
```

C#:

```csharp
Pipeline.ValidateInputs(agent, new() { ["name"] = "Jane" });
// Throws ArgumentException if required fields are missing
```

Rust:

```rust
prompty::validate_inputs(&agent, &json!({ "name": "Jane" }))?;
// Returns Err(InvokerError) if required fields are missing
```

Async Variants
Python: every pipeline function has an _async counterpart:

| Sync | Async |
|---|---|
| load() | load_async() |
| render() | render_async() |
| parse() | parse_async() |
| prepare() | prepare_async() |
| run() | run_async() |
| invoke() | invoke_async() |
| process() | process_async() |
| turn() | turn_async() |

TypeScript: all functions are async by default — they return Promise<T>. Use await on every call:

```typescript
const agent = load("chat.prompty");            // sync (file I/O only)
const messages = await prepare(agent, inputs); // async
const result = await run(agent, messages);     // async
```

C#: all pipeline methods are async by convention — suffixed with Async:

```csharp
var agent = PromptyLoader.Load("chat.prompty");            // sync
var messages = await Pipeline.PrepareAsync(agent, inputs); // async
var result = await Pipeline.RunAsync(agent, messages);     // async
```

Rust: all pipeline functions are async by default — they return Result<T>. Use .await on every call:

```rust
let agent = prompty::load("chat.prompty")?;                    // sync
let messages = prompty::prepare(&agent, Some(&inputs)).await?; // async
let result = prompty::run(&agent, &messages).await?;           // async
```

Conversational Turns (Agent Mode)
turn(path_or_agent, inputs, tools, ...)

Run a conversational turn with an optional tool-calling loop.

Python:

```python
def get_weather(location: str) -> str:
    return f"72°F and sunny in {location}"

result = prompty.turn(
    "agent.prompty",
    inputs={"question": "Weather in Seattle?"},
    tools={"get_weather": get_weather},
    max_iterations=10,
)
```

TypeScript:

```typescript
import { turn } from "@prompty/core";

function getWeather(location: string): string {
  return `72°F and sunny in ${location}`;
}

const result = await turn(
  "agent.prompty",
  { question: "Weather in Seattle?" },
  { get_weather: getWeather },
);
```

C#:

```csharp
using Prompty.Core;

var tools = new Dictionary<string, Func<string, Task<string>>>
{
    ["get_weather"] = args => Task.FromResult("72°F and sunny")
};

var result = await Pipeline.TurnAsync(
    "agent.prompty",
    new() { ["question"] = "Weather in Seattle?" },
    tools: tools,
    maxIterations: 10);
```

Rust:

```rust
use prompty::{self, TurnOptions};
use serde_json::json;

fn get_weather(location: &str) -> String {
    format!("72°F and sunny in {location}")
}

prompty::register_tool_handler("get_weather", |args| {
    Box::pin(async move {
        let loc = args["location"].as_str().unwrap_or("unknown");
        Ok(json!(get_weather(loc)))
    })
});

let result = prompty::turn_from_path(
    "agent.prompty",
    Some(&json!({ "question": "Weather in Seattle?" })),
    Some(TurnOptions { max_iterations: Some(10), ..Default::default() }),
).await?;
```

Parameters:
| Parameter | Description |
|---|---|
| prompt | Path to .prompty file or loaded Prompty object |
| inputs | Template inputs dictionary |
| tools | Tool function implementations (name → callable) |
| max_iterations | Loop limit (default 10) |
| raw | Skip processing, return raw response |
Error handling:
- Bad JSON in tool args → error sent to model for retry
- Tool exception → error string sent to model
- Missing tool → error message, no crash
- Max iterations exceeded → error raised
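The rules above can be sketched with a minimal, self-contained tool dispatcher. This is illustrative only; `dispatch_tool` and its error strings are invented for this sketch and are not the library's actual implementation. The point is that every failure mode except the iteration cap becomes an error string handed back to the model rather than a crash.

```python
import json

def dispatch_tool(tools, name, raw_args):
    """Toy dispatcher mirroring the error-handling rules above:
    every failure becomes an error string sent back to the model."""
    if name not in tools:
        return f"error: unknown tool '{name}'"  # missing tool: no crash
    try:
        args = json.loads(raw_args)             # bad JSON: retryable error
    except json.JSONDecodeError as e:
        return f"error: invalid JSON arguments: {e}"
    try:
        return str(tools[name](**args))
    except Exception as e:                      # tool exception: error string
        return f"error: {e}"

tools = {"get_weather": lambda location: f"72°F and sunny in {location}"}
print(dispatch_tool(tools, "get_weather", '{"location": "Seattle"}'))
# 72°F and sunny in Seattle
print(dispatch_tool(tools, "get_weather", "{not json"))  # error string, not a crash
print(dispatch_tool(tools, "no_such_tool", "{}"))        # error string, not a crash
```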
Connection Registry
Register pre-configured SDK clients by name so .prompty files
can reference them via connection.kind: reference.
Python:

```python
import os

import prompty
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("my-foundry", client=client)

# Lookup and clear
client = prompty.get_connection("my-foundry")
prompty.clear_connections()  # useful in tests
```

TypeScript:

```typescript
import { registerConnection, getConnection, clearConnections } from "@prompty/core";
import AzureOpenAI from "openai";

const client = new AzureOpenAI({
  endpoint: process.env.AZURE_ENDPOINT,
  apiKey: process.env.AZURE_API_KEY,
});
registerConnection("my-foundry", client);

// Lookup and clear
const conn = getConnection("my-foundry");
clearConnections(); // useful in tests
```

C#:

```csharp
using Prompty.Core;
using OpenAI;

var client = new AzureOpenAIClient(
    new Uri(Environment.GetEnvironmentVariable("AZURE_ENDPOINT")!),
    new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_API_KEY")!));

ConnectionRegistry.Register("my-foundry", client);

// Lookup and clear
var conn = ConnectionRegistry.Get("my-foundry");
ConnectionRegistry.Clear(); // useful in tests
```

Rust:

```rust
use prompty;
use serde_json::json;

prompty::register_connection("my-foundry", json!({
    "kind": "key",
    "endpoint": std::env::var("AZURE_ENDPOINT")?,
    "apiKey": std::env::var("AZURE_API_KEY")?,
}));

// Lookup and clear
let conn = prompty::get_connection("my-foundry");
prompty::clear_connections(); // useful in tests
```

Then in your .prompty file:
```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: reference
    name: my-foundry
```

Streaming
Set stream: true in model options:

Python:

```python
agent = prompty.load("chat.prompty")
messages = prompty.prepare(agent, inputs={...})
agent.model.options.additionalProperties = {"stream": True}

response = prompty.run(agent, messages, raw=True)
for chunk in prompty.process(agent, response):
    print(chunk, end="", flush=True)
```

TypeScript:

```typescript
agent.model.options.additionalProperties = { stream: true };
const response = await run(agent, messages, { raw: true });
for await (const chunk of processResponse(agent, response)) {
  process.stdout.write(chunk);
}
```

C#:

```csharp
agent.Model.Options.AdditionalProperties["stream"] = true;

var response = await Pipeline.RunAsync(agent, messages, raw: true);
if (response is PromptyStream stream)
{
    await foreach (var chunk in stream)
    {
        Console.Write(chunk);
    }
}
```

Rust:

```rust
use prompty;
use serde_json::json;
use futures::StreamExt;

let agent = prompty::load("chat.prompty")?;
let messages = prompty::prepare(&agent, Some(&json!({}))).await?;

let mut stream = prompty::run_stream(&agent, &messages).await?;
while let Some(chunk) = stream.next().await {
    print!("{chunk}");
}
```

Structured Output
Define outputs to get StructuredResult — a dict/object subclass
that carries the raw JSON for efficient type casting:

```yaml
outputs:
  - name: city
    kind: string
  - name: temp
    kind: integer
```

StructuredResult
Returned by the processor when outputs is defined. Behaves like a
dict (Python), object (TypeScript), or Dictionary<string, object?> (C#) —
fully backward compatible — but also stores the raw JSON string internally.
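Conceptually, the design looks like a dict that remembers its raw JSON. The class below is a minimal sketch, not the library's actual class; the `raw` attribute name is invented for illustration:

```python
import json

class StructuredResultSketch(dict):
    """Illustrative stand-in for StructuredResult: behaves like a plain
    dict, but keeps the raw JSON string so a later cast can deserialize
    it directly, skipping a dict -> JSON re-serialization round-trip."""
    def __init__(self, raw_json: str):
        super().__init__(json.loads(raw_json))
        self.raw = raw_json  # attribute name invented for this sketch

result = StructuredResultSketch('{"city": "Seattle", "temp": 72}')
print(result["city"])  # Seattle  (ordinary dict access still works)
print(result.raw)      # the original JSON string is preserved
```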
cast(result, target) → T
Deserialize a result directly to a typed object. Uses the raw JSON
from StructuredResult when available — no dict→JSON→T round-trip.
Python:

```python
from dataclasses import dataclass

from prompty import cast

@dataclass
class Weather:
    city: str
    temperature: int

weather = cast(result, Weather)
# Supports: dataclass, Pydantic BaseModel, TypedDict, dict, list, primitives
```

TypeScript:

```typescript
import { cast } from "@prompty/core";
import { z } from "zod";

const Schema = z.object({ city: z.string(), temperature: z.number() });
const weather = cast<z.infer<typeof Schema>>(result, Schema.parse);
```

C#:

```csharp
using Prompty.Core;

record Weather(string City, int Temperature);
var weather = ((StructuredResult)result).Cast<Weather>();
// Or: await Pipeline.InvokeAsync<Weather>("weather.prompty", inputs);
```

Rust:

```rust
use prompty;
use serde::Deserialize;

#[derive(Deserialize)]
struct Weather {
    city: String,
    temperature: i32,
}

let weather: Weather = prompty::cast(&result)?;
// Supports any type implementing serde::Deserialize
```

Generic invoke<T> / turn<T>
Cast the final result in one step:
Python:

```python
weather = invoke("weather.prompty", inputs={"city": "Seattle"}, target_type=Weather)
weather = turn("agent.prompty", inputs={...}, tools=tools, target_type=Weather)
```

TypeScript:

```typescript
const weather = await invoke("weather.prompty", { city: "Seattle" }, { validator: WeatherSchema.parse });
const weather2 = await turn("agent.prompty", { ... }, tools, { validator: WeatherSchema.parse });
```

C#:

```csharp
var weather = await Pipeline.InvokeAsync<Weather>("weather.prompty", new() { ["city"] = "Seattle" });
var weather2 = await Pipeline.TurnAsync<Weather>("agent.prompty", new() { ["question"] = "..." }, tools: tools);
```

Rust:

```rust
use prompty;
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize)]
struct Weather { city: String, temperature: i32 }

let weather: Weather = prompty::invoke_from_path_typed(
    "weather.prompty",
    Some(&json!({ "city": "Seattle" })),
).await?;

let weather2: Weather = prompty::turn_from_path_typed(
    "agent.prompty",
    Some(&json!({ "question": "..." })),
    None,
).await?;
```

Tracing
Python:

```python
from prompty import Tracer, PromptyTracer, trace

# JSON file tracer
Tracer.add("json", PromptyTracer("./traces").tracer)

# OpenTelemetry
from prompty.tracing.otel import otel_tracer
Tracer.add("otel", otel_tracer())

# Decorate your own functions
@trace
def my_function(): ...
```

TypeScript:

```typescript
import { Tracer, trace, consoleTracer } from "@prompty/core";

// Console tracer
Tracer.add("console", consoleTracer);

// OpenTelemetry
import { otelTracer } from "@prompty/core";
Tracer.add("otel", otelTracer());

// Decorate your own functions
const myFunction = trace(async () => {
  // ...
}, "myFunction");
```

C#:

```csharp
using Prompty.Core;
using Prompty.Core.Tracing;

// Console tracer
Tracer.Add("console", name => new ConsoleTracer(name));

// OpenTelemetry
Tracer.Add("otel", name => new OTelTracer(name));

// Trace your own methods
var result = await Trace.TraceAsync("my-operation", async emitter =>
{
    emitter.Add("input", data);
    var output = await DoWork();
    return output;
});
```

Rust:

```rust
use prompty::{Tracer, PromptyTracer, console_tracer, trace_async};

// Console tracer
Tracer::register("console", console_tracer);

// JSON file tracer
let pt = PromptyTracer::new("./traces");
Tracer::register("file", pt.tracer());

// OpenTelemetry (feature-gated)
#[cfg(feature = "otel")]
{
    prompty::init_otel_stdout();
    Tracer::register("otel", prompty::otel_tracer());
}

// Trace your own async functions
let result = trace_async("my-operation", async {
    // your code here
}).await;
```

Key Types
| Type | Description |
|---|---|
| Prompty | Root object — a loaded .prompty file |
| Model | Model configuration (id, provider, connection, options) |
| ModelOptions | LLM parameters (temperature, maxOutputTokens, etc.) |
| Connection | Auth/endpoint config (ApiKey, Reference, Remote, Anonymous, Foundry, OAuth) |
| Property | Input/output property definition (kind, default, required) |
| Tool | Tool definition (FunctionTool, McpTool, OpenApiTool, PromptyTool, CustomTool) |
| Template | Template engine config (format + parser) |
| Message | Chat message with role and content parts |
| StructuredResult | Dict/object subclass with raw JSON — returned when outputs is defined |
| PromptyStream | Streaming wrapper with tracing support (sync) |
| AsyncPromptyStream | Streaming wrapper with tracing support (async) |
| ToolCall | Parsed tool call from LLM response |
For full type definitions, see the Schema Reference.
Further Reading
- Schema Reference — all .prompty frontmatter types and properties
- Getting Started — install and run your first prompt
- Pipeline Architecture — how the execution pipeline works
- How-To Guides — practical recipes for common tasks