# Rust

Prompty for Rust requires Rust ≥ 1.85 (edition 2024) and an async runtime (Tokio).

```sh
# Core runtime
cargo add prompty

# Add a provider (pick one or more)
cargo add prompty-openai     # OpenAI
cargo add prompty-foundry    # Azure OpenAI / Foundry
cargo add prompty-anthropic  # Anthropic Claude
```
| Crate | Description | crates.io |
| --- | --- | --- |
| `prompty` | Core pipeline, types, registry, tracing | crates.io |
| `prompty-openai` | OpenAI executor & processor | crates.io |
| `prompty-foundry` | Azure OpenAI / Foundry executor & processor | crates.io |
| `prompty-anthropic` | Anthropic Claude executor & processor | crates.io |

The core `prompty` crate has optional features:

| Feature | What it enables |
| --- | --- |
| `otel` | OpenTelemetry tracing backend (`opentelemetry`, `opentelemetry_sdk`, `opentelemetry-stdout`) |

The `prompty-foundry` crate has one optional feature:

| Feature | What it enables |
| --- | --- |
| `entra_id` | Azure Entra ID (AAD) authentication via `azure_identity` |
```sh
# Enable OpenTelemetry tracing
cargo add prompty --features otel

# Enable Entra ID authentication for Foundry
cargo add prompty-foundry --features entra_id
```
```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Register providers (once at startup)
    prompty::register_defaults();
    prompty_openai::register();

    // 2. Load and invoke
    let result = prompty::invoke_from_path(
        "greeting.prompty",
        Some(&json!({ "userName": "Jane" })),
    ).await?;

    println!("{result}");
    Ok(())
}
```

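For reference, `greeting.prompty` might look like the sketch below. The frontmatter fields shown are assumptions pieced together from the loading examples later on this page (`name`, `model.id`, a provider, a connection); the markdown body uses role markers and `{{...}}` template placeholders:

```
---
name: greeting
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
---
system:
You are a concise, friendly assistant.

user:
Say hello to {{userName}}.
```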
```rust
// Load a .prompty file into a typed Prompty object
let agent = prompty::load("chat.prompty")?;
println!("{}", agent.name);            // "chat"
println!("{}", agent.model.id);        // "gpt-4o"
println!("{:?}", agent.instructions);  // Some("the markdown body")

// Async loading (non-blocking file I/O)
let agent = prompty::load_async("chat.prompty").await?;

// Load from a string (no file needed)
let agent = prompty::load_from_string(raw_content, ".")?;
```

```rust
use serde_json::json;

let agent = prompty::load("chat.prompty")?;
let inputs = json!({ "q": "Hi" });

// Render template + parse role markers → Vec<Message>
let messages = prompty::prepare(&agent, Some(&inputs)).await?;

// Execute LLM + process response → serde_json::Value
let result = prompty::run(&agent, &messages).await?;

// One-shot: prepare + run
let result = prompty::invoke_agent(&agent, Some(&inputs)).await?;

// Load from path + invoke in one call
let result = prompty::invoke_from_path("chat.prompty", Some(&inputs)).await?;
```

Rust’s Prompty runtime is async-only: all pipeline functions are `async fn` and require a Tokio runtime. This is idiomatic for Rust I/O and network operations.

```rust
#[tokio::main]
async fn main() {
    let result = prompty::invoke_from_path("chat.prompty", None).await.unwrap();
    println!("{result}");
}
```

The `turn()` function runs an agent loop: the LLM can call tools, and the runtime executes them automatically until it produces a final response.

```rust
use prompty::{TurnOptions, AgentEvent};
use serde_json::json;
use std::sync::Arc;

// Register tool handlers
prompty::register_tool_handler("get_weather", |args| {
    Box::pin(async move {
        let city = args["city"].as_str().unwrap_or("unknown");
        Ok(json!(format!("72°F and sunny in {city}")))
    })
});

let agent = prompty::load("agent.prompty")?;

let options = TurnOptions {
    max_iterations: Some(10),
    max_llm_retries: Some(3),
    events: Some(Arc::new(|event: AgentEvent| {
        println!("Event: {event:?}");
    })),
    ..Default::default()
};

let result = prompty::turn(
    &agent,
    Some(&json!({ "question": "What's the weather in Seattle?" })),
    Some(options),
).await?;
```

The agent loop includes built-in resilience:

  • Resilient JSON parsing — recovers from malformed tool arguments (markdown fences, trailing commas)
  • Tool error safety — both `Err` returns and `panic!`s are caught via `catch_unwind` and fed back to the LLM
  • LLM call retry — transient failures are retried with exponential backoff; `InvokerError::ExecuteRetryExhausted` carries the full conversation for resumption
  • Cancellation — the cancel token is respected during backoff sleep via `tokio::select!`
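To make the first bullet concrete: a resilient parser typically strips markdown fences and trailing commas before handing the text to `serde_json`. The sketch below is not the crate's actual code, just a std-only illustration of the idea (the trailing-comma pass is deliberately naive: it does not track string literals, as a real implementation must):

```rust
/// Illustrative cleanup for slightly malformed LLM tool arguments:
/// strips a surrounding ```/```json fence and trailing commas before
/// a closing brace or bracket.
fn clean_tool_args(raw: &str) -> String {
    let mut s = raw.trim();
    // 1. Strip a surrounding code fence, with an optional "json" tag.
    if let Some(rest) = s.strip_prefix("```") {
        s = rest.strip_prefix("json").unwrap_or(rest).trim_start();
        if let Some(body) = s.strip_suffix("```") {
            s = body.trim_end();
        }
    }
    // 2. Drop any comma whose next non-whitespace char closes a container.
    let chars: Vec<char> = s.chars().collect();
    let mut out = String::with_capacity(chars.len());
    for (i, &c) in chars.iter().enumerate() {
        if c == ','
            && matches!(
                chars[i + 1..].iter().copied().find(|ch| !ch.is_whitespace()),
                Some('}' | ']')
            )
        {
            continue;
        }
        out.push(c);
    }
    out
}
```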

Pre-register named connections for production use:

```rust
use serde_json::json;

// Register a named connection
prompty::register_connection("my-openai", json!({
    "kind": "key",
    "apiKey": std::env::var("OPENAI_API_KEY").unwrap(),
}));

// .prompty files can reference it:
// connection:
//   kind: reference
//   name: my-openai
```

```rust
use prompty::{Tracer, PromptyTracer, console_tracer, trace_async};
use serde_json::json;

// Register a file-based tracer (writes .tracy JSON files)
let pt = PromptyTracer::new("./traces");
Tracer::register("json", pt.tracer());

// Register the console tracer
Tracer::register("console", console_tracer);

// Trace custom async functions
let result = trace_async("my_pipeline", json!({"query": q}), async {
    prompty::invoke_from_path("search.prompty", Some(&inputs)).await
}).await?;
```

| Provider | Crate | Registration Key | Auth |
| --- | --- | --- | --- |
| OpenAI | `prompty-openai` | `openai` | API key |
| Azure OpenAI / Foundry | `prompty-foundry` | `foundry` | API key or Entra ID |
| Anthropic | `prompty-anthropic` | `anthropic` | API key |

Register providers at startup — once registered, any .prompty file with a matching provider value will use that executor and processor:

```rust
prompty::register_defaults();   // renderers + parser
prompty_openai::register();     // "openai" executor + processor
prompty_foundry::register();    // "foundry" executor + processor
prompty_anthropic::register();  // "anthropic" executor + processor
```

Prompty resolves `${env:VAR}` references in `.prompty` frontmatter from the process environment. Set the variables before loading:

```sh
export OPENAI_API_KEY=sk-your-key-here
export AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
export AZURE_OPENAI_API_KEY=abc123
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```
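The substitution itself happens inside the runtime when a file is loaded. As a rough illustration of the mechanism, here is a hypothetical std-only helper (`resolve_env_refs` is not part of the crate's public API) that expands `${env:VAR}` references and leaves unresolved ones intact:

```rust
/// Illustrative only: expand ${env:VAR} references against the process
/// environment, keeping unresolved or unterminated references verbatim.
fn resolve_env_refs(text: &str) -> String {
    let mut out = String::new();
    let mut rest = text;
    while let Some(start) = rest.find("${env:") {
        out.push_str(&rest[..start]);
        let after = &rest[start + 6..];
        match after.find('}') {
            Some(end) => {
                let var = &after[..end];
                match std::env::var(var) {
                    Ok(val) => out.push_str(&val),
                    // Variable unset: leave the reference untouched.
                    Err(_) => out.push_str(&rest[start..start + 6 + end + 1]),
                }
                rest = &after[end + 1..];
            }
            None => {
                // Unterminated reference: keep the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
                break;
            }
        }
    }
    out.push_str(rest);
    out
}
```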

The Rust runtime covers the core Prompty pipeline comprehensively but has some gaps compared to the Python and TypeScript runtimes:

| Feature | Status | Notes |
| --- | --- | --- |
| Chat completions | ✅ | All providers |
| Streaming | ✅ | `PromptyStream` with tracing |
| Agent loop / `turn()` | ✅ | Events, cancellation, guardrails, steering |
| Structured output | ✅ | `outputSchema` → `response_format` |
| Tracing | ✅ | `.tracy`, console, OpenTelemetry |
| Custom providers | ✅ | Implement `Executor` + `Processor` traits |
| Embeddings API | ❌ | `apiType: embedding` not yet supported |
| Images API | ❌ | `apiType: image` not yet supported |
| Responses API | ❌ | OpenAI Responses API not yet supported |
| MCP tools | ❌ | MCP tool kind not yet supported |
| OpenAPI tools | ❌ | OpenAPI tool kind not yet supported |

These features are planned for future releases. Contributions welcome!