Tracing & Observability
Overview
Every pipeline call in Prompty is automatically traced. The tracing system uses a pluggable backend architecture — register as many trace consumers as you need. Traces capture the full lifecycle of a prompt: loading, rendering, parsing, execution, and processing.
Out of the box, tracing is a zero-overhead no-op. It only becomes active when you register one or more backends.
Architecture
```mermaid
flowchart TD
    subgraph Sources["Trace Sources"]
        Pipeline["Pipeline Stages\nload → render → parse → run"]
        Decorator["@trace decorator\nCaptures name, args, return,\nduration, errors"]
        UserFns["Your Functions\n@trace-decorated code"]
    end
    Pipeline --> Registry
    Decorator --> Registry
    UserFns --> Registry
    Registry["Tracer Registry\nTracer.add(name, callback)\nDispatches to all registered backends"]
    Registry --> Console["Console\nconsole_tracer\nPrints to stdout"]
    Registry --> JSON["JSON File\nPromptyTracer\n.tracy files to disk"]
    Registry --> OTel["OpenTelemetry\notel_tracer()\nSpans → any OTel collector"]
    style Sources fill:none,stroke:#3b82f6,stroke-dasharray:5 5
    style Pipeline fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Decorator fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style UserFns fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Registry fill:#1d4ed8,stroke:#1e40af,color:#fff
    style Console fill:#f0fdf4,stroke:#10b981,color:#065f46
    style JSON fill:#fffbeb,stroke:#f59e0b,color:#92400e
    style OTel fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
```
Tracer Registry
Register trace backends at application startup. Each backend is a callback function that receives structured trace data. You can register as many as you like — every trace event is dispatched to all registered backends.
Python:

```python
from prompty import Tracer, PromptyTracer
from prompty.tracing.tracer import console_tracer

# JSON file tracer — writes structured traces to disk
Tracer.add("json", PromptyTracer("./traces").tracer)

# Console tracer — prints to stdout
Tracer.add("console", console_tracer)
```

TypeScript:

```typescript
import { Tracer, PromptyTracer, consoleTracer } from "@prompty/core";

// JSON file tracer — writes structured traces to disk
const promptyTracer = new PromptyTracer("./traces");
Tracer.add("json", promptyTracer.tracer);

// Console tracer — prints to stdout
Tracer.add("console", consoleTracer);
```

C#:

```csharp
using Prompty.Core.Tracing;

// Console tracer — prints to stdout
Tracer.Add("console", ConsoleTracer.Factory);

// JSON file tracer — writes .tracy files to disk
new PromptyTracer().Register();
```

Rust:

```rust
use prompty::{Tracer, PromptyTracer, console_tracer};

// JSON file tracer — writes structured traces to disk
let pt = PromptyTracer::new("./traces");
Tracer::register("json", pt.tracer());

// Console tracer — prints to stdout
Tracer::register("console", console_tracer);
```

The @trace Decorator
Wrap any function to include it in the trace tree. When a traced function calls other traced functions (including Prompty’s built-in pipeline), they appear as nested child spans.
Python:

```python
from prompty import trace, invoke

@trace
def my_business_logic(query: str) -> str:
    result = invoke("search.prompty", inputs={"q": query})
    return result
```

TypeScript:

```typescript
import { trace, invoke } from "@prompty/core";

async function myBusinessLogic(query: string): Promise<string> {
  const result = await invoke("search.prompty", { inputs: { q: query } });
  return process(result);
}

const tracedLogic = trace(myBusinessLogic, "myBusinessLogic");
```

C#:

```csharp
using Prompty.Core;
using Prompty.Core.Tracing;

var result = await Trace.TraceAsync("myBusinessLogic", async (attr) =>
{
    attr("query", query);
    var output = await Pipeline.InvokeAsync("search.prompty", new() { ["q"] = query });
    attr("output", output);
    return output;
});
```

Rust:

```rust
use prompty::trace_async;
use serde_json::json;

let result = trace_async("my_business_logic", json!({"query": &query}), async {
    let output = prompty::invoke_from_path(
        "search.prompty",
        Some(&json!({"q": &query})),
    ).await?;
    Ok(output)
}).await;
```

The decorator automatically captures:
| Field | Description |
|---|---|
| Function name | The `__name__` of the decorated function |
| Arguments | All positional and keyword arguments |
| Return value | The function’s return value |
| Duration | Wall-clock time from entry to exit |
| Exceptions | Any exception raised (re-raised after tracing) |
PromptyTracer
The built-in JSON file backend for local development and debugging. It writes one .tracy file per top-level trace to the specified output directory.
Python:

```python
from prompty import Tracer, PromptyTracer

tracer = PromptyTracer("./traces")
Tracer.add("json", tracer.tracer)
```

TypeScript:

```typescript
import { Tracer, PromptyTracer } from "@prompty/core";

const tracer = new PromptyTracer("./traces");
Tracer.add("json", tracer.tracer);
```

C#:

```csharp
using Prompty.Core.Tracing;

// JSON file tracer — writes .tracy files to disk
new PromptyTracer().Register();
```

Rust:

```rust
use prompty::{Tracer, PromptyTracer};

let tracer = PromptyTracer::new("./traces");
Tracer::register("json", tracer.tracer());
```

Each .tracy file contains structured JSON with the full trace tree — every span, its duration, inputs, outputs, and any nested child spans. These files are human-readable and easy to inspect or post-process.
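Because each .tracy file is plain JSON, a short script can summarize a trace. The walker below is a hypothetical sketch: the key names (`name`, `duration`, `children`) are assumptions about the file layout, not a documented schema, so check a real file before relying on them.

```python
import json
from pathlib import Path

def span_lines(span: dict, depth: int = 0) -> list[str]:
    """Flatten a trace tree into indented 'name: duration' lines.
    The keys "name", "duration", and "children" are assumptions
    about the .tracy layout, not a documented schema."""
    line = "  " * depth + f"{span.get('name', '<unnamed>')}: {span.get('duration', '?')}"
    lines = [line]
    for child in span.get("children", []):
        lines.extend(span_lines(child, depth + 1))
    return lines

# With a real trace you would load the file first, e.g.:
#   span = json.loads(Path("./traces/some-run.tracy").read_text())
example = {
    "name": "run",
    "duration": "1.20s",
    "children": [{"name": "render", "duration": "0.05s"}],
}
print("\n".join(span_lines(example)))
```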
OpenTelemetry Integration
For production observability, Prompty integrates with OpenTelemetry. Each trace becomes a set of OTel spans, compatible with any collector — Azure Monitor, Jaeger, Zipkin, Datadog, and more.
Python:

```python
from prompty import Tracer
from prompty.tracing.otel import otel_tracer

Tracer.add("otel", otel_tracer())
```

TypeScript:

```typescript
import { Tracer } from "@prompty/core";
import { otelTracer } from "@prompty/core/tracing/otel";

Tracer.add("otel", otelTracer());
```

C#:

```csharp
using Prompty.Core.Tracing;

// Register OpenTelemetry backend
OTelTracer.Register();
```

Rust:

```rust
#[cfg(feature = "otel")]
{
    prompty::init_otel_stdout();
    prompty::Tracer::register("otel", prompty::otel_tracer());
}
```

Combining Backends
You can register multiple backends simultaneously — for example, OTel for production monitoring and console output for local debugging:
Python:

```python
from prompty import Tracer, PromptyTracer
from prompty.tracing.tracer import console_tracer
from prompty.tracing.otel import otel_tracer

# Production: send to OTel collector
Tracer.add("otel", otel_tracer())

# Development: also log to console
Tracer.add("console", console_tracer)

# Debugging: also write .tracy files
Tracer.add("json", PromptyTracer("./traces").tracer)
```

TypeScript:

```typescript
import { Tracer, PromptyTracer, consoleTracer } from "@prompty/core";
import { otelTracer } from "@prompty/core/tracing/otel";

// Production: send to OTel collector
Tracer.add("otel", otelTracer());

// Development: also log to console
Tracer.add("console", consoleTracer);

// Debugging: also write .tracy files
Tracer.add("json", new PromptyTracer("./traces").tracer);
```

C#:

```csharp
using Prompty.Core.Tracing;

// Production: send to OTel collector
OTelTracer.Register();

// Development: also log to console
Tracer.Add("console", ConsoleTracer.Factory);

// Debugging: also write .tracy files
new PromptyTracer().Register();
```

Rust:

```rust
use prompty::{Tracer, PromptyTracer, console_tracer};

// Production: send to OTel collector
#[cfg(feature = "otel")]
{
    prompty::init_otel_stdout();
    Tracer::register("otel", prompty::otel_tracer());
}

// Development: also log to console
Tracer::register("console", console_tracer);

// Debugging: also write .tracy files
let pt = PromptyTracer::new("./traces");
Tracer::register("json", pt.tracer());
```

Building a Custom Tracer
The built-in backends cover the most common cases, but you can build your own tracer to send spans anywhere — a database, a webhook, a custom dashboard, or a cost-tracking service.
A tracer backend is a factory function that receives a span name and returns
a callback. The callback receives (key, value) pairs as the span executes.
When the span ends, the backend should flush or finalize.
The Factory Interface
In Python, a tracer factory is a context manager that yields an `add(key, value)` callback. Use `@contextlib.contextmanager` for the simplest approach:
```python
import contextlib
from collections.abc import Callable, Iterator
from typing import Any

@contextlib.contextmanager
def my_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    # Called when a span starts — set up resources
    span_data: dict[str, Any] = {"name": name}

    def add(key: str, value: Any) -> None:
        # Called for each (key, value) emission
        span_data[key] = value

    try:
        yield add
    finally:
        # Called when the span ends — flush / finalize
        print(f"Span complete: {span_data}")
```

Register it:

```python
from prompty import Tracer

Tracer.add("my_backend", my_tracer)
```

In TypeScript, a tracer factory is a function that receives a span name and returns a `(key, value)` callback (or `null` to skip the span):
```typescript
import { Tracer, type TracerFactory } from "@prompty/core";

const myTracer: TracerFactory = (signature: string) => {
  // Called when a span starts — set up resources
  const spanData: Record<string, unknown> = { name: signature };

  // Return the (key, value) callback
  return (key: string, value: unknown) => {
    if (key === "__end__") {
      // Span is ending — flush / finalize
      console.log("Span complete:", spanData);
      return;
    }
    // Called for each (key, value) emission
    spanData[key] = value;
  };
};

Tracer.add("my_backend", myTracer);
```

Return `null` from the factory to skip tracing for a particular span.

In C#, implement `ITracerSpan` and provide a `TracerFactory` delegate:
```csharp
using Prompty.Core.Tracing;

public class MyTracerSpan : ITracerSpan
{
    private readonly string _name;
    private readonly Dictionary<string, object?> _data = new();

    public MyTracerSpan(string name)
    {
        _name = name;
    }

    public void Emit(string key, object? value)
    {
        // Called for each (key, value) emission
        _data[key] = value;
    }

    public void Dispose()
    {
        // Called when the span ends — flush / finalize
        Console.WriteLine($"Span complete: {_name} ({_data.Count} entries)");
    }
}

// Register it
Tracer.Add("my_backend", spanName => new MyTracerSpan(spanName));
```

In Rust, a tracer factory is a function that receives a span name and returns a `(key, value)` callback:
```rust
use prompty::Tracer;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

fn my_tracer(name: &str) -> Box<dyn Fn(&str, &serde_json::Value) + Send + Sync> {
    let span_data = Arc::new(Mutex::new(HashMap::new()));
    span_data.lock().unwrap().insert(
        "name".to_string(),
        serde_json::json!(name),
    );

    let data = span_data.clone();
    Box::new(move |key: &str, value: &serde_json::Value| {
        if key == "__end__" {
            println!("Span complete: {:?}", data.lock().unwrap());
            return;
        }
        data.lock().unwrap().insert(key.to_string(), value.clone());
    })
}

Tracer::register("my_backend", my_tracer);
```

Standard Keys
Every traced span emits a set of standard keys. Your backend will receive these automatically for every pipeline call:
| Key | Type | Description |
|---|---|---|
| `signature` | string | Fully-qualified function name (e.g. `prompty.core.pipeline.run`) |
| `inputs` | object | Serialized input parameters (already sanitized) |
| `result` | any | Serialized return value, or exception details on error |
Executors and processors emit additional keys like token usage objects. Your
backend can inspect these to build cost dashboards or usage reports.
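As a sketch of that idea, here is a minimal Python backend that accumulates token counts across spans. The `"usage"` key and its field names are assumptions; inspect your own trace output to confirm what the executor actually emits.

```python
import contextlib
from collections.abc import Callable, Iterator
from typing import Any

# Running totals across all spans seen by this backend
TOTALS: dict[str, int] = {"prompt_tokens": 0, "completion_tokens": 0}

@contextlib.contextmanager
def usage_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    def add(key: str, value: Any) -> None:
        # "usage" and its fields are assumed names: verify against real traces
        if key == "usage" and isinstance(value, dict):
            for field in TOTALS:
                TOTALS[field] += int(value.get(field, 0))
    yield add

# Register alongside your other backends:
#   Tracer.add("usage", usage_tracer)
```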
Example: HTTP Webhook Tracer
A practical example — post completed spans to an HTTP endpoint:
Python:

```python
import contextlib
import json
from collections.abc import Callable, Iterator
from datetime import datetime
from typing import Any
from urllib.request import Request, urlopen

from prompty import Tracer

@contextlib.contextmanager
def webhook_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    span: dict[str, Any] = {"name": name, "started_at": datetime.utcnow().isoformat()}

    def add(key: str, value: Any) -> None:
        span[key] = value

    try:
        yield add
    finally:
        span["ended_at"] = datetime.utcnow().isoformat()
        try:
            req = Request(
                "https://example.com/traces",
                data=json.dumps(span, default=str).encode(),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            urlopen(req, timeout=5)
        except Exception:
            pass  # Never block the pipeline

Tracer.add("webhook", webhook_tracer)
```

TypeScript:

```typescript
import { Tracer, type TracerFactory } from "@prompty/core";

const webhookTracer: TracerFactory = (signature: string) => {
  const span: Record<string, unknown> = {
    name: signature,
    startedAt: new Date().toISOString(),
  };

  return (key: string, value: unknown) => {
    if (key === "__end__") {
      span.endedAt = new Date().toISOString();
      fetch("https://example.com/traces", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(span),
      }).catch(() => {}); // Never block the pipeline
      return;
    }
    span[key] = value;
  };
};

Tracer.add("webhook", webhookTracer);
```

C#:

```csharp
using System.Text;
using System.Text.Json;
using Prompty.Core.Tracing;

public class WebhookTracerSpan : ITracerSpan
{
    private readonly Dictionary<string, object?> _data;

    public WebhookTracerSpan(string name)
    {
        _data = new()
        {
            ["name"] = name,
            ["startedAt"] = DateTime.UtcNow.ToString("o"),
        };
    }

    public void Emit(string key, object? value) => _data[key] = value;

    public void Dispose()
    {
        _data["endedAt"] = DateTime.UtcNow.ToString("o");
        try
        {
            using var client = new HttpClient();
            var json = JsonSerializer.Serialize(_data);
            client.PostAsync("https://example.com/traces",
                    new StringContent(json, Encoding.UTF8, "application/json"))
                .GetAwaiter().GetResult();
        }
        catch
        {
            // Never block the pipeline
        }
    }
}

Tracer.Add("webhook", name => new WebhookTracerSpan(name));
```

Rust:

```rust
use prompty::Tracer;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

fn webhook_tracer(name: &str) -> Box<dyn Fn(&str, &serde_json::Value) + Send + Sync> {
    let span = Arc::new(Mutex::new(HashMap::from([
        ("name".into(), serde_json::json!(name)),
        ("started_at".into(), serde_json::json!(chrono::Utc::now().to_rfc3339())),
    ])));

    let data = span.clone();
    Box::new(move |key: &str, value: &serde_json::Value| {
        if key == "__end__" {
            let mut d = data.lock().unwrap();
            d.insert("ended_at".into(), serde_json::json!(chrono::Utc::now().to_rfc3339()));
            let json = serde_json::to_string(&*d).unwrap_or_default();
            // Fire-and-forget POST — never block the pipeline
            let _ = reqwest::blocking::Client::new()
                .post("https://example.com/traces")
                .header("Content-Type", "application/json")
                .body(json)
                .send();
            return;
        }
        data.lock().unwrap().insert(key.to_string(), value.clone());
    })
}

Tracer::register("webhook", webhook_tracer);
```

What Gets Traced
Prompty automatically traces every pipeline stage. You don’t need to add `@trace` to use built-in tracing — it’s wired into the core pipeline.
| Pipeline Stage | What’s Captured |
|---|---|
| load | File path, frontmatter parsing, legacy migration warnings |
| render | Template engine, input variables, rendered output |
| parse | Parser type, role markers found, message count |
| prepare | Combined render + parse, thread expansion |
| invoke | Model, provider, API type, request payload |
| run | LLM call — token usage, latency, full response |
| process | Response extraction, content type, tool calls |
LLM Call Details
When the executor calls the LLM, the trace includes:
- Model identifier — which model was called
- Token usage — prompt tokens, completion tokens, total
- Latency — round-trip time for the API call
- Response — the full model response (content, tool calls, finish reason)
- Streaming — if streaming, traces flush when the stream is fully consumed
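A backend can also measure latency itself rather than relying on emitted fields. This hypothetical sketch times each span with a context manager and records the result in an in-memory log; the `"model"` key it watches for is an assumption about what the executor emits.

```python
import contextlib
import time
from collections.abc import Callable, Iterator
from typing import Any

# In-memory log of (span name, details, seconds) tuples
LOG: list[tuple[str, dict, float]] = []

@contextlib.contextmanager
def latency_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    start = time.perf_counter()
    details: dict[str, Any] = {}

    def add(key: str, value: Any) -> None:
        if key == "model":  # assumed key name: confirm in real traces
            details[key] = value

    try:
        yield add
    finally:
        # Record wall-clock span duration when the span closes
        LOG.append((name, details, time.perf_counter() - start))

# Register alongside your other backends:
#   Tracer.add("latency", latency_tracer)
```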
TypeScript Support
The Prompty TypeScript runtime (`@prompty/core`) includes the same tracing capabilities with a pluggable backend architecture. All the patterns shown above — `Tracer.add()`, `trace()`, `PromptyTracer`, and `consoleTracer` — are available as TypeScript imports, as shown in the code examples.
Disabling Tracing
Tracing is disabled by default. If you never call Tracer.add(), the
tracing system is effectively a no-op with zero overhead — the decorator
and pipeline hooks short-circuit immediately when no backends are registered.
To disable tracing after it’s been enabled, simply don’t register any backends on the next application restart. There is no explicit “disable” API because the default state is already off.
Python:

```python
# No Tracer.add() calls → tracing is a no-op
from prompty import load, run

agent = load("my-prompt.prompty")
result = run(agent, inputs={"query": "hello"})
# No traces produced — zero overhead
```

TypeScript:

```typescript
// No Tracer.add() calls → tracing is a no-op
import { load, run } from "@prompty/core";
import "@prompty/openai";

const agent = load("my-prompt.prompty");
const result = await run(agent, [{ role: "user", content: "hello" }]);
// No traces produced — zero overhead
```

C#:

```csharp
// No Tracer.Add() calls → tracing is a no-op
using Prompty.Core;

var agent = PromptyLoader.Load("my-prompt.prompty");
var result = await Pipeline.InvokeAsync(agent, new() { ["query"] = "hello" });
// No traces produced — zero overhead
```

Rust:

```rust
// No Tracer::register() calls → tracing is a no-op
let inputs = serde_json::json!({"query": "hello"});
let result = prompty::invoke_from_path("my-prompt.prompty", Some(&inputs)).await?;
// No traces produced — zero overhead
```