Structured Output

A prompt that returns structured JSON matching a schema you define — no manual parsing, no malformed output. The LLM is constrained to return exactly the fields you specify, and the Prompty processor auto-parses the result.


Add an outputs block to your .prompty frontmatter. Each property needs a name, kind (type), and optional description:

weather.prompty
---
name: weather-report
description: Returns structured weather data for a city
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.3
inputs:
  - name: city
    kind: string
    default: Seattle
outputs:
  - name: city
    kind: string
    description: The city name
  - name: temperature
    kind: integer
    description: Temperature in degrees Fahrenheit
  - name: conditions
    kind: string
    description: Current weather conditions (e.g. sunny, cloudy, rain)
---
system:
You are a weather data API. Return the current weather for the requested city.
user:
What's the weather in {{city}}?

Prompty converts outputs into OpenAI’s response_format with type: "json_schema" and strict mode enabled — the model must return valid JSON with exactly those fields.


from prompty import invoke

result = invoke("weather.prompty", inputs={"city": "Seattle"})

# result is a StructuredResult (dict subclass); no json.loads needed
print(result["city"])         # "Seattle"
print(result["temperature"])  # 62
print(result["conditions"])   # "Partly cloudy"
print(type(result))           # <class 'StructuredResult'>
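Because the result is a dict subclass, everything that works on a plain dict (indexing, iteration, `json.dumps`) works on it unchanged. A minimal sketch of the idea, using a stand-in class rather than Prompty's actual StructuredResult:

```python
import json

class StructuredResult(dict):
    """Stand-in sketch: subclassing dict keeps all plain-dict behavior."""

result = StructuredResult(city="Seattle", temperature=62, conditions="Partly cloudy")

print(result["city"])            # "Seattle"
print(isinstance(result, dict))  # True
print(json.dumps(result))        # serializes like any ordinary dict
```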

The async variant works identically:

from prompty import invoke_async

result = await invoke_async("weather.prompty", inputs={"city": "Seattle"})
print(result["temperature"])  # 62

For complex responses, use kind: object with nested properties:

detailed-weather.prompty
---
name: detailed-weather
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
outputs:
  - name: city
    kind: string
  - name: current
    kind: object
    properties:
      - name: temperature
        kind: integer
        description: Temperature in °F
      - name: humidity
        kind: integer
        description: Humidity percentage
      - name: conditions
        kind: string
  - name: forecast
    kind: array
    description: Next 3 days forecast
---
system:
Return current weather and a 3-day forecast for the requested city.
user:
Weather for {{city}}?

from prompty import invoke

result = invoke("detailed-weather.prompty", inputs={"city": "Portland"})

print(result["city"])                    # "Portland"
print(result["current"]["temperature"])  # 58
print(result["current"]["humidity"])     # 72
print(result["forecast"])                # [{"day": "Mon", ...}, ...]

When you need a typed object instead of a dictionary, use cast(). It deserializes directly from the raw JSON — no dict→JSON→T round-trip.

from dataclasses import dataclass

from prompty import invoke, cast

@dataclass
class WeatherReport:
    city: str
    temperature: int
    conditions: str

result = invoke("weather.prompty", inputs={"city": "Seattle"})
report = cast(result, WeatherReport)

print(report.city)   # "Seattle"
print(type(report))  # <class 'WeatherReport'>

# Or as a one-liner with target_type:
report = invoke("weather.prompty", inputs={"city": "Seattle"}, target_type=WeatherReport)
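Conceptually, the direct deserialization looks like the sketch below. `cast_from_raw` is a hypothetical helper written for illustration only, not part of the Prompty API:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class WeatherReport:
    city: str
    temperature: int
    conditions: str

def cast_from_raw(raw_json: str, target_type):
    # Parse the raw JSON string once and construct the target type from it;
    # there is no intermediate dict -> JSON -> object round-trip.
    data = json.loads(raw_json)
    allowed = {f.name for f in fields(target_type)}
    return target_type(**{k: v for k, v in data.items() if k in allowed})

raw = '{"city": "Seattle", "temperature": 62, "conditions": "Partly cloudy"}'
report = cast_from_raw(raw, WeatherReport)
print(report.city)  # "Seattle"
```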

Works with Pydantic too (it uses model_validate_json, so the raw JSON is parsed and validated in a single pass):

from pydantic import BaseModel

from prompty import invoke

class WeatherReport(BaseModel):
    city: str
    temperature: int
    conditions: str

report = invoke("weather.prompty", inputs={"city": "Seattle"}, target_type=WeatherReport)
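A standalone sketch of that single-pass parse, assuming Pydantic v2 (where `model_validate_json` accepts a raw JSON string):

```python
from pydantic import BaseModel

class WeatherReport(BaseModel):
    city: str
    temperature: int
    conditions: str

raw = '{"city": "Seattle", "temperature": 62, "conditions": "Partly cloudy"}'
report = WeatherReport.model_validate_json(raw)  # parse + validate in one step
print(report.temperature)  # 62
```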

The runtime generates this wire format automatically:

{
  "type": "json_schema",
  "json_schema": {
    "name": "output_schema",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "city": { "type": "string" },
        "temperature": { "type": "integer" },
        "conditions": { "type": "string" }
      },
      "required": ["city", "temperature", "conditions"],
      "additionalProperties": false
    }
  }
}

All properties are marked required and additionalProperties is set to false — the model returns exactly the fields you specified, nothing more.
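The mapping is mechanical. As a rough illustration (not Prompty's internal code), a flat outputs list could be turned into this wire format like so:

```python
def to_response_format(outputs: list[dict]) -> dict:
    # Map each output's kind/description onto a JSON Schema property.
    properties = {}
    for out in outputs:
        prop = {"type": out["kind"]}
        if "description" in out:
            prop["description"] = out["description"]
        properties[out["name"]] = prop
    return {
        "type": "json_schema",
        "json_schema": {
            "name": "output_schema",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": properties,
                "required": [out["name"] for out in outputs],  # every field required
                "additionalProperties": False,                 # nothing extra allowed
            },
        },
    }

outputs = [
    {"name": "city", "kind": "string", "description": "The city name"},
    {"name": "temperature", "kind": "integer"},
]
print(to_response_format(outputs)["json_schema"]["schema"]["required"])
# ['city', 'temperature']
```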


A full, tested example you can copy and run:

structured_output.py
"""Structured output with JSON schema.
This example uses outputs schema to get structured JSON from the LLM.
Used in: how-to/structured-output.mdx
"""
from __future__ import annotations
from prompty import invoke, load
agent = load("structured-output.prompty")
result = invoke(agent, inputs={"city": "Seattle"})
print(f"City: {result['city']}")
print(f"Temperature: {result['temperature']}°F")
print(f"Conditions: {result['conditions']}")