
# Tools

Tools extend what an LLM can do beyond generating text. Define them in the .prompty frontmatter under the tools: key. When the prompt executes, the runtime passes tool definitions to the LLM as part of the API call. If the model decides to use a tool, the agent loop handles calling the function and feeding the result back into the conversation.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        required: true
```

Every tool has a name, a kind that determines its type, and an optional description. Beyond that, each kind carries its own fields. Prompty supports five tool kinds: function, prompty, mcp, and openapi, plus a wildcard catch-all for custom kinds.


```mermaid
flowchart TD
    Tool["Tool (base)\nname · kind · description · bindings"]
    Tool --> FunctionTool["FunctionTool\nkind: function\nparameters · strict\n→ local function call"]
    Tool --> PromptyTool["PromptyTool\nkind: prompty\npath · mode\n→ child .prompty execution"]
    Tool --> McpTool["McpTool\nkind: mcp\nserverName · connection\napprovalMode · allowedTools"]
    Tool --> OpenApiTool["OpenApiTool\nkind: openapi\nconnection · specification\n→ REST API call"]
    Tool --> CustomTool["CustomTool\nkind: * (wildcard)\nconnection · options\n→ provider-specific"]

    Dispatch["Tool.load_kind(data) resolves kind → concrete class\nUnknown kinds automatically become CustomTool"]

    style Tool fill:#1d4ed8,stroke:#1e40af,color:#fff
    style FunctionTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style PromptyTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style McpTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style OpenApiTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style CustomTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Dispatch fill:#fefce8,stroke:#f59e0b,color:#92400e
```

Function tools define local functions that the runtime can call directly when the LLM requests them. This is the most common tool type — you provide a function name, description, and a parameter schema, and the executor maps tool calls to your Python functions at runtime.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        description: City name
        required: true
      - name: units
        kind: string
        description: Temperature units
        default: celsius
```

Set strict: true to constrain the LLM to output only arguments that match the exact parameter schema. This adds "strict": true to the function definition and "additionalProperties": false to the JSON Schema sent to the API — preventing the model from hallucinating extra parameters.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    strict: true
    parameters:
      - name: city
        kind: string
        required: true
```
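To make the effect concrete, here is a sketch of the OpenAI-style wire-format definition that the description above implies for this tool. The exact dictionary the runtime emits may differ; this only illustrates where the two strict-mode fields land.

```python
# Sketch of the wire-format function definition for the strict tool above
# (OpenAI-style; the runtime's exact output may differ in detail).
wire_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "strict": True,  # added by strict: true
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
            "additionalProperties": False,  # added by strict: true
        },
    },
}
```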

Prompty tools let one .prompty file call another as a tool. Instead of writing a function, you point at a child .prompty file — the runtime loads it, sends the LLM-provided arguments as inputs, and returns the result.

```yaml
tools:
  - name: summarize
    kind: prompty
    description: Summarize a piece of text
    path: ./summarize.prompty
    mode: single
    parameters:
      - name: text
        kind: string
        description: The text to summarize
    bindings:
      - name: context
        input: document
```
| Field | Description |
| --- | --- |
| `path` | Relative path to the child .prompty file (resolved from the parent's location) |
| `mode` | `single` (default): one LLM call via prepare + run. `agentic`: full agent loop via turn |
| `parameters` | Optional parameter schema. If omitted, the child's inputs are used automatically |
| `bindings` | Map parent inputs to child parameters (the bound parameters are stripped from the wire schema) |

When sent to the LLM, a PromptyTool is projected as a standard function tool. The runtime:

  1. Loads the child .prompty file
  2. Uses its inputs as the function parameters
  3. Strips any bound parameter names from the schema
  4. Uses the tool’s description (or the child’s if not set)

The LLM sees it as a regular function call — it doesn’t know it’s backed by another prompt.
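The four projection steps above can be sketched in a few lines of Python. This is purely illustrative: `load_prompty` and the attribute names (`child.inputs`, `child.description`) are assumptions, not the runtime's actual API.

```python
def project_prompty_tool(tool, load_prompty):
    """Illustrative sketch of projecting a PromptyTool to a function tool.

    `load_prompty` and the attribute names used here are assumptions,
    not the actual runtime API.
    """
    child = load_prompty(tool.path)          # 1. load the child .prompty file
    bound = {b.name for b in tool.bindings}  # parameter names the parent supplies
    properties = {
        name: schema
        for name, schema in child.inputs.items()  # 2. child inputs -> parameters
        if name not in bound                      # 3. strip bound parameters
    }
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            # 4. the tool's description, falling back to the child's
            "description": tool.description or child.description,
            "parameters": {"type": "object", "properties": properties},
        },
    }
```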


MCP (Model Context Protocol) tools connect to an external MCP server that exposes a set of capabilities. You reference the server by name and optionally restrict which tools the model can access.

```yaml
tools:
  - name: filesystem
    kind: mcp
    serverName: fs-server
    connection:
      kind: reference
      name: my-mcp-server
    approvalMode:
      kind: always
    allowedTools:
      - read_file
      - list_directory
```
| Field | Description |
| --- | --- |
| `serverName` | Identifier of the MCP server to connect to |
| `connection` | How to reach the server (any Connection type) |
| `approvalMode` | When tool calls need approval: `always`, `never`, or `specify` (with per-tool lists) |
| `allowedTools` | Whitelist of tool names the model may invoke (optional) |

OpenAPI tools let the LLM call a REST API described by an OpenAPI specification. Prompty reads the spec to understand available operations and translates tool calls into HTTP requests.

```yaml
tools:
  - name: weather_api
    kind: openapi
    connection:
      kind: key
      endpoint: https://api.weather.com
      apiKey: ${env:WEATHER_API_KEY}
    specification: ./weather.openapi.json
```
| Field | Description |
| --- | --- |
| `connection` | Endpoint and auth for the API (any Connection type) |
| `specification` | Path to an OpenAPI JSON/YAML spec (relative to the .prompty file) |

Any kind value that doesn’t match function, prompty, mcp, or openapi is caught by the CustomTool wildcard. This is the extensibility escape hatch — use it to integrate with tool providers that Prompty doesn’t have built-in support for.

```yaml
tools:
  - name: my_tool
    kind: my_custom_provider
    connection:
      kind: key
      endpoint: https://custom.example.com
    options:
      setting: value
```
| Field | Description |
| --- | --- |
| `connection` | Optional connection for the custom provider |
| `options` | Free-form dictionary passed through to the provider |

The runtime loads these as CustomTool instances. Your executor or a plugin is responsible for interpreting the kind and options at execution time.


All tool types support optional bindings that map between the tool’s parameters and the prompt’s input schema. Use bindings when the tool’s parameter names don’t match your prompt’s input variable names.

```yaml
tools:
  - name: search
    kind: function
    description: Search for documents
    bindings:
      - name: query
        input: userQuestion
    parameters:
      - name: query
        kind: string
        required: true
```

In this example, the tool parameter query is bound to the prompt input userQuestion — so the value of userQuestion is automatically passed as query when the tool is invoked.
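A minimal sketch of how that resolution could work, assuming bindings are simple name/input pairs as in the frontmatter above (`resolve_bindings` is illustrative, not the runtime's actual function):

```python
def resolve_bindings(bindings, prompt_inputs, llm_args):
    """Illustrative: merge bound prompt inputs into LLM-provided arguments.

    `bindings` mirrors the frontmatter: a list of
    {"name": tool_param, "input": prompt_input} pairs.
    This is a sketch, not the runtime API.
    """
    args = dict(llm_args)
    for b in bindings:
        # Bound values come from the prompt's inputs, not from the LLM.
        args[b["name"]] = prompt_inputs[b["input"]]
    return args
```

With the example above, `resolve_bindings([{"name": "query", "input": "userQuestion"}], {"userQuestion": "pricing docs"}, {})` produces `{"query": "pricing docs"}`.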


Tools defined in the frontmatter are sent to the LLM as part of the API request. To actually execute the tool calls the model returns, use turn() — which runs the agent loop: call the LLM, execute any requested tools, feed results back, and repeat until the model produces a final response.

```python
from prompty import load, turn, tool, bind_tools

@tool
def get_weather(city: str, units: str = "celsius") -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

agent = load("agent.prompty")
tools = bind_tools(agent, [get_weather])

result = turn(
    agent,
    inputs={"question": "Weather in Seattle?"},
    tools=tools,
)
print(result)
```

Prompty uses a two-layer registry for extensible tool support. This lets you add custom tool kinds (beyond the built-in function, prompty, mcp, and openapi) and control both how they’re presented to the LLM and how they’re executed.

## Layer 1: Wire Projection — register_tool()


The tool projector converts a tool definition from your .prompty frontmatter into the wire format the LLM expects (typically an OpenAI-style function definition).

```python
from prompty import register_tool

def my_tool_projector(tool) -> dict:
    """Convert a custom tool kind into an OpenAI function definition."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": {
                    p.name: {"type": p.kind, "description": p.description}
                    for p in tool.parameters.properties
                },
            },
        },
    }

register_tool("my_provider", my_tool_projector)
```

When the executor sends tools to the LLM, it calls each tool’s registered projector to produce the wire-format definition. Built-in kinds (function, prompty) have projectors pre-registered.

## Layer 2: Dispatch — register_tool_handler()


The tool handler is called during the agent loop when the LLM requests a tool call. It receives the tool definition and the arguments from the LLM, and returns the result string.

```python
from prompty import register_tool_handler

def my_tool_handler(tool, args: dict) -> str:
    """Execute a custom tool kind during the agent loop."""
    # Use tool.connection, tool.options, etc.
    result = call_my_service(tool.connection.endpoint, args)
    return str(result)

register_tool_handler("my_provider", my_tool_handler)
```
| Scenario | Register projector? | Register handler? |
| --- | --- | --- |
| Custom tool kind (e.g., a proprietary API) | ✅ Yes | ✅ Yes |
| Override wire format for a built-in kind | ✅ Yes | No (use the tools dict in `turn`) |
| Custom execution for a built-in kind | No | ✅ Yes |
| Standard function tools | No (pre-registered) | No (use the tools dict) |