
Agent with Tool Calling

An agent is a prompt that can call your functions. The flow:

  1. You define tools in the .prompty frontmatter
  2. You register the matching functions in your code
  3. The runtime sends the tools to the LLM
  4. If the LLM returns a tool_calls response, the runtime calls your function, appends the result to the conversation, and calls the LLM again
  5. This loops until the LLM returns a normal text response
```
User message
  → LLM (with tool definitions)
  → tool_calls: get_weather("Seattle")
  → Your function returns "72°F and sunny"
  → LLM (with tool result in context)
  → "The weather in Seattle is 72°F and sunny!"
```
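The loop in steps 3–5 can be sketched in plain Python. This is only an illustration of the control flow, not the prompty runtime's actual internals: `fake_llm`, `TOOLS`, `agent_loop`, and the message shapes are all assumptions made up for this sketch.

```python
# Hypothetical registry of callable tools (names match the frontmatter).
TOOLS = {"get_weather": lambda city: f"72°F and sunny in {city}"}

def fake_llm(messages):
    """Stand-in for the model: request a tool once, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        # First pass: the model asks for a tool call.
        return {"tool_calls": [{"name": "get_weather",
                                "arguments": {"city": "Seattle"}}]}
    # Second pass: the tool result is in context, so answer normally.
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"The weather in Seattle is {result}!"}

def agent_loop(user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = fake_llm(messages)
        if "tool_calls" not in reply:      # normal text response: done
            return reply["content"]
        for call in reply["tool_calls"]:   # run each requested tool
            output = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": output})

print(agent_loop("What's the weather in Seattle?"))
```

The key property is the termination condition: the loop only exits when the model stops requesting tools and returns plain text.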

Define your tool functions with the `@tool` decorator:

```python
from prompty import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()
```

Create `agent.prompty`:

```
---
name: weather-agent
description: An agent that can check weather and time
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: The city name, e.g. "Seattle"
        required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      - name: timezone
        kind: string
        description: IANA timezone, e.g. "America/New_York"
        required: true
inputs:
  - name: question
    kind: string
    default: What's the weather in Seattle?
---
system:
You are a helpful assistant with access to weather and time tools.
Always use the tools when the user asks about weather or time.

user:
{{question}}
```
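Each `tools` entry in the frontmatter is serialized into a JSON Schema function definition and sent to the model. The exact wire format is the provider's, not specified here; as an assumption, the `get_weather` declaration would become roughly this OpenAI-style shape:

```python
import json

# Roughly what the runtime sends to the model for the get_weather
# declaration above (OpenAI-style function schema; field names are
# an assumption about the provider's wire format).
get_weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": 'The city name, e.g. "Seattle"',
                },
            },
            "required": ["city"],  # required: true in the frontmatter
        },
    },
}
print(json.dumps(get_weather_schema, indent=2))
```

This is why the `parameters` declarations must match your function signatures: the model builds its arguments from this schema, not from your Python code.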

Then register the functions and run the agent:

```python
from prompty import load, turn, tool, bind_tools

# Define your tool functions with @tool
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

# Load and validate tools against the .prompty declarations
agent = load("agent.prompty")
tools = bind_tools(agent, [get_weather, get_time])

# Execute with the agent loop — tools are called automatically
result = turn(
    agent,
    inputs={"question": "What's the weather in Seattle and the time in Tokyo?"},
    tools=tools,
)
print(result)
# → "The weather in Seattle is 72°F and sunny, and the current time in Tokyo is ..."
```

If your tools call external APIs, use async functions to avoid blocking:

```python
import asyncio

import httpx
from prompty import load, turn_async, tool, bind_tools

@tool
async def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://api.weather.com/v1/{city}")
        data = resp.json()
        return f"{data['temp']}°F, {data['condition']}"

async def main():
    agent = load("agent.prompty")
    tools = bind_tools(agent, [get_weather])
    result = await turn_async(
        agent,
        inputs={"question": "Weather in London?"},
        tools=tools,
    )
    print(result)

asyncio.run(main())
```

You can define as many tools as needed. Here’s a more complete agent with database and search capabilities:

```
---
name: research-agent
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: search_docs
    kind: function
    description: Search internal documentation
    parameters:
      - name: query
        kind: string
        description: The search query
        required: true
      - name: limit
        kind: integer
        description: Max number of results (default 5)
  - name: get_user
    kind: function
    description: Look up a user by email
    parameters:
      - name: email
        kind: string
        description: The user's email address
        required: true
  - name: send_email
    kind: function
    description: Send an email to a user
    parameters:
      - name: to
        kind: string
        description: Recipient email
        required: true
      - name: subject
        kind: string
        description: Email subject
        required: true
      - name: body
        kind: string
        description: Email body
        required: true
inputs:
  - name: request
    kind: string
---
system:
You are an office assistant. You can search docs, look up users, and send emails.
Always confirm before sending emails.

user:
{{request}}
```
Register the matching functions and run:

```python
from prompty import load, turn, tool, bind_tools

@tool
def search_docs(query: str, limit: int = 5) -> str:
    """Search internal documentation."""
    return f"Found {limit} results for '{query}'"

@tool
def get_user(email: str) -> str:
    """Look up a user by email."""
    return '{"name": "Jane Doe", "email": "jane@example.com", "role": "Engineer"}'

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a user."""
    return f"Email sent to {to}"

agent = load("research-agent.prompty")
tools = bind_tools(agent, [search_docs, get_user, send_email])
result = turn(
    agent,
    inputs={"request": "Find docs about onboarding and email a summary to jane@example.com"},
    tools=tools,
)
```

`bind_tools` validates your functions against the `.prompty` declarations up front, so mismatches surface before any LLM call:

```python
from prompty import load, turn, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

agent = load("agent.prompty")

# bind_tools catches mismatches immediately — no waiting for the LLM
try:
    tools = bind_tools(agent, [get_weather])
    # If agent.prompty also declares get_time, bind_tools warns
    # that it's unbound (but doesn't error — it may be in the registry)
    result = turn(agent, inputs={"question": "Weather?"}, tools=tools)
except ValueError as e:
    print(f"Binding error: {e}")
except Exception as e:
    print(f"Execution error: {e}")
```

Common pitfalls:

| Issue | Cause | Fix |
| --- | --- | --- |
| `ValueError: Tool 'X' not found` | Function not registered | Add it to the list passed to `bind_tools` |
| Agent loops forever | LLM keeps calling tools | Set `maxOutputTokens` or add "respond when done" to the system prompt |
| Wrong arguments passed | Schema mismatch | Ensure `parameters` in `.prompty` match your function signature |
| Tool returns a non-string | Runtime expects a string | Always return a string from tool functions (use `json.dumps()` for objects) |
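For the last pitfall, serialize structured results with `json.dumps` so the tool always hands the runtime a string. A minimal sketch (`get_user` is shown as a plain function standing in for a `@tool`-decorated one):

```python
import json

def get_user(email: str) -> str:
    """Look up a user; serialize the record so the tool returns a string."""
    record = {"name": "Jane Doe", "email": email, "role": "Engineer"}
    return json.dumps(record)  # return the JSON string, not the raw dict

print(get_user("jane@example.com"))
```

The model receives the JSON text verbatim in the tool message and can read the fields from it.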

A full, tested example you can copy and run:

agent_tool_calling.py
```python
"""Agent with tool calling — register tools and run the agent loop.

This example shows how to define tools and run an agent that calls them.
Used in: how-to/agent-tool-calling.mdx
"""
from __future__ import annotations

from prompty import turn, load
from prompty.core import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

agent = load("chat-agent.prompty")
result = turn(
    agent,
    inputs={"question": "What's the weather in Seattle?"},
    tools={"get_weather": get_weather},
)
print(result)
```