
Use with Microsoft Foundry

Prerequisites:

  • A Microsoft Foundry project or an Azure OpenAI resource with a deployed model (e.g., gpt-4o-mini)
  • The resource endpoint:
    • Foundry project endpoint (recommended): https://<resource>.services.ai.azure.com/api/projects/<project>
    • Classic Azure OpenAI endpoint (legacy): https://<resource>.openai.azure.com/
  • Either an API key or Microsoft Entra ID credentials
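The two endpoint shapes differ by hostname and path, which makes misconfiguration easy to catch early. A minimal sanity check (a hypothetical helper, not part of prompty) using only the standard library:

```python
from urllib.parse import urlparse

def endpoint_kind(url: str) -> str:
    # Hypothetical helper: classify an endpoint by its host and path shape.
    parts = urlparse(url)
    host = parts.hostname or ""
    if host.endswith(".services.ai.azure.com") and parts.path.startswith("/api/projects/"):
        return "foundry-project"
    if host.endswith(".openai.azure.com"):
        return "azure-openai-classic"
    return "unknown"

print(endpoint_kind("https://my-resource.services.ai.azure.com/api/projects/my-project"))
```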
# API key auth
pip install "prompty[jinja2,foundry]"
# Entra ID auth (adds azure-identity)
pip install "prompty[jinja2,foundry]" azure-identity
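To confirm the install landed, a quick standard-library check can list anything that is not importable yet (a hypothetical snippet; prompty and jinja2 are the import names the commands above provide):

```python
import importlib.util

def missing_packages(names):
    # Hypothetical helper: return the top-level modules that cannot be found.
    return [n for n in names if importlib.util.find_spec(n) is None]

# jinja2 comes from the [jinja2] extra; prompty is the core package.
print(missing_packages(["prompty", "jinja2"]))
```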

Option A: API Key Authentication

The simplest approach: use an API key from the Azure AI Foundry portal or the Azure portal.

Create foundry-chat.prompty:

---
name: foundry-chat
description: Chat completion with Microsoft Foundry (API key)
model:
  id: gpt-4o-mini
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
inputs:
  - name: question
    kind: string
    default: What is Microsoft Foundry?
---
system:
You are a helpful assistant. Answer concisely.
user:
{{question}}
Run it:

import prompty

result = prompty.invoke(
    "foundry-chat.prompty",
    inputs={"question": "What services does Microsoft Foundry offer?"},
)
print(result)
# .env — Foundry project endpoint (recommended)
AZURE_AI_PROJECT_ENDPOINT=https://my-resource.services.ai.azure.com/api/projects/my-project
AZURE_AI_PROJECT_KEY=abc123...
# .env — Classic Azure OpenAI endpoint (legacy, also works)
# AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/
# AZURE_OPENAI_API_KEY=abc123...

Option B: Microsoft Entra ID Authentication


For production workloads, use Microsoft Entra ID: there are no API keys to manage or rotate. This section covers two approaches:

  • DefaultAzureCredential: convenient for local development; it automatically discovers your Azure CLI, VS Code, or managed identity credentials.
  • ManagedIdentityCredential: recommended for deployed services (App Service, Container Apps, AKS, Functions). It is explicit, has no fallback chain, and carries no risk of accidentally using developer credentials in production.

1. Create the Prompt File

Create foundry-chat-aad.prompty:

---
name: foundry-chat-aad
description: Chat completion with Microsoft Foundry (Microsoft Entra ID)
model:
  id: gpt-4o-mini
  provider: foundry
  apiType: chat
  connection:
    kind: reference
    name: foundry_default
  options:
    temperature: 0.7
    maxOutputTokens: 1024
inputs:
  - name: question
    kind: string
    default: What is Microsoft Foundry?
---
system:
You are a helpful assistant. Answer concisely.
user:
{{question}}

2. Register the Connection and Run (Local Development)


During development, DefaultAzureCredential automatically finds your Azure CLI or VS Code login:

import os

from azure.identity import DefaultAzureCredential

import prompty

# Register the named connection before loading the prompt
prompty.register_connection(
    "foundry_default",
    {
        "endpoint": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        "credential": DefaultAzureCredential(),
    },
)

result = prompty.invoke(
    "foundry-chat-aad.prompty",
    inputs={"question": "What is Microsoft Foundry?"},
)
print(result)

3. Production Deployment: Use Managed Identity


Step 1: Enable Managed Identity on your Azure hosting resource:

# App Service
az webapp identity assign --name <app-name> --resource-group <rg>
# Container Apps
az containerapp identity assign --name <app-name> --resource-group <rg> --system-assigned
# Azure Kubernetes Service (pod identity)
az aks update --name <cluster> --resource-group <rg> --enable-oidc-issuer --enable-workload-identity
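Step 2 needs the identity's object (principal) ID; it can be read back with the matching identity show command:

```shell
# App Service: print the system-assigned identity's principal ID
az webapp identity show --name <app-name> --resource-group <rg> --query principalId -o tsv
# Container Apps
az containerapp identity show --name <app-name> --resource-group <rg> --query principalId -o tsv
```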

Step 2: Assign the RBAC role, granting only what's needed (principle of least privilege):

az role assignment create \
  --assignee <managed-identity-object-id> \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<resource>

Step 3: Use ManagedIdentityCredential in your code:

import os

from azure.identity import ManagedIdentityCredential

import prompty

prompty.register_connection(
    "foundry_default",
    {
        "endpoint": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        "credential": ManagedIdentityCredential(),
    },
)

result = prompty.invoke(
    "foundry-chat-aad.prompty",
    inputs={"question": "What is Microsoft Foundry?"},
)
print(result)

|                   | API Key (kind: key)      | Entra ID: DefaultAzureCredential       | Entra ID: ManagedIdentityCredential |
| ----------------- | ------------------------ | -------------------------------------- | ----------------------------------- |
| Setup             | Copy key from portal     | az login or VS Code                    | Enable managed identity + RBAC      |
| Security          | Key in env var; can leak | Token-based; no static secret          | Token-based; no static secret       |
| Credential source | Static string            | Auto-discovers (CLI, VS Code, MI, …)   | Managed identity only; no fallbacks |
| Rotation          | Manual                   | Automatic                              | Automatic                           |
| Best for          | Quick prototyping        | Local development                      | Production deployments              |
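The choice above can also be made at runtime. A hypothetical helper (the function and its ordering are assumptions, not prompty API): App Service and Container Apps set IDENTITY_ENDPOINT (older hosts set MSI_ENDPOINT) when a managed identity is enabled, which makes a reasonable probe:

```python
import os

def pick_auth(env=None):
    # Hypothetical helper mirroring the comparison table: prefer managed
    # identity in Azure-hosted environments, then an API key if one is
    # configured, else fall back to the developer credential chain.
    env = os.environ if env is None else env
    if env.get("IDENTITY_ENDPOINT") or env.get("MSI_ENDPOINT"):
        return "managed-identity"
    if env.get("AZURE_AI_PROJECT_KEY"):
        return "api-key"
    return "default-credential-chain"
```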

If multiple .prompty files share the same Foundry config, extract it into a JSON file and reference it with ${file:...}:

shared/foundry-connection.json:

{
  "kind": "key",
  "endpoint": "https://my-resource.services.ai.azure.com/api/projects/my-project",
  "apiKey": "${env:AZURE_AI_PROJECT_KEY}"
}

my-prompt.prompty:

---
name: shared-connection-demo
model:
  id: gpt-4o-mini
  provider: foundry
  connection: ${file:shared/foundry-connection.json}
---
system:
You are a helpful assistant.
user:
{{question}}

This avoids duplicating endpoint and auth config across prompts.
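Conceptually, the ${env:...} and ${file:...} substitutions behave like the sketch below (an illustration of the idea, not prompty's actual implementation):

```python
import json
import os
import re

def resolve(value):
    # Illustrative resolver: ${env:NAME} -> environment variable,
    # ${file:path} -> parsed JSON with its own values resolved recursively.
    if isinstance(value, dict):
        return {k: resolve(v) for k, v in value.items()}
    if isinstance(value, str):
        m = re.fullmatch(r"\$\{env:([^}]+)\}", value)
        if m:
            return os.environ.get(m.group(1), "")
        m = re.fullmatch(r"\$\{file:([^}]+)\}", value)
        if m:
            with open(m.group(1)) as fh:
                return resolve(json.load(fh))
    return value
```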


.env
AZURE_AI_PROJECT_ENDPOINT=https://my-resource.services.ai.azure.com/api/projects/my-project
AZURE_AI_PROJECT_KEY=abc123... # Only needed for API key auth
# Optional: specify deployment names per model type
AZURE_OPENAI_CHAT_DEPLOYMENT=gpt-4o-mini
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-3-small