
Microsoft Agent Framework: Build Your First AI Agent
Summary
Step-by-step guide to building AI agents with Microsoft Agent Framework v1.0 using Python.
What Is Microsoft Agent Framework?
Microsoft Agent Framework is a production-ready, open-source framework for building AI agents and multi-agent workflows. It reached v1.0 in 2026 and supports both Python and .NET. It integrates with Azure OpenAI, OpenAI, Anthropic Claude, Google Gemini, Amazon Bedrock, and Ollama.
Think of it as a toolkit that handles the hard parts — LLM communication, tool calling, streaming, orchestration — so you focus on what your agent actually does.
Prerequisites
- Python 3.10 or higher
- An OpenAI API key (or Azure OpenAI endpoint)
- Basic Python knowledge
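Before installing anything, you can sanity-check the prerequisites from Python. This is a small sketch of our own, not part of the framework; `OPENAI_API_KEY` is the environment variable name the OpenAI SDK conventionally reads (adjust if you use Azure):

```python
import os
import sys

def check_prerequisites(min_version=(3, 10)) -> list[str]:
    """Return a list of problems; an empty list means you're good to go."""
    problems = []
    if sys.version_info < min_version:
        problems.append(f"Python {min_version[0]}.{min_version[1]}+ required")
    # OPENAI_API_KEY is the conventional env var the OpenAI SDK reads
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems

for problem in check_prerequisites():
    print("Missing:", problem)
```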
Step 1: Install the Framework
Install the full package with pip:

```bash
pip install agent-framework
```

Or install only what you need:

```bash
# Core + OpenAI support only
pip install agent-framework-core

# Core + Azure AI Foundry integration
pip install agent-framework-foundry
```
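To confirm the install worked without actually importing anything, you can probe for the package with the standard library. This helper is our own; the import name `agent_framework` is the one used throughout this guide:

```python
from importlib.util import find_spec

def is_available(module_name: str) -> bool:
    """True if the module can be imported in this environment."""
    return find_spec(module_name) is not None

# agent_framework is the import name used in the examples below
print("agent_framework installed:", is_available("agent_framework"))
```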
Step 2: Create Your First Agent
Create a file called `my_agent.py`:

```python
import asyncio

from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient

agent = Agent(
    client=OpenAIChatClient(model="gpt-4o"),
    name="HelloBot",
    instructions="You are a friendly assistant. Keep answers brief."
)

result = asyncio.run(agent.run("What are 3 uses of AI agents?"))
print(result)
```
Example output:
1. Customer support automation — handling FAQs and ticket routing
2. Code generation — writing, reviewing, and debugging code
3. Research synthesis — gathering and summarizing information from multiple sources
Step 3: Add Tool Calling
Give your agent tools (functions) it can call. Use Python type annotations so the agent knows what each parameter does.
```python
import asyncio
from typing import Annotated

from pydantic import Field

from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient


def get_weather(
    location: Annotated[str, Field(description="City name")],
) -> str:
    """Get current weather for a location."""
    # Replace with a real API call
    return f"Weather in {location}: 22°C, sunny"


agent = Agent(
    client=OpenAIChatClient(model="gpt-4o"),
    name="WeatherBot",
    instructions="Help users check the weather.",
    tools=[get_weather]
)

result = asyncio.run(agent.run("What's the weather in Tokyo?"))
print(result)
```
Example output:
The weather in Tokyo is currently 22°C and sunny!
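Under the hood, frameworks like this typically build a tool's parameter schema from the function signature. The sketch below shows how `Annotated` metadata can be read back using only the standard library; it uses a plain string in place of pydantic's `Field` to stay dependency-free, and `describe_params` is our own illustrative helper, not a framework API:

```python
from typing import Annotated, get_args, get_type_hints

def get_weather(
    location: Annotated[str, "City name"],  # plain-string metadata for this sketch
) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: 22°C, sunny"

def describe_params(func) -> dict[str, str]:
    """Map each parameter name to the description stored in its Annotated metadata."""
    hints = get_type_hints(func, include_extras=True)
    hints.pop("return", None)
    descriptions = {}
    for name, hint in hints.items():
        args = get_args(hint)  # (base_type, *metadata) for Annotated hints
        descriptions[name] = args[1] if len(args) > 1 else ""
    return descriptions

print(describe_params(get_weather))  # → {'location': 'City name'}
```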
Step 4: Enable Streaming
For real-time token-by-token output, use `stream=True`:

```python
import asyncio

from agent_framework import Agent
from agent_framework.openai import OpenAIChatClient


async def main():
    agent = Agent(
        client=OpenAIChatClient(model="gpt-4o"),
        name="StreamBot",
        instructions="You write short poems."
    )
    async for chunk in agent.run("Write a haiku about coding", stream=True):
        print(chunk, end="", flush=True)


asyncio.run(main())
```
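The consumption pattern itself doesn't depend on the framework: you can exercise the same `async for` loop against a stub token stream. This is purely illustrative, with `fake_stream` standing in for the real streaming call and no API access needed:

```python
import asyncio
from typing import AsyncIterator

async def fake_stream(text: str) -> AsyncIterator[str]:
    """Stand-in for a streaming agent call: yields one token at a time."""
    for token in text.split(" "):
        await asyncio.sleep(0)  # yield control, as a real network stream would
        yield token + " "

async def main() -> str:
    chunks = []
    async for chunk in fake_stream("Code flows like water"):
        print(chunk, end="", flush=True)
        chunks.append(chunk)
    return "".join(chunks)

result = asyncio.run(main())
```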
Step 5: Build a Multi-Agent Workflow
The framework supports orchestration patterns: sequential, concurrent, handoff, and group chat. Here's a sequential example where a researcher passes findings to a writer:
```python
import asyncio

from agent_framework import Agent, SequentialOrchestrator
from agent_framework.openai import OpenAIChatClient

client = OpenAIChatClient(model="gpt-4o")

researcher = Agent(
    client=client,
    name="Researcher",
    instructions="Find key facts about a topic. Output bullet points."
)

writer = Agent(
    client=client,
    name="Writer",
    instructions="Turn research bullets into a polished paragraph."
)

orchestrator = SequentialOrchestrator(agents=[researcher, writer])
result = asyncio.run(orchestrator.run("Benefits of agentic AI"))
print(result)
```
The Researcher agent produces bullet-point facts, then the Writer agent transforms them into a polished paragraph — all automated.
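The sequential pattern itself is simple: each agent's output becomes the next agent's input. Here's a framework-free sketch of that hand-off, with deterministic stub agents in place of real LLM calls (`StubAgent` and `run_sequential` are ours, not framework classes):

```python
import asyncio

class StubAgent:
    """Minimal stand-in for an Agent: transforms its input deterministically."""
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform

    async def run(self, prompt: str) -> str:
        return self.transform(prompt)

async def run_sequential(agents, task: str) -> str:
    """Feed each agent's output into the next, returning the final result."""
    result = task
    for agent in agents:
        result = await agent.run(result)
    return result

researcher = StubAgent("Researcher", lambda t: f"- fact about {t}")
writer = StubAgent("Writer", lambda bullets: f"Paragraph based on: {bullets}")

final = asyncio.run(run_sequential([researcher, writer], "agentic AI"))
print(final)  # → Paragraph based on: - fact about agentic AI
```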
Supported LLM Providers
| Provider | Install Package | Client Class |
|---|---|---|
| OpenAI | agent-framework-core | OpenAIChatClient |
| Azure OpenAI | agent-framework-core | AzureOpenAIChatClient |
| Azure AI Foundry | agent-framework-foundry | FoundryChatClient |
| Anthropic Claude | agent-framework-anthropic | AnthropicChatClient |
| Google Gemini | agent-framework-google | GeminiChatClient |
| Ollama (Local) | agent-framework-ollama | OllamaChatClient |
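Because every provider exposes the same chat-client shape, swapping LLMs is a one-line change. A small registry built from the table above makes the mapping explicit; the package and class names come straight from the table, while the lookup helper is our own sketch:

```python
# (pip package, client class) pairs taken from the provider table above
PROVIDERS = {
    "openai": ("agent-framework-core", "OpenAIChatClient"),
    "azure-openai": ("agent-framework-core", "AzureOpenAIChatClient"),
    "foundry": ("agent-framework-foundry", "FoundryChatClient"),
    "anthropic": ("agent-framework-anthropic", "AnthropicChatClient"),
    "gemini": ("agent-framework-google", "GeminiChatClient"),
    "ollama": ("agent-framework-ollama", "OllamaChatClient"),
}

def client_class_for(provider: str) -> str:
    """Look up the client class name for a provider key."""
    try:
        return PROVIDERS[provider][1]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}") from None

print(client_class_for("anthropic"))  # → AnthropicChatClient
```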
Key Takeaways
- Production-ready — v1.0 is stable with long-term support
- Multi-provider — swap LLMs without rewriting agent logic
- Tool calling — give agents real-world capabilities via Python functions
- Multi-agent — orchestrate sequential, concurrent, or handoff workflows
- Streaming — real-time token output for responsive UIs
Next Steps
- Explore the GitHub repo for more samples
- Read the official docs on Microsoft Learn
- Try connecting MCP servers for external tool integration
- Build a group-chat agent workflow for complex tasks