AI MCP Agents Architecture

MCP + Agents: What They Are, When to Combine Them

There’s a question I keep running into: should I use MCP or build an agent?

It’s the wrong question. MCP and agents solve different problems. The real question is which combination fits your situation — and there are three patterns worth knowing.

The Building Blocks

Before the patterns, here’s the vocabulary:

MCP Servers are standardized tool interfaces. Think of them as USB ports — any AI can call any tool through a standard protocol. You build a Dataverse MCP server once, and every AI client or agent can plug into it.

Agents are self-contained AI with reasoning. They have their own “brain” (an LLM), their own tools, and the ability to plan multi-step actions. Think of them as smart devices — not just peripherals waiting for instructions, but systems that decide what to do.

Plugins / Functions are the capabilities an agent invokes — the peripherals it uses to act on the world. Read a record, call an API, send an email.

Guardrails / Filters are safety infrastructure. Not instructions inside a prompt, but code that runs below the agent — circuit breakers that can’t be bypassed by prompt injection.

The key insight:

MCP = how tools expose capabilities (the interface) Agent = how AI decides what to do (the reasoning)

They’re complementary, not competing.

Three Patterns

Pattern 1: AI Client + MCP

The simplest combination. An AI coding assistant (Claude, Copilot, Cursor) connects to an MCP server. The AI reasons about what to do; the MCP server executes it.

Real example: Explore a Dataverse schema in seconds — not days of clicking through maker tools. An AI assistant connected to a Dataverse MCP server lets you ask “show me all entities related to booking” and get structured answers immediately.

This is the pattern I use daily. No agent framework, no orchestration — just a smart AI client with access to the right tools.

Pattern 2: Agent Standalone

The agent has its own LLM, its own plugins, and its own guardrails. It’s a self-contained system with everything built in.

Real example: Compare two D365 environments for schema drift — guaranteed read-only. A Semantic Kernel agent with Azure OpenAI, where infrastructure-level filters enforce read-only access. No prompt can bypass it because the safety layer runs below the model.

This works when you need tight control over what the agent can do and the tools are specific to this one use case.

Pattern 3: Agent + MCP

The best of both worlds. The agent brings reasoning, planning, and guardrails. MCP brings standardized, reusable tool access.

Real example: Your AI assistant explores schemas by day. Your agent audits environments by night. Same MCP server, two consumers. Build the tool once, plug it in anywhere.

This is where it gets interesting — and where the compounding value kicks in.

How Each Pattern Flows

1 AI Client + MCP
User
AI Client
MCP Server
External System
AI reasons, MCP executes
2 Agent Standalone
User
Prompt Filter
Agent + LLM
Function Filter
Plugins
External System
Own brain + own tools
3 Agent + MCP
User
Prompt Filter
Agent + LLM
Function Filter
MCP Server
External System
Agent reasoning + standardized tools
AI / Agent MCP Server Plugins Guardrails External

Compare patterns 2 and 3 — the only difference is Plugins swapped for MCP Server. Same guardrails, same reasoning, but the tools come through a standardized protocol instead of baked-in code.

Why Complement Agents with MCP?

Three reasons:

Reuse what you already built

Built a Dataverse MCP server for your AI coding assistant? Your agent can consume it immediately — no rewriting integrations as plugins. The investment you made in tooling pays off twice: once for the AI client, again for every agent that connects.

Clean ownership boundaries

The D365 team owns the MCP server. The AI team builds agents that consume it. When the schema changes, the MCP team updates one server — every consumer gets the fix. No coordination overhead, no duplicate integration work.

Ecosystem leverage

As the MCP ecosystem grows, your agent gets new capabilities without writing code. A community-built MCP server for SharePoint, Azure DevOps, or Power Platform becomes a tool your agent can call — same protocol, zero glue code.

MCP servers are investments that compound. Every new consumer — AI client or agent — gets the value for free.

Guardrails: Infrastructure, Not Instructions

One thing the architecture diagram makes clear: in patterns 2 and 3, there are filter layers that sit between the user and the agent, and between the agent and its tools. These aren’t prompt instructions — they’re code.

Prompt-based safetyInfrastructure-level safety
Lives inside the promptRuns below the agent
Can be overridden by injectionCannot be bypassed by prompts
Per-agent, easy to forgetApplies to all requests, always
”Please don’t do bad things”Code that enforces the rules

No prompt injection can bypass a filter that runs before the model sees the input. That’s the difference between instructions and infrastructure.

In Semantic Kernel, these map to concrete filter types:

LayerFilterPurpose
1IPromptRenderFilterChecks user input before it reaches the model
2IAutoFunctionInvocationFilterCatches when AI decides to call a dangerous function
3IFunctionInvocationFilterValidates function names and argument values
4Azure OpenAI (built-in)Model-level content filtering

Every step passes through filters. The agent earns the right to execute.

Picking Your Pattern

The question isn’t MCP or agents. It’s which combination, for which problem.

  • Need tool access from an existing AI client? → Pattern 1 (AI Client + MCP)
  • Need a controlled, self-contained system? → Pattern 2 (Agent Standalone)
  • Need agent reasoning with reusable, standardized tools? → Pattern 3 (Agent + MCP)

Start with the simplest pattern that solves your problem. You can always layer in more sophistication later — that’s the whole point of building on open protocols.