
Not heard enough about MCP yet?
APIs vs. MCP? Who builds what? And other useful tangents
Shreyas Subramanian
Amazon Employee
Published Apr 15, 2025
Last Modified Apr 18, 2025
Unlike traditional APIs that require rigid pre-defined integrations, MCP acts as a universal translator between large language models and enterprise systems (and other loosely defined "tools"), maintaining context across interactions and enabling real-time discovery of resources. You knew this part already. Let's answer some common questions that have come up (including what's in the title). Later in this post, we'll dissect an actual MCP implementation for Amazon Bedrock Knowledge Bases to understand how this protocol bridges the gap between human-like queries and machine-readable data.
Let's address a common point of confusion directly: yes, developers still create interfaces to data and tools, but MCP fundamentally changes how, when, and by whom these interfaces are used.
The key difference is that MCP creates a standardized way for these interfaces to be connected at runtime by users rather than hardcoded at design time by developers.
API integrations are like that socket behind your microwave. You never touch it. You can change the microwave but that's costly. If the microwave breaks, you need your engineering team to fix it. Or worse, call a contractor and eat the cost of delays.
MCP is like a loose hanging power strip. It can be moved and extended. Power strips may also come with surge protection built in, and can save your microwave. They also may come with USB ports and a wireless charging pad. However, your microwave might still need replacing/updating.
Large Language Models face three critical limitations:
- Knowledge Cutoffs: LLMs can't access information beyond their training data
- Tool Manipulation: LLMs can't directly interact with external systems
- Context Windows: LLMs have limited memory for conversation history
Traditional solutions involve developers creating custom API integrations for each use case. This means:
- Every new data source requires developer intervention
- Updates need new code deployments
- Users/Agents are limited to what developers anticipated
MCP creates a standardized protocol for runtime connections, solving these issues without requiring constant developer updates.
Traditional APIs are fixed connections buried in your infrastructure, while MCP provides flexible connection points that users can access, move, and extend without calling in specialists.
In traditional API integrations, developers must anticipate every tool and data source users might need, then hardcode those connections. With MCP, the application provides a standardized "socket" that users can plug different tools into as needed.
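To make that "socket" concrete, here's a minimal sketch of a tool provider using the official MCP Python SDK; the server name and the get_order_status tool are invented for illustration, not taken from any real system:

```python
# Minimal MCP server exposing one tool via the official Python SDK.
# The tool below is a stand-in; a real server would wrap your own system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up an order's status (stubbed for illustration)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Any MCP-aware client can now plug this tool in at runtime.
    mcp.run(transport="stdio")
```

Note that nothing about the consuming application is hardcoded here; the server only describes what it offers.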
Yes, Tools Still Need Building - But Who Uses Them Changes
Let's be crystal clear: MCP doesn't eliminate the need for interfaces to data and functionality.

The key difference is in the separation of concerns. With MCP:
- Tool developers build MCP-compatible interfaces to their systems
- AI application developers implement the MCP standard
- Agents discover and select which tools to connect, and when (see the discovery sketch below)
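Here's a sketch of what that discovery step looks like with the Python MCP SDK; the server command and the tool name are assumptions carried over from the server sketch above:

```python
# Sketch: an MCP client discovering and calling tools at runtime.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess; the command is an assumption.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Runtime discovery: learn what tools exist right now...
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # ...then call one, with no hardcoded integration code.
            result = await session.call_tool(
                "get_order_status", arguments={"order_id": "1234"}
            )
            print(result.content)

asyncio.run(main())
```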

This runtime flexibility is impossible with traditional API approaches. Let's take a break and look at when to use each approach:
Need | Traditional APIs | MCP |
---|---|---|
Dependence | Independently developed | May depend on APIs! |
Simple, predefined workflows | ✓ Often simpler | May add build-time complexity, but runtime use is easier |
Tool connections/functional flow | Limited to what's built | ✓ User-selected at runtime |
Enterprise data access | Requires custom integration, or could use existing APIs | ✓ Connect existing tools via clients |
Standardization | Multiple API integrations | ✓ Standardized connections |
Security | Custom security per API | ✓ Consistent security model |
Also, MCP is not a replacement for APIs. It's a standard protocol that simplifies agent-tool communication (for now). So if you are not using agents or tools but still want to use MCP, please do educate us on what the use case would be!
Let's take a look at a real-world example. In the flow diagram below, a user asks the AI agent to invest their portfolio. In this particular implementation we assume the agent needs to double-check with the user about which data sources to use (this is not necessary, and can be autonomous). The MCP server then allows the agent to discover tools and restrict its usage to the tools the user approves (Fidelity and market data).
Without MCP, the AI application developer would need to build integrations with every possible financial institution and data source. With MCP, the user simply connects the AI to whatever financial tools they're already approved to use.
Ask yourself:
- What would happen if Fidelity changes their underlying APIs?
- Could there be multiple MCP clients for the same set of APIs? (Yes.)
- How would authentication work at each level?
- ...and more. Many of these questions don't have a single definitive answer.

OK, let's look at the Bedrock Knowledge Bases MCP server published recently:
It can be found here: https://github.com/awslabs/mcp/tree/main/src/bedrock-kb-retrieval-mcp-server/awslabs/bedrock_kb_retrieval_mcp_server. Overall, the flow looks like this:

Now let's dive into parts of the code:
Let's start with the server...
The server imports clients to use Bedrock's underlying APIs:
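In spirit, those imports boil down to standard boto3 clients (a simplified sketch; the actual repo wraps these in helper functions):

```python
# Simplified sketch: the MCP server sits on top of ordinary boto3 clients.
import boto3

# Control plane: list and describe knowledge bases.
bedrock_agent = boto3.client("bedrock-agent")
# Data plane: run retrieval queries against a knowledge base.
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")
```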
In server.py, we define one resource and one tool:
- The resource acts as a dynamic registry of available knowledge bases
- The tool handles natural language queries with automated result processing
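A condensed sketch of how that resource and tool might be declared; the names mirror what the repo exposes, but the bodies are heavily simplified:

```python
# Condensed sketch of server.py: one resource, one tool.
import boto3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bedrock-kb-retrieval")
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

@mcp.resource("resource://knowledgebases")
def knowledgebases() -> str:
    """Dynamic registry: list the knowledge bases available to the caller."""
    response = bedrock_agent.list_knowledge_bases()
    return str(response["knowledgeBaseSummaries"])

@mcp.tool()
def query_knowledge_bases(query: str, knowledge_base_id: str) -> str:
    """Run a natural-language query against one knowledge base."""
    response = bedrock_agent_runtime.retrieve(
        knowledgeBaseId=knowledge_base_id,
        retrievalQuery={"text": query},
    )
    return str(response["retrievalResults"])
```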
Let's take a look within the "knowledgebases" client folder:
Key client implementation snippets you should focus on:
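Here's a distilled sketch of the two calls that do the real work, simplified from the repo's client module; the parameter names are boto3's, but the function names, region, and defaults are assumptions:

```python
# Distilled sketch of the knowledgebases client logic.
import boto3

def discover_knowledge_bases(region: str = "us-east-1") -> dict:
    """Map knowledge base IDs to names, backing the registry resource."""
    client = boto3.client("bedrock-agent", region_name=region)
    summaries = client.list_knowledge_bases()["knowledgeBaseSummaries"]
    return {kb["knowledgeBaseId"]: kb["name"] for kb in summaries}

def query_knowledge_base(query: str, kb_id: str, region: str = "us-east-1",
                         number_of_results: int = 10) -> list:
    """Retrieve passages for a natural-language query from one KB."""
    runtime = boto3.client("bedrock-agent-runtime", region_name=region)
    response = runtime.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": number_of_results}
        },
    )
    return response["retrievalResults"]
```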
"Aha!", you may say, "caught red handed. I see you actually made those API calls inside the client files".
Exactly! But here's the crucial distinction:
The difference shows up at two abstraction levels:
- Client responsibilities:
  - API client: needs a service-specific SDK and credentials
  - MCP client: only needs an MCP protocol implementation
- Result processing:
  - API response: the raw service response
  - MCP response: standardized, pre-processed results with reranking and filtering
So, to do the same thing via APIs vs. via MCP:
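Here's a sketch of what the caller has to know in each case; the boto3 service and call are real, while the KB ID, query, and MCP tool name are illustrative:

```python
# Via the API directly: the caller needs the AWS SDK, credentials,
# the service name, and the exact request shape baked in at design time.
import boto3

runtime = boto3.client("bedrock-agent-runtime")
api_result = runtime.retrieve(
    knowledgeBaseId="KB123456",
    retrievalQuery={"text": "What is our refund policy?"},
)

# Via MCP: the caller needs only an MCP session (see the client sketch
# earlier); the tool and its arguments are discovered at runtime.
# mcp_result = await session.call_tool(
#     "query_knowledge_bases",
#     arguments={"query": "What is our refund policy?",
#                "knowledge_base_id": "KB123456"},
# )
```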
The advantages of doing the same thing via MCP are:
- Discovery Automation: Clients find KBs through resource://knowledgebases
- Query Standardization: Natural language processing handled by protocol
- Security Decoupling: MCP server manages credentials, clients only need protocol access
No, SSE doesn't make your agents run faster. It's about the communication mechanism, not processing speed. Server-Sent Events (SSE) is one of the primary transport mechanisms in MCP, and there are some common misconceptions about what it does. Let's clarify:

What SSE Actually Is
SSE (Server-Sent Events) is one of MCP's built-in transport types that handles how messages are transmitted between clients and servers. It's specifically designed for server-to-client streaming over HTTP, with separate HTTP POST requests handling client-to-server communication.
Under the hood, MCP uses JSON-RPC 2.0 as its wire format, and the transport layer (whether SSE or another option) is responsible for converting MCP protocol messages into JSON-RPC format for transmission.
Here's a sketch of how you might configure MCP to use SSE transport with the Python SDK; only the transport argument changes from the earlier stdio examples:
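```python
# Sketch: the same server as before, switched to SSE transport.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

if __name__ == "__main__":
    # FastMCP now serves the MCP endpoints over HTTP with SSE streaming.
    # Bind to localhost rather than all interfaces (see the security note below).
    mcp.run(transport="sse")
```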
MCP supports multiple transport options:
- SSE: Good for server-to-client streaming, particularly useful in environments with restricted networks where WebSockets might be blocked. Uses HTTP for all communication.
- stdio (Standard Input/Output): Useful for local integrations, command-line tools, and simple process communication. Particularly valuable when building shell scripts or command-line utilities.
Important security note: When using SSE transport, be aware of potential DNS rebinding attacks. Always validate Origin headers, avoid binding servers to all network interfaces (use localhost instead), and implement proper authentication.
SSE | STDIO |
---|---|
Allows servers to push real-time updates to clients over a single, long-lived HTTP connection (network-based, real-time) | Allows a client to send data to a server through the standard input and receive responses via the standard output streams; used for inter-process communication on the same machine (local, synchronous) |
Latency subject to the network connection | Local and fast; needs no network connection |
Efficient, one-way server-to-client communication, suitable for applications that require real-time data updates | Particularly suitable for command-line tools and local integrations where client and server run on the same machine |
Allows servers to handle multiple client connections efficiently | Multiple connections subject to local resources (usually a single client) |
Supports features like authentication (JWT, API keys) | No native support for authentication |
Server (SSE):
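A minimal SSE server sketch with FastMCP; the server name and ping tool are illustrative:

```python
# Sketch: SSE server. Clients connect over HTTP instead of stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sse-demo")

@mcp.tool()
def ping() -> str:
    """Trivial illustrative tool."""
    return "pong"

if __name__ == "__main__":
    mcp.run(transport="sse")
```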
Client (SSE):
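And a matching client sketch; the URL, port, and /sse path are assumptions about the server's defaults:

```python
# Sketch: SSE client connecting to the server above over HTTP.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Host, port, and path are assumptions; check your server's config.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("ping", arguments={})
            print(result.content)

asyncio.run(main())
```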
Server (stdio):
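The stdio version of the server differs by exactly one argument:

```python
# Sketch: stdio server. Identical shape; only the transport changes.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("stdio-demo")

@mcp.tool()
def ping() -> str:
    """Trivial illustrative tool."""
    return "pong"

if __name__ == "__main__":
    mcp.run(transport="stdio")
```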
Client (stdio):
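And the stdio client, which spawns the server as a subprocess; the command and filename are assumptions:

```python
# Sketch: stdio client. No network involved; communication is via pipes.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["stdio_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("ping", arguments={})
            print(result.content)

asyncio.run(main())
```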
And now you're a certified MCP ninja! Maybe not. Seriously, though, MCP isn't replacing APIs - it's creating a standardized communication layer between agents and tools (for now) that uses APIs as implementation details, not primary interfaces.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.