
MCP: Transform Database Admin Tasks with LLMs
MCP-powered agentic LLMs, built with the AWS Strands Agents SDK, automate PostgreSQL performance investigation and optimization tasks for developers.
baruch
Amazon Employee
Published Jun 8, 2025
Last Modified Jun 10, 2025
The Model Context Protocol (MCP) represents a significant breakthrough in how we can utilize Large Language Models (LLMs) beyond simple text generation. MCP establishes a structured framework that enables LLMs to execute external commands and interact with systems outside their typical environment. This capability transforms LLMs from passive text generators into active tools that can interface with databases, run terminal commands, and process the results—all while maintaining the context of the conversation. By providing LLMs with this "capability to act," MCP opens new possibilities for automating complex technical workflows that previously required human expertise.
PostgreSQL has earned its reputation as one of the most powerful open-source relational database systems available today. With its robust feature set, extensibility, and strong adherence to SQL standards, PostgreSQL powers critical applications across countless organizations. However, proper administration of PostgreSQL databases requires deep technical knowledge spanning query optimization, indexing strategies, backup procedures, replication configuration, security hardening, and performance tuning. The PostgreSQL ecosystem has evolved significantly over decades, resulting in a complex system with numerous configuration options and maintenance considerations that must be carefully managed to ensure optimal operation.
For software developers, the world of database administration often remains an intimidating territory. While developers may feel comfortable writing application code that interacts with databases through ORMs or basic queries, the deeper aspects of database administration require a fundamentally different skill set. Many developers struggle when confronted with tasks like diagnosing query performance issues, planning capacity, configuring replication, or implementing disaster recovery strategies. The PostgreSQL documentation, though comprehensive, spans thousands of pages across multiple versions, making it challenging to quickly find relevant solutions to specific problems. This knowledge gap frequently results in suboptimal database configurations, performance bottlenecks, and increased operational risk when developers attempt to manage database systems without proper expertise.
By combining the natural language understanding capabilities of LLMs with the execution abilities provided by MCP, we can create powerful assistants that help bridge this knowledge gap. These MCP-enabled LLMs can analyze database health, suggest optimization strategies, generate appropriate SQL commands, and even execute them when authorized—all while explaining their reasoning in plain language. This approach democratizes database administration knowledge and provides both experienced DBAs and developers with a powerful tool to enhance their PostgreSQL management capabilities.
AWS's newly released Strands Agents SDK is revolutionizing the AI development landscape. This open-source toolkit strips away the traditional complexities of building AI agents, enabling developers to create sophisticated, production-ready solutions with minimal code. Built on a model-driven approach and battle-tested by AWS, Strands Agents transforms what once took months into a matter of days, making advanced AI agent development accessible to developers of all skill levels.
In the following sections, I'll demonstrate practical examples of how MCP-enabled LLMs can assist with common PostgreSQL administration tasks, from performance tuning to simple maintenance, making database administration more accessible while maintaining best practices and security considerations.
I leveraged the Strands Agents SDK, a powerful framework that excels in multi-agent orchestration and seamless MCP server integration. While this demonstration uses Anthropic's Claude 3.7 Sonnet model, one of Strands' key advantages is its model-agnostic architecture. The SDK supports a diverse range of models, from local Ollama deployments to cloud-based solutions like OpenAI and Meta's Llama, giving developers the flexibility to choose the best model for their specific use case.
In this example we have two files:
- agent.py - the agent that communicates with the MCP server and the LLM
- postgresqlperf.py - the MCP server that communicates with the database and contains all the logic for gathering the performance information
Let's walk through the key parts of the agent.py code.
Configuring the Brain: Setting Up Claude 3.7 Sonnet via AWS Bedrock

A critical component of our database performance investigation system is the large language model that powers the agent's reasoning capabilities. Let's examine the code that establishes our AI foundation.
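Here is a minimal sketch of this part of agent.py, based on the Strands Agents SDK documentation; the Bedrock model ID, server URL, and prompt are illustrative assumptions rather than the article's exact code:

```python
from mcp.client.sse import sse_client
from strands import Agent
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient

# Bedrock-hosted Claude 3.7 Sonnet; the low temperature favors
# consistent, repeatable analysis over creative answers.
model = BedrockModel(
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # illustrative ID
    temperature=0.3,
)

# Client-side MCP connection to the locally running postgresqlperf server.
# The URL assumes FastMCP's default SSE endpoint.
pg_mcp_client = MCPClient(lambda: sse_client("http://localhost:8000/sse"))

with pg_mcp_client:
    # Discover the PostgreSQL tools the server exposes and hand them to the agent.
    tools = pg_mcp_client.list_tools_sync()
    agent = Agent(model=model, tools=tools)
    agent("Investigate why my index creation is slow.")
```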
The temperature=0.3 parameter is deliberately set low to produce consistent, deterministic responses. In database administration, precision is critical: we want the AI to provide reliable and repeatable analysis rather than creative interpretations of database metrics.

The MCPClient instantiation creates the client-side counterpart to our server-side MCP implementation. While the server code (mcp.run(transport='sse'), covered below) exposes the PostgreSQL tools, this code creates the client that allows Claude to access those tools.

In this section, I'll walk through the heart of our solution: the Model Context Protocol (MCP) server implementation that powers our database performance investigation capabilities. This code establishes a specialized interface that allows Large Language Models to directly interact with PostgreSQL databases through well-defined tools.
The MCP server acts as a bridge between the natural language capabilities of LLMs and the technical commands needed for database administration. By registering PostgreSQL-specific functions as MCP tools, we give the model a well-defined set of actions it can take; each tool is carefully designed with proper error handling and clear documentation.
Let's look at some of the code (the rest of the code will be available in my GitHub repository).
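As a minimal sketch, the server is created with the FastMCP class from the MCP Python SDK; the server name "postgresqlperf" comes from the article, the rest is illustrative:

```python
from mcp.server.fastmcp import FastMCP

# Create the MCP server; the name identifies it to connecting clients.
mcp = FastMCP("postgresqlperf")
```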
The FastMCP server provides a Server-Sent Events (SSE) transport, allowing for responsive communication between the AI agent and the database tools. This architecture enables real-time performance investigation while maintaining a clean separation between the AI layer and the database execution layer.

The next piece of code uses the psycopg adapter to create a connection to the database.
The following piece uses the same adapter to run a query on the database.
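A minimal sketch of these two pieces, assuming psycopg 3; the connection settings are placeholders read from standard PostgreSQL environment variables, and later tool sketches reuse the run_query helper:

```python
import os
import psycopg

def get_connection() -> psycopg.Connection:
    """Open a connection to the target PostgreSQL instance.

    Settings come from the environment so no credentials are
    hard-coded (the variable names here are illustrative).
    """
    return psycopg.connect(
        host=os.environ.get("PGHOST", "localhost"),
        port=int(os.environ.get("PGPORT", "5432")),
        dbname=os.environ.get("PGDATABASE", "postgres"),
        user=os.environ.get("PGUSER", "postgres"),
        password=os.environ["PGPASSWORD"],
    )

def run_query(sql: str, params: tuple = ()) -> list[tuple]:
    """Run a query and return all rows."""
    with get_connection() as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
```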
Within our MCP server architecture, each database administration capability is implemented as a discrete "tool" that LLMs can invoke when needed. Let's examine get_table_names() to understand how these tools function in the context of our larger system.

The @mcp.tool() decorator (shown in the sketch below) registers the function with our MCP server, making it discoverable by LLMs using the AWS Strands Agents SDK. The descriptive metadata and type hints provide the AI with critical information about how and when to use this capability.
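Here is a minimal sketch of the tool, assuming the FastMCP decorator API and reusing the run_query helper from the sketch above; the exact docstring wording and query are illustrative:

```python
import psycopg

@mcp.tool()
def get_table_names(schema: str = "public") -> list[str]:
    """Get the names of all tables in the database.

    By default the 'public' schema is searched; pass a different
    schema name to search elsewhere.

    Args:
        schema: The schema to search for tables.

    Returns:
        A list of table names found in the given schema.
    """
    try:
        rows = run_query(
            """
            SELECT table_name
            FROM information_schema.tables
            WHERE table_schema = %s AND table_type = 'BASE TABLE'
            """,
            (schema,),
        )
        return [row[0] for row in rows]
    except psycopg.Error as exc:
        # Degrade gracefully: give the LLM a predictable message
        # instead of an unhandled exception.
        return [f"Error retrieving table names: {exc}"]
```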
This seemingly simple docstring plays a crucial role in our PostgreSQL performance agent. When working with MCP and LLMs, these documentation blocks serve as more than just developer guidance; they become part of the functional API that the AI uses to understand tool capabilities.
When our model encounters a database performance question, it analyzes these docstrings to determine:
- Tool Purpose: The first line clearly states this tool retrieves table names from the database
- Default Behavior: It understands that without parameters, the tool will search the 'public' schema
- Parameter Options: It learns it can modify the search by providing a different schema name
- Return Value Format: It expects a list of strings representing table names
The database operation itself performs a specific PostgreSQL administrative task: querying information_schema to discover the available tables.
Each tool includes robust error handling that catches database-specific exceptions and provides meaningful feedback. Instead of crashing when database issues occur, our MCP tools degrade gracefully, allowing the LLM to receive predictable responses and adapt its investigation strategy accordingly.
This single line of code represents a crucial aspect of our MCP server implementation that deserves special attention:
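```python
mcp.run(transport='sse')
```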
What Does This Line Do?
This command launches our MCP server with a specific communication protocol configuration. Let me break down its significance:
- Server Activation: mcp.run() starts the MCP server we initialized earlier with FastMCP("postgresqlperf"), making all registered database tools available for external calls.
- SSE Transport Layer: The transport='sse' parameter configures the server to use Server-Sent Events as its communication protocol.
⚠️ Note: you will need to configure your AWS access keys first and install uv (the Python package manager).
First, we run the MCP server (a plausible invocation with uv is shown below):
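```bash
# assuming uv is installed and the files are in the current directory
uv run postgresqlperf.py
```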
Then we can run the agent (again, an illustrative uv invocation):
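```bash
# run in a second terminal, while the MCP server is up
uv run agent.py
```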
This is an example of the agent's output. As we can see, the agent identified that I am creating an index and offered some ways to improve the index creation.
Ready to take your PostgreSQL AI assistant further? Here are some actionable next steps:
Check out the AWS Strands Agents SDK and explore the documentation to discover advanced agent patterns and best practices for building production-grade AI assistants.
By building on this foundation, you can create an increasingly sophisticated PostgreSQL administration assistant that bridges the gap between complex database management and developer-friendly interactions.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.