Simplifying Amazon Neptune Integration with MCP Servers
Update 5/16/2025 - The Neptune Query server has been moved to the AWSLabs MCP repo, and this post has been updated to reflect its new location
Recently, Amazon Neptune released several example Amazon Neptune MCP servers, demonstrating how you can use the Model Context Protocol (MCP) to simplify the integration of Amazon Neptune into your Generative AI applications.
This tutorial demonstrates how to use these Neptune MCP servers to streamline your interactions and accelerate development cycles. Whether you're a seasoned Neptune developer or just beginning your graph database journey, these servers offer enhanced productivity tools and workflow automation capabilities that can improve your development experience.
In this post, we'll walk through:
Installing and configuring Neptune MCP servers
Using the neptune-query server to interact with your Neptune database from an MCP-enabled Generative AI application
Considerations when using these MCP servers
In a future post, we will demonstrate how to leverage the neptune-memory server to build a knowledge graph of your interactions to maintain a memory across chats.
Prerequisites
Before we get started, there are a few prerequisites you need to have installed on your system or in your AWS account.
To run these servers, you must install uv by following the directions here. You will also need to install Python 3.12 using uv python install 3.12
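If you are starting from scratch, the setup looks roughly like the commands below. The install script URL comes from the uv documentation; double-check it against the official directions before piping it to a shell:

```shell
# Install uv (see the uv documentation for other installation methods)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Use uv to install the Python 3.12 runtime the servers require
uv python install 3.12
```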
An MCP client - There are a variety of MCP client applications available such as Cursor, Cline, Claude Code, and many others that support adding MCP servers, each with varying degrees of support. While each of these has their own specific uses and advantages, in this post I will be using Anthropic’s Claude Desktop to demonstrate how you can leverage these servers.
An Amazon Neptune Database cluster or an Amazon Neptune Analytics graph - Verify that your MCP client has network access to the Neptune endpoint for your cluster or graph.
The AWS CLI with appropriate credentials configured - the MCP server uses the AWS CLI credential provider chain for authentication. Please refer to these directions for options and configuration.
Installation and Setup
Let's set up the Neptune MCP servers by adding the appropriate MCP configuration to your client. This part looks a bit different depending on which client you're using, but it usually involves creating or editing a JSON file with the server's commands, arguments, and environment variables. Don't worry - each server in the repository comes with a handy README file that walks you through what configuration it needs.
In this post, we are using Claude Desktop as our MCP client and the configuration to use the neptune-query server is below:
When specifying the Neptune endpoint, the following formats are expected:
For Neptune Database: neptune-db://<Cluster Endpoint>
For Neptune Analytics: neptune-graph://<graph identifier>
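As an illustration, a Claude Desktop entry for the neptune-query server might look like the sketch below. The package name, argument, and environment variable names here are assumptions for illustration; treat the server's README in the AWSLabs MCP repository as the authoritative source for the exact configuration:

```json
{
  "mcpServers": {
    "neptune-query": {
      "command": "uvx",
      "args": ["awslabs.amazon-neptune-server@latest"],
      "env": {
        "NEPTUNE_ENDPOINT": "neptune-db://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com",
        "AWS_PROFILE": "default",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

On macOS, this file is typically claude_desktop_config.json under the Claude application support directory; other clients keep an equivalent JSON file in their own locations.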
Using these MCP Servers
Now that we have configured our MCP server(s), let's start up our client and verify that everything is working. With our client running, let's check what we have to work with. In MCP, we have primitives called "tools" - think of them as special features that can perform specific actions. Let's start by asking our client what tools we have available and what each one can do. This will give us a good idea of all the different ways we can interact with our server.
Verifying the MCP server is working
Awesome, looks like our client has successfully connected to the server. Let's take a look at our graph's schema. One of the powerful things about MCP servers is how easily they work together with other client features. In this case, we're going to use the client to do three things in one go: pull up our graph's schema, create a visual representation of it, and give us a summary of how the graph is structured.
Viewing the graph schema
For this example, we're working with an air routes dataset. This includes information about locations, airports, and flight routes. Let's see how all these pieces fit together in our graph. Since we're working with an LLM, let's put it to work helping us explore our data. Let's ask our client to suggest some interesting questions we might want to ask about our air routes graph. This is a nice way to get some ideas about what kind of information we can extract from our data.
What questions can I ask
Now that we have these suggested questions, let's put them to use. We can ask the LLM to create a query for any of these questions, and then run it against our Neptune database. This saves us from having to write the query ourselves - the LLM can translate our plain English question into the database query language for us.
Running a query
Let's take a moment to look at what just happened - the LLM did three things for us. First, it created the query. Then it ran that query to get the data from our Neptune database. Finally, it gave us a clear summary of what it found. If you're curious about the technical details, you can see the actual query that the LLM came up with to get this information from the database.
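For a sense of what such a generated query looks like, here is an illustrative openCypher query against the air routes dataset (using its airport vertex label and route edge label); the exact query the LLM produces for your question will vary:

```cypher
// Find the ten airports with the most outgoing routes
MATCH (a:airport)-[r:route]->()
RETURN a.code AS airport, a.city AS city, count(r) AS routes
ORDER BY routes DESC
LIMIT 10
```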
openCypher Query
Just a quick note about queries - while we've been using openCypher in our examples, that's not your only choice. If you prefer, you can also use TinkerPop Gremlin queries. Our setup works great with both, so don't feel locked into one or the other. Pick whichever you know best or whichever makes more sense for what you're trying to do. The flexibility is there, so use what works for you!
Running a Gremlin query
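For comparison, the same "busiest airports" question from the air routes dataset could be expressed in Gremlin along these lines (an illustrative traversal, not the exact one generated in this walkthrough):

```groovy
// Gremlin: ten airports with the most outgoing routes
g.V().hasLabel('airport').
  order().by(outE('route').count(), desc).
  limit(10).
  project('airport', 'routes').
    by('code').
    by(outE('route').count())
```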
Gremlin query
If you check both results, you'll notice they give us the same information, just using different query languages. This really shows how the Neptune MCP server simplifies working with Amazon Neptune graphs. Without touching any code, we were able to:
Get a clear picture of our graph's structure, both visually and in text form
Figure out what kind of information our graph contains and what questions it can help answer
Create and run queries using either openCypher or Gremlin - whatever works best for you
The takeaway? These MCP servers make it much easier to work with Amazon Neptune. Instead of dealing with complex code, you can focus on getting the information you need through simple conversations.
Considerations when using MCP servers
When leveraging MCP servers in your development workflow, there are some considerations and best practices you should follow to help ensure secure, efficient, and high-quality implementations:
Security First
Code Review: Always conduct thorough security reviews of MCP-generated code before deploying to any environment
Credential Management: Implement least-privilege access principles when configuring AWS credentials for MCP servers. Overly broad credentials can give the server more access to your graph than intended, such as allowing mutation queries, which can have unintended consequences.
Security Scanning: Integrate automated security scanning tools into your CI/CD pipeline to evaluate MCP-generated infrastructure code
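As one way to apply least privilege to the credentials the server uses, you could scope the IAM policy to Neptune's read-oriented data-plane actions and omit the write and delete actions. The sketch below uses a placeholder account ID and cluster resource ID; adapt it to your own resources and verify the action names against the Neptune IAM documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NeptuneReadOnlyQueries",
      "Effect": "Allow",
      "Action": [
        "neptune-db:ReadDataViaQuery",
        "neptune-db:GetQueryStatus"
      ],
      "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABC123XYZ/*"
    }
  ]
}
```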
Development Guidelines
Keep Servers Updated: MCP is a relatively new protocol and the servers are evolving quickly, so maintain your MCP servers with the latest security patches.
Developer Oversight: Use MCP capabilities as development accelerators while maintaining human oversight and expertise in critical decisions
Documentation: Maintain clear documentation of MCP server configurations and any customizations
Coming Up Next
That wraps up this introduction to Neptune MCP servers and how to use them for database queries. We've covered the basics, but there's more exciting stuff to come! In our next post, we'll show you something really practical - how to give your AI application a "memory" by creating a knowledge graph. The cool part? This memory will be available whether you're starting a new chat, switching users, or using different tools. Stay tuned!
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.