
Build a Knowledge Graph with MCP Memory and Amazon Neptune
Demonstration of how to leverage an MCP Memory server on Amazon Neptune to build a Knowledge Graph
Dave Bechberger
Amazon Employee
Published Apr 30, 2025
Note: For foundational knowledge about MCP and its benefits, please refer to the Introduction on the MCP website and this post on the Model Context Protocol (MCP) and Amazon Bedrock. For information on example Amazon Neptune MCP servers, please refer to the blog post Simplifying Amazon Neptune Integration with MCP Servers.
Knowledge graphs are becoming increasingly useful when working with Generative AI, as they help model how different pieces of information connect to each other. Up until now, building these graphs has been pretty challenging - you needed to know how to code, understand graph data modeling, work with specialized query languages, and handle complex tasks like entity extraction and resolution. That's a lot to learn just to get started!
We're going to show you an easier way to build a knowledge graph - by simply having a conversation with an AI assistant. In this post, we'll walk through using neptune-memory along with an LLM of your choice (in our case, Anthropic's Claude) and Amazon Neptune to create a knowledge graph through conversation, no complex coding required!

Before we get started, there are a few prerequisites you need to have installed on your system or in your AWS account.
- To run these servers, you must install uv following the directions here. You will also need to install Python 3.12 using uv python install 3.12.
- An MCP client - There are a variety of MCP client applications available such as Cursor, Cline, Claude Code, etc., but for this post I will be using Anthropic’s Claude Desktop to demonstrate how you can leverage these servers.
- An Amazon Neptune Database or an Amazon Neptune Analytics graph - Verify that your MCP client has network access to the Neptune endpoint for your graph/cluster.
- The AWS CLI with appropriate credentials configured as the MCP server uses the credentials chain of the CLI to provide authentication. Please refer to these directions for options and configuration.
Once the prerequisites are configured, the next step is to install and configure the neptune-memory MCP server. While the configuration may vary based on the client used, for Claude Desktop you register the neptune-memory server in the claude_desktop_config.json file.

When specifying the Neptune endpoint, the following formats are expected:
For Neptune Database: neptune-db://<Cluster Endpoint>
For Neptune Analytics: neptune-graph://<graph identifier>
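Putting this together, a Claude Desktop entry for the server might look like the sketch below. The command, package name, and environment variable names here are illustrative assumptions - check the neptune-memory server's README for the exact values to use:

```json
{
  "mcpServers": {
    "neptune-memory": {
      "command": "uvx",
      "args": ["neptune-memory-mcp-server"],
      "env": {
        "NEPTUNE_MEMORY_ENDPOINT": "neptune-db://<Cluster Endpoint>",
        "AWS_PROFILE": "default",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

After saving the file, restart Claude Desktop so it picks up the new server.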
Let's talk about the neptune-memory MCP server we'll be using to build our knowledge graph. This server helps systems (like AI agents) remember information across different conversations by creating what we call a "Fact" knowledge graph. Think of it as a way to connect and store important pieces of information.

The graph uses three main building blocks:
- Entity - These are the "facts" we want to remember. Each one has its own ID, a type, and a list of observations. They show up as nodes in the graph.
- Relation - These show how different facts connect to each other. They appear as lines (or edges) connecting two entities.
- Observation - These are extra details about each fact, stored as text attached to the entity nodes.
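In property-graph terms, these building blocks can be sketched with openCypher like this (the labels and property names are illustrative - the server defines its own schema and writes these statements for you):

```cypher
// Entity: a node with a name, a type, and a list of observations
CREATE (d:Entity {name: 'Dave Bechberger', entityType: 'Person',
                  observations: ['Works on Amazon Neptune']})
CREATE (n:Entity {name: 'Amazon Neptune', entityType: 'Technology',
                  observations: ['A managed graph database service']})

// Relation: an edge connecting two entities
CREATE (d)-[:RELATION {relationType: 'works on'}]->(n)
```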
This straightforward setup lets us create a web of connected information - kind of like a digital memory bank - that helps us understand how different pieces of information relate to each other. It's particularly useful when we want to get a clear picture of a specific topic.
To demonstrate how we can create a fact knowledge graph, let’s choose a specific topic - in this case, let’s build a graph about me. To start, let’s first connect to our memory and see what information already exists.

It looks like our graph is empty, so let’s start by adding some information to a prompt.

Let's see what happens when we run the prompt. The LLM goes through the text and picks out important pieces like People and Technologies, along with how they're connected to each other. It then uses the neptune-memory MCP server to add this information to our graph using openCypher statements (don't worry if that sounds technical - the system handles it for us). Want to see what we've created? We can ask for a visualization of our knowledge graph to get a clear picture of how everything fits together.
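Under the hood, the statements the server issues look roughly like the following openCypher upserts (illustrative only - the LLM and server generate these for you, so you never write them by hand):

```cypher
// Upsert an extracted entity, appending a new observation to it
MERGE (p:Entity {name: 'Dave Bechberger'})
  ON CREATE SET p.entityType = 'Person', p.observations = []
SET p.observations = p.observations + ['Works on Amazon Neptune']

// Upsert a second entity and connect the two with a relation
MERGE (t:Entity {name: 'Amazon Neptune'})
  ON CREATE SET t.entityType = 'Technology'
MERGE (p)-[:RELATION {relationType: 'works on'}]->(t)
```

Using MERGE rather than CREATE is what keeps repeated mentions of the same person or technology from producing duplicate nodes.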
Looking at the visualization, we can see that our LLM has done a nice job pulling out important information and showing how different pieces connect to each other. While this is useful, we've only used information we directly provided - and knowledge graphs really shine when they can connect dots from different sources. Let's take this a step further and see what happens when we let the LLM tap into its broader knowledge. For example, we can ask it to tell us more about Dave Bechberger and Amazon Neptune, adding these extra details to our graph.

Nice! It looks like Claude dug up some interesting tidbits. For Dave Bechberger, it found out he's an author, which is pretty cool. And for Amazon Neptune, we now know when the service was launched. These are great examples of how an LLM can fill in gaps with publicly available info. Let's take a look at our updated knowledge graph and see how these new facts fit into the bigger picture.

As you can see, our knowledge graph now includes all these new connections and facts alongside our original information. One of the handy things about setting this up is that we can now ask our LLM to pull information from the graph and give us useful summaries of what it knows. Think of it as having a smart assistant that can connect the dots between different pieces of information we've collected.
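If you're curious what such a retrieval looks like at the query level, a summary request might translate into an openCypher read along these lines (again illustrative - you simply ask in plain English and the server handles the query):

```cypher
// Fetch an entity, its observations, and everything connected to it
MATCH (e:Entity {name: 'Dave Bechberger'})
OPTIONAL MATCH (e)-[r:RELATION]-(other:Entity)
RETURN e.name, e.observations, r.relationType, other.name
```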

Storing and retrieving data in one session is useful, but the real magic happens when we use this information across different chats, tools, and even with different AI assistants. Let's test this out. Go ahead and open up a fresh chat in Claude Desktop. Now, ask this new chat about some of the information we've stored in our graph. Pretty neat, right? You'll see that we can still pull up all that knowledge we've gathered, even in our new conversation. This is what makes a persistent memory so powerful - it's like having a shared brain that different AIs can tap into whenever they need it.

In this post, we walked through building a knowledge graph by conversation using the neptune-memory MCP server. The cool thing is, we didn't have to write any code at all, but we still managed to:
- Set up a "fact" knowledge graph that serves as a memory bank
- Add information from our conversations to the graph
- Expand the graph with extra public information that Claude knew about
- Use this stored knowledge across different chat sessions
Bottom line? Adding these MCP servers to your workflow makes it much easier to work with Amazon Neptune when building knowledge graphs. No complex coding required - just straightforward conversations that get the job done.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.