Build a Knowledge Graph with Amazon Neptune and the Strands Agent SDK


See how to leverage the Strands Agent SDK with Amazon Neptune to build and interact with a knowledge graph.

Dave Bechberger
Amazon Employee
Published Jun 2, 2025


A couple of weeks back, AWS released a new open source agent framework library, the Strands Agent SDK. Since I've been knee-deep in MCP server stuff lately, I thought, "why not give it a spin?" I was particularly curious to see how well it would play with Amazon Neptune. You know how it goes - sometimes these integrations can be tricky - so I tried it out, and here's what I found.

Setup

To get started, I had to install the Strands Agent SDK and the Strands Agent Tools, which provide a set of pre-built tool primitives that simplify common tasks like calling AWS services or using MCP servers, both of which I planned to leverage. I also had to install uv, since I'd be running the Amazon Neptune MCP servers and they require uvx.
In addition to installing these libraries, I also needed an Amazon Neptune instance. With the initial setup out of the way, I was ready to get started.
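For reference, the setup boils down to a couple of installs. This is a sketch; I'm assuming the PyPI package names `strands-agents` and `strands-agents-tools` for the SDK and its tools, and `uv` provides the `uvx` command:

```shell
# Install the Strands Agent SDK and the pre-built tools package
pip install strands-agents strands-agents-tools

# Install uv, which provides the uvx command used to launch
# the Amazon Neptune MCP servers
pip install uv
```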

Using the use_aws Tool

To start things off, I decided to try one of the pre-built tools, called use_aws. This tool lets you call AWS services directly, which is pretty helpful. It works as a wrapper around boto3, which allowed me to work with both Neptune Database and Neptune Analytics. For my test run, I decided to run an openCypher query on my Neptune Analytics graph containing the air-routes dataset, which has information about airports and flights.
The use_aws tool made it easy to interact with my Neptune Analytics graph. After setting up some basic settings, such as the LLM for my agent, I was able to declare a new tool passing it the service name, neptune-graph, the operation name, execute_query, and the required parameters.
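As a rough sketch of that setup (the graph identifier is a placeholder and the prompt wording is my own, not the SDK's):

```python
def build_query_prompt(graph_id: str, cypher: str) -> str:
    """Compose an instruction telling the agent to call Neptune Analytics
    (service name: neptune-graph, operation: execute_query) via use_aws."""
    return (
        "Use the use_aws tool to call the neptune-graph service with the "
        f"execute_query operation on graph '{graph_id}', running this "
        f"openCypher query: {cypher}"
    )


def run_demo() -> None:
    """Requires strands-agents, strands-agents-tools, and AWS credentials
    with access to a Neptune Analytics graph."""
    from strands import Agent          # imported lazily so the sketch
    from strands_tools import use_aws  # runs even without the SDK installed

    agent = Agent(tools=[use_aws])  # defaults to a Bedrock-hosted model
    agent(build_query_prompt(
        "g-xxxxxxxxxx",  # placeholder graph identifier
        "MATCH (a:airport) RETURN a.code LIMIT 5",
    ))
```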
Running that gave me the following response.
So my agent managed to query the database and summarize the results.
While interesting, when working with agents I'll usually just want to ask a question in plain English, you know? So let's see how it handles being asked, "Find me a flight from ANC to SEA in my graph?"
In this example, my agent did generate and run a correct query, but it took several iterations, three in this case, to arrive at a proper query.
In the first attempt, it generated a query but didn't use the correct property names, so while it could write and execute the query, nothing was returned. On the next try, the agent looked up the proper schema for the graph. On the third attempt, it used that schema to generate the correct query. Success!
I have to say, I did appreciate that persistent, iterative approach that Strands took here. Sure, it took a few rounds, but I also didn’t give it any additional hints or context, so the first attempts were just shots in the dark. I suspect that few of us would get it right without knowing the schema either ;)
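For context, the schema-aware query the agent eventually converged on would look something like the following. This is my reconstruction based on the air-routes dataset, where airport vertices carry a `code` property and are connected by `route` edges:

```python
# Reconstructed schema-correct openCypher for the air-routes dataset:
# airport vertices have a 'code' property and are linked by 'route' edges.
ANC_TO_SEA = (
    "MATCH (src:airport {code: 'ANC'})-[:route]->(dst:airport {code: 'SEA'}) "
    "RETURN src.code, dst.code"
)
```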

Using the Amazon Neptune MCP servers

For my next experiment, I decided to try something different and use the MCP integration in Strands along with the Amazon Neptune MCP server. The code here took a similar approach to the above in declaring my agent and LLM. The biggest difference was that I used the MCPClient tool to instantiate and run my MCP server, passing it the proper arguments. I also added some custom instructions to the system prompt telling my agent to first fetch the schema and to ensure proper casing of properties whenever it writes a query.
Note: For more details on the Amazon Neptune MCP servers, check out these blog posts:
Simplifying Amazon Neptune Integration with MCP Servers
Build a Knowledge Graph with MCP Memory and Amazon Neptune
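In sketch form, the wiring looked roughly like this. The MCP server package name, endpoint format, and environment variable are my assumptions; check the Neptune MCP server docs for the exact values:

```python
SYSTEM_PROMPT = (
    "Before writing any query, first fetch the graph schema, and make "
    "sure property names use the exact casing found in the schema."
)


def run_demo() -> None:
    """Requires strands-agents, the mcp package, uvx, and network access
    to a Neptune endpoint. Package and env-var names are assumptions."""
    from mcp import StdioServerParameters, stdio_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    # Launch the Neptune MCP server via uvx as a stdio subprocess
    neptune_mcp = MCPClient(lambda: stdio_client(
        StdioServerParameters(
            command="uvx",
            args=["awslabs.amazon-neptune-mcp-server@latest"],  # assumed name
            env={"NEPTUNE_ENDPOINT": "neptune-graph://g-xxxxxxxxxx"},
        )
    ))

    with neptune_mcp:
        tools = neptune_mcp.list_tools_sync()
        agent = Agent(tools=tools, system_prompt=SYSTEM_PROMPT)
        agent("Find me a flight from ANC to SEA in my graph")
```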
Running this code gave the following response.
With the custom prompting I gave, I noticed that my agent first checks the schema and then generates and runs the query, which worked correctly on the first attempt.
While providing agents the ability to run queries is quite powerful, I thought it would be interesting to see if we could leverage our agent and the underlying LLM to automate and augment a knowledge graph similar to the approach taken in this blog post.
To achieve this, I decided to use a combination of a Perplexity MCP server to research information on a topic and a Neptune Memory MCP server to store that information in our knowledge graph.
Given the topic of this post, I decided to have my agent research, store, and summarize key considerations for building and using a memory server with the Strands SDK.
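Combining the two servers looked roughly like the sketch below. Both server package names and env vars are my assumptions (there are several community Perplexity MCP servers), and the task wording is my own:

```python
RESEARCH_TASK = (
    "Research key considerations for building and using a memory server "
    "with the Strands SDK, store the key entities and relationships in "
    "the knowledge graph, and then summarize what you stored."
)


def run_demo() -> None:
    """Requires strands-agents, the mcp package, uvx, a Perplexity API
    key, and a Neptune endpoint. Server package names are assumptions."""
    import os
    from mcp import StdioServerParameters, stdio_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    # One MCP server for web research, one for knowledge-graph memory
    perplexity_mcp = MCPClient(lambda: stdio_client(StdioServerParameters(
        command="uvx",
        args=["perplexity-mcp"],  # assumed package name
        env={"PERPLEXITY_API_KEY": os.environ["PERPLEXITY_API_KEY"]},
    )))
    memory_mcp = MCPClient(lambda: stdio_client(StdioServerParameters(
        command="uvx",
        args=["awslabs.amazon-neptune-memory-mcp-server@latest"],  # assumed
        env={"NEPTUNE_ENDPOINT": "neptune-graph://g-xxxxxxxxxx"},
    )))

    with perplexity_mcp, memory_mcp:
        # Hand the agent the combined tool set from both servers
        tools = perplexity_mcp.list_tools_sync() + memory_mcp.list_tools_sync()
        agent = Agent(tools=tools)
        agent(RESEARCH_TASK)
```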
Running this code, we see the following results.
You know, that's a pretty solid summary of the main points we need to think about. Let's take a peek at our knowledge graph and see the crucial entities and relationships that have been stored; it's always good to have that kind of information handy, am I right?
Knowledge Graph
Taking a look at our knowledge graph, I see that we now have the key points from above connected together, allowing for later retrieval and reuse.

Conclusion

Overall, I'm actually pretty happy with my experiments, and two things really stood out to me:
First off, I found the SDK user-friendly. I didn't have to jump through hoops to get things running, and most importantly, it just worked. Admittedly, my examples above are pretty straightforward, but they worked as expected, with none of the odd bugs or cryptic error messages you often get from initial releases of a toolkit. I haven't tried out some of the more complex features such as observability, evaluation, and multi-agent collaboration, so there may be features where this isn't true. However, my initial experience has been positive enough that I'll try out those features instead of waiting.
Second, the docs were much clearer and more complete than what I see on even much more mature open-source projects. Docs are hard to build and maintain, and docs for open source projects are even harder. They could benefit from more examples that go beyond the "Hello World" type. Strands does have a samples GitHub repo (here) which (currently) has some nice use-case-focused examples and will hopefully continue to be updated with more tool-specific examples as well.
After this test drive, I'm honestly excited about what's next. I've got the basics down, and my next step is to start building some more complex examples.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
