
Knowledge Graphs and Generative AI (GraphRAG) with Amazon Neptune and LlamaIndex (Part 1) - Natural Language Querying

How to use LlamaIndex and Amazon Bedrock to translate a natural language question into an openCypher graph query.

Dave Bechberger
Amazon Employee
Published Aug 12, 2024
Last Modified Oct 7, 2024
I have been actively working with several open-source toolkits in this area and have recently added support for Amazon Neptune, both Database and Analytics, to LlamaIndex, a popular open-source LLM framework. After analyzing various customer use cases, four main workflows have emerged for knowledge graph and generative AI applications. Customers who have already invested in graph technologies are exploring the following approaches to better leverage those investments:
  • Natural language querying - Enabling users to input questions in natural language, which are then automatically translated into graph queries, providing intuitive access to information.
  • Knowledge graph retrieval - Utilizing large language models (LLMs) to extract key entities and question types from natural language input, then executing targeted queries against a knowledge graph to retrieve relevant information.
Customers who have not yet adopted graph technologies are primarily interested in the following approaches to unlock the potential of their data:
  • Knowledge graph generation - Creating knowledge graphs by ingesting and processing structured and unstructured data, automatically revealing hidden connections and insights within an organization's information landscape.
  • Knowledge graph enhanced retrieval-augmented generation (GraphRAG) - Enhancing traditional retrieval-augmented generation (RAG) architectures with the contextual richness of a knowledge graph, enabling more coherent and informed generative AI responses.
In this blog series, we will explore various methods for building applications on top of LlamaIndex to satisfy these workflows. In each post, we will cover one aspect of how to use LlamaIndex with Amazon Neptune. We will not go into detail on the architecture of LlamaIndex, so if you are not familiar with the concepts and terminology, I suggest you check out the documentation here.
In this post, we will explore how to use LlamaIndex and Amazon Bedrock to translate a natural language question into a structured graph query, specifically in the openCypher query language. This query will then be executed on data stored in your Amazon Neptune database. This introductory post will cover the basics needed to set up and operate this type of system. More advanced topics, such as prompt engineering, model tuning, and query rewriting, will be addressed in a future post.

Natural Language Querying using LlamaIndex

LlamaIndex consists of tooling designed to create and interact with large language model indexes. It facilitates the storage, searching, and querying of textual data using advanced vector database techniques in conjunction with large language models like GPT. This enables efficient and effective retrieval of relevant information from extensive text corpora.
Natural language querying is the ability to interact with computer systems using human language, rather than structured query languages or complex programming commands. It allows users to ask questions or provide instructions in their native language, and the system processes this input to understand the intent and provide relevant information or perform the requested action.
Note: Amazon Neptune also supports Natural Language querying with Langchain via our integrations with QA Chains for openCypher, Gremlin, and SPARQL.
In LlamaIndex we will be using the TextToCypherRetriever class of the PropertyGraphIndex to take the schema of the graph and the question, generate an openCypher query, and then execute that query.
The data we'll be working with in this post is from the book Graph Databases in Action by Manning Publications. The book demonstrates common graph data access patterns to build a fictitious application called "DiningByFriends." This application leverages friend relationships and user ratings to provide personalized restaurant recommendations. Below is the schema of our application.
Note: To try this out for yourself as you go through this post, you can download a notebook from our Amazon Neptune Generative AI Samples repository on GitHub, here.

Installing our dependencies

To get started building our application, the first step is to set up all the required dependencies. In this example, we'll need to install the following components:
  • The core package for LlamaIndex
  • Packages for Amazon Bedrock, which we'll be using as our large language model (LLM)
  • Packages for Amazon Neptune, which will serve as our data store
By installing these key dependencies, we'll have the necessary tools and infrastructure in place to translate natural language questions into structured graph queries, and then execute those queries against the data stored in our Amazon Neptune database.
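A minimal install cell might look like the following sketch; the package names reflect the current LlamaIndex integration packages for Bedrock and Neptune, so check the sample notebook for the exact list used there.

%pip install llama-index
%pip install llama-index-llms-bedrock
%pip install llama-index-embeddings-bedrock
%pip install llama-index-graph-stores-neptune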

Prerequisites

For this post we will be using Amazon Neptune Database as our data store, so you must have a Neptune Database cluster configured. The methodology presented here will also work with Neptune Analytics, and we will call out where the code differs. Running the code in this post also requires permissions to invoke Amazon Bedrock models, specifically Claude 3 Sonnet and Titan Embeddings v1.

Setting up our LLM

With our dependencies installed, let's start by connecting our application to the hosted LLMs in Amazon Bedrock. For our application, we are primarily going to use Bedrock to provide the natural language interactions with the user.
We do this by instantiating the appropriate classes and passing in the model names we want to use. For generating document embeddings, we are using Titan Embeddings, and for the natural language interactions we chose Anthropic Claude 3 Sonnet, hosted in Amazon Bedrock.
Note: The PropertyGraphIndex we create later requires an embedding model, so we create one here even though the text-to-Cypher workflow does not actually use it.
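A minimal sketch of this setup follows; the Bedrock model IDs and the embedding parameter name are assumptions based on the current LlamaIndex Bedrock integrations, so check the sample notebook for the exact values.

from llama_index.llms.bedrock import Bedrock
from llama_index.embeddings.bedrock import BedrockEmbedding

# Claude 3 Sonnet for natural language interactions (model ID assumed)
llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")

# Titan Embeddings for document embeddings; required by the index even though
# text-to-Cypher retrieval does not use it (older versions of the integration
# may take model= instead of model_name=)
embed_model = BedrockEmbedding(model_name="amazon.titan-embed-text-v1")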
Now that we have defined our models, let's configure our application to use them. While you can set these individually, LlamaIndex provides a global Settings object that applies to all modules in the application. In this example, we'll set the LLM and embedding model to the values we defined above.

from llama_index.core import Settings

Settings.llm = llm
Settings.embed_model = embed_model

That's all we have to do to set our LLMs globally, how easy. With all our setup out of the way, let's set up our graph store and PropertyGraphIndex.

Setting up our GraphStore

Our next step is to create a PropertyGraphStore for our Amazon Neptune Database using the NeptuneDatabasePropertyGraphStore, specifying the cluster endpoint.
To use Amazon Neptune Analytics instead, you create a PropertyGraphStore for your Neptune Analytics graph using the NeptuneAnalyticsPropertyGraphStore, specifying the graph identifier.
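A sketch of both options follows; the endpoint and graph identifier are placeholders you would replace with your own values, and the constructor parameters reflect the current llama-index-graph-stores-neptune package.

from llama_index.graph_stores.neptune import (
    NeptuneDatabasePropertyGraphStore,
    NeptuneAnalyticsPropertyGraphStore,
)

# Neptune Database: connect using the cluster endpoint
graph_store = NeptuneDatabasePropertyGraphStore(
    host="<your-neptune-cluster-endpoint>", port=8182
)

# Neptune Analytics: connect using the graph identifier instead
# graph_store = NeptuneAnalyticsPropertyGraphStore(
#     graph_identifier="<your-neptune-analytics-graph-id>"
# )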
With our graph store created, we can now define our PropertyGraphIndex, which is a feature in LlamaIndex. To read more about its features, check out this blog post; it is a great read.

Setting up our Index

Now that we have our LLMs and Graph Stores configured, it’s time to set up our index. In this case we are going to use the from_existing method since we already have data loaded into the graph.
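A minimal sketch, assuming the graph store, LLM, and embedding model defined above:

from llama_index.core import PropertyGraphIndex

# Build the index from the data already loaded in Neptune
index = PropertyGraphIndex.from_existing(
    property_graph_store=graph_store,
    llm=llm,
    embed_model=embed_model,
)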
With all this ceremony finally completed, we have one more piece to set up: our TextToCypherRetriever.

Setting up our Retriever

The TextToCypherRetriever is the core component that powers the natural language querying feature in our system. This is an area where LlamaIndex does a significant amount of the heavy lifting for us.
Here's how the TextToCypherRetriever works:
  1. When given a natural language question, the retriever combines the schema information of the graph database with the user's question.
  2. It then provides this combined input to the large language model (LLM), which generates an equivalent openCypher graph query.
  3. Once the LLM returns the generated query, the TextToCypherRetriever executes that query against the graph store and returns the results to the user.
By leveraging LlamaIndex's capabilities, the TextToCypherRetriever is able to seamlessly translate natural language questions into the appropriate graph database query language. This allows users to interact with the graph data in a more intuitive and user-friendly manner, without needing to be experts in the underlying query language.
To set up our retriever, we create it as shown below, providing it our index.
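In its simplest form, the setup looks something like this sketch:

from llama_index.core.indices.property_graph import TextToCypherRetriever

# The retriever only needs the graph store (for the schema and query execution)
# and an LLM to generate the openCypher query
retriever = TextToCypherRetriever(
    index.property_graph_store,
    llm=llm,
)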
The example provided represents the most basic form of the retriever. However, the retriever offers many other customization options beyond this simplistic version. These include the ability to:
  • Customize the prompts used to generate the responses
  • Inject functions to validate the returned query
  • Limit the available fields that can be included in the results
These customization capabilities allow for extensive tailoring of the retriever's output, which is often necessary for production-ready use cases.
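As a rough sketch of what such customization can look like (the parameter names follow the current TextToCypherRetriever constructor, while the validator and field list here are purely illustrative):

def reject_mutations(cypher_query: str) -> str:
    # Hypothetical validator: only allow read-only queries
    if any(kw in cypher_query.upper() for kw in ("CREATE", "MERGE", "DELETE", "SET")):
        raise ValueError("Only read queries are allowed")
    return cypher_query

retriever = TextToCypherRetriever(
    index.property_graph_store,
    llm=llm,
    # text_to_cypher_template=...  # supply your own prompt template here
    cypher_validator=reject_mutations,
    # Limit which properties may appear in the results
    allowed_output_fields=["first_name", "last_name", "name", "rating"],
)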
To maintain focus in this post, we will exclude a detailed discussion of these advanced retriever options and best practices. Instead, we will cover the core functionality demonstrated in the earlier example. A future blog post will dive deeper into the nuances of prompt engineering, result validation, and field limiting to further optimize the retriever for real-world applications.

Querying our graph

Note: Natural language querying issues arbitrary queries against your database, so applying proper access controls is critical to reduce the risk to your data.
Taking all the pieces from above, we are now ready to ask questions of our graph. To achieve this, use the retrieve method on our retriever and pass it our question.
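For example (the question text and printing loop here are illustrative, not the exact ones from the sample notebook):

nodes = retriever.retrieve("Which restaurants did Dave's friends review?")
for node in nodes:
    print(node.text)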
This provides us with the following results:

The results returned from our initial query not only include the relevant values from the graph, but also display the generated openCypher query itself. This is an added benefit of working with natural language querying - it provides a valuable learning path for understanding the underlying graph query languages.
While the previous query was interesting, it doesn't fully showcase the power of using a graph database. One of the most powerful capabilities of graph databases is the ability to perform recursive traversals over the data to uncover unknown connections. Let's try a query that requires such a recursive search.
The initial query we ran returned the results we requested, but they don't quite capture the information we're really interested in. In this case, we want to understand the specific connections between the entities, not just the raw data. When working with natural language queries, it's often necessary to be more precise about the desired output format to get the answers we're looking for. Luckily, it's easy to iterate on the query and refine it.
Let's try running that same query again, but this time we'll be more specific in how we want the results structured.
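For example, the follow-up might spell out the desired shape of the answer (the wording here is just illustrative):

nodes = retriever.retrieve(
    "How is Dave connected to Denise? Return the names of the people on the path between them."
)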
Ahh, much better. In this case, we can see that Dave and Denise are directly connected. So far we have only shown some relatively easy graph queries, so what if we try something more difficult, one that requires using much more of the graph to answer the question?

One of the impressive aspects of this approach is that we're able to get the desired results without having to deeply learn the intricacies of complex query languages like openCypher. Up to this point, we've been progressing smoothly, with our natural language queries successfully translating into the appropriate structured graph queries.
However, it's important to recognize the inherent nature of large language models (LLMs) - they will not always generate the correct query, even if the original natural language question was clear. When the generated query does not produce the expected results, we'll need to employ techniques such as query validation, prompt tuning, and query rewriting to troubleshoot and refine the process.
These troubleshooting and optimization techniques are essential, but deserve a dedicated blog post of their own. So be on the lookout for a future article that explores these more advanced topics in depth.

Next Steps

In this post, we examined the basic steps required to perform natural language querying using LlamaIndex with Amazon Neptune. In future posts, we will examine how to use LlamaIndex to perform some of the other common knowledge graph and generative AI workflows; next up is Knowledge Graph Retrieval.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
