
Executive Session: Unlocking data for GenAI using Graph Databases

In November 2024, I had the honor of presenting to a room of financial services executives on how graph databases, and GraphRAG in particular, improve the ability of LLMs to understand your data. Here is the recap.

Brian O'Keefe (AWS)
Amazon Employee
Published Nov 15, 2024
I recently had the privilege to speak at AWS's "The Meta Llama Advantage: unlocking multi-modal GenAI capabilities for Financial Services" event at JFK14, presenting "Executive Session II | Unlocking data for GenAI using Graph Databases". Due to the positive response and requests for slides and source code from the demo I gave, I decided to write a summary here and include the links to the artifacts. You can download the slides or view the summary below with its key talking points.
[Slide: Graphs work more like a mind-map tool than Excel spreadsheets]
Technical challenges solved by graphs
  • Graphs are a special data type that stores data as nodes and edges
  • Graph databases are specialized databases built around this data type: they store relationships as first-class data rather than metadata computed at runtime, and they answer queries by walking the graph instead of joining tables of data (see the sketch after this list)
  • Graphs do not have to be stored in a graph database, but they perform better when they are, especially at scale
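To make "relationships as first-class data" concrete, here is a minimal sketch of my own (not from the talk) using the open-source networkx library. A graph database such as Amazon Neptune does this at scale, but the idea is the same: the edges are stored data, and querying means walking them.

```python
# Minimal sketch (not from the talk): relationships stored as first-class data.
# Requires: pip install networkx
import networkx as nx

g = nx.DiGraph()

# Nodes carry properties, much like rows in a table...
g.add_node("AcmeCorp", type="Company")
g.add_node("Widget", type="Product")
g.add_node("UK", type="Market")

# ...but the relationships are stored data too, not joins computed at query time.
g.add_edge("AcmeCorp", "Widget", rel="SELLS")
g.add_edge("Widget", "UK", rel="IN_DEMAND_IN")

# "Querying" is walking the graph: follow edges outward from a starting node.
for neighbor in g.successors("AcmeCorp"):
    print("AcmeCorp", g.edges["AcmeCorp", neighbor]["rel"], neighbor)
```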
[Slide: Knowledge graphs help you link disparate data sources and augment AI/ML]
What is a Knowledge Graph?
This isn’t a new concept...The result is improved search, achieved by introducing context and relevance. Anyone near my age probably remembers the mid-90s…how many search engines were there? Yahoo, Altavista, Excite, Infoseek, Ask Jeeves, Northern Light, Jumpstation and more. What allowed Google to leap ahead of all of them? Their PageRank algorithm. PageRank was essentially a knowledge graph algorithm, and it differentiated Google from the other text-relevance-based engines. Remember that point.
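As a toy illustration of that point (mine, not Google's actual implementation), here is PageRank over a tiny made-up link graph using networkx; the ranking comes entirely from the link structure, not from the text on the pages.

```python
# Toy illustration: PageRank ranks pages by link structure,
# not by how often a keyword appears on the page.
import networkx as nx

links = nx.DiGraph([
    ("blog", "docs"),      # blog links to docs
    ("forum", "docs"),     # forum links to docs
    ("docs", "homepage"),  # docs link to the homepage
    ("blog", "homepage"),
])

# Pages that many (important) pages point to score highest.
for page, score in sorted(nx.pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:9s} {score:.3f}")
```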
We can augment ML and AI with our Knowledge Graph. Again, this has been happening since long before GenAI, but it is certainly applicable to today’s world of GenAI. We add the context and the hidden connections as inputs to the model, which lets us train our models on more insightful features.
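Here is a small sketch of what that can look like in practice; the account names and feature choices are assumptions for illustration, not the pipeline from the talk. Graph-derived metrics become extra columns alongside your usual tabular features.

```python
# Sketch (illustrative names, not the talk's pipeline): turn graph structure
# into extra model features for each entity.
import networkx as nx

transactions = nx.Graph()  # e.g., accounts connected by shared transactions
transactions.add_edges_from([
    ("acct_1", "acct_2"), ("acct_2", "acct_3"),
    ("acct_2", "acct_4"), ("acct_4", "acct_5"),
])

centrality = nx.degree_centrality(transactions)
pagerank = nx.pagerank(transactions)

# These graph-derived columns ride along with the usual tabular features.
features = {
    acct: {"degree_centrality": centrality[acct], "pagerank": pagerank[acct]}
    for acct in transactions.nodes
}
print(features["acct_2"])  # the hub account stands out structurally
```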
[Slide: In the financial services industry, analyzing vast amounts of disparate data is critical and differentiating]
Why do Financial Services Executives care?
I think this is self-explanatory.
The information that is most relevant may not necessarily be the closest in literal meaning
The information needed to properly address a query or provide a satisfactory answer may not be found in the parts of the text or context that seem most directly aligned with the question at a surface level.
Instead, the most valuable insights for resolving the question may sit in less obvious or more indirect parts of the available information.
The key concept here is that Retrieval Augmented Generation (RAG) uses semantic similarity of the text to infer relevance, whereas graphs look at the relationships between the entities in the text to infer relevance.
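To make the RAG side of that distinction concrete, here is a sketch with made-up stand-in vectors (a real system would get them from an embedding model): relevance is nothing more than a similarity score between the question and each chunk.

```python
# Sketch of plain-RAG relevance: rank chunks purely by embedding similarity.
# The vectors below are stand-ins, not real embeddings.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

question    = [0.9, 0.1, 0.0]   # "How will widget sales do in the UK?"
chunk_sales = [0.8, 0.2, 0.1]   # "Widgets are hot in the UK" -- very similar text
chunk_ports = [0.1, 0.2, 0.9]   # "Port strike delays UK shipments" -- dissimilar text

print(cosine(question, chunk_sales))  # high score: retrieved
print(cosine(question, chunk_ports))  # low score: likely skipped, despite mattering
```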
[Slide: A RAG application may miss key relevant information]
How does a vector search see it?
In a RAG world, the LLM sees “we sell widgets” + “widgets are hot in the UK” = jackpot!
In GraphRAG, it sees the non-obvious but related facts
The less semantically similar but related information is incorporated into the GraphRAG search, revealing that "while there is a lot of demand, some logistical issues are likely going to impact sales."
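And here is a sketch of the GraphRAG side of the same story, again with networkx standing in for a real graph store and with made-up entities: retrieval starts from the entities in the question and expands along relationships, so the logistics facts come back even though their text looks nothing like the question.

```python
# Sketch of the GraphRAG retrieval step: expand from the question's entities
# along relationships, instead of ranking by text similarity alone.
import networkx as nx

kg = nx.Graph()
kg.add_edge("AcmeCorp", "Widget", rel="SELLS")
kg.add_edge("Widget", "UK", rel="IN_DEMAND_IN")
kg.add_edge("UK", "Port strike", rel="AFFECTED_BY")
kg.add_edge("Port strike", "Shipping delays", rel="CAUSES")

# Entities an LLM might extract from "How will widget sales do in the UK?"
question_entities = ["Widget", "UK"]

# Pull everything within two hops of those entities as extra context.
context = set()
for entity in question_entities:
    context |= set(nx.single_source_shortest_path_length(kg, entity, cutoff=2))

print(sorted(context))
# The port strike and shipping delays show up even though their text
# shares almost nothing with the question.
```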
[Slide: GraphRAG is much more explainable and auditable than RAG]
Explainable and auditable
Let's not forget how much more explainable and auditable graph search is than vector search. Which is more explainable: an array of 1,024 floating-point numbers side by side, or a node + edge representation? I took some very simple concepts, generated embeddings for them, then calculated their similarity. Can you explain why ("dog" and "puppy") is 12% “less similar” than ("cat" and "kitten")? I’m guessing there is not a single person in the world who could explain it, because these models are giant black boxes.
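To put the "array of floats versus node + edge" comparison in concrete terms, here is a contrived sketch (the numbers and relationships are made up): a vector search hands you a score it cannot explain, while a graph search can hand you the chain of facts it followed.

```python
# Contrived numbers, just to make the point: a vector search can only say
# "these were close in embedding space" -- a score with no explanation.
vector_evidence = {"chunk_17": 0.83, "chunk_42": 0.79}

# A graph search can return the actual chain of relationships it followed,
# which an auditor can read and verify fact by fact.
graph_evidence = [
    ("AcmeCorp", "SELLS", "Widget"),
    ("Widget", "IN_DEMAND_IN", "UK"),
    ("UK", "AFFECTED_BY", "Port strike"),
]
for subject, relation, obj in graph_evidence:
    print(subject, "--[{}]-->".format(relation), obj)
```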
At this point, we moved to a demo. Maybe some day I'll record it and add it here, but until then, if you'd like to reproduce it, the notebook is available. Make sure to pay close attention to the prerequisites, and open an issue on the GitHub repo if you run into trouble.
[Slide: GraphRAG is more expensive and computationally demanding, so make sure it fits your needs]
Considerations before using GraphRAG
Finally, the dirty little secret...you don't get all this extra capability for free today. GraphRAG is certainly more expensive, especially if you need the newest, most computationally demanding models. You need some graph expertise to understand and guide the LLM during graph creation. It isn’t perfect. There is a learning curve. GraphRAG relies heavily on more complex LLM calls, which can lead to slower response times. Can you get away with RAG for the majority of your calls and route to GraphRAG only when you need it? If you can, consider it.
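One way to act on that last question, sketched with placeholder functions and a deliberately crude heuristic (none of this is a prescribed pattern): answer most queries with plain RAG and escalate only the multi-hop questions to GraphRAG.

```python
# Sketch of a simple router (placeholder functions and heuristic): send most
# queries to plain RAG, escalate the ones that need multi-hop reasoning to the
# slower, costlier GraphRAG path.

MULTI_HOP_HINTS = ("related to", "connected to", "impact of", "upstream", "downstream")

def needs_graph(question: str) -> bool:
    # Crude heuristic; a small classifier model could make this call instead.
    return any(hint in question.lower() for hint in MULTI_HOP_HINTS)

def answer(question: str) -> str:
    if needs_graph(question):
        return graphrag_answer(question)   # slower, more expensive, multi-hop
    return rag_answer(question)            # fast, cheap, good enough most of the time

def rag_answer(question: str) -> str:      # placeholder for your RAG pipeline
    return f"[RAG] {question}"

def graphrag_answer(question: str) -> str: # placeholder for your GraphRAG pipeline
    return f"[GraphRAG] {question}"

print(answer("What is our widget return policy?"))
print(answer("What is the impact of the UK port strike on widget sales?"))
```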

Closing Remarks

I was a bit surprised by how many people told me they had never heard of GraphRAG prior to my presentation. If you are one of them, or even if you had, hopefully these talking points helped you understand why graphs are a hot topic for overcoming issues in traditional LLM + RAG applications. I hope you came away with an appreciation of their strengths and why GraphRAG needs to be on your radar for GenAI workloads. As a Principal Neptune Specialist Solutions Architect, I am deep into the world of graphs, so follow me here on community.aws or on LinkedIn if you are interested in hearing more about graphs.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
