Bridging the Efficiency Gap: Mastering LLM Caching for Next-Generation AI (Part 1)

Uri Rosenberg
Amazon Employee
Published Aug 4, 2024
Last Modified Aug 7, 2024
As the use of large language models (LLMs) continues to proliferate across a wide range of industries and applications, the need to optimize their performance and efficiency has become increasingly critical. One of the key strategies for unlocking the full potential of these powerful AI systems is the implementation of effective caching techniques.
LLM caching refers to the process of storing and managing the intermediate computations and outputs generated by language models, allowing for rapid retrieval and reuse in subsequent queries or tasks. By reducing the computational overhead required to process recurring inputs or similar requests, caching can dramatically improve the responsiveness and throughput of LLMs, enabling them to operate at greater scale and with higher levels of cost-efficiency.
In this blog post, we'll explore the fundamental principles of LLM caching, delve into the various caching architectures and algorithms that can be employed, and examine real-world case studies that demonstrate the transformative impact of these techniques on the performance of state-of-the-art language models. Whether you're a machine learning engineer, a data scientist, or an AI enthusiast, this guide will provide you with the insights and strategies you need to optimize the caching of your LLMs for maximum impact.

LLM Caching Design Patterns

Single LLM Caching Layer

The simplest implementation is a single caching layer. In the context of large language models (LLMs), this refers to a dedicated storage and retrieval mechanism that sits between the actual language model and the end-user or application.
Figure: Single caching layer architecture
The caching layer acts as an intermediary, intercepting requests, checking if the response is already available in the cache, and serving that cached data back to the user if so. This single caching layer helps to optimize the overall performance and scalability of the LLM-powered system.
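As a minimal sketch of this flow (assuming a hypothetical `call_llm` function that wraps your model endpoint), the caching layer can be as simple as a keyed lookup placed in front of the model call:

```python
# Minimal sketch of a single caching layer in front of an LLM call.
# `call_llm` is a hypothetical helper that wraps your model endpoint.

cache: dict[str, str] = {}

def cached_generate(prompt: str) -> str:
    if prompt in cache:               # cache hit: serve the stored response
        return cache[prompt]
    response = call_llm(prompt)       # cache miss: invoke the LLM
    cache[prompt] = response          # store the result for future requests
    return response
```

In practice the in-memory dictionary would typically be replaced by a shared store (for example Redis) with an eviction policy, but the request flow stays the same.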

Multi-Layer Caching

A multi-layer caching approach for large language models (LLMs) refers to using multiple levels or tiers of caching, each with a different purpose and design.
Figure: Multi-layer caching
Here's a short example of a multi-layer caching system for a large language model (LLM), where the first layer uses exact key matching and the second layer uses semantic caching:
Let's say a user asks the LLM-powered system the question: "What is the capital of France?"
  1. First Cache Layer (Exact Key Matching):
    • The first layer is the input cache, which uses exact key matching to check if the same question has been asked before.
    • In this case, the input cache doesn't find an exact match for "What is the capital of France?", so it passes the request on to the next layer.
  2. Second Cache Layer (Semantic Caching):
    • The second layer is the output cache, which uses semantic caching techniques to identify similar questions that have been asked before.
    • The semantic caching algorithm analyzes the meaning and intent of the question, and finds that a previous query "What is the capital city of France?" has a very similar semantic structure and can be used to provide the response.
    • The output cache then retrieves the cached response for "What is the capital city of France?" and returns it to the user, without needing to send the request to the LLM.
By using this multi-layer approach, the system can first check for an exact match in the input cache, and if that fails, it can then leverage the more advanced semantic caching in the output cache to find a relevant response. This helps to maximize the chances of serving a cached result, reducing the computational load on the LLM and improving the overall responsiveness of the system.
The combination of exact key matching and semantic caching in a multi-layer architecture allows the system to balance speed, precision, and flexibility in serving user requests, leading to a more efficient and effective LLM-powered solution.
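A minimal sketch of this two-layer flow might look as follows; `semantic_lookup`, `semantic_store`, and `call_llm` are hypothetical helpers (a concrete semantic lookup is sketched in the semantic caching section later in this post):

```python
# Sketch of a multi-layer cache: exact key matching first, semantic matching second.
# `semantic_lookup`, `semantic_store`, and `call_llm` are hypothetical helpers.

exact_cache: dict[str, str] = {}

def answer(prompt: str) -> str:
    # Layer 1: exact key matching on the raw prompt text
    if prompt in exact_cache:
        return exact_cache[prompt]

    # Layer 2: semantic matching against previously seen prompts
    cached = semantic_lookup(prompt)
    if cached is not None:
        return cached

    # Both layers missed: invoke the LLM and populate both caches
    response = call_llm(prompt)
    exact_cache[prompt] = response
    semantic_store(prompt, response)
    return response
```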

RAG Caching layer: pre-retrieval caching

In the context of Retrieval Augmented Generation (RAG) applications, this is a single caching layer where the caching happens before the retrieval from the knowledge base.
Figure: Pre-retrieval cache
In a RAG model, the process of generating an output response typically involves two main steps:
  1. Retrieval: The model first retrieves relevant information or passages from an external knowledge base (e.g., a database, Wikipedia, or a custom corpus) based on the input prompt or query.
  2. Generation: The retrieved information is then used as input, along with the original prompt, to a language model that generates the final output response.
A caching layer in this context would be placed before the retrieval step, intercepting the input prompts or queries and checking if the corresponding retrieval results are already available in the cache.
By caching the retrieval results before the actual knowledge base lookup, this single caching layer can significantly improve the overall performance and efficiency of the RAG model. It reduces the computational overhead and latency associated with the retrieval step, especially for frequently occurring or similar prompts or queries.
This caching approach can be particularly beneficial in scenarios where the knowledge base is large or expensive to query, as it helps to minimize the number of actual knowledge base lookups required.
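One way to sketch this pattern, following the description above, is a cache that maps a query to its previously retrieved passages, so a hit skips the knowledge-base lookup entirely (`retrieve_from_kb` and `call_llm` are hypothetical helpers):

```python
# Sketch of pre-retrieval caching in a RAG pipeline: the cache is checked before the
# knowledge base, keyed on the query. `retrieve_from_kb` and `call_llm` are hypothetical.

retrieval_cache: dict[str, list[str]] = {}

def rag_answer(query: str) -> str:
    if query in retrieval_cache:
        passages = retrieval_cache[query]       # cache hit: skip the knowledge-base lookup
    else:
        passages = retrieve_from_kb(query)      # cache miss: query the knowledge base
        retrieval_cache[query] = passages
    context = "\n".join(passages)
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")
```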

RAG Caching layer: post-retrieval caching

Another option for Retrieval Augmented Generation (RAG) applications is a single caching layer where the caching happens after the retrieval from the knowledge base. In this design, the retrieved information is cached for future use, rather than only the input prompts or queries.
Figure: Post-retrieval caching
Here's how this single caching layer would work in a RAG model:
  1. Prompt/Query Arrives: A user submits an input prompt or query to the RAG model.
  2. Retrieval: The RAG model initiates the retrieval process, accessing the external knowledge base (e.g., a database, Wikipedia, or a custom corpus) to find the most relevant information or passages based on the input prompt or query.
  3. Caching Layer Stores: After the retrieval step, the caching layer intercepts the retrieved information along with the prompt and stores it in the cache, associating it with the original input prompt or query.
  4. Generation: On a cache miss, the retrieved information, along with the original prompt or query, is passed to the language model component of the RAG model to generate the final output response. On a cache hit, the cached response is returned directly.
This caching layer approach can be beneficial in scenarios where the knowledge base is small, fast, or cheap to query but updated frequently, since the cache is checked against freshly retrieved documents. By caching the retrieved information, the system can quickly serve relevant results to the user without repeatedly invoking the LLM.
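A minimal sketch, again with hypothetical `retrieve_from_kb` and `call_llm` helpers: retrieval always runs, and the cache key combines the prompt with the freshly retrieved passages, so an updated document naturally produces a new key instead of serving a stale response:

```python
import hashlib

# Sketch of post-retrieval caching: retrieval runs on every request, and the cache key
# is derived from the prompt plus the retrieved passages. `retrieve_from_kb` and
# `call_llm` are hypothetical helpers.

response_cache: dict[str, str] = {}

def rag_answer(query: str) -> str:
    passages = retrieve_from_kb(query)                  # the knowledge base is always queried
    key = hashlib.sha256((query + "\n".join(passages)).encode()).hexdigest()
    if key in response_cache:
        return response_cache[key]                      # cache hit: skip the LLM call
    context = "\n".join(passages)
    response = call_llm(f"Context:\n{context}\n\nQuestion: {query}")
    response_cache[key] = response                      # cache miss: generate and store
    return response
```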

Exact vs Semantic caching

The choice between exact key caching and semantic caching for large language models (LLMs) involves a trade-off between speed and flexibility.

Exact key caching

Exact key caching, where the system looks for an identical match to the input prompt or query, is the fastest approach as it simply retrieves the pre-computed response from the cache. However, this method is limited to handling only the exact same inputs that have been seen before, and lacks the ability to generalize to similar or paraphrased queries.
In the following screenshots, we see exact caching in action. First we ask the LLM "What is OpenAI?", and the response is generated within 3.89 seconds. The second time we ask the same question, we get the response within 33.8 milliseconds (and we also save the LLM invocation cost and usage). However, any variation from that exact question (even introducing an extra space!) results in a cache miss, and the response is returned within 5.15 seconds.
Figure: Exact key caching example
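A tiny illustration of that sensitivity, using a plain dictionary as the cache:

```python
# Exact key caching matches on the literal text, so any variation misses the cache.
cache = {"What is OpenAI?": "<cached response>"}

print("What is OpenAI?" in cache)     # True  -> cache hit
print("What  is OpenAI?" in cache)    # False -> an extra space causes a cache miss
```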
Pros
Easy to implement and manage
Cons
Limited coverage: any variation in the input, even an extra space, results in a cache miss
Recommended use-cases
  • Search (Keyword)/Short inputs
  • POC/Early stage (placeholder)
  • As L1 caching

Semantic caching

On the other hand, semantic caching employs more advanced natural language processing techniques to analyze the meaning and intent behind the input, rather than just the literal text. This allows the system to identify relevant cached responses even for inputs that may not be an exact match, providing greater flexibility and coverage.
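As a sketch of how such a lookup can be built (using the sentence-transformers library for embeddings; the model name and similarity threshold below are illustrative choices, not the setup used in the screenshots):

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model and similarity threshold; both should be tuned
# for your own workload.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
THRESHOLD = 0.85

semantic_cache = []  # list of (prompt_embedding, cached_response) pairs

def semantic_lookup(prompt: str):
    query_vec = encoder.encode(prompt, convert_to_tensor=True)
    for cached_vec, cached_response in semantic_cache:
        if util.cos_sim(query_vec, cached_vec).item() >= THRESHOLD:
            return cached_response   # a semantically similar prompt was seen before
    return None                      # cache miss: the caller should invoke the LLM

def semantic_store(prompt: str, response: str) -> None:
    semantic_cache.append((encoder.encode(prompt, convert_to_tensor=True), response))
```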
In the following screenshots, we share a semantic caching example. First we ask the LLM "What is cache eviction policy?", and the response is generated within 4.38 seconds. The second time, we ask the same question but add an extra space. The question is still semantically similar, which results in a cache hit and a response time of 1.04 seconds.
Figure: Semantic caching example
However, semantic caching also introduces another complexity: cache-hit false positives. Consider the following example:
Figure: Semantic cache-hit false positive
Here we ask the LLM to generate a joke, and then ask it to generate 2 jokes. The semantic caching mechanism concluded these two requests were similar enough and returned a cache hit, even though the user explicitly asked for 2 jokes! The same issue exists in reverse as false negatives, where a genuinely similar request results in a cache miss. Thus, implementing a semantic caching layer requires optimizing a binary classification metric such as the hit ratio. The hit ratio refers to the proportion of requests or queries that can be successfully served from the cache. Specifically, the hit ratio is calculated as:
Hit Ratio = Number of Cache Hits / Total Number of Requests
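In practice, a simple counter at the caching layer is enough to track this metric; a minimal sketch:

```python
# Simple hit-ratio tracking for a cache layer; useful when tuning a semantic
# similarity threshold.
hits = 0
total = 0

def record_lookup(was_hit: bool) -> None:
    global hits, total
    total += 1
    if was_hit:
        hits += 1

def hit_ratio() -> float:
    return hits / total if total else 0.0
```

Note that the hit ratio alone does not distinguish correct hits from false positives, so in practice you would also sample served cache hits and verify that they match the user's intent.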

End of Part 1

In this first part of our blog series, we explored several caching design patterns and architectures that can be applied to LLM-powered applications.
In the second part of this blog series, we will delve into the practical implementation details of building a caching layer for LLMs. We will cover topics such as cache design, replacement policies, and integration with existing LLM-powered architectures, equipping you with the knowledge to optimize the performance of your own large language model applications.
Go to Part 2
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
