
How Hypotenuse AI uses AWS to make LLMs more factually accurate
Hypotenuse AI is an AI writer built for ecommerce brands and SEO teams to create and manage content. Here's how they use Amazon OpenSearch Service to generate content that is more contextual and factually accurate.
- Joshua Wong, Founder & CEO, Hypotenuse AI
- Ng Shi Hui, Marketing Lead, Hypotenuse AI
- AI writer, Hypotenuse AI
- Glendon Thaiw, Startup Solutions Architect, AWS
- Outdated information: LLMs are trained on a fixed dataset before going live. Once live, the model doesn’t continue to train on new data. No matter how up-to-date that dataset is, it quickly goes stale, as new information and news are created every day—whether about newly elected politicians, today’s weather, or recent celebrity events. This means that when questioned about new information, an LLM might simply not know about it at all, and may hallucinate a response because it is designed to keep predicting the next word regardless.
- Incomplete representation: LLMs don’t have explicit memory or a database they can reference. Even if a model has been trained on the exact facts before, it can still misremember them or reproduce them incorrectly.
- Limitations of its training data: LLMs are trained on large datasets that may contain inaccuracies and biases, which the model might learn and reproduce.
- Insufficient data: To satisfy the inquirer, an LLM may produce a response even when it lacks the data to support one, making that response likely to be false or inaccurate.
- Misinterpretation: The way a question or prompt is phrased can lead the model to misinterpret the user’s intent, causing it to pull from unrelated contexts and data points and generate fabricated responses.
By augmenting the LLM’s inputs with relevant, precise data retrieved from OpenSearch, the model can ground its response in that information rather than hallucinating its own.
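The retrieval-augmentation pattern described above can be sketched in a few lines. This is an illustrative example, not Hypotenuse AI's actual code: a toy in-memory keyword search stands in for a real OpenSearch full-text or k-NN query, and all names (`DOCS`, `retrieve`, `build_prompt`) are hypothetical.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# In production the retrieve() step would query an Amazon OpenSearch
# Service index; here a naive keyword-overlap search stands in for it.

DOCS = [
    "Hypotenuse AI generates product descriptions for ecommerce brands.",
    "OpenSearch supports k-NN vector search for semantic retrieval.",
    "LLMs can hallucinate when asked about facts outside their training data.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query
    (a stand-in for an OpenSearch full-text or vector query)."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved passages to the LLM input so the model
    answers from supplied facts rather than inventing them."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}"
    )

query = "What does Hypotenuse AI generate for ecommerce brands?"
prompt = build_prompt(query, retrieve(query, DOCS))
print(prompt)
```

The augmented prompt is then sent to the LLM as-is; because the supporting facts are in the input, the model can answer from them instead of relying on stale or incomplete training data.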
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.