How To Choose Your LLM
Large language models (LLMs) can generate new stories, summarize texts, and even perform advanced tasks like reasoning and problem solving, which is remarkable not only for the capability itself but also for its accessibility and easy integration into applications. In this blog, I will provide you with the tools to understand how LLMs work and select the optimal one for your needs.
Elizabeth Fuentes
Amazon Employee
Published Dec 5, 2023
Last Modified Apr 12, 2024
Generative Artificial Intelligence (generative AI) made remarkable progress in 2022, pushing the boundaries with its ability to generate content that mimics human creativity in text, images, audio, and video.
The abilities of generative AI stem from deep learning models (Fig. 1), which are trained using vast amounts of data. After extensive training on billions of examples, these deep learning models become what are called "foundation models" (FMs). Large language models (LLMs) are one kind of FM, offering generative capabilities like reasoning, problem-solving, and creative expression at a near-human level. They are capable of understanding language and performing complex tasks through natural conversation.
Over the past few decades, artificial intelligence has been steadily advancing. However, what makes recent advances in generative AI remarkable is its accessibility and easy integration into applications.
In this blog, I'll provide you with the tools to understand the workings of LLMs and select the optimal one for your needs.
There are many popular LLMs; some of the more advanced ones have been trained on far more data than others. This additional training empowers them to tackle complex tasks and engage in advanced conversations.
Nonetheless, their operation remains the same: users provide instructions or tasks in natural language, and the LLM generates a response based on what the model "thinks" could be the continuation of the prompt (Fig. 2).
The art of building a good prompt is called prompt engineering. It is a discipline with specific techniques for developing and refining prompts so that language models produce effective outputs, focusing on optimizing prompts for efficient and helpful responses.
With a well-designed prompt, the model's pre-trained abilities can be leveraged to serve novel queries within its scope. Two of the most well-known prompt engineering techniques are:
- Zero-shot: for tasks that do not require prior examples for the model to understand what is being asked, for example, classification. Zero-shot capabilities refer to the ability of large language models to complete tasks they were not explicitly trained on; however, they still face limitations on complex tasks when given only a short initial prompt without guidance.
- Few-shot: improves model performance on difficult tasks by incorporating demonstrations, also known as in-context learning, directly into the prompt.
💡 Tip: Put the LLM in context of what its role is, for example: "You are a travel assistant".
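To see the difference in practice, here is a minimal sketch contrasting a zero-shot prompt with a few-shot prompt that includes the role context from the tip above; the classification task and example messages are hypothetical:

```python
# Zero-shot: the task is described, but no examples are provided.
zero_shot_prompt = """You are a travel assistant.
Classify the following customer message as POSITIVE, NEGATIVE, or NEUTRAL.

Message: "The hotel you booked for us was wonderful!"
Sentiment:"""

# Few-shot: a handful of demonstrations (in-context learning) guide the
# model toward the expected output format.
few_shot_prompt = """You are a travel assistant.
Classify each customer message as POSITIVE, NEGATIVE, or NEUTRAL.

Message: "Our flight was delayed for five hours." Sentiment: NEGATIVE
Message: "Check-in was quick and easy." Sentiment: POSITIVE
Message: "The hotel you booked for us was wonderful!" Sentiment:"""
```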
To choose the optimal LLM, I am going to list some aspects that I consider to be most important:
The first aspect is the need that the LLM is going to address in the application. The most commonly used functionalities are:
- Summarize
- Classification
- Question Answering
- Code generation
- Content writing
- Instruction following
- Multilingual tasks
- Embeddings: translating text into a vector representation (see the sketch below)
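As an illustration of that last functionality, here is a minimal sketch using Amazon Titan Embeddings on Amazon Bedrock; it assumes the amazon.titan-embed-text-v1 model ID and AWS credentials already configured for boto3:

```python
import json
import boto3

# Bedrock runtime client; assumes credentials and region are configured.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Titan Embeddings translates text into a numerical vector representation.
response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "How to choose your LLM"}),
)

result = json.loads(response["body"].read())
embedding = result["embedding"]  # the vector representation of the text
print(len(embedding))            # vector dimension
```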
As I mentioned before, there are advanced models capable of handling complex tasks and multitasking. For example, Llama-2-13b-chat is a powerful LLM for managing conversations, but only in English.
You can select a model that can satisfy all your requirements at once, or create decoupled applications with multiple specialized models for each task.
💡 Remember: Use prompt engineering to generate the desired outputs.
There are LLMs specialized in certain tasks, capable of speaking one language or several. It's important to define whether your application will work in only one language or more than one before choosing the LLM. For example, Titan Text Express is multilingual, unlike Titan Text Lite, which only supports English.
💡 Tip: If the LLM you need doesn't support the desired language, try using a multilingual LLM for translation, or Amazon Translate, before sending the prompt.
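For example, here is a minimal sketch of that second option, using Amazon Translate to bring the user's text into English before building the prompt (assuming boto3 with configured credentials):

```python
import boto3

translate = boto3.client("translate")

user_text = "¿Cuál es la capital de Australia?"

# Translate the user's input into English before sending it to the LLM.
result = translate.translate_text(
    Text=user_text,
    SourceLanguageCode="auto",  # let the service detect the source language
    TargetLanguageCode="en",
)

prompt = f"Answer the following question:\n{result['TranslatedText']}"
print(prompt)
```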
A context window refers to the length of text an AI model can handle and reply to at once. In most LLMs, this text is measured in tokens.
Regarding tokens: they are like the individual building blocks that make up words. For example:
- In English, a single token is typically around 4 characters long.
- A token is approximately 3/4 of a word.
- 100 tokens equate to roughly 75 words.
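These are only rules of thumb, but they are useful for a back-of-the-envelope estimate before invoking a model; a rough sketch:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, len(text) // 4)

text = "A context window refers to the length of text an AI model can handle."
print(estimate_tokens(text))  # ~17 tokens for this ~70-character sentence
```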
This code snippet shows how to determine the token count using Jurassic-2 Ultra with Amazon Bedrock.
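A minimal version, assuming boto3 with configured credentials and the ai21.j2-ultra-v1 model ID on Amazon Bedrock:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke Jurassic-2 Ultra with a short prompt.
response = bedrock.invoke_model(
    modelId="ai21.j2-ultra-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "Explain in one sentence what a token is.",
        "maxTokens": 100,
        "temperature": 0.5,
    }),
)

result = json.loads(response["body"].read())
print(result["completions"][0]["data"]["text"])
```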
Breaking down the response, let's find out the token count in both the prompt input and the generated output (completion):
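Assuming the Jurassic-2 response layout, where both the prompt and each completion carry a list of tokens, and reusing the result object from the previous snippet:

```python
# Prompt input: tokens the model received.
prompt_tokens = len(result["prompt"]["tokens"])

# Generated output (completion): tokens the model produced.
completion_tokens = len(result["completions"][0]["data"]["tokens"])

print(f"Prompt input: {prompt_tokens} tokens")
print(f"Generated output: {completion_tokens} tokens")
```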
Just as there are open-source LLMs, there are also paid ones. Their pricing depends on the provider, the modality, and the model; however, they all take the number of tokens into consideration.
Regarding the pricing modalities of paid LLMs:
- Inference only: when you invoke the model as an API, the pricing corresponds to the number of incoming and outgoing tokens (Fig. 5). Amazon Bedrock is a fully managed service that offers the option to use LLMs through an API call, with a choice between on-demand and Provisioned Throughput pricing to save costs; see pricing here and pricing examples here.
- Customization (fine-tuning): when it is necessary to fine-tune the model for a specific need (Fig. 6). In this pricing modality, on top of the previous value, you must add the cost of the new training and the storage of the customized model. Amazon Bedrock also offers a mode for customization (fine-tuning).
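To make the inference modality concrete, here is a sketch of how on-demand token-based pricing accrues; the per-1,000-token rates below are hypothetical placeholders, not actual Amazon Bedrock prices:

```python
# Hypothetical on-demand rates in USD per 1,000 tokens; check the
# provider's pricing page for real numbers.
PRICE_PER_1K_INPUT = 0.0008
PRICE_PER_1K_OUTPUT = 0.0016

def inference_cost(input_tokens: int, output_tokens: int) -> float:
    """On-demand cost: incoming and outgoing tokens, each at its own rate."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# For example, a 1,500-token prompt that produces a 500-token answer:
print(f"${inference_cost(1500, 500):.4f}")  # $0.0020
```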
For those who need to experiment further, there is Amazon SageMaker JumpStart, which, among several functionalities, allows you to train and tune models before deployment from a Jupyter notebook. You can see the models Amazon SageMaker JumpStart has available and check the pricing here.
Take a look at this chart of some available Amazon Bedrock models for a broader perspective when making comparisons.
| Provider | Model | Supported use cases | Languages | Max tokens (context window) |
|---|---|---|---|---|
| Anthropic | Claude v2 | Thoughtful dialogue, content creation, complex reasoning, creativity, and coding | English and multiple other languages | ~100K |
| Anthropic | Claude v1.3 | Text generation, conversational, coding | English and multiple other languages | ~100K |
| Cohere | Command | Chat, text generation, text summarization | English | 4K |
| AI21 Labs | Jurassic-2 Ultra | Question answering, summarization, draft generation, advanced information extraction, ideation for tasks requiring intricate reasoning and logic | English, Spanish, French, German, Portuguese, Italian, Dutch | 8K |
| AI21 Labs | Jurassic-2 Mid | Question answering, summarization, draft generation, advanced information extraction, ideation | English, Spanish, French, German, Portuguese, Italian, Dutch | 8K |
| Amazon | Titan Text Generation 1 (G1) - Lite | Open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewrite, extraction, Q&A, and chat | English | 4K |
| Amazon | Titan Text Generation 1 (G1) - Express | Retrieval augmented generation, open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewrite, extraction, Q&A, and chat | 100+ languages | 8K |
| Amazon | Titan Embeddings | Translates text into a numerical representation; text retrieval, semantic similarity, and clustering | 25+ languages | 8K |
| Meta | Llama-2-13b-chat | Assistant-like chat | English | 4K |
Thank you for joining me in this reading, where I explained how LLMs work and how to improve their responses using prompt engineering techniques. You learned how to choose the best one for your application based on features such as:
- The LLM's mission in the application: what problem will the LLM help me solve?
- The language: do I need the LLM to understand multiple languages?
- The length of the context window: the amount of text in the input request and the generated output.
- Pricing: what is the cost of the LLM that fits my needs? Are the available LLMs sufficient for what I need, and if not, do I need to do fine-tuning?
Finally, you saw what a comparison chart built with some of the available Amazon Bedrock models looks like.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.