Amazon Bedrock Parte 1a
One of the fundamental features of Amazon Bedrock, a powerful generative artificial intelligence service, is that it lets us interact with a range of different, powerful models.
Published Aug 14, 2024
Amazon Bedrock is a fully managed service that allows us to develop generative artificial intelligence applications in a secure, private, and responsible manner.
Let's look at how the field of artificial intelligence is structured and its different subfields, such as machine learning and deep learning.
Generative artificial intelligence is a field within artificial intelligence that focuses on developing algorithms that can create new content: text, images, audio, video, even code. Unlike traditional artificial intelligence, which focuses on analyzing and understanding existing data, generative artificial intelligence has the ability to generate completely new and original data. That is what currently distinguishes it from the rest of artificial intelligence.
Some common terms within generative artificial intelligence are: FM (foundation model), LLM, embeddings, fine-tuning, knowledge bases, agents, tokens, transformers, and synthetic data.
One of the main concepts in GenAI is the LLM, the large language model. LLMs are foundation models trained on the transformer architecture, and they can perform a wide range of natural language processing (NLP) tasks, such as text generation, classification, and summarization. They are considered revolutionary because of their capacity to generate coherent text.
This is what happens when you open ChatGPT and submit a prompt, a request: the model responds with coherent text.
At their core, these are prediction models: given a sequence of words, they generate the next word, so they are predictive. Their scale is measured in parameters (numerical values that are adjusted during the training of a model to learn patterns and relationships in the data), and modern models are trained with billions of parameters. Some examples: models such as GPT-3.5 and LLaMA 3 are known for their ability to:
1. Generate coherent and contextual text
2. Understand natural language
3. Perform language processing tasks such as translation, summarization, and answering questions.
These models are trained on large amounts of text and use deep learning techniques to learn patterns and relationships in language.
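To make the idea of next-word prediction concrete, here is a toy sketch. This is not a real LLM (a real model uses a transformer trained on vast amounts of text); it is just a bigram model over a tiny made-up corpus, the simplest possible way to "predict the next word given the words so far":

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real LLMs train on enormous text datasets instead.
corpus = "the model generates text the model predicts the next word".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "model": it follows "the" twice, "next" once
```

An LLM does the same thing in spirit, but instead of raw counts it uses billions of learned parameters to score every possible next token given the whole preceding context.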
Claude 3, for example, is a language model developed by Anthropic that focuses on text generation and natural language understanding.
GPT-3.5 is an advanced version of the GPT-3 model, developed by OpenAI, that has demonstrated impressive capabilities in text generation and language processing tasks.
LLaMA 3 is another language model developed by Meta AI that focuses on text generation and natural language understanding.
Parameter counts are:
• GPT-3: 175 billion parameters
• LLaMA 3: 70 billion parameters
• Claude 3: parameter count not publicly disclosed by Anthropic
The more parameters a model has, the more capable it tends to be of generating complex and coherent content. However, it also requires more computational resources and data to train.
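A back-of-the-envelope calculation gives a feel for that resource cost. This is a simplified sketch: it counts only the memory needed to hold the weights at 2 bytes each (fp16), ignoring activations, optimizer state, and everything else training actually requires:

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory just to store the weights (fp16 = 2 bytes each)."""
    return n_params * bytes_per_param / 1024**3

# GPT-3's 175 billion parameters need roughly 326 GB just for the
# weights in fp16, far beyond any single consumer GPU.
print(f"{model_memory_gb(175e9):.0f} GB")  # prints "326 GB"
```

This is why large models are served from managed infrastructure like Bedrock rather than run locally.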
How is generative artificial intelligence distinguished from deep learning? It borrows heavily from deep learning, with the difference that it is generative. And why is there such a gap now, even though both descend from the same branch? Largely because of processing power: modern processors have made generative artificial intelligence far more practical, and that is what has made it advance in such spectacular leaps.

Here is a small comparison between discriminative and generative artificial intelligence within the deep learning environment. These are the two predominant approaches used to address artificial intelligence tasks. Although discriminative and generative learning pursue different objectives, they share the same fundamental purpose: extracting knowledge from data.

Discriminative learning specializes in classifying and predicting categories from the input data. The objective is to learn the relationship between the input features and the corresponding output, for example an object or a category. Sentiment analysis belongs to this discriminative side.

Generative learning, on the other hand, focuses on generating new data by modeling the distribution of the training data. The objective is to learn the underlying structure of the data and use that knowledge to create new examples that are consistent with the original data, which is what we use within generative artificial intelligence.

In short: discriminative AI focuses on predicting, while generative AI focuses on generating. Deep learning is a technique that can be used in both categories.
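The contrast between the two approaches can be sketched in a few lines. This is a toy illustration with hypothetical 1-D data, not a real deep learning model: the discriminative side learns a boundary to label existing points, while the generative side learns the data distribution and samples new points from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: one numeric feature for two classes.
class_a = rng.normal(3.0, 1.0, 500)
class_b = rng.normal(7.0, 1.0, 500)

# Discriminative: learn a decision boundary and predict a category
# for existing inputs (here, simply the midpoint of the class means).
boundary = (class_a.mean() + class_b.mean()) / 2

def classify(x):
    return "A" if x < boundary else "B"

# Generative: learn the distribution of class A and sample NEW,
# previously unseen data points that are consistent with it.
mu, sigma = class_a.mean(), class_a.std()
new_examples = rng.normal(mu, sigma, 3)
```

`classify` can only label data it is given; `new_examples` are brand-new points that never existed in the training set, which is exactly the distinction between the two approaches.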
Amazon Bedrock allows developers to efficiently build and deploy applications on top of large language models (LLMs). One way to work with different LLMs is Model Selection: Bedrock offers a variety of pre-trained models, such as Claude 3, Llama 3, and others, which you can then put to work through Model Deployment in applications such as natural language processing, sentiment analysis, text classification, and more.
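As a minimal sketch of what calling one of these models through Bedrock looks like, here is an example using boto3's `bedrock-runtime` client and the `InvokeModel` operation. It assumes you have valid AWS credentials configured, Bedrock available in your region, and access enabled for the Claude 3 Sonnet model ID shown:

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the request body in the Anthropic Messages format
    that Claude models on Bedrock expect."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_claude(prompt: str) -> str:
    # Imported here so the payload helper above stays dependency-free.
    import boto3

    client = boto3.client("bedrock-runtime")  # needs AWS credentials
    response = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(build_claude_request(prompt)),
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]

if __name__ == "__main__":
    print(ask_claude("Summarize what Amazon Bedrock is in one sentence."))
```

Swapping to a different provider's model (Llama 3, for example) means changing the `modelId` and the request-body format, since each model family on Bedrock defines its own payload schema.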
We will continue soon with Part 2.