
How to prepare for the AWS Certified AI Practitioner (AIF-C01) exam?
Master AWS AI Practitioner Exam: Proven Resources, Mindmaps & Videos to Fast-Track Your AIF-C01 Certification Success!
- Detailed insights into prompt engineering techniques
- In-depth exploration of Amazon Bedrock's features
- Understanding of Responsible AI principles
- Comprehensive overview of AWS machine learning services
- Practical knowledge about AI security and guardrails
- What is prompt engineering? Prompt engineering is the art of crafting and refining input prompts, carefully choosing words, phrases, sentences, and punctuation, to get the best results from LLMs across many applications. In short, it is the art of communicating with an LLM: high-quality prompts condition the model to generate the desired, or better, responses. The guidance in this document applies to all LLMs available in Amazon Bedrock.
- What is a prompt? Prompts are user inputs guiding Amazon Bedrock's LLMs to produce task-relevant outputs.
- Few-shot prompting vs. zero-shot prompting: a zero-shot prompt asks the model to perform a task with no examples, while a few-shot prompt includes a handful of worked examples to steer the model's output format and quality.
- Inference parameters – Values that can be adjusted during model inference to influence a response. Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters.
- Influence model responses with inference parameters
Temperature - At lower values, the model favors higher-probability tokens and suppresses lower-probability ones; at higher values, lower-probability tokens become more likely, producing more varied output.
Top K - Lower values remove lower-probability tokens from consideration; higher values allow more lower-probability tokens into the generation process.
Top P - Lower values remove lower-probability tokens from the potential token selection; higher values give lower-probability tokens a chance of being selected during text generation.
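To make these ideas concrete, here is a minimal sketch that contrasts a zero-shot and a few-shot prompt and builds the inference parameters for a Bedrock Converse API call. The model ID is an assumption for illustration; check the model access page in your account, and note that the AWS call itself (in `run_example`, which requires credentials) is only sketched.

```python
# Sketch: zero-shot vs. few-shot prompts, plus inference parameters,
# for Amazon Bedrock's Converse API via boto3.

ZERO_SHOT = "Classify the sentiment of this review: 'The battery died in a day.'"

# Few-shot: the same task, but with worked examples steering the output format.
FEW_SHOT = (
    "Review: 'Great sound quality.' Sentiment: positive\n"
    "Review: 'Stopped working after a week.' Sentiment: negative\n"
    "Review: 'The battery died in a day.' Sentiment:"
)

def build_converse_request(prompt: str, temperature: float = 0.2,
                           top_p: float = 0.9, max_tokens: int = 256) -> dict:
    """Build keyword arguments for the bedrock-runtime converse() call."""
    return {
        "modelId": "amazon.titan-text-express-v1",  # assumed model ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {
            "temperature": temperature,  # lower = more deterministic output
            "topP": top_p,               # nucleus sampling cutoff
            "maxTokens": max_tokens,     # caps response length
        },
    }

def run_example() -> str:
    """Requires AWS credentials and Bedrock model access; not run here."""
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(FEW_SHOT))
    return response["output"]["message"]["content"][0]["text"]
```

Lowering `temperature` and `topP` makes the classification answer more repeatable, which is usually what you want for tasks like sentiment labeling.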
- Build and scale generative AI applications with foundation models
- Amazon Bedrock, a fully managed service, offers access to numerous powerful foundation models (FMs) from leading AI providers through one API, enabling the creation of secure, private, and ethical generative AI applications.
- Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.
- Access to FMs from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon for text generation, summarization, question answering, image generation, and more
- Model access, playgrounds, API, and fine-tuning. The playground is a user-friendly graphical interface in the AWS Management Console where you can experiment with running model inference to familiarize yourself with Amazon Bedrock. Use it to test the effects of different models, configurations, and inference parameters on the responses generated for the prompts you enter. For more information, see Generate responses in a visual interface using playgrounds.
- Overview of Amazon Titan model
- Privately fine-tune FMs with your own labeled datasets or continue pretraining with unlabeled data to adapt models to your specific domain or industry.
- Retrieval Augmented Generation (RAG): the process of querying and retrieving information from a data source in order to augment a generated response to a prompt. Enrich FM responses with relevant data from your company's knowledge bases using the Knowledge Bases for Amazon Bedrock feature.
- Agents: Create agents that can plan and execute complex, multi-step tasks across your enterprise systems, knowledge bases, and APIs. Automate the insurance claim lifecycle using Amazon Bedrock Agents and Knowledge Base
- If you wish to see Amazon Bedrock Knowledge Bases and agents in action, check out the Generative AI - Amazon Bedrock Zero to Hero workshop that I delivered.
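As a sketch of the RAG flow described above, the snippet below builds a RetrieveAndGenerate request for the bedrock-agent-runtime API. The knowledge base ID and model ARN are placeholders, not real resources, and the call itself (in `ask_knowledge_base`) needs AWS credentials and an existing knowledge base.

```python
# Sketch: querying a Bedrock knowledge base so retrieved passages
# augment the model's generated answer (RAG).

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build keyword arguments for retrieve_and_generate()."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,    # ID of your knowledge base
                "modelArn": model_arn,       # FM used to generate the answer
            },
        },
    }

def ask_knowledge_base(question: str) -> str:
    """Requires AWS credentials and a provisioned knowledge base; not run here."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(
        question,
        "KB12345678",  # placeholder knowledge base ID
        # placeholder model ARN:
        "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1",
    ))
    return response["output"]["text"]
```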
- Amazon Bedrock, a fully managed service, provides access to powerful foundation models (FMs) via a single API, offering the tools for secure, private, and responsible generative AI application development.
- On-Demand and Batch: These modes let you use FMs on a pay-as-you-go basis without making any time-based term commitments
- Provisioned Throughput - A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference.
- Content filters - Adjust filter strengths to block input prompts or model responses containing harmful content
- Denied topics - Define a set of topics that are undesirable in your application. These topics will be blocked if detected in user queries or model responses
- Sensitive information filters - Amazon Bedrock Guardrails detects sensitive information such as personally identifiable information (PII) in input prompts and model responses and can block or mask it. You can also define sensitive information specific to your use case or organization using regular expressions (regex)
- Word filters - Configure filters to block undesirable words, phrases, and profanity, such as offensive terms or competitor names
- Contextual grounding check - Detect and filter hallucinations in model responses based on grounding in a source and relevance to the user query.
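The guardrail features above map onto the configuration passed to Bedrock's CreateGuardrail API. Below is a minimal sketch of that configuration; the guardrail name, denied topic, blocked word, and grounding threshold are illustrative assumptions, and the actual call (in `create_guardrail`) requires AWS credentials.

```python
# Sketch: a Bedrock guardrail combining content filters, a denied topic,
# word filters, a PII filter, and a contextual grounding check.

def build_guardrail_config() -> dict:
    """Build keyword arguments for bedrock's create_guardrail() call."""
    return {
        "name": "exam-prep-demo-guardrail",  # hypothetical name
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't provide that response.",
        # Content filters: block harmful content in prompts and responses.
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            ]
        },
        # Denied topics: undesirable subject matter for this application.
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "InvestmentAdvice",  # illustrative topic
                "definition": "Recommendations about specific financial investments.",
                "type": "DENY",
            }]
        },
        # Word filters: profanity plus custom words such as competitor names.
        "wordPolicyConfig": {
            "managedWordListsConfig": [{"type": "PROFANITY"}],
            "wordsConfig": [{"text": "CompetitorName"}],  # placeholder
        },
        # Sensitive information filters: mask email addresses in responses.
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        # Contextual grounding check: filter responses poorly grounded in the source.
        "contextualGroundingPolicyConfig": {
            "filtersConfig": [{"type": "GROUNDING", "threshold": 0.75}]
        },
    }

def create_guardrail() -> dict:
    """Requires AWS credentials and Bedrock permissions; not run here."""
    import boto3
    bedrock = boto3.client("bedrock")
    return bedrock.create_guardrail(**build_guardrail_config())
```

Once created, the guardrail is applied at inference time by passing its identifier and version with the model invocation.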
- Amazon Augmented AI (Amazon A2I) - Implement human review of machine learning predictions
- Amazon Bedrock - Build and scale generative AI applications with foundation models
- Amazon Comprehend - Analyze unstructured text
- Amazon Fraud Detector - Detect more online fraud faster using machine learning
- Amazon Kendra - Enterprise search service powered by ML
- Amazon Lex - Build voice and text chatbots
- Amazon Personalize - Add real-time recommendations to your apps
- Amazon Titan - Amazon's own family of foundation models, available through Amazon Bedrock
- Amazon Rekognition - Search and analyze images and videos
- Amazon SageMaker - Build, train, and deploy machine learning models
- Amazon Textract - Easily extract text and data from virtually any document
- Amazon Transcribe - Powerful speech recognition
- Amazon Translate - Powerful neural machine translation
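All of the services above are callable through the AWS SDK. As one small illustration, here is a hedged sketch of calling Amazon Comprehend to analyze the sentiment of unstructured text; the actual call (in `detect_sentiment_example`) requires AWS credentials and Comprehend access in your region.

```python
# Sketch: detecting sentiment in unstructured text with Amazon Comprehend.

def build_sentiment_request(text: str, language: str = "en") -> dict:
    """Build keyword arguments for Comprehend's detect_sentiment() call."""
    return {"Text": text, "LanguageCode": language}

def detect_sentiment_example() -> str:
    """Requires AWS credentials; not run here."""
    import boto3
    comprehend = boto3.client("comprehend")
    result = comprehend.detect_sentiment(
        **build_sentiment_request("I passed the AIF-C01 exam on my first attempt!"))
    return result["Sentiment"]  # e.g. POSITIVE, NEGATIVE, NEUTRAL, or MIXED
```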
- Key definitions before you book your exam! Token - A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as "-ed"), a punctuation mark (such as "?"), or a common phrase (such as "a lot").
- Look up the Exam Prep Enhanced Course on AWS Skill Builder: AWS Certified AI Practitioner (AIF-C01), which includes labs, exam-style questions, and flash cards (8 hours)
- You can simply download the XMind software (basic features require no license) and the mind map from Google Drive. If you're preparing for the AWS Certified Machine Learning Engineer - Associate exam, give it a read as well.