Master Prompt Engineering: Build AI apps with Claude & PartyRock


In this article, you'll learn about prompt engineering techniques and best practices, explore practical strategies to boost your productivity with Claude, and get step-by-step guidance on how to use PartyRock to build AI apps without coding, simply by crafting prompts.

Viktoria Semaan
Amazon Employee
Published Feb 5, 2024
Last Modified Feb 10, 2024

It's remarkable how quickly generative AI is transforming our lives! In November 2022, OpenAI introduced ChatGPT. Now, just over a year later, AI assistants powered by Large Language Models (LLMs) have changed the way we perform our professional and personal tasks. Every day, I find myself using AI tools to improve my productivity as a developer, content creator, investor, and even as a parent.
After experimenting with numerous AI tools last year, I'm here to share insights to help you get the most out of AI assistants in your own life. To effectively use AI assistants, we need to learn how to speak their language, almost like a parent guiding their children :) We speak with LLMs by designing prompts.
To illustrate how prompts influence the responses of LLMs, let’s look at an example. Since English isn’t my first language, I often turn to AI chatbots for grammar checks.
❌ You might simply ask an AI assistant to fix grammar: <Text> But sometimes, AI assistants try to be overly helpful, rephrasing your words and adding extra context. As you can see in the video below, the Claude AI assistant rephrased my original text, extending it from five lines to six, which wasn’t my intention.
✔️ To steer the AI toward the desired outcome, I learned to use clear and detailed instructions. By refining my prompt, I got the corrected text along with explanations of all the changes: Act as if you are an English teacher. Please help me fix my message. I want you to strictly correct my grammar mistakes and typos and keep my natural tone of voice:
Message: <Message>
Claude AI Prompt examples
In this article, I'll guide you through the process of designing prompts to effectively communicate with LLMs, leading to increased efficiency, enhanced personalization, and more accurate responses. I recommend trying out the examples yourself with Claude as we go along - this hands-on experience will sharpen your prompt engineering skills. In addition to improving your ability to converse with Claude, I'll show you how to build AI-powered apps using just prompts, all within the free PartyRock playground.

Prompt engineering is a rapidly evolving field that focuses on developing, designing, and optimizing prompts to enhance the output of LLMs for your needs. This skill enables you to direct the model's behavior and customize its responses to meet your specific objectives. And guess what? It's become so vital that there's even a new job title for it – AI prompt engineer. The first job postings for AI prompt engineers emerged in 2023, and the demand for these roles is surging, with salaries reaching over $300,000 per year.
Mastering the art of writing effective prompts is now essential across various roles and industries, not only for prompt engineers. Consider software development, for example. A Stack Overflow survey found that 70% of developers are already using AI coding tools or plan to. Developers are leveraging AI to speed up the software development cycle by automating tasks, from writing code and running tests to building entire solutions. This is something that seemed completely impossible just two years ago!
Now, let’s dive into practical implementation to level up your interaction with Claude. Claude is an AI chatbot developed by Anthropic and released in March 2023. In November 2023, Anthropic launched the Claude 2.1 version that powers claude.ai, which we will be using.

Prompt structure and techniques can vary based on the specific LLM being used. Claude is specifically developed to produce text within the framework of an Assistant role, aimed at facilitating an ongoing interaction between a Human User and an AI Assistant.
A prompt includes various elements, each serving a specific purpose in guiding the LLM's response. Let’s look at a breakdown of some common elements.
  • Instruction: “Act as if you are an experienced Data Scientist. Consider the context below to answer the user’s questions with a friendly tone. Provide a short, concise answer.”
The instruction outlines the task or gives guidance on how the model should behave, including the tone it should adopt. Be clear and specific with instructions. Avoid saying what not to do; instead, focus on the task at hand. Assigning a role helps shape the style and tone of the output.
  • Context: “You are at a meeting with customers from a manufacturing company that are interested in starting to develop generative AI applications. They have a question.”
The context supplies additional information to describe the environment and guide the model's responses. This may encompass background information, relevant documents, and examples.
  • User Input: “We would like to establish success criteria for our first POC. Can you suggest metrics?”
User input presents the direct query that the model is expected to address. It can also offer further direction on structuring the response, such as instructions to "think step by step" or examples of the desired format for the output.
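If you want to experiment with these elements programmatically, they can be assembled into a single prompt string. Here is a minimal sketch in Python; the helper function and its layout are my own illustration, not an official template:

```python
def build_prompt(instruction: str, context: str, user_input: str) -> str:
    """Assemble the common prompt elements -- instruction, context,
    and user input -- into a single message for an AI assistant."""
    return (
        f"{instruction}\n\n"
        f"Context: {context}\n\n"
        f"Question: {user_input}"
    )

prompt = build_prompt(
    "Act as if you are an experienced Data Scientist. "
    "Provide a short, concise answer with a friendly tone.",
    "You are at a meeting with customers from a manufacturing company "
    "interested in developing generative AI applications.",
    "We would like to establish success criteria for our first POC. "
    "Can you suggest metrics?",
)
```

Keeping the elements in separate variables like this makes it easy to swap out the user input while reusing the same instruction and context.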
Claude Prompt Structure
When you send a prompt to an AI Assistant, it creates a response that gets stored in what's known as the context window. This means that for any questions you ask afterward, the model can use the information from earlier interactions to provide more relevant answers, because it remembers the previous context.
The quality of the instructions you provide in the prompt influences the quality of its outputs, particularly for complex tasks. There are many techniques used to improve the accuracy, safety, and capabilities of AI assistants. As this field continues to mature, new prompt engineering techniques continue to emerge as users strive for better outcomes and alignment.
Next, let’s explore four common prompt engineering techniques. Feel free to try the examples below using claude.ai to follow along.

Zero-shot prompting is a technique where you ask the model to do something without providing any previous examples of how to do it. Essentially, you're giving the model a task without any prior exposure or "shots" at similar tasks. The expectation is that the model will understand and perform the task based solely on its pre-trained knowledge, without needing specific examples or instructions related to that particular task.
Pro tip: Explicitly allow the AI assistant to say "I don't know" when it is unsure of the answer to your question. This significantly reduces the chance of the assistant inadvertently generating false information, often referred to as "hallucinations".
Example:
In this example, we ask Claude to help us with the classification of AWS certifications by difficulty level without offering extra details. By clearly defining the target classifications (easy, intermediate, or difficult) and outlining the expected output format, we guide the model to make accurate predictions.
Prompt:
Classify the following certification exam as either easy, intermediate or difficult: [AWS Solutions Architect Associate]. If you don't know you can say I don't know or ask for more information.
Output:
Without knowing more about the specifics, I would guess the AWS Solutions Architect Associate certification exam may be at an intermediate level of difficulty for most candidates. But the actual experience can vary based on individual's existing AWS and cloud architecture knowledge. Let me know if you need any clarification or have additional details to help assess the difficulty level.
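To reuse this pattern across many items, the zero-shot prompt above can be generated from a small template. This helper is my own sketch; the wording mirrors the example prompt, including the "I don't know" escape hatch:

```python
def zero_shot_prompt(item: str, labels: list[str]) -> str:
    """Build a zero-shot classification prompt with an explicit
    'I don't know' escape hatch to reduce hallucinations."""
    label_list = ", ".join(labels)
    return (
        f"Classify the following certification exam as either "
        f"{label_list}: [{item}]. If you don't know, you can say "
        "I don't know or ask for more information."
    )

prompt = zero_shot_prompt(
    "AWS Solutions Architect Associate",
    ["easy", "intermediate", "difficult"],
)
```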

Few-shot learning is a technique where you teach the model using only a handful of examples for new tasks. By using a few specific examples and clearly outlining the output format, you guide the model to make accurate inferences. It's important to use diverse examples that cover different aspects of what you're teaching. This variety helps the model understand the full scope of the task, enhancing its ability to generalize from limited data.
Pro tip: Start with simple examples and gradually move to more complex ones, progressively building the model's understanding. This step-by-step approach helps the model better process new information.
Example:
Similar to the previous example, we ask to classify AWS certifications by difficulty level, but this time we provide a few examples to guide the model.
Prompt:
AWS Cloud Practitioner is an example of easy difficulty exam
AWS Solutions Architect Associate is an example of intermediate difficulty exam
AWS Specialty Machine Learning is an example of difficult difficulty exam
Classify AWS Networking Specialty Certification
Output:
Based on the examples you have provided for easy, intermediate and difficult AWS certification exams, I would classify the AWS Networking Specialty certification exam as difficult.
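A few-shot prompt like this one is just the labeled examples followed by the new item, so it is easy to build from a list of (example, label) pairs. A minimal sketch, assuming the same "exam / difficulty" wording as the prompt above:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: labeled examples first, then the
    new item the model should classify."""
    lines = [
        f"{exam} is an example of {level} difficulty exam"
        for exam, level in examples
    ]
    lines.append(f"Classify {query}")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [
        ("AWS Cloud Practitioner", "easy"),
        ("AWS Solutions Architect Associate", "intermediate"),
        ("AWS Specialty Machine Learning", "difficult"),
    ],
    "AWS Networking Specialty Certification",
)
```

Because the examples are data rather than hard-coded text, you can easily vary them to cover different aspects of the task, as recommended above.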

Chain of thought (CoT) is a technique that breaks down complex reasoning tasks through intermediary reasoning steps. You can use both zero-shot and few-shot prompting techniques with CoT prompts. If you are asking Claude to perform a complex task that includes many subtasks, it is helpful to list the subtasks separately.
Pro tip: Use the phrase "Think step by step" in your prompt to invoke CoT reasoning. This method is particularly useful for tasks that involve multiple layers or stages, as it encourages the AI to outline its thought process for each stage, making the reasoning transparent and the conclusions well-founded.
Example:
In this example, we ask Claude to create a study plan for an AWS certification that incorporates different types of resources planned week by week.
Prompt:
Create a weekly preparation plan for AWS Cloud Practitioner exam. Before answering, think step-by-step and break down the plan into study topics. For each week, suggest a study topic that includes the following resources: a lesson from a video course, official documentation (FAQs, whitepapers, guides), and a hands-on exercise for practice (workshop, hands-on lab, tutorial).
Output:
Here is a suggested 6 week preparation plan for the AWS Cloud Practitioner certification exam:
Week 1:
Study Topic: AWS Cloud Concepts
Video Lesson: AWS Cloud Concepts from acloud.guru
Official Documentation: AWS Whitepaper Overview
Hands-On: Sign up for AWS free tier account
....
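Since a chain-of-thought prompt is the task plus a "think step-by-step" instruction and an explicit list of subtasks, it too can be templated. This helper is my own illustration of the pattern:

```python
def cot_prompt(task: str, subtasks: list[str]) -> str:
    """Wrap a complex task in a chain-of-thought instruction,
    listing each subtask the model should reason through."""
    steps = "\n".join(f"- {s}" for s in subtasks)
    return (
        f"{task}\n"
        "Before answering, think step-by-step and address each of "
        f"the following:\n{steps}"
    )

prompt = cot_prompt(
    "Create a weekly preparation plan for the AWS Cloud Practitioner exam.",
    [
        "Break the plan down into study topics",
        "For each week, suggest a lesson from a video course",
        "Include official documentation and a hands-on exercise",
    ],
)
```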

AI assistants powered by LLMs derive their knowledge from the dataset they were trained on, which may not always be the most current or domain-specific. To overcome this limitation, you can provide additional context for the model to consider.
One challenge of constructing long prompts that include context is the token limit, which defines how much context the model can process at one time. For Claude 2.1, the limit is currently 200,000 tokens, translating to roughly 150,000 words, or over 500 pages of material; the model is trained on data up to December 2022. By comparison, the latest GPT-4 Turbo model has a 128,000-token context window and is trained on data up to April 2023.
To manage documents exceeding this limit, enclose lengthy text within XML tags to distinguish it from direct instructions. This ensures clarity and helps the LLM focus on relevant input portions.
Pro tip: For document Q&A, ask the question at the end of the prompt after other input information. Additionally, instruct Claude to identify quotes from the document related to your query before providing an answer. This step ensures that answers are directly linked to the content in the document, improving the relevance and accuracy of AI responses.
Example:
You're considering a new certification and are intrigued by the recently announced AWS Certified Data Engineer - Associate certification, which Claude doesn't yet know about. To help Claude advise which certification best aligns with your skills and goals, you download the exam guides for both the new AWS Certified Data Engineer - Associate and the AWS Certified Developer - Associate and provide them to Claude. You ask Claude to make a recommendation on which certification you should pursue based on your career objectives.
Prompt:
Read the attached documents carefully, because I'm going to ask you a question about it.
First, find the quotes from the document that are most relevant to answering the question, and then print them in numbered order. Quotes should be relatively short. If there are no relevant quotes, write "No relevant quotes" instead.
Then, answer the question, starting with "Answer:". Do not include or reference quoted content verbatim in the answer. Don't say "According to Quote [1]" when answering. Instead make references to quotes relevant to each section of the answer solely by adding their bracketed numbers at the end of relevant sentences.
Here is the first question:
I am considering to take an AWS certification exam. I've been building serverless data analytics pipeline for 3 years using services such as AWS Glue, AWS Step Functions, Amazon Redshift. Which certification should I take?
If the question cannot be answered by the document, say so.
Output:
<documents>
Quote 1: The AWS Certified Data Engineer - Associate (DEA-C01) exam validates a candidate’s ability to implement data pipelines and to monitor, troubleshoot, and optimize cost and performance issues in accordance with best practices.
The exam also validates a candidate’s ability to complete the following tasks:
Ingest and transform data, and orchestrate data pipelines while applying programming concepts.
.....
Answer: Based on the quotes, I would recommend taking the AWS Certified Data Engineer - Associate (DEA-C01) exam given your 3 years of experience building serverless data analytics pipelines using relevant AWS services like AWS Glue, AWS Step Functions, and Amazon Redshift. The DEA-C01 exam validates skills in data ingestion, transformation, pipeline orchestration, data store management, operations, security, and governance, which aligns with your background. Meeting the 2-3 years data engineering experience and 1-2 years hands-on AWS experience requirements makes you a good fit as the target candidate for this certification exam.
</documents>
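The XML-tag pattern described above (documents enclosed in tags, question asked last) can also be generated programmatically. A minimal sketch; the helper, the `index` attribute, and the exact wording are my own illustration rather than a required format:

```python
def document_qa_prompt(documents: list[str], question: str) -> str:
    """Enclose long reference text in XML tags so the model can
    distinguish it from the instructions, then ask the question last."""
    docs = "\n".join(
        f'<document index="{i}">\n{doc}\n</document>'
        for i, doc in enumerate(documents, start=1)
    )
    return (
        "Read the attached documents carefully, because I'm going to "
        "ask you a question about them.\n"
        f"<documents>\n{docs}\n</documents>\n"
        "First, find the quotes from the documents that are most "
        "relevant to answering the question, and then print them in "
        "numbered order. Then, answer the question, starting with "
        '"Answer:".\n'
        f"Here is the question: {question}"
    )

prompt = document_qa_prompt(
    ["<exam guide text for DEA-C01>", "<exam guide text for DVA-C02>"],
    "Which certification should I take?",
)
```

Putting the question at the end, after all the reference material, follows the pro tip above and tends to keep the model focused on the query.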

Now that we've explored prompt engineering techniques, we're ready to apply them in creating an AI-powered app using PartyRock Playground. PartyRock is a space where you can build AI-generated apps using prompts powered by Amazon Bedrock. This allows you to experiment with prompts in a robust environment.
I've developed several AI assistants with PartyRock that help me in content creation for copywriting, visual design, and ideation. I will guide you in building an AWS certification planner app that helps you choose the right AWS certification aligned with your career goals and create a personalized study plan.
In the 90-second video below, I show step by step how to create your own AI app.
Three steps to make AI apps without coding

Go to partyrock.aws and click the 'Sign In' button at the top right corner. Follow the instructions to create a free account. Once you're signed in, click 'Build your own app.' You will see an App Builder popup.
Before I start writing a prompt inside the App Builder, I consider the following aspects of my application:
  • What information will users need to provide? This will be entered into my input widget.
  • What information will the app generate? This will be displayed in my output widget.
  • What type of information will be created? It could be an image or text.
  • Do I need to include a chatbot assistant as part of my app?
After addressing these questions, I begin drafting my initial prompt. If you're unsure of the specifics or looking for inspiration, feel free to start with a short prompt.
Prompt:
This app will suggest an AWS certification that aligns with the user's career goals and create a weekly study plan based on the time the user can allocate for preparation.
After you finish writing a prompt, click 'Generate App.' In a few seconds, your app will appear! 🎉

Let's make some changes. Click the 'Edit' button at the top, then the 'Settings' icon on a widget. A window for updating the widget's settings will appear on the right side.
You can select a model and adjust advanced settings on an AI-powered widget to influence the response generated by the foundation model. Increasing temperature or Top P on a text generation or chatbot widget increases the randomness of the model’s response, resulting in more creative and imaginative text. Lowering the temperature or Top P results in more factual text. For more information about these advanced settings, see Inference parameters for foundation models in the Amazon Bedrock User Guide.
Let's introduce a new 'Chat with an Instructor' feature, enabling app users to ask follow-up questions. Click the 'Add Widget' button at the top and select 'ChatBot.' Update the prompt as below and click 'Save.'
Prompt:
Pretend you are an AWS Instructor. You suggested an AWS Certification in @Suggested Certification based on the user goals in @User Goal and allocated study time specified in @Study Time. The user will now have a follow up conversation with you.

It's time to test our app. Enter information inside the 'User Goal' and 'Study Time' widgets. The app will begin generating responses in the output widgets. Check if the results meet your expectations.
If everything looks good, fantastic! If not, experiment: make adjustments to the settings, refine your prompts, and observe how these changes impact the app's responses.
Ready to share your app? Simply click 'Make Public and Share,' copy the link, and share it with others!
To give you some inspiration, I've created a few apps. Feel free to copy them by clicking the 'Remix' button and making your own adjustments.
Now it's time to bring your ideas to life and create new apps! 🚀☁️

Congratulations on your progress in learning prompt engineering and applying it with Claude and PartyRock. In this article, we covered common prompt engineering techniques. For those looking to advance their skills, I have included links to additional educational resources below.
I hope this article provided you with a solid foundation for unlocking more accurate and helpful insights from your interactions with AI assistants and for becoming an AI-powered app creator.
Best wishes as you continue your AI journey ❤️


Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.