From Concept to Playable in Seconds: Creating the Greedy Snake Game with Amazon Bedrock

This blog post demonstrates creating a Greedy Snake game from scratch using Amazon Bedrock and a single text prompt, highlighting how large language models on Amazon Bedrock, such as Llama 3.1, can transform ideas into runnable code in seconds. It also covers prompt engineering best practices.

Haowen Huang
Amazon Employee
Published Aug 22, 2024
In the ever-evolving landscape of software development, time is a precious commodity. As developers, we constantly search for ways to streamline our workflows, reduce technical debt, and bring our ideas to life with greater speed and efficiency. Enter the world of generative AI – a game-changing technology that promises to revolutionize the way we approach coding and problem-solving.
Imagine being able to rapidly prototype and validate your ideas, accelerating the development process and enabling swift proof-of-concept demonstrations. This is the reality that generative AI platforms like Amazon Bedrock offer. By leveraging the power of large language models trained on vast amounts of data, we can harness the capabilities of AI to accelerate our development cycles and evaluate the quality of our prompts for optimal results.
In this blog post, we'll explore how I used Amazon Bedrock to create the classic Greedy Snake game from scratch, and how I then used the same platform to critique and refine my prompts for higher-quality output. By combining the right prompts, models, and techniques, we'll follow the journey from a high-level idea to a playable, visually engaging game in a matter of seconds.
Let's dive in!

The Large Language Model Used

The large language model I used to generate the game code is the Meta Llama 3.1 70B Instruct model on Amazon Bedrock.
Amazon Bedrock is a fully managed generative AI service that gives developers access to a range of foundation models, with options to customize and fine-tune them, for use cases such as code generation.
The Meta Llama 3.1 70B Instruct model is instruction-tuned and well suited to generating high-quality code. You can refer to the model card on Hugging Face for more details.

The Prompt Engineering Approach

The key to using generative AI successfully is prompt engineering - crafting clear, specific prompts that guide the model to generate the desired output. Here's the prompt I used to generate the Greedy Snake game code:
“Write a short and high-quality python script for the following task, something a very skilled python expert would write. You are writing code for an experienced developer so only add comments for things that are non-obvious. Make sure to include any imports required.
NEVER write anything before the ```python``` block. After you are done generating the code and after the ```python``` block, check your work carefully to make sure there are no mistakes, errors, or inconsistencies.
If there are errors, list those errors in tags, then generate a new version with those errors fixed. If there are no errors, write "CHECKED: NO ERRORS" in tags.
Here is the task: write a greedy snake game.
Double check your work to ensure no errors or inconsistencies.”
As you can see, this prompt specifies the expected code quality, the commenting style, the output format, and a self-checking step. Providing this level of specificity is crucial for obtaining high-quality code output from the generative AI model.
On the Amazon Bedrock Chat playground, click “Run” to submit the above prompt to the Meta Llama 3.1 70B Instruct model; the model returns its response within seconds, as shown in the following screenshot.
Screenshot of the Amazon Bedrock Chat Playground with the Prompt
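If you'd rather call the model programmatically than through the console, the same prompt can be sent with the Bedrock Converse API via boto3. The snippet below is a minimal sketch; the region, model ID, and inference parameters are assumptions you should verify against your own account.
```python
import boto3

# Assumptions: region and model ID are illustrative; verify the exact
# Llama 3.1 70B Instruct identifier in your Bedrock console.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

prompt = """Write a short and high-quality python script for the following task, \
something a very skilled python expert would write. ...
Here is the task: write a greedy snake game."""  # the full prompt shown above

response = bedrock.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 2048, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```
A nice property of the Converse API is that it uses one request/response shape across the models Bedrock hosts, so you can swap the model ID while keeping the rest of the code unchanged.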

The Output: Greedy Snake Game Code

After I submitted the prompt, the Llama 3.1 70B Instruct model generated Python code for a fully functional Greedy Snake game.
This code initializes Pygame, setting up constants for the game’s dimensions, block size, and speed. It defines colors for the display elements and initializes the display screen. The code also sets up the font for displaying the score, initializes the snake’s and food’s positions, and sets the initial direction of the snake’s movement.
The game loop continuously checks for user input events, such as quitting the game or changing the snake's direction using the arrow keys. It updates the snake's position based on the current direction, checks for collisions with food or the boundaries, and updates the score accordingly. The game loop also handles rendering the game elements on the screen and caps the frame rate.
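The model's exact output appears in the screenshot further below. For readers who want something to experiment with, here is a minimal sketch in the same spirit; it is not the model's verbatim code, and the dimensions, colors, and speed values are illustrative.
```python
import random
import sys

import pygame

# Illustrative constants for the game's dimensions, block size, and speed
WIDTH, HEIGHT = 640, 480
BLOCK = 20
SPEED = 10  # frames per second

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
GREEN = (0, 200, 0)
RED = (200, 0, 0)

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Greedy Snake")
clock = pygame.time.Clock()
font = pygame.font.SysFont(None, 28)


def random_food():
    """Place the food on a random grid cell."""
    return (random.randrange(0, WIDTH, BLOCK), random.randrange(0, HEIGHT, BLOCK))


def main():
    snake = [(WIDTH // 2, HEIGHT // 2)]
    direction = (BLOCK, 0)  # start moving right
    food = random_food()
    score = 0

    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                sys.exit()
            if event.type == pygame.KEYDOWN:
                # Ignore reversals directly back into the snake's body
                if event.key == pygame.K_UP and direction != (0, BLOCK):
                    direction = (0, -BLOCK)
                elif event.key == pygame.K_DOWN and direction != (0, -BLOCK):
                    direction = (0, BLOCK)
                elif event.key == pygame.K_LEFT and direction != (BLOCK, 0):
                    direction = (-BLOCK, 0)
                elif event.key == pygame.K_RIGHT and direction != (-BLOCK, 0):
                    direction = (BLOCK, 0)

        # Move the snake by prepending a new head segment
        head = (snake[0][0] + direction[0], snake[0][1] + direction[1])

        # Hitting a boundary or the snake's own body ends the game
        if (head[0] < 0 or head[0] >= WIDTH or head[1] < 0 or head[1] >= HEIGHT
                or head in snake):
            pygame.quit()
            sys.exit()

        snake.insert(0, head)
        if head == food:
            score += 1
            food = random_food()
        else:
            snake.pop()  # no food eaten, so the tail advances

        # Render the game elements and cap the frame rate
        screen.fill(BLACK)
        for segment in snake:
            pygame.draw.rect(screen, GREEN, (*segment, BLOCK, BLOCK))
        pygame.draw.rect(screen, RED, (*food, BLOCK, BLOCK))
        screen.blit(font.render(f"Score: {score}", True, WHITE), (10, 10))
        pygame.display.flip()
        clock.tick(SPEED)


if __name__ == "__main__":
    main()
```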
The following image shows the code snippets as the background with a screenshot of the game running in the foreground:
Code Snippets as Background with Game Screenshot in Foreground
What is truly remarkable is that this comprehensive, executable code was generated from a single text prompt, without any supplementary examples or training data. The model was able to turn high-level requirements into complete code, saving a substantial amount of time compared to writing everything from scratch.
Of course, the generated code is not flawless and could be enhanced or extended with additional features. Nevertheless, it provides a solid foundation on which developers can build.

Evaluating the Quality of the Prompt

While the code for the Greedy Snake game was successfully generated, I wanted to objectively evaluate the quality of the prompt I used and see whether I could have phrased it better.
The criteria I used are the 16 prompt-engineering best practices summarized in the book "Generative AI on AWS". I incorporated these 16 best practices into a new prompt that asks the model to evaluate the prompt I had used to generate the Greedy Snake game.
The complete prompt is as follows:
Here are the key prompt-engineering best practices discussed in Chapter 2 of the book “Generative AI on AWS”:
- Be clear and concise in your prompts. Avoid ambiguity.
- Move the instruction to the end of the prompt for large amounts of input text.
- Clearly convey the subject using who, what, where, when, why, how etc.
- Use explicit directives if you want output in a particular format.
- Avoid negative formulations if a more straightforward phrasing exists.
- Include context and few-shot example prompts to guide the model.
- Specify the desired size of the response.
- Provide a specific response format using an example.
- Define what the model should do if it cannot answer confidently (e.g. respond "I don't know").
- Ask the model to "think step-by-step" for complex prompts requiring reasoning.
- Add constraints like maximum length or excluded information for more control.
- Evaluate the model's responses and refine prompts as needed.
- Use disclaimers or avoid prompts the model should not answer for sensitive domains.
- Use XML/HTML tags to create structure within the prompt.
- Focus the model on specific parts of the input text.
- Mask personally identifiable information from the model's output.
Based on the above 16 prompt-engineering best practices, please evaluate the following prompts I used to generate a Greedy Snake Game:
“Write a short and high-quality python script for the following task, something a very skilled python expert would write. You are writing code for an experienced developer so only add comments for things that are non-obvious. Make sure to include any imports required.
NEVER write anything before the ```python``` block. After you are done generating the code and after the ```python``` block, check your work carefully to make sure there are no mistakes, errors, or inconsistencies.
If there are errors, list those errors in tags, then generate a new version with those errors fixed. If there are no errors, write "CHECKED: NO ERRORS" in tags.
Here is the task: write a greedy snake game.
Double check your work to ensure no errors or inconsistencies.”
I submitted the above prompt to the Amazon Bedrock Chat playground, as shown in the following screenshot:
Evaluating the Quality of the Prompt - Submit the Prompt
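The same review can also be run programmatically. The sketch below simply concatenates the best-practices list and the original game prompt into one evaluation prompt and sends it through the Converse API used earlier; the model ID and region are again assumptions to verify in your own environment.
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Both strings are abbreviated here; use the full texts shown above.
best_practices = "- Be clear and concise in your prompts. Avoid ambiguity.\n- ..."
game_prompt = "Write a short and high-quality python script for the following task, ..."

evaluation_prompt = (
    "Here are the key prompt-engineering best practices discussed in Chapter 2 "
    'of the book "Generative AI on AWS":\n'
    f"{best_practices}\n"
    "Based on the above 16 prompt-engineering best practices, please evaluate "
    "the following prompts I used to generate a Greedy Snake Game:\n"
    f"{game_prompt}"
)

response = bedrock.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": evaluation_prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```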
Within seconds, I received the following output from the Llama 3.1 70B Instruct model:
Evaluating the Quality of the Prompt - Model Reply
The model output provides valuable insights into the strengths and potential areas of improvement for my prompt. Let’s examine them in detail.
First, in the “Strengths” section of its reply, the model recognized what my prompt did well.
Additionally, in the “Weaknesses” section, it suggested improvements to enhance the prompt.
What I want to highlight is that the Llama 3.1 70B Instruct model also provided detailed code modification suggestions.

Conclusion

The potential of generative AI to revolutionize software development is immense. By leveraging platforms like Amazon Bedrock and powerful language models, developers can accelerate their workflows, rapidly prototype and validate ideas, iterate faster, and bring those ideas to life with unprecedented efficiency.
The Greedy Snake game example demonstrated the remarkable capabilities of generative AI in transforming a simple prompt into functional code. However, it's crucial to recognize that while generated code can provide a strong foundation, it may require further refinement and optimization.
As generative AI continues to evolve, we can anticipate even more advanced models, better prompting techniques, and tighter integration with development tools. Embracing this technology early will offer a significant competitive advantage.
Ultimately, generative AI is not a replacement for human developers but a powerful tool to augment their capabilities. By combining human creativity with AI, we can unlock new frontiers of innovation and create extraordinary software solutions.
Note: The cover image for this blog post was generated using the SDXL 1.0 model on Amazon Bedrock. The prompt given was as follows:
“A stylized digital illustration with a futuristic and technology-inspired design, depicting a large coiled snake made of sleek metallic materials and circuit board patterns. The snake's body forms the shape of the Amazon Bedrock logo in the center. Surrounding the snake are various coding elements, such as code snippets, programming symbols, and binary patterns, arranged in an abstract and visually striking way. The overall image should convey a sense of innovation, artificial intelligence, and the fusion of technology and creativity”
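For completeness, an image like this can also be generated outside the console. The sketch below shows one way to call SDXL 1.0 through the Bedrock InvokeModel API with boto3; the model ID, region, and generation parameters are assumptions to check against your own environment.
```python
import base64
import json

import boto3

# Assumptions: region, model ID, and generation parameters are illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

cover_prompt = (
    "A stylized digital illustration with a futuristic and technology-inspired "
    "design, depicting a large coiled snake made of sleek metallic materials ..."
)  # abbreviated; use the full prompt shown above

response = bedrock.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",
    body=json.dumps({
        "text_prompts": [{"text": cover_prompt}],
        "cfg_scale": 8,
        "steps": 50,
    }),
)

# The response body contains base64-encoded image artifacts
payload = json.loads(response["body"].read())
image_bytes = base64.b64decode(payload["artifacts"][0]["base64"])
with open("cover_image.png", "wb") as f:
    f.write(image_bytes)
```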
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
