Amazon Bedrock in Real Card Games
How Amazon Bedrock Enhances Poker Strategies and Player Performance
masainox
Amazon Employee
Published Sep 27, 2024
Last Modified Sep 28, 2024
It had been a long time, but I had a blast playing cards with my relatives during the holidays. Since I only play occasionally, though, I tend to forget the rules, hands, and scoring methods, which often turns things into a bit of a mess... So this time, I decided to let Amazon Bedrock lend a hand with real-world card games.
In this demo, we’ll have Amazon Bedrock sneak a peek at the cards in a game of “Five Card Draw Poker,” a popular poker variant in Japan, and give advice on the next move.
Here’s what the resulting web app looks like:
In this gameplay video, you can see Amazon Bedrock recognizing the playing cards in the webcam feed, showing the current hand, and offering suggestions on the next draw. For instance, it initially spots a pair and advises on the next move. Later, after three cards are drawn, it briefly misidentifies the hand as a flush but ultimately catches a royal flush, the strongest hand in poker, and recommends sticking with it.
Pretty impressive, right?!
The overall architecture is shown in the diagram below. The poker video from the webcam is periodically captured as images using the Amazon Kinesis Video Streams (KVS) S3 Delivery feature and stored in an Amazon S3 bucket. This triggers an AWS Lambda function, which performs hand recognition using Amazon Bedrock. The inference results are then published in real-time to the browser via AWS IoT Core and MQTT over WebSocket.
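The Lambda function's source ships with the sample code rather than appearing in this post, but conceptually it does something like the following Node.js sketch. The prompt wording and response parsing here are illustrative assumptions; the model and the poker-advice topic are the ones the demo uses.

```javascript
// Sketch of the hand-recognition Lambda: read the captured frame from S3,
// ask Claude 3 Sonnet on Bedrock about the hand, publish advice to IoT Core.
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';
import { IoTDataPlaneClient, PublishCommand } from '@aws-sdk/client-iot-data-plane';

const s3 = new S3Client();
const bedrock = new BedrockRuntimeClient({ region: 'us-east-1' });
const iot = new IoTDataPlaneClient();

export const handler = async (event) => {
  // The S3 event carries the bucket/key of the image KVS just delivered.
  const { bucket, object } = event.Records[0].s3;
  const image = await s3.send(new GetObjectCommand({ Bucket: bucket.name, Key: object.key }));
  const imageBase64 = Buffer.from(await image.Body.transformToByteArray()).toString('base64');

  // Ask Claude 3 Sonnet to identify the poker hand and suggest the next move.
  const response = await bedrock.send(new InvokeModelCommand({
    modelId: 'anthropic.claude-3-sonnet-20240229-v1:0',
    contentType: 'application/json',
    body: JSON.stringify({
      anthropic_version: 'bedrock-2023-05-31',
      max_tokens: 1024,
      messages: [{
        role: 'user',
        content: [
          { type: 'image', source: { type: 'base64', media_type: 'image/jpeg', data: imageBase64 } },
          { type: 'text', text: 'These are my cards in five card draw poker. Name the current hand and advise which cards to exchange.' },
        ],
      }],
    }),
  }));
  const advice = JSON.parse(new TextDecoder().decode(response.body)).content[0].text;

  // Push the advice to the browser via MQTT over WebSocket.
  await iot.send(new PublishCommand({ topic: 'poker-advice', payload: JSON.stringify({ advice }) }));
};
```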
Let’s dive in!
First, download the sample code. After extracting the file, take a look at the directory structure.
For this demo, all resources will be created in the us-east-1 region due to model requirements for Bedrock.

To start with Bedrock, follow the user guide to set up model access. In this case, we're using Anthropic's Claude 3 Sonnet model. When prompted for the model use case, enter "builders.flash poker demo".
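If you'd like to confirm access from the CLI before moving on, a quick check might look like this (it simply lists the Anthropic model IDs available in the region):

```bash
aws bedrock list-foundation-models --by-provider anthropic \
  --region us-east-1 --query 'modelSummaries[].modelId'
```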
If you’re new to the AWS Cloud Development Kit (CDK), check out the CDK Developer Guide to install CDK. If you already have CDK installed, you can set it up with the commands below.
We’re using Node.js version
v20.11.1
.This will complete the setup of S3, Lambda, IoT Core, and AWS IAM. When you run
cdk deploy
, the name of the created S3 bucket will be displayed (e.g., CdkStack.BucketNameOutput = cdkstack-pokerbucket3445747a-te5phsjp0uz7
), so be sure to note it down. You can also find the S3 bucket name in the Outputs section of the AWS CloudFormation console stack.Run the following commands to create the stream and configure the S3 Delivery settings.
In the configuration, DestinationConfig specifies the S3 bucket created by the CDK. Make sure WidthPixels and HeightPixels match the camera settings configured in the Device Setup section. This setup will have KVS capture images from the webcam every 10 seconds and store them in the S3 bucket.
The hardware used for reading the playing cards is a Raspberry Pi with a standard webcam. To send video to KVS, install the KVS SDK and sample code on your Raspberry Pi using the steps below.
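The step-by-step instructions live in the KVS C++ producer SDK README; in outline, the build looks something like this on Raspberry Pi OS (the package list is an approximation of the README's prerequisites):

```bash
# Prerequisites for building the SDK and its GStreamer sample
sudo apt update && sudo apt install -y git cmake build-essential pkg-config \
  libssl-dev libcurl4-openssl-dev liblog4cplus-dev \
  libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
  gstreamer1.0-plugins-good gstreamer1.0-tools

# Build the SDK together with the GStreamer plugin and samples
git clone https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp.git
cd amazon-kinesis-video-streams-producer-sdk-cpp
mkdir build && cd build
cmake .. -DBUILD_GSTREAMER_PLUGIN=TRUE
make
```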
Once everything is set up, use the built sample code to start streaming the webcam footage to KVS.
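With AWS credentials exported in the shell, launching the built sample looks something like this (the stream name matches the one created earlier; -w/-h/-f set resolution and frame rate):

```bash
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
export AWS_DEFAULT_REGION=us-east-1

./kvs_gstreamer_sample poker-stream -w 640 -h 480 -f 30
```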
Make sure the command arguments match the values supported by your webcam. In this example, we're using a 640x480 resolution at 30 fps. If you're unsure of your webcam's supported values, you can check them using the gst-device-monitor-1.0 command.

At this stage, you should be able to see the video playing in the KVS console, and images being uploaded to the S3 bucket.
The frontend is built using Vite and AWS Amplify. While there are several ways to receive real-time messages from IoT devices in the browser, including AWS AppSync, here we’ll use a simple approach with Amplify PubSub.
Amplify PubSub enables you to receive MQTT messages over WebSocket without the need for complex schema definitions, making setup straightforward and easy to use. For instance, the following code snippet is all you need to receive real-time messages from IoT Core:
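The snippet itself isn't reproduced in this extract; with the Amplify v6 PubSub API it would be roughly the following (the <IOT_CORE_ENDPOINT> placeholder is filled in during a later step):

```javascript
import { PubSub } from '@aws-amplify/pubsub';

// MQTT over WebSocket against the IoT Core data endpoint
const pubsub = new PubSub({
  region: 'us-east-1',
  endpoint: 'wss://<IOT_CORE_ENDPOINT>/mqtt',
});

// Receive each piece of poker advice as soon as Lambda publishes it
pubsub.subscribe({ topics: ['poker-advice'] }).subscribe({
  next: (data) => console.log('advice received:', data),
  error: (error) => console.error(error),
});
```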
Pretty simple, right?
Install the Vite and Amplify packages using the package-lock.json. Run the following commands:
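Assuming the frontend directory from the sample, this is the usual npm flow:

```bash
cd webui
npm ci
```

Configuring AWS Amplify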
Next, we’ll set up the project with the name
webui
:Set up Amplify Auth
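The init step is interactive; abridged, it looks like this:

```bash
amplify init
# ? Enter a name for the project: webui
# ...answer the remaining prompts to suit your environment
```

Set up Amplify Auth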
For the first question, choose Manual Configuration to allow unauthenticated access. If you select Default Configuration, you can skip the later questions, but you'll need to enable Guest access in the Amazon Cognito console's Identity Pool afterward. Make sure to answer Yes to "Allow unauthenticated logins?".
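In command form, the flow is roughly this (prompts abridged to the ones that matter here):

```bash
amplify add auth
# ? Do you want to use the default authentication and security configuration?
#   → Manual configuration
# ...
# ? Allow unauthenticated logins? → Yes
```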
Now, deploy the backend
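With Amplify, that's a single command:

```bash
amplify push
```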
To enable the browser to access IoT Core, grant the following policy to the unauthRole. In the AWS IAM console, search for webui to find the unauthRole (e.g., amplify-webui-dev-205849-unauthRole) and attach the policy below.
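The policy isn't reproduced in this extract; a minimal version granting only the actions the page needs would be along these lines (in a real deployment you'd scope Resource down to your client and topic ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Connect", "iot:Subscribe", "iot:Receive"],
      "Resource": "*"
    }
  ]
}
```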
With this, the browser is now permitted to receive messages from the MQTT topic poker-advice.

Update the parameters in the following two files:
In index.html, replace <VIDEO_SOURCE_URL> with the URL dynamically generated by KVS. You'll need to generate this URL before running the web app. You can retrieve the URL using the following command. If you have trouble obtaining it, ensure that the video from the webcam is being uploaded correctly, then try running the command again.
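The command isn't shown in this extract; with the AWS CLI, minting a live HLS URL is a two-step affair (stream name assumed from earlier):

```bash
# 1. Get the endpoint that serves HLS sessions for this stream
aws kinesisvideo get-data-endpoint \
  --stream-name poker-stream \
  --api-name GET_HLS_STREAMING_SESSION_URL \
  --region us-east-1

# 2. Ask that endpoint for a live playback URL
aws kinesis-video-archived-media get-hls-streaming-session-url \
  --endpoint-url <endpoint-from-step-1> \
  --stream-name poker-stream \
  --playback-mode LIVE \
  --region us-east-1
```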
Update <IOT_CORE_ENDPOINT> in main.js. You can obtain the endpoint using the following command.
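This one is a standard AWS CLI call; the ATS data endpoint it returns is what goes into main.js:

```bash
aws iot describe-endpoint --endpoint-type iot:Data-ATS --region us-east-1
```

Great job! This is the last step: let's start the web app! For a Vite project, that's presumably the usual dev server:

```bash
npm run dev
```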
Access localhost, and if you see a screen like this, you've succeeded.

Congratulations! Now, as you play poker, sneak a peek at your hand in front of the camera and get advice without letting your opponents catch on.
This time, we used AWS IoT and managed AI services to connect a real-world poker game with cloud-based generative AI in real time. To use it conveniently, you might need a small camera attached to your glasses, but for hand recognition and next-move advice, the simple prompt we used here worked quite well with Amazon Bedrock and Claude 3.
Before settling on Amazon Bedrock, I experimented with building custom models and running inference with Amazon Rekognition. However, Bedrock delivered much higher-quality results and was faster to set up. I also tested it with UNO and mahjong: UNO cards were recognized reasonably well, while mahjong tiles proved a bit more challenging.
Thanks for following along! Hope you have fun with your next game! See you next time! 👋
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.