
AWS DeepRacer for Dummies - Part I

A brief introduction to AWS DeepRacer and ML

Published May 16, 2024
Since I started my cloud learning journey, I have often heard and read about AWS DeepRacer. I even got to see the competition in Las Vegas when I attended AWS re:Invent 2022. I know AWS says it is one of the fastest and most entertaining ways of learning machine learning, but I had never really taken the time to understand what it was all about until recently.
Artificial Intelligence (AI) is the talk of the town these days, so I have added it to my list of areas to explore, and learning AWS DeepRacer promises to be an exciting place to start. It is also exciting to know that there are a lot of potential prizes to be won in AWS DeepRacer competitions, including a free ticket to AWS re:Invent!
I am still very new to the AWS DeepRacer world, but thanks to the AWS Educate course, AWS DeepRacer Primer, I have become well-grounded in the core concepts behind AWS DeepRacer. I even earned a badge after completing the course!
So, let’s explore what AWS DeepRacer is, what reinforcement learning (RL) is, and some tips for getting started.
By definition, AWS DeepRacer is a fully autonomous 1/18th scale race car driven by reinforcement learning. It is said to be fully autonomous because it drives itself and makes its own decisions to achieve the fastest lap.
According to AWS, AWS DeepRacer gives you an interesting and fun way to get started with RL. RL is a type of machine learning (ML), and ML is a form of AI, which is all about teaching machines to think or creating machines that seem to have human intelligence.
ML focuses on getting machines to classify things or events, or to make simple predictions based on past behaviour. It aims to enable computers to solve problems by using examples of data from the real world.
RL is a method of ML in which learning happens through trial and error, or through feedback. With RL, the computer program dynamically learns by adjusting its actions based on continuous feedback to maximise a reward.
A simple real-world example of RL is dog training. A dog is often rewarded with food treats, praise, petting, or a favorite toy or game for completing a task during training. By contrast, if the dog gets a scratch on its nose because it bothered a cat, it will probably not bother that cat in the future. The scratch was an unpleasant consequence. Likewise, in RL, desirable behaviour earns higher rewards.
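To make the idea concrete, here is a toy sketch of that trial-and-error loop in Python. It is not DeepRacer code, and the LineWorld environment is invented purely for illustration, but the cycle of state, action, reward, and termination is the same one DeepRacer uses.

```python
import random

# Toy sketch of the RL trial-and-error loop (not DeepRacer code).
# LineWorld is an invented environment: the agent starts at position 0
# and earns a reward of 1.0 only if it reaches position +5.
class LineWorld:
    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):  # action is -1 (step back) or +1 (step forward)
        self.position += action
        done = self.position in (-5, 5)              # termination: either end of the line
        reward = 1.0 if self.position == 5 else 0.0  # feedback from the environment
        return self.position, reward, done

env = LineWorld()
for episode in range(3):                 # each attempt is one episode
    state = env.reset()
    done = False
    while not done:
        action = random.choice([-1, 1])  # pure exploration: random actions
        state, reward, done = env.step(action)
    print(f"Episode {episode}: ended at {state} with reward {reward}")
```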
You don’t need the physical AWS DeepRacer device to get started learning RL. There is a 3D simulation equivalent of the real car, with the same kinds of sensors integrated into the virtual vehicle, so you can start learning in the AWS DeepRacer console.
Here are a few key helpful terms to be familiar with when getting started:
· Agent: In this context, agent refers to the AWS DeepRacer vehicle. In general terms, an agent is the algorithm or piece of software that acts in an environment.
· Environment: In this context, the environment is the AWS DeepRacer race track. In general terms, the environment is the surrounding area within which the agent interacts.
· State: A state is each image the agent captures of its environment. It has been described as the agent’s current position within the environment that is visible or known to the agent.
· Episode: An episode is each iteration where an agent goes from the start position to a termination state. Termination state can mean one of two things: the agent finished a lap around the track, or the agent drove off the track. So, simply stated, an episode is an attempt around the track. Through episodes, the car gathers data or experience.
· Action: This refers to any step the agent takes towards its goal of completing a lap. In AWS DeepRacer, it usually involves changing the vehicle’s speed or steering angle.
· Model: All the parameters in the network that are used to infer actions going forward. The model is improved through episodes, and the new version of the model is used for the next iteration.
· Reward: A way to incentivise behaviour through predefined parameters. It is provided by the environment and is specified through a reward function that is written in code. The reward can be positive or negative. If the agent’s chosen action brings it closer to the goal, it receives a positive reward; if the action takes it away from the goal, it receives a negative reward or no reward. You guide the car by rewarding it appropriately.
The reward function is code you write to tell the agent whether the action it just took was good or bad. Each state is assigned a reward by the reward function. With AWS DeepRacer, the reward is a number assigned to different zones of the track: desired routes earn high numbers and undesired routes earn low ones.
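For illustration, here is a simplified reward function in the style of the centre-line-following example AWS provides in its documentation. It rewards the car for staying close to the middle of the track; track_width and distance_from_center are among the input parameters the DeepRacer console passes to your function.

```python
def reward_function(params):
    # Read two of the input parameters the DeepRacer console supplies
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Markers at increasing distances from the centre line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    # Higher reward the closer the car stays to the centre line
    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # close to off track: near-zero reward

    return float(reward)
```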
Initially, the agent explores: it wanders in random directions to discover its environment. After more training, it exploits: it uses its accumulated experience to decide on the right action.
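For intuition, here is a sketch of one common way this trade-off is handled, known as "epsilon-greedy" action selection. This is purely illustrative: the action names are invented, it is not necessarily how DeepRacer's training algorithm balances the two, and DeepRacer manages all of this for you.

```python
import random

# Sketch of "epsilon-greedy" action selection: with probability epsilon
# pick a random action (explore), otherwise pick the best-known action
# (exploit). The action names below are invented for illustration.
def choose_action(action_values, epsilon):
    if random.random() < epsilon:
        return random.choice(list(action_values))      # explore: try anything
    return max(action_values, key=action_values.get)   # exploit: use experience

values = {"steer_left": 0.2, "go_straight": 0.9, "steer_right": 0.1}
print(choose_action(values, epsilon=0.9))  # early training: mostly random
print(choose_action(values, epsilon=0.1))  # later training: mostly best-known
```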
To sum up, the car is the agent, and the race track is the environment in which it interacts. It does so with the goal of getting to the finish line with the maximum reward.
Now that we have covered the basics, get started learning about AWS DeepRacer by registering for AWS Educate and taking the AWS DeepRacer Primer course. Then come back for part two of this blog, where I take it a step further and explain how to create, train, and evaluate an AWS DeepRacer model, as well as how to join the AWS DeepRacer League.
Happy racing!