
AWS Account Setup: My 10-Step Checklist After 14 Years
How I would set up my AWS account if I started from scratch today with 14 years of 20/20 hindsight.
- Using AWS for more than "just another VM"
- Infrastructure as Code (IaC) (Bash scripts again -> CloudFormation -> Ansible -> Chef -> Terraform)
- Enable MFA on the root user - the first step everyone should take when creating a new AWS account. You should not use the root user for day-to-day activity, so setting up a non-root user comes right after.
- Enable AWS Cost Explorer - lets you see exactly which resources and services are running in which region, along with their current and projected costs.
- Enable IAM Identity Center - this enables AWS Organizations as well; then set the AWS access portal URL to a friendly name so the URL you log into your AWS account with is easy to remember.
- Enable Cost Optimization Hub - at some point you'll want to compare resource consumption against what you have provisioned, and then optimize for your specific workload.
- Use IaC - Set up Terraform to use S3 to store the state file, and create your infrastructure.
- CI/CD pipeline for Terraform - allows previewing changes, and then applying them by merging the PR.
- Enable AWS CloudTrail - tracks all changes across your AWS accounts and helps with troubleshooting.
- Set up a Budget + Alert - create an AWS Budget with an alert at 75% of a set amount so you are notified if you spin up resources that could cost more than you had planned.
- Centrally managing users - Creating an Admin and a Developer group in Identity Center with a user in each, then expanding the groups as needed.
- Local access - Setting up my local AWS CLI config to use the new user, along with the other tools used when building on AWS.
Make sure to pay attention to step 3 and set the friendly name for your access portal URL; we'll use it later to configure the AWS CLI for access to our account.
Terraform stores what it knows about your infrastructure in a state file, which we'll keep in an S3 bucket, with a DynamoDB table used to lock the state while `plan` and `apply` commands are being run. In theory all you really need is the S3 bucket, but it is highly recommended to enable versioning on it, encrypt it at rest, and also have that DynamoDB table for locking.
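In backend-config terms, that combination looks roughly like the block below - this is the kind of configuration the bootstrap step later generates into `terraform.tf` for us (bucket, key, table, and region here are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "replace-with-unique-bucket" # versioned + encrypted bucket
    key            = "terraform.tfstate"          # path of the state file in the bucket
    region         = "us-east-1"
    encrypt        = true                         # encrypt the state object at rest
    dynamodb_table = "terraform-state-lock"       # lock table to prevent concurrent runs
  }
}
```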
We'll do all of the bootstrapping from AWS CloudShell (it already has the AWS CLI and `git` set up in it), and then we need to take care of the following:
- Change the region to the one where you want to create your infrastructure
- Install Terraform (one way to do this is shown after this list)
- Create a new git repo and set up access to it
- Add a GitHub PAT to allow accessing the repo from CodeBuild
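The exact install commands aren't shown here; one way to install Terraform on CloudShell's Amazon Linux environment is via the HashiCorp yum repository:

```bash
# Install Terraform from the official HashiCorp repo for Amazon Linux.
# Note: packages installed outside $HOME don't persist across CloudShell sessions.
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
terraform version
```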
Create the new repository on GitHub and set up access to it from CloudShell with `git` - I'm using a temporary SSH key here, which we'll delete again at the end. Next, create a fine-grained GitHub PAT with the following settings:
- Repository access: All repositories - you can lock it down to just the single repo you are using to bootstrap your AWS account, but this is useful to allow for future pipelines for other repositories.
- Repository permissions:
  - Contents: Read-only - used to read the source code for running in CodeBuild
  - Metadata: Read-only - automatically set when selecting read-only for Contents
  - Pull requests: Read and write - used to write a comment with the result of the build, and the `terraform plan` output for any pull request
  - Webhooks: Read and write - CodeBuild will create a webhook to trigger the relevant build when code is committed / merged to the `main` branch, or a new pull request is opened
Replace `your GitHub PAT` in the following command that you will run in CloudShell - this stores the PAT in Systems Manager Parameter Store so that we can access it later when setting up our CI/CD pipeline for Terraform:
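A sketch of that command - the parameter name is an assumption, so match it to whatever your pipeline code reads:

```bash
# Store the GitHub PAT as an encrypted parameter (name is a placeholder).
aws ssm put-parameter \
    --name "/github/pat" \
    --type "SecureString" \
    --value "your GitHub PAT"
```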
All of our Terraform code will live in a `terraform` sub-directory, so create it after changing into the folder of the newly cloned repo with:
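For example (the repo name is a placeholder for whatever you called yours):

```bash
cd my-aws-account   # folder created by the git clone
mkdir terraform
```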
Change into the `terraform` directory, create a file called `bootstrap.tf`, and add the following to it:
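The original file contents aren't reproduced here; as a sketch, assuming a pre-built bootstrap module (the module source is hypothetical, the two variables are the ones named below):

```hcl
# Sketch of bootstrap.tf - the module is expected to create the S3 state
# bucket (versioned, encrypted), the DynamoDB lock table, and to generate
# terraform.tf / providers.tf for us.
module "bootstrap" {
  source = "github.com/your-org/terraform-aws-bootstrap" # hypothetical source

  state_file_aws_region  = "us-east-1"                # region for the state resources
  state_file_bucket_name = "replace-with-unique-name" # S3 bucket names are global
}
```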
Change `state_file_aws_region` and `state_file_bucket_name` to values for your setup - the region where you want to create this infrastructure, and a unique name for the bucket. Now run the following; after the 2nd command, review the list of infrastructure that will be created, and enter `yes` to create it:
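That is, the standard init / apply pair:

```bash
terraform init   # downloads the providers and the module
terraform apply  # review the planned resources, then enter 'yes'
```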
Once it is done, you will see that `terraform.tf` and `providers.tf` were created; you can have a look inside each. The `terraform.tf` file defines the S3 bucket and DynamoDB table to use for the state, and `providers.tf` specifies the version of Terraform to use, along with the versions of the providers (AWS and local). Currently our state file is only stored locally in CloudShell, so run the following to copy it to the S3 bucket:
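Terraform can move the local state into the newly configured S3 backend with:

```bash
# Re-initialize and migrate the local state file to the S3 backend.
terraform init -migrate-state
```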
Next, we'll add the CI/CD pipeline to the same `bootstrap.tf` file - here are the variables you need to set:
- `github_organization` - either your GitHub username, or the GitHub organization name
- `github_repository` - name of the repo
- `aws_region` - region for the AWS resources
- `state_file_iam_policy_arn` - generated policy that allows access to the state file resources, used for the IAM roles of the CodeBuild projects
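Again as a hedged sketch - assuming the pipeline is another module, with the policy ARN exposed as an output of the bootstrap module:

```hcl
# Sketch only: the module source is hypothetical; the variables are the
# ones listed above.
module "cicd_pipeline" {
  source = "github.com/your-org/terraform-aws-codebuild-pipeline" # hypothetical source

  github_organization       = "your-github-user-or-org"
  github_repository         = "my-aws-account"
  aws_region                = "us-east-1"
  state_file_iam_policy_arn = module.bootstrap.state_file_iam_policy_arn # assumed output
}
```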
This creates two CodeBuild jobs: one allows the `terraform plan` output to be added as a comment on any pull request to allow reviewing; the other runs `terraform apply` when there is a commit on the `main` branch, e.g. when you merge a PR. To create these, run the following:
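As before:

```bash
terraform init   # pull down the new module
terraform apply  # review the two CodeBuild projects, then enter 'yes'
```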
Next, let's centrally manage users with IAM Identity Center. It has the following concepts:
- Groups - a group that users can belong to
- Users - the users that are linked to at least 1 group
- Permission sets - IAM policies specifying access to resources, either AWS-provided or user-created
- Account Assignments - linking a group to the permission set in a specific AWS account
For the account assignments we'll use the `AdministratorAccess` AWS-managed IAM policy, but there may be a scenario where no one is allowed full admin access to a given account. In that case you would want to specify a different IAM policy; I'll write a follow-up post to dive into this in much more detail and link it here when published.

We'll create an `Admin` and a `Developers` group. The `Admin` one will have full access via `AdministratorAccess`, and the `Developers` one will only have read-only access via the `ViewOnlyAccess` IAM policy. We'll also add 2 users: `Mary Major`, who has admin access, and `John Doe`, who only has read-only access as part of the `Developers` group. Create a new file called `identity-center.tf`, and add the following:
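The file's contents aren't reproduced here; below is a sketch using the Terraform AWS provider's Identity Center resources, trimmed to the `Admin` side (the `Developers` group, `John Doe`, and the `ViewOnlyAccess` permission set follow the same pattern). The username is a placeholder, and it assigns access to the current account for simplicity:

```hcl
# Look up the existing Identity Center instance and the current account.
data "aws_ssoadmin_instances" "this" {}
data "aws_caller_identity" "current" {}

locals {
  instance_arn      = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]
}

resource "aws_identitystore_group" "admin" {
  identity_store_id = local.identity_store_id
  display_name      = "Admin"
}

resource "aws_identitystore_user" "mary" {
  identity_store_id = local.identity_store_id
  display_name      = "Mary Major"
  user_name         = "mary.major" # placeholder username

  name {
    given_name  = "Mary"
    family_name = "Major"
  }
}

resource "aws_identitystore_group_membership" "mary_admin" {
  identity_store_id = local.identity_store_id
  group_id          = aws_identitystore_group.admin.group_id
  member_id         = aws_identitystore_user.mary.user_id
}

resource "aws_ssoadmin_permission_set" "admin" {
  name         = "AdministratorAccess"
  instance_arn = local.instance_arn
}

resource "aws_ssoadmin_managed_policy_attachment" "admin" {
  instance_arn       = local.instance_arn
  managed_policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  permission_set_arn = aws_ssoadmin_permission_set.admin.arn
}

resource "aws_ssoadmin_account_assignment" "admin" {
  instance_arn       = local.instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.admin.arn
  principal_id       = aws_identitystore_group.admin.group_id
  principal_type     = "GROUP"
  target_id          = data.aws_caller_identity.current.account_id
  target_type        = "AWS_ACCOUNT"
}

# Developers / John Doe: same resources, attaching
# arn:aws:iam::aws:policy/job-function/ViewOnlyAccess instead.
```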
Instead of running `terraform apply` in CloudShell, let's use our new pipeline. Commit the changes to the branch we created, and then push the changes with:
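Something along these lines - the branch name and commit message are placeholders:

```bash
git add .
git commit -m "Add Identity Center groups, users, and assignments"
git push -u origin identity-center
```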
Open a pull request on GitHub, and the pipeline will add a comment with the plan output - click `Show Plan` to expand it and see all the resources that will be created. After reviewing the changes, merge the PR. If you navigate back to the `main` branch of your repo on GitHub, you will see a yellow/orange dot next to the commit - this indicates the CodeBuild job is running. When it has completed successfully, it will add a green checkmark next to the commit.
To set up local access to the account, run `aws configure sso` locally. It will ask for the following:
- SSO Session name - a name for the session; this can be used later when setting up multiple AWS accounts, or multiple roles to assume. For now, set it to a short string identifying this AWS account
- SSO start URL - open up IAM Identity Center, copy the access portal URL that you set earlier, and paste it in
- SSO region - set it to the same one used in the Terraform code
- SSO registration scopes - set this to `sso:account:access`
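When it completes, `aws configure sso` writes something like the following to `~/.aws/config` (all values below are placeholders):

```ini
[sso-session my-account]
sso_start_url = https://my-friendly-name.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access

[profile my-account-admin]
sso_session = my-account
sso_account_id = 111111111111
sso_role_name = AdministratorAccess
region = us-east-1
```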
Clone the repo to your local machine, and change into the `terraform` directory. Run the following to install the providers and modules, and to set up access to the state file:
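```bash
terraform init   # installs providers/modules and connects to the S3 backend
```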
Next we'll enable CloudTrail. Create a new file called `cloudtrail.tf`, and add the following (updating the values with your own):
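The original file isn't shown; as a stand-in sketch using plain resources rather than the post's module (bucket and trail names are placeholders), a multi-region trail needs an S3 bucket with a policy that lets CloudTrail write to it:

```hcl
resource "aws_s3_bucket" "cloudtrail" {
  bucket = "replace-with-unique-cloudtrail-bucket" # S3 bucket names are global
}

data "aws_iam_policy_document" "cloudtrail" {
  # Allow CloudTrail to check the bucket ACL and write log files.
  statement {
    sid       = "AWSCloudTrailAclCheck"
    actions   = ["s3:GetBucketAcl"]
    resources = [aws_s3_bucket.cloudtrail.arn]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
  }
  statement {
    sid       = "AWSCloudTrailWrite"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.cloudtrail.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/*"]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}

resource "aws_s3_bucket_policy" "cloudtrail" {
  bucket = aws_s3_bucket.cloudtrail.id
  policy = data.aws_iam_policy_document.cloudtrail.json
}

resource "aws_cloudtrail" "this" {
  name                          = "account-trail" # placeholder name
  s3_bucket_name                = aws_s3_bucket.cloudtrail.id
  is_multi_region_trail         = true
  include_global_service_events = true
  depends_on                    = [aws_s3_bucket_policy.cloudtrail]
}
```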
Then create a file called `budget.tf` and add the following - feel free to uncomment the last 3 lines and change them to the values you want; those shown are the defaults for the module:
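The post's module isn't shown here; as a rough equivalent with the plain `aws_budgets_budget` resource, wired to the 75% alert mentioned earlier (amount and email are placeholders):

```hcl
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-account-budget" # placeholder name
  budget_type  = "COST"
  limit_amount = "100" # placeholder; set to your planned monthly spend
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 75 # alert at 75% of the budgeted amount
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["you@example.com"] # placeholder
  }
}
```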
Let's run a `plan` after we pull down the new modules with `init` (`terraform init`, then `terraform plan`), and confirm the output lists the CloudTrail and budget resources. If it all looks good, commit and push the changes, and open a PR as before.

We are now done with CloudShell, so you can select `Delete` for it - this will remove all the changes we made (SSH key, checked out code, terraform, etc). Remember to also delete the temporary SSH key from your GitHub account at this point. Copy the portal access URL from Identity Center again, then log out of the account. Afterwards, paste in the login URL, and log back in. Now is a good time to go check on that PR we created, and merge it if there weren't any errors.

Topics I'm planning to cover in follow-up posts:
- Deep dive into how the modules were built
- Using GitHub Actions instead of CodeBuild for the CI/CD pipeline
- Setting up multiple AWS accounts, with Identity Center access to each
- Creating new GitHub repos for projects / services, each containing the base Terraform to start with a CI/CD pipeline similar to the one in this post
- Managing my personal DNS domains and mail with Route53 and Terraform
- Setting up Traefik as a reverse proxy for my home lab using Route53 for DNS to make it easy to spin up containers with friendly URLs that have SSL certs
- Tips to speed up my workflow when using Terraform for AWS infrastructure
- Overview of how to build Terraform modules and the different ways to host them
- Building out a module to create an ECS cluster to use with multiple AWS accounts (dev, test, prod, etc)
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.