AWS CLI Cheatsheet: Interacting with Amazon S3

A developer's cheatsheet for using Amazon S3 with the AWS CLI: quick commands and tips.

Published Oct 2, 2024
Last Modified Oct 11, 2024

How to Interact with Amazon S3 Using the AWS CLI

Amazon S3 (Simple Storage Service) is one of the most widely used services in AWS, offering scalable object storage for various use cases like backup, data lakes, and website hosting. In this blog post, we will walk through some of the fundamental S3 operations you can perform using the AWS CLI (Command Line Interface).

Prerequisites

Before we dive into the commands, ensure you have the following:
  • AWS CLI Installed: You can install it by following the instructions in the official AWS CLI documentation.
  • AWS Credentials Configured: Make sure you've run aws configure to set up your access keys, default region, and output format, as shown below.
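For reference, the prompts look roughly like this (the values are placeholders for your own credentials):

    aws configure
    # AWS Access Key ID [None]: <your access key id>
    # AWS Secret Access Key [None]: <your secret access key>
    # Default region name [None]: us-east-1
    # Default output format [None]: json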

Key Concepts of Amazon S3 and AWS CLI

1. Creating an S3 Bucket

The first step to using S3 is creating a bucket, where your data will be stored. A crucial point here is that S3 bucket names must be globally unique across all AWS accounts and regions.
This is a common point of confusion: people assume S3 is a global service when, in fact, it is still very much a regional service. You choose where to store your data and can use cross-region replication if you want it available in different geographies. The only global aspect is the namespace, which enforces that bucket names are unique not only across your own AWS accounts and regions, but also across everyone else's accounts and regions, so keep that in mind.
Here’s how you can create an S3 bucket using the AWS CLI:
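For example (mb stands for "make bucket"):

    aws s3 mb s3://my-unique-bucket-name --region us-east-1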
  • my-unique-bucket-name: Replace this with your desired bucket name, ensuring it hasn’t been taken.
  • --region: It is recommended to specify the region for the bucket explicitly. If you omit it, the AWS CLI uses your configured default region (falling back to us-east-1 when none is set).

2. Listing S3 Buckets

Once you have created buckets, you can easily list all the buckets under your AWS account with the following command:
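    # list all buckets in your account, with their creation dates
    aws s3 ls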
This will return a list of all the buckets, along with the date they were created.

3. Uploading Files to S3

One of the most common tasks is uploading files to S3. You can do this using the cp command; think of it as short for "copy":
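    # upload a single file to the bucket
    aws s3 cp file.txt s3://my-unique-bucket-name/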
This command uploads the file.txt from your local directory to the specified S3 bucket.
You can also upload entire directories using the recursive flag:
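    # upload a whole directory (my-local-folder is a placeholder path)
    aws s3 cp my-local-folder s3://my-unique-bucket-name/my-local-folder --recursive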

4. Downloading Files from S3

Downloading files is as easy as uploading. Use the following command to download a file from S3:
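    # download file.txt from the bucket into the current directory
    aws s3 cp s3://my-unique-bucket-name/file.txt .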
This will copy file.txt from the bucket to your local machine.

5. Deleting Files and Buckets

To delete a specific file from your bucket, use:
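    # delete a single object from the bucket
    aws s3 rm s3://my-unique-bucket-name/file.txt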
To delete an entire bucket, you must ensure it is empty first; otherwise, you'll get an error. To delete all objects within the bucket, use:
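    # remove every object in the bucket
    aws s3 rm s3://my-unique-bucket-name --recursive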
Now that the bucket is empty, you can delete the bucket itself by running:
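    # rb stands for "remove bucket"
    aws s3 rb s3://my-unique-bucket-name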

6. Syncing Local Files with S3

The AWS CLI provides a powerful sync command to synchronize files between your local system and S3. For example, to sync a local directory with an S3 bucket, use the following command:
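    # sync a local folder to the bucket (my-local-folder is a placeholder path)
    aws s3 sync my-local-folder s3://my-unique-bucket-name/my-local-folder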
This command uploads new or updated files and skips files that haven't changed.

Important Considerations

1. Globally Unique Bucket Names

As mentioned earlier, S3 bucket names must be globally unique, meaning no two AWS users can have buckets with the same name. When creating a bucket, think of a name that is specific to your organization or project to avoid conflicts.

2. S3 Pricing

Keep in mind that S3 storage is priced based on usage, meaning the more data you store, the higher the cost. AWS also charges for requests (e.g., GET, PUT) and data transferred out of S3.

3. Permissions and Access Control

By default, S3 buckets are private. You need to configure access control policies if you intend to make your bucket or objects public. Use AWS Identity and Access Management (IAM) roles and policies to grant fine-grained access to specific users or services.
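As a rough sketch, attaching a bucket policy with the CLI could look like the command below; the bucket name and policy.json file are placeholder examples, and the bucket's Block Public Access settings must also allow public policies before one like this can be applied:

    # policy.json (hypothetical example granting public read access to objects):
    # {
    #   "Version": "2012-10-17",
    #   "Statement": [{
    #     "Effect": "Allow",
    #     "Principal": "*",
    #     "Action": "s3:GetObject",
    #     "Resource": "arn:aws:s3:::my-unique-bucket-name/*"
    #   }]
    # }
    aws s3api put-bucket-policy --bucket my-unique-bucket-name --policy file://policy.json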

4. Versioning

S3 allows you to enable versioning on a bucket to keep track of multiple versions of objects. This is useful for backup and disaster recovery scenarios. You can enable versioning using the AWS CLI like this:
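    # enable versioning using the lower-level s3api command set
    aws s3api put-bucket-versioning --bucket my-unique-bucket-name --versioning-configuration Status=Enabled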

5. S3 Object Storage Classes

Amazon S3 provides different storage classes based on access patterns. For example, STANDARD is used for frequent access, while GLACIER is ideal for long-term archival storage. You can specify the storage class when uploading files:
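    # upload directly to the GLACIER storage class
    aws s3 cp file.txt s3://my-unique-bucket-name/ --storage-class GLACIER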

Conclusion

Amazon S3 is a powerful and flexible storage service, and interacting with it using the AWS CLI simplifies automation and management tasks. With the commands we've covered, you're now equipped to create buckets, upload/download files, sync directories, and manage access efficiently. Always keep in mind bucket name uniqueness and S3 pricing to ensure optimal use of the service.
If you’d like to dive deeper into AWS S3 features, check out the AWS S3 Documentation for more information.
Happy clouding! 🤗
 
