
Break a Monolithic Application Into Microservices With AWS Migration Hub Refactor Spaces and AWS Copilot

AWS lets you decompose a monolith and focus on innovation.

Hemanth Vemulapalli
Amazon Employee
Published Nov 16, 2023
Last Modified May 13, 2024


Introduction

Traditional monolithic architectures are challenging to scale. As an application's code base grows, it becomes complex to update and maintain, and introducing new features, languages, frameworks, and technologies becomes harder to manage. This in turn limits innovation and new ideas. You can use AWS Migration Hub Refactor Spaces to provision a refactor environment that automates the AWS infrastructure needed. There are several approaches to decomposing a monolithic application into microservices: the strangler fig pattern, the leave-and-layer pattern, or refactoring using a multi-account strategy. Each of these approaches helps your business improve the application by reducing change risk for the application's consumers.
Within a microservices architecture, each application component runs as its own service and communicates with other services via a well-defined API. Microservices are built around business capabilities, and each service performs a single function. Programmers are able to use Polyglot or multi-language microservices, which can be written using different frameworks and programming languages. You can then deploy them independently, as a single service, or as a group of services.
In this tutorial I will walk you through the process of decomposing a monolith to microservices leveraging the strangler fig pattern using Refactor Spaces and AWS Copilot. These AWS offerings will do a lot of undifferentiated heavy lifting while allowing you to focus on what matters: innovation.

What You Will Accomplish

You will start by deploying a monolithic Node.js application to a Docker container, then decompose the application to microservices. You will use Refactor Spaces to provision a refactor environment to incrementally refactor to microservices. Refactor Spaces will do this by shielding application consumers from the infrastructure changes as you decompose the application. In this example, the Node.js application hosts a message board with threads and messages between users. After you are done, you can use this tutorial as a reference to build and deploy your own containerized microservices on AWS.
Image depicting a Monolithic application comprising of three services for Users, Threads, and Posts. This monolith is transforming into individual microservices, Users, Threads and Posts.
About
  • ✅ AWS Level: Intermediate - 200
  • ⏱ Time to complete: 140 minutes
  • 🧩 Prerequisites: An AWS account (if you don't already have an account, follow the Setting Up Your AWS Environment tutorial for a quick overview); the AWS CLI, AWS Copilot, and Docker installed and configured; a text editor (this tutorial uses VS Code, but you can use your preferred IDE).
  • 💻 Code Sample: Code sample for the application on GitHub; AWS CloudFormation scripts for Refactor Spaces on AWS Samples.
  • 📢 Feedback: Any feedback, issues, or just a 👍 / 👎 ?
  • ⏰ Last Updated: 2023-11-16

Prerequisites

  • An AWS account: If you don't already have an account, follow the Setting Up Your AWS Environment tutorial for a quick overview.
  • Install and configure the AWS CLI.
  • Install and configure AWS Copilot.
  • Install and configure Docker.
  • A text editor. For this tutorial, we will use VS Code, but you can use your preferred IDE.
  • Check that sufficient quota is available for all required services. For example, this tutorial uses five Virtual Private Clouds (VPCs) and there is a default quota of five VPCs per Region.

Modules

This tutorial is divided into the following modules. Complete each module before moving to the next one.
  1. Setup (20 minutes): In this module, you will install and configure the AWS CLI, install AWS Copilot, and install Docker.
  2. Containerize and deploy the monolith (30 minutes): In this module, you will containerize the application. You will use AWS Copilot to instantiate a managed Amazon ECS cluster running on AWS Fargate, and deploy your image as a container running on the cluster.
  3. Deploy the refactor environment (20 minutes): In this module, you will deploy a Refactor Spaces environment. This will set up the infrastructure to incrementally refactor your application. You will then register the monolith from the previous step as a default route in Refactor Spaces.
  4. Break the monolith (20 minutes): In this module, you will break the Node.js application into several interconnected services. Then you will push each service's image to an Amazon Elastic Container Registry (Amazon ECR) repository.
  5. Deploy microservices (30 minutes): In this module, you will deploy your Node.js application as a set of interconnected services behind an Application Load Balancer. Then, you will use Refactor Spaces to re-route traffic from the monolith to the microservices.
  6. Clean up (10 minutes): In this module, you will terminate the resources you created during the tutorial. You will stop the services running on Amazon ECS, delete the Application Load Balancer, and delete the AWS CloudFormation stack to terminate all underlying resources.

Module One: Setup

Overview Module One

In this module, you will use the AWS command line to install the tools required to complete and configure your environment for the tutorial.
Module Details
  • ⏱ Time to complete: 20 minutes

Implementation Module One

For this tutorial, you will build the Docker container image for your monolithic Node.js application and push it to Amazon Elastic Container Registry (Amazon ECR).

Step 1: Install Software

In the next few steps, you are going to be using Docker, GitHub, Amazon ECS, and Amazon ECR to deploy code into containers. To complete these steps, you will need the following tools.
  1. An AWS account: If you don't have an account with AWS, sign up here. All the exercises in this tutorial are designed to be covered under the AWS Free Tier. Note: Some of the services you will be using may require your account to be active for more than 12 hours. If you have a newly created account and encounter difficulty provisioning any services, wait a few hours and try again.
  2. Docker: You will use Docker to build the image files that will run as containers. Docker is an open-source project. You can download it for macOS or for Windows. After Docker is installed, verify it is running by entering docker --version in the terminal. The version number should display, for example: Docker version 19.03.5, build 633a0ea.
  3. AWS CLI:
    • You will use the AWS Command Line Interface (AWS CLI) to push the images to Amazon ECR. To learn about and download the AWS CLI, see Getting started with the AWS CLI.
    • After AWS CLI is installed, verify it is running by entering aws --version in the terminal. The version number should display, for example: aws-cli/1.16.217 Python/2.7.16 Darwin/18.7.0 botocore/1.12.207.
    • If you already have AWS CLI installed, run the following command in the terminal to validate you are using the latest version: pip install awscli --upgrade --user.
    • If you have not used AWS CLI before, you can configure your credentials.
  4. AWS Copilot: AWS Copilot is an open-source command line interface that helps developers to build, release, and operate production-ready containerized applications. This can be done on AWS App Runner, Amazon ECS, and AWS Fargate. On macOS, you can use brew to install AWS Copilot.
For other platforms, use curl or PowerShell to download the release.
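For reference, the install commands might look like the following. This is a sketch: the Linux release asset name is an assumption, so check the AWS Copilot documentation for the current download instructions.

```shell
# macOS, via Homebrew:
brew install aws/tap/copilot-cli

# Linux (x86_64), via curl -- release asset name assumed:
curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux
chmod +x copilot && sudo mv copilot /usr/local/bin/copilot

# Verify the install:
copilot --version
```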

Step 2: Download and Open the Project

Download the code from GitHub: Navigate to AWS Labs and select Clone or Download to download the GitHub repository to your local environment. You can also use GitHub Desktop or Git to clone the repository.
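If you prefer the command line, the clone might look like this (the repository name is taken from the tutorial's code sample link):

```shell
# Clone the sample project and change into its directory.
git clone https://github.com/awslabs/amazon-ecs-nodejs-microservices.git
cd amazon-ecs-nodejs-microservices
```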

Module Two: Containerize and Deploy the Monolith

Overview Module Two

Containers are lightweight packages of your application's code, configurations, and dependencies. Containers deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers can help applications deploy quickly, reliably, and consistently, regardless of deployment environment.
Image depicts a container vs. a virtual machine, showing how Docker abstracts the underlying operating system from the app and its binaries and libraries.

Why Use Containers?

Launching a container with a new release of code can be done without significant deployment overhead. Code built in a container on a developer's local machine can be moved to a test instance simply by moving the container, with no recompilation required. This increases development speed. At build time, the container can be linked to the other containers required to run the application stack.

Dependency Control and Improved Pipeline

A Docker container image is a point-in-time capture of an application's code and dependencies. This helps an engineering organization create standard pipelines for the application lifecycle. For example:
  • Developers build and run the container locally.
  • A continuous integration server runs the same container and executes integration tests against it to make sure it passes expectations.
  • The same container is shipped to a staging environment, where its runtime behavior can be checked using load tests or manual QA.
  • The same container is shipped to production.
Building, testing, moving, and running the exact same container through all stages of the integration and deployment pipeline can increase quality and reliability.

Density and Resource Efficiency

Containers facilitate enhanced resource efficiency by allowing multiple heterogeneous processes to run on a single system. Resource efficiency is a natural result of the isolation and allocation techniques that containers use. Containers can be restricted to consume certain amounts of a host's CPU and memory. Understanding what resources a container needs, and what resources are available from the underlying host server, helps you right-size your instances: either by using smaller hosts, by increasing the density of processes running on a single host, or by optimizing resource consumption and availability.
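As an illustration of these restrictions, Docker exposes per-container CPU and memory limits directly on the run command. The image name below is hypothetical:

```shell
# Cap a container at half a CPU core and 256 MiB of memory.
docker run --cpus="0.5" --memory="256m" my-app:latest
```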

Flexibility

The flexibility of Docker containers is based on their portability, ease of deployment, and small size. In contrast to the installation and configuration required on a VM, packaging services inside containers makes it easier to move them between hosts, isolates them from failures of adjacent services, and protects them from errant patches or software upgrades to the host system.

Application Overview

Diagram showing the flow from user/client to the monolith. The user accesses the monolith via the ELB. The monolith is running on Amazon ECS on AWS Fargate.
  1. Client: The client makes a request over port 443 to the refactor proxy URL.
  2. Elastic Load Balancer (ELB): AWS Copilot creates an ELB and registers the monolith to the target group.
  3. Containerized Node.js Monolith: The Node.js cluster parent is responsible for distributing traffic to the workers within the monolithic application. This architecture is containerized, but still monolithic, because each container has all the same features as the rest of the containers.

What is Amazon ECS?

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications. Amazon ECS will launch, monitor, and scale your application across flexible compute options with automatic integrations to other supporting AWS offerings that your application needs. Amazon ECS supports Docker containers. With API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features. These include security groups, Elastic Load Balancing, EBS volumes, and AWS Identity and Access Management (IAM) roles.
You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business or application-specific requirements.
There is no additional charge for Amazon ECS. You pay for the AWS resources (for example, EC2 instances or EBS volumes) you create to store and run your application.

What You Will Accomplish in Module Two

In this module, you will use AWS Copilot to instantiate a managed Amazon ECS cluster running on AWS Fargate. You will then deploy your image as a container running on the cluster.
Shows the flow of what will be accomplished in this module: downloading the code from GitHub to the user's computer, then using Copilot to push the monolith to Amazon ECR and deploy a load balanced web service using an ELB and Amazon ECS.
Module Details
  • ⏱ Time to complete: 30 minutes

Implementation Module Two

Follow these step-by-step instructions to deploy the Node.js application using AWS Copilot.

Step 1: Create an AWS Copilot Application

An AWS Copilot Application is a group of services and environments. Think of it as a label for what you are building; in this example, it's an API built as a monolithic Node.js application. In this step, you will create a new, empty Application. In the terminal or command prompt, enter the following and name the Application api.
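A sketch of the command:

```shell
# Create a new Copilot Application named "api".
copilot app init api
```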
Initializing an AWS Copilot Application creates an empty application consisting of roles to administer StackSets, Amazon ECR repositories, KMS keys, and S3 buckets. It also creates a local directory in your repository to hold configuration files for your application and services. The output should look something like this after it finishes:

Step 2: Create the Environment

Copilot Environments are the infrastructure where applications run. AWS Copilot provisions a secure VPC, an Amazon ECS cluster, a load balancer, and the other resources required by the application. To use your AWS credentials, enter copilot env init and choose the profile default. Name the Environment monolith.
Choose Yes, use default.
It will now start creating the infrastructure needed. AWS Copilot creates a manifest.yml that is used to configure the environment. Once the creation is done, you should see similar output to this:
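The same environment can also be created non-interactively. The flags below are from the Copilot CLI, though prompts and defaults may differ by version:

```shell
# Create the environment named "monolith" using the default AWS profile
# and Copilot's default environment configuration.
copilot env init --name monolith --profile default --default-config
```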

Step 3: Deploy the Environment

The next step is to deploy the environment and provision the services for the application. To deploy, enter copilot env deploy --name monolith in the terminal.

Step 4: Create the Monolith AWS Copilot Service

A Copilot Service runs your containers. An internet-facing service can be a Request-Driven Web Service, which runs on AWS App Runner, or a Load Balanced Web Service, which runs on Amazon ECS with Fargate and provisions an Application or Network Load Balancer with appropriate security groups.
Other Service types include a Backend Service, which lets services communicate within the application but is not reachable from the internet, and a Worker Service, which is used for asynchronous service-to-service messaging with Amazon Simple Queue Service (Amazon SQS).
This tutorial uses a Load Balanced Web Service for the Internet-facing monolith. To create the monolithic service, enter copilot svc init and choose Load Balanced Web Service.
Name the service monolith.
Enter the path to the Dockerfile: 2-containerized/services/api/Dockerfile.
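Alternatively, the same answers can be supplied as flags in a single command. A sketch, with the Dockerfile path taken from this step's prompt:

```shell
copilot svc init --name monolith \
  --svc-type "Load Balanced Web Service" \
  --dockerfile ./2-containerized/services/api/Dockerfile
```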
Change directory to ./amazon-ecs-nodejs-microservices/copilot/monolith and examine manifest.yml to see how the service is configured. Note that the http parameter defines the path for the app. The Node.js application, server.js, defines the base route as /.
To deploy the monolith Service, enter copilot svc deploy --name monolith in the terminal. When you deploy the service, the container is built locally by Docker and pushed to your Elastic Container Registry. The service pulls the container from the registry and deploys it in the environment. The monolith application is running when the deployment is complete.

Step 5: Confirm the Deployment

When deployment completes, AWS Copilot prints the URL to the service in the output.
You can test the application deployment by entering queries in a browser, such as:
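For example, with curl, replacing the placeholder hostname with the URL Copilot printed:

```shell
curl http://<your-service-url>/api/users
curl http://<your-service-url>/api/threads
curl http://<your-service-url>/api/posts
```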

Module Three: Deploy the Refactor Environment

Overview Module Three

A Refactor Spaces environment provides the infrastructure, multi-account networking, and routing needed to incrementally modernize applications. Refactor Spaces environments include an application proxy that models the Strangler Fig pattern. This helps you transparently add new services to an external HTTPS endpoint and incrementally route traffic to the new services. Refactor Spaces optionally bridges networking across AWS accounts to allow legacy and new services to communicate while maintaining the independence of separate AWS accounts.

Why Refactor Spaces?

Migration Hub Refactor Spaces simplifies application refactoring by:
  • Reducing the time needed to set up a refactor environment.
  • Reducing the complexity of iteratively extracting capabilities as new microservices and re-routing traffic.
  • Simplifying management of existing apps and microservices as a single application, with flexible routing control, isolation, and centralized management.
  • Helping dev teams achieve and accelerate technology and deployment independence by simplifying development, management, and operations while apps are changing.
  • Simplifying refactoring to multiple AWS accounts. Refer to the Refactor Spaces architecture reference for additional details.

What You Will Accomplish in Module Three

In this module, you will deploy a Refactor Spaces environment along with a Refactor Spaces application using AWS CloudFormation.
Diagram showing the flow from user/client to the monolith. The user accesses the monolith via the refactor proxy URL. The ELB is registered as a default route in the refactor environment. The monolith is running on Amazon ECS on AWS Fargate.
  1. Client: The client makes a request over port 443 to the refactor proxy URL.
  2. AWS Migration Hub Refactor Spaces: Refactor Spaces provides an application that models the Strangler Fig pattern for incremental refactoring.
  3. Elastic Load Balancer (ELB): AWS Copilot creates an ELB and registers the monolith to the target group.
  4. Containerized Node.js Monolith: The Node.js cluster parent is responsible for distributing traffic to the workers within the monolithic application. This architecture is containerized, but still monolithic, because each container has all the same features as the rest of the containers.
Module Details
  • ⏱ Time to complete: 20 minutes

Implementation Module Three

Step 1: Download Templates

Navigate to AWS Samples and select Clone or Download to download the GitHub repository to your local environment. Copy the rs.yaml and rs-service-op.yaml files into the repository that you downloaded in Module 1. You can also do this with curl / Invoke-WebRequest without cloning the whole repository:
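A sketch of the download; the raw-file URLs are left as placeholders because they depend on the repository layout:

```shell
# Substitute the actual aws-samples repository path for <repository>.
curl -O https://raw.githubusercontent.com/aws-samples/<repository>/main/rs.yaml
curl -O https://raw.githubusercontent.com/aws-samples/<repository>/main/rs-service-op.yaml
```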

Step 2: Deploy Refactor Spaces

In this step, you deploy an AWS CloudFormation template to create a Refactor Spaces environment and application, and to register the monolith as the default service and route.
  • Run the following command from the root directory of this project to deploy a refactor environment. Replace <<Stack Name>> with a name of your choice, and <<MonolithUrl>> with the Copilot CLI output from the last module, with /api appended (the monolith listens on /api). For example: http://api-m-Publi-5SPOLPJTUB2C-558916521.us-east-1.elb.amazonaws.com/api
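A hedged sketch of the deploy command; the parameter key MonolithUrl and the capabilities flag are assumptions, so match them to the parameters defined in rs.yaml:

```shell
aws cloudformation deploy \
  --template-file rs.yaml \
  --stack-name <<Stack Name>> \
  --parameter-overrides MonolithUrl=<<MonolithUrl>> \
  --capabilities CAPABILITY_NAMED_IAM
```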
This will take some time. While it runs, you should see output like the following (with a different URL):
Once done, you will see Successfully created/updated stack - monolith-tutorial.

Step 3: Test Your Monolith

In the previous step, you created resources for the refactor environment using CloudFormation. Now, run the following command to access the outputs from the deployment.
NOTE: Save this output to a text file for later use.
Command:
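A sketch of the call using the AWS CLI, with the stack name as chosen in the previous step:

```shell
aws cloudformation describe-stacks \
  --stack-name <<Stack Name>> \
  --query "Stacks[0].Outputs"
```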
Response:
To see the output in JSON format, copy the rsProxyURL value from the output above, append /users, /threads, or /posts, and paste the result into a web browser. The following screenshot is from Firefox, which formats JSON output for readability.
JSON output when accessing the monolith

Module Four: Break the Monolith

Overview Module Four

The final application architecture uses Refactor Spaces, Amazon ECS, and the Application Load Balancer.
Diagram depicts the flow of traffic to the application after it is refactored to microservices. Each service is registered with Refactor Spaces, which uses the proxy to route traffic based on HTTP requests.
  1. Client: The client makes traffic requests over port 80.
  2. Load Balancer: The Application Load Balancer routes external traffic to the correct service. It inspects the client request and uses routing rules to direct the request to an instance and port for the target group.
  3. Target Groups: Each service has a target group that tracks the instances and ports of each container running for that service.
  4. Microservices: Amazon ECS deploys each service into containers across the cluster. Each container handles only a single feature.

Why Microservices?

Isolation of Crashes

Engineering organizations can and do have fatal crashes in production. Because they are isolated by nature, microservices can limit the impact of such crashes. A best-practice microservices architecture is decoupled by default, which means that if one microservice crashes, the rest of your application can continue to work as expected.

Isolation for Security

In a monolithic application, the blast radius is limited to the application boundaries. If one feature or function is compromised, it can be assumed the full application is to some extent also vulnerable. For example, if the vulnerability included remote code execution, you can assume that an attacker could have gained horizontal access to other system features. However, when microservice best practices are followed, you will limit the blast radius to the one feature or function the microservice supports. Separating features into microservices using Amazon ECS helps to secure access to AWS resources by giving each service its own AWS Identity and Access Management (IAM) role.

Independent Scaling

When features are broken into microservices, the amount of infrastructure and the number of instances for each microservice can be scaled independently. This helps to measure and optimize the total cost of ownership by feature. If one particular feature requires scale changes, other features are not impacted. Using an ecommerce example, when customers browse your website at a higher rate than they purchase, you can scale the microservices separately: the scale for catalog browsing or search can be set higher than the scale for the checkout function.

Development Velocity

In a tightly coupled monolith, adding a new feature requires full regression and validation of the other features in the application. Microservice architectures that follow loose-coupling best practices allow for independent feature changes. Developers can be confident that any code they write will only impact other features when they explicitly write a connection between two microservices. This independence lowers risk in development and helps a team build faster.

What You Will Accomplish in Module Four

In this module, you will break the Node.js application into several interconnected services and create an AWS Copilot Environment and Service for each microservice.
Module Details
  • ⏱ Time to complete: 20 minutes

Implementation Module Four

Follow these step-by-step instructions to break apart the monolith to microservices.

Step 1: Create the Environments

In the previous module, you created and deployed the api Application. You can reuse that Application and deploy new environments for the microservices. In this module, the code for the application has been divided into three microservices: posts, threads, and users. Each one will be deployed as a service in a new environment. The monolith environment and service were deployed using the interactive CLI menu; in this module, you can create environments non-interactively by specifying flags on the copilot env init command. Create the posts environment first.
Select Yes, use default.
Deploy the Environment using copilot env deploy --name posts. You will see output similar to this:
Repeat for the users and threads environments.
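Taken together, the non-interactive commands for one environment might look like this (flags as in the Copilot CLI):

```shell
copilot env init --name posts --profile default --default-config
copilot env deploy --name posts
# Repeat with --name users and --name threads.
```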

Step 2: Create the Services

Create a service for posts by specifying flags instead of using the interactive menu.
Repeat for the users and threads microservices.
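A sketch, assuming the microservice Dockerfiles mirror the monolith's layout under a 3-microservices directory (the path is an assumption; use the paths in your clone of the repository):

```shell
copilot svc init --name posts \
  --svc-type "Load Balanced Web Service" \
  --dockerfile ./3-microservices/services/posts/Dockerfile
# Repeat for users and threads.
```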

Step 3: Edit the Path in the manifest.yml for Each Microservice

AWS Copilot sets the path to a Service based on the Service name. However, the route to each microservice in server.js is api/<<service name>>. Edit the path in each microservice's manifest and prefix it with api/.
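One way to script the edit. This self-contained sketch assumes the generated manifest contains a line like path: 'posts'; in the real project you would run only the sed line against the manifest Copilot generated, once per microservice:

```shell
# Create a stand-in manifest like the one Copilot generates (for demonstration).
mkdir -p copilot/posts
cat > copilot/posts/manifest.yml <<'EOF'
http:
  path: 'posts'
EOF
# Prefix the route with api/ so it matches the route defined in server.js.
sed -i.bak "s|path: 'posts'|path: 'api/posts'|" copilot/posts/manifest.yml
grep "path:" copilot/posts/manifest.yml
```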

Module Five: Deploy Microservices

Overview Module Five

This is the process to deploy the microservices and safely transition the application's traffic away from the monolith.
Diagram depicts the steps required to incrementally refactor from a monolith registered with Refactor Spaces to microservices using the refactor environment.
  1. Switch the Traffic: This is the starting configuration: the monolithic Node.js app running in a container on Amazon ECS.
  2. Deploy Microservices: Using the three container images built and pushed to Amazon ECR by AWS Copilot, you will deploy three microservices.
  3. Register with Refactor Spaces: Register the microservices' load balancer DNS with Refactor Spaces to transparently route traffic away from the monolith.
  4. Shut Down the Monolith: Shut down the monolith by deleting the monolith Copilot service. Refactor Spaces routes traffic to the running microservices for the respective endpoints.

What You Will Accomplish in Module Five

In this module, you will deploy your Node.js application as a set of interconnected services using AWS Copilot. Then, you will stop the monolith service and shift traffic from the monolith to the microservices.
Module Details
  • ⏱ Time to complete: 30 minutes

Implementation Module Five

Follow these step-by-step instructions to deploy the microservices.

Step 1: Deploy the Microservices

For each microservice, enter copilot svc deploy --name <microservice> --env <microservice>. As with the monolith service, AWS Copilot builds a container for the microservice, pushes it to a repository, and deploys it on an Amazon ECS cluster running on Fargate.
Repeat for the users and threads microservices.
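The three deployments, spelled out:

```shell
copilot svc deploy --name posts --env posts
copilot svc deploy --name users --env users
copilot svc deploy --name threads --env threads
```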

Step 2: Register Your Microservice With Refactor Spaces

Register each microservice as a service in the Refactor Spaces application you created in an earlier module. You can then configure routing for the URI-based endpoints. Refactor Spaces sets the URI paths in the proxy and automates the cut-over.
  • To register the services and create routes in Refactor Spaces, replace the ApplicationId and EnvironmentId parameters with the appropriate values from the output of Module Three. Then pass the Application Load Balancer URLs for all three services from the Copilot commands in the previous step.
  • This command registers the three microservices deployed previously as services with Refactor Spaces.
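A hedged sketch of the registration. The parameter names here are illustrative only; match them to the parameters actually defined in rs-service-op.yaml:

```shell
aws cloudformation deploy \
  --template-file rs-service-op.yaml \
  --stack-name microservices-tutorial \
  --parameter-overrides \
      ApplicationId=<<ApplicationId>> \
      EnvironmentId=<<EnvironmentId>> \
      PostsUrl=<<PostsAlbUrl>> \
      UsersUrl=<<UsersAlbUrl>> \
      ThreadsUrl=<<ThreadsAlbUrl>> \
  --capabilities CAPABILITY_NAMED_IAM
```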

Step 3: Shut Down the Monolith

To shut down the monolith service, delete the monolith Service. Enter copilot svc delete --name monolith in the terminal.

Step 4: Test Your Microservices

Navigate to the Refactor Spaces console, open the rs-tutorial-env environment, and then open the rs-tutorial-app application. You should see the four services and their corresponding routes.
AWS Console screenshot of the Refactor Spaces environment. Shows the Refactor Spaces application that acts as a proxy. Also, shows the services registered during the process of incremental refactoring.
Copy the ProxyURL and paste it into a new browser tab. Append /users, /threads, or /posts to view the output. NOTE: DNS updates take some time, so you may see a "not found" or internal server error. In this case, wait and refresh until the routes resolve.
JSON output of the application after it is refactored.

Module Six: Clean Up

Overview Module Six

It is a best practice to delete resources when the tutorial is complete to avoid ongoing charges for running services. However, this step is optional. You can also keep the resources and services deployed for a detailed examination of the infrastructure, or as a template for future deployments.

What You Will Accomplish in Module Six

In this module, you will terminate the resources you created during this tutorial. You will stop the services running on Amazon ECS, delete the Application Load Balancer, and delete the AWS CloudFormation stacks to terminate all underlying resources.
Module Details
  • ⏱ Time to complete: 10 minutes

Step 1: Delete the Application

You can delete all services and infrastructure by entering:
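A sketch of the command (it will ask for confirmation before deleting anything):

```shell
# Delete the Copilot application, its services, environments, and pipeline resources.
copilot app delete
```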

Step 2: Delete Refactor Spaces

You can delete the Refactor Spaces environment and services by deleting the CloudFormation stacks created in the previous modules. First delete the services stack, and then delete the environment stack, by passing the <<stackname>> to the following command.
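Using the AWS CLI, the deletions might look like this:

```shell
# Delete the services stack first, then repeat for the environment stack.
aws cloudformation delete-stack --stack-name <<stackname>>
aws cloudformation wait stack-delete-complete --stack-name <<stackname>>
```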

Conclusion

Congratulations! You have completed the tutorial. You learned how to run a monolithic application in a Docker container, deployed the same application as microservices, and then switched the traffic to the microservices without incurring downtime. Refer to the links below for additional learning.
Happy refactoring!
Learn more about using Amazon ECS to build and operate containerized architectures on AWS. See Amazon ECS resources.
Integrate Refactor Spaces resources directly into Copilot using Overrides. Learn about CDK Overrides.
Learn best practices for building and running microservices on AWS to speed up deployment cycles, foster innovation, and improve scalability. Read the whitepaper.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
