Building Portable Multicloud Applications Without Containers
Build portable multicloud applications WITHOUT containers that run on AWS, Azure, and GCP using cloud native serverless services.
Chris Jones
Amazon Employee
Published Nov 21, 2024
Customers sometimes choose to create portable workloads that leverage the best cloud-native services from their cloud services provider (CSP). For Amazon Web Services this includes AWS Lambda for serverless compute, Amazon DynamoDB for serverless key/value storage with a highly available operating model and nearly limitless concurrency, and more. In this post, I will show you how to use common tools and coding best practices to build cloud-native multicloud applications from one application repository that can run on AWS, GCP, and Azure – all without needing to use containers. This approach is certainly not right for every workload, but for those that do require a high degree of portability, such as Independent Software Vendor (ISV) applications that are created for users of various cloud services providers, this approach is a good way to reduce your costs while accelerating development time.
Some multicloud solutions use cloud-agnostic technologies, such as Docker, which are supported by all major cloud providers, so that applications are not tightly coupled to cloud-native services and remain portable across providers.
However, for various reasons a containerized approach may not always be the best choice for your individual needs. Sometimes a purely cloud-native approach, such as serverless compute, is the best architectural decision. You can use a cloud-native serverless approach while still building applications that are portable across different cloud providers.
Customers get these advantages by building multiple artifacts - one for each targeted cloud provider - from a single core code base in their CI/CD pipelines, rather than one artifact for all cloud providers. This reduces operational and testing burden, produces portable code that can run in any of these environments, and increases deployment velocity. It also keeps unused code out of each artifact, which shrinks the attack surface and avoids deploying code that serves no function in a given environment.
To do this, we must design the application around a pattern that relies on three key existing concepts, which can be implemented in most programming languages, including Java, C#, TypeScript, and Python:
- Hierarchical build tools, like Maven sub-modules
- Abstract classes or interfaces for cloud-specific operations, such as the entry point and DAO operations
- A cloud-agnostic "core" module kept separate from cloud-specific "deployment" modules
The code examples in this post are all written in Java. However, you can see working examples of other languages, including C#, TypeScript, and Python, in this repository.
This pattern has multiple benefits, such as:
- Using cloud-native services. Without this pattern, customers must use technologies that are self-managed. This may not be the right choice in every case due to a variety of factors, such as cost, complexity, development time, or team skills. With this pattern, customers can use cloud native services from each individual cloud provider, such as AWS Lambda, Azure Functions, GCP Firestore, and more.
- Code reusability. This pattern consolidates cloud-agnostic business logic in a "core" module that sits inside a thin cloud-aware wrapper. You can save time by reusing the application's "core" business logic. Any porting or replicating of the application can then focus on relatively simple, "boilerplate" code at the "fringe" of the application, such as the entry point and DAO operations.
- Strangler pattern for cloud migrations instead of lift-and-shift. With this pattern, the same application version can be deployed to multiple cloud providers. This allows a customer to incrementally route a portion of their application traffic to the target cloud (such as AWS), ensure the application performs as expected, route the remaining application traffic to the target cloud, and then retire the source cloud application and infrastructure. Essentially, this is a canary deployment, but across cloud providers. Previous multicloud application development practices would result in considerable refactoring of the code and the creation of different applications for each cloud. This would prevent simultaneous deployments and, therefore, prevent canary deployments across cloud providers. This approach may be best suited for stateless APIs or services.
- Enabling cloud portability. If a customer wants to move an application to a different cloud provider, they do not need to refactor the application to remove that cloud provider's dependencies and related code. Instead, the customer will have multiple versions of the same application - each containing only the targeted cloud provider's dependencies and related code.
When implementing this pattern in code, the code base should be structured similarly to what is shown in the following screenshot - regardless of the language being used.
The java directory is the root directory. It does not have to be named "java". It will likely be the name of the repository.
Under the root directory, there should be two directories: 1) core, and 2) deployments. Each of these directories is discussed in more detail below.
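As a rough sketch, and assuming hypothetical module names for the deployment targets, the layout looks something like this:

```
java/                          (root directory; typically the repository name)
├── pom.xml                    (parent build file listing the sub-modules)
├── core/                      (cloud-agnostic business logic and interfaces)
└── deployments/
    ├── aws-lambda/            (AWS-specific implementations and entry point)
    └── gcp-cloud-functions/   (GCP-specific implementations and entry point)
```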
The core directory (or module) should contain the application's business logic and NO cloud provider dependencies. In this way, the core module remains cloud-agnostic. If a cloud provider's artifact is added to core's dependencies, the core may be considered "pinned" to that CSP. Adding cloud provider artifacts is discussed in the following Deployments section.
The core will contain abstract classes or interfaces (depending on the language being used) for operations at the "fringe" of the application - such as the entry or invocation point and any DAO operations. These abstract classes or interfaces will be implemented in the Deployments module. The following is a diagram to help visualize this.
Concerning the entry or invocation point of the application, these abstract classes or interfaces will have implementation classes in the Deployments directory that will allow objects unique to a cloud provider's proprietary services to be mapped to objects that the core logic expects. For example, if we plan to deploy an application to AWS Lambda and GCP Cloud Functions - both serverless cloud offerings - we will need to define a Java interface like the following to map the data object that the CSP service passes into our application's entry point. The core logic, which is unaware of which cloud provider it is running on, will take a `GenericRequest`. So, each implementation of `GenericRequestMapper` will be responsible for mapping a cloud provider's proprietary object to `GenericRequest` so that the business logic in the core module can act on the data encapsulated in `GenericRequest` (see the following Deployments section for more details).
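A minimal sketch of that interface, together with the cloud-agnostic request object it produces, might look like the following. The fields on `GenericRequest` are illustrative assumptions; a real application would carry whatever data its business logic needs.

```java
import java.util.Map;

// GenericRequest.java (core module) - cloud-agnostic request object the core logic consumes.
// The fields shown here are assumptions for illustration.
public class GenericRequest {
    private final Map<String, String> parameters;
    private final String body;

    public GenericRequest(Map<String, String> parameters, String body) {
        this.parameters = parameters;
        this.body = body;
    }

    public Map<String, String> getParameters() { return parameters; }
    public String getBody() { return body; }
}
```

```java
// GenericRequestMapper.java (core module) - each deployment module implements this for the
// request type of its cloud provider's service (for example, an API Gateway event or an HTTP request).
public interface GenericRequestMapper<T> {
    GenericRequest map(T cloudSpecificRequest);
}
```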
In the same way, the DAO abstract classes or interfaces will have implementation classes in the Deployments directory. Using abstract classes or interfaces for DAO operations will allow for flexibility in the type of data storage system the core library interacts with. It could be Kafka, a relational database, a NoSQL database, a RESTful API, etc. The following is an example of one such DAO interface:
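A sketch of such an interface, assuming a hypothetical `ProductDao` that stores simple string attributes (the method signatures are illustrative):

```java
import java.util.Map;

// ProductDao.java (core module) - hypothetical DAO contract. The core module codes against this
// interface only; the concrete data store (DynamoDB, Firestore, etc.) is supplied by a deployment module.
public interface ProductDao {

    void save(String id, Map<String, String> attributes);

    Map<String, String> findById(String id);
}
```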
The Deployments directory should contain child directories for each cloud platform service that the application is intended to run on, such as a serverless service like AWS Lambda, GCP Cloud Functions, Azure Functions, etc. Each of these child directories should contain the following:
- A dependency on the core artifact and any required CSP SDKs
- Implementation of the core mapper interface
- Implementation of the DAO abstract class or interface
- An entry point (if the serverless service requires one)
The following are examples of implementations of a mapper interface. The implementations handle mapping the data object that the cloud service passes into the application's entry point upon invocation. The mapper implementation should produce an object that the core artifact expects - in this case that object is of type `GenericRequest`.

The following is used in AWS Lambda:
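Here is a sketch of such an implementation, assuming the function is invoked through Amazon API Gateway; the class name `ApiGatewayRequestMapper` is hypothetical:

```java
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;

// Lives in the AWS deployment module; maps the AWS-specific event to the core's GenericRequest.
public class ApiGatewayRequestMapper implements GenericRequestMapper<APIGatewayProxyRequestEvent> {

    @Override
    public GenericRequest map(APIGatewayProxyRequestEvent event) {
        // Copy only the pieces of the AWS event that the core business logic needs
        return new GenericRequest(event.getPathParameters(), event.getBody());
    }
}
```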
The following is used in GCP Cloud Functions:
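And a sketch for GCP, assuming an HTTP-triggered function; the class name `CloudFunctionRequestMapper` is hypothetical:

```java
import com.google.cloud.functions.HttpRequest;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Lives in the GCP deployment module; maps the GCP-specific request to the core's GenericRequest.
public class CloudFunctionRequestMapper implements GenericRequestMapper<HttpRequest> {

    @Override
    public GenericRequest map(HttpRequest request) {
        // Flatten multi-valued query parameters to single values for the core module
        Map<String, String> parameters = new HashMap<>();
        request.getQueryParameters().forEach((name, values) -> parameters.put(name, values.get(0)));
        try {
            String body = request.getReader().lines().collect(Collectors.joining());
            return new GenericRequest(parameters, body);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```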
The following are examples of implementations of a DAO interface. The implementations handle the data store-specific boilerplate logic needed to persist or retrieve data. The following involves calling Amazon DynamoDB and will be included in the AWS Lambda artifact:
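Here is a sketch of a DynamoDB-backed implementation, assuming the AWS SDK for Java 2.x; the class name `DynamoDbProductDao` and the products table are hypothetical, and error handling is omitted for brevity:

```java
import java.util.Map;
import java.util.stream.Collectors;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

// Lives in the AWS deployment module; only this module depends on the DynamoDB SDK.
public class DynamoDbProductDao implements ProductDao {

    private static final String TABLE_NAME = "products"; // hypothetical table name
    private final DynamoDbClient dynamoDb = DynamoDbClient.create();

    @Override
    public void save(String id, Map<String, String> attributes) {
        // Convert the plain attribute map into DynamoDB AttributeValues
        Map<String, AttributeValue> item = attributes.entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey,
                        e -> AttributeValue.builder().s(e.getValue()).build()));
        item.put("id", AttributeValue.builder().s(id).build());
        dynamoDb.putItem(PutItemRequest.builder().tableName(TABLE_NAME).item(item).build());
    }

    @Override
    public Map<String, String> findById(String id) {
        GetItemRequest request = GetItemRequest.builder()
                .tableName(TABLE_NAME)
                .key(Map.of("id", AttributeValue.builder().s(id).build()))
                .build();
        // Returns an empty map when the item does not exist
        return dynamoDb.getItem(request).item().entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().s()));
    }
}
```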
The following involves calling GCP Firestore and will be included in the GCP Cloud Functions artifact:
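Here is a sketch of a Firestore-backed implementation, assuming the google-cloud-firestore client library; the class name `FirestoreProductDao` and the products collection are hypothetical:

```java
import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;

// Lives in the GCP deployment module; only this module depends on the Firestore SDK.
public class FirestoreProductDao implements ProductDao {

    private static final String COLLECTION = "products"; // hypothetical collection name
    private final Firestore firestore = FirestoreOptions.getDefaultInstance().getService();

    @Override
    public void save(String id, Map<String, String> attributes) {
        try {
            // set(...) returns an ApiFuture; block until the write completes
            firestore.collection(COLLECTION).document(id).set(new HashMap<String, Object>(attributes)).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException("Failed to write document " + id, e);
        }
    }

    @Override
    public Map<String, String> findById(String id) {
        try {
            DocumentSnapshot snapshot = firestore.collection(COLLECTION).document(id).get().get();
            Map<String, Object> data = snapshot.getData();
            if (data == null) {
                return Map.of(); // document not found
            }
            return data.entrySet().stream()
                    .collect(Collectors.toMap(Map.Entry::getKey, e -> String.valueOf(e.getValue())));
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException("Failed to read document " + id, e);
        }
    }
}
```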
Now we need an entry point that can be invoked by the intended cloud service (such as AWS Lambda or GCP Cloud Functions). This entry point should 1) call the mapper to produce a `GenericRequest` and 2) pass that `GenericRequest` to the core entry point. The following is an entry point for AWS Lambda:
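Here is a sketch, again assuming an API Gateway trigger; `LambdaEntryPoint` and `CoreHandler` (a stand-in for whatever class exposes the core module's entry point) are hypothetical names:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// Lives in the AWS deployment module; this is the handler class the Lambda function is configured to invoke.
public class LambdaEntryPoint
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private final GenericRequestMapper<APIGatewayProxyRequestEvent> mapper = new ApiGatewayRequestMapper();
    // CoreHandler is a hypothetical stand-in for the core module's entry point
    private final CoreHandler coreHandler = new CoreHandler(new DynamoDbProductDao());

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        // 1) map the AWS-specific event, 2) hand the GenericRequest to the cloud-agnostic core
        String result = coreHandler.handle(mapper.map(event));
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(result);
    }
}
```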
The following is an entry point for GCP Cloud Functions:
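And a sketch for an HTTP-triggered Cloud Function, using the same hypothetical `CoreHandler`:

```java
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

// Lives in the GCP deployment module; this is the class the Cloud Function is configured to invoke.
public class CloudFunctionEntryPoint implements HttpFunction {

    private final GenericRequestMapper<HttpRequest> mapper = new CloudFunctionRequestMapper();
    // CoreHandler is a hypothetical stand-in for the core module's entry point
    private final CoreHandler coreHandler = new CoreHandler(new FirestoreProductDao());

    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // 1) map the GCP-specific request, 2) hand the GenericRequest to the cloud-agnostic core
        String result = coreHandler.handle(mapper.map(request));
        response.getWriter().write(result);
    }
}
```

Because each deployment module wires in only its own mapper and DAO implementations, the AWS artifact never contains GCP dependencies and vice versa.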
To build multiple cloud artifacts, just run `mvn clean install` from the project's root directory. Maven will walk the project structure and build a JAR artifact for each sub-module that specifies jar as the packaging in the sub-module's `pom.xml`. The JARs will be placed under the `target` directory of each sub-module.
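For reference, a minimal parent `pom.xml` for this kind of multi-module build might declare the sub-modules like this (the coordinates and module names are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>portable-multicloud-app</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <!-- Running "mvn clean install" at the root builds every module listed here;
       each deployment module declares jar packaging and a dependency on core -->
  <modules>
    <module>core</module>
    <module>deployments/aws-lambda</module>
    <module>deployments/gcp-cloud-functions</module>
  </modules>
</project>
```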
Although the examples in this blog post have all been written in Java, this pattern also works with TypeScript, C#, and Python code bases. Similar build commands in each of these language ecosystems can be used to automatically build multiple cloud artifacts. Please see this repository for examples and instructions in these other languages.
This blog outlines the broad steps necessary to create a non-containerized portable application from a single code base. However, I go far deeper in the workshop Building Portable Multicloud Applications Without Containers, where the code samples from this post are expanded on in depth. I encourage you to explore it further from there.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.