
Building Portable Multicloud Applications Without Containers
Build portable multicloud applications, without containers, that run on AWS, Azure, and GCP using cloud-native serverless services.
- Hierarchical build tools, like Maven sub-modules
- Using cloud-native services. Without this pattern, customers must use self-managed technologies, which may not be the right choice in every case due to factors such as cost, complexity, development time, or team skills. With this pattern, customers can use cloud-native services from each cloud provider, such as AWS Lambda, Azure Functions, and GCP Firestore.
- Code reusability. This pattern consolidates cloud-agnostic business logic in a “core” module, which is wrapped by a thin layer of cloud-aware code. You can save time by reusing the application’s “core” business logic; any porting or replicating of the application can focus on relatively simple, “boilerplate” code at the “fringe” of the application, such as the entry point and DAO operations.
- Strangler pattern for cloud migrations instead of lift-and-shift. With this pattern, the same application version can be deployed to multiple cloud providers. This allows a customer to incrementally route a portion of their application traffic to the target cloud (such as AWS), ensure the application performs as expected, route the remaining application traffic to the target cloud, and then retire the source cloud application and infrastructure. Essentially, this is a canary deployment, but across cloud providers. Previous multicloud application development practices would result in considerable refactoring of the code and the creation of different applications for each cloud. This would prevent simultaneous deployments and, therefore, prevent canary deployments across cloud providers. This approach may be best suited for stateless APIs or services.
- Enabling cloud portability. If a customer wants to move an application to a different cloud provider, they do not need to refactor the application to remove that cloud provider's dependencies and related code. Instead, the customer will have multiple versions of the same application - each containing only the targeted cloud provider's dependencies and related code.
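To make the hierarchical build concrete, a parent `pom.xml` might look like the following. This is a hypothetical sketch: the group ID and module names are illustrative, not taken from the original project.

```xml
<!-- Hypothetical parent pom.xml; module names are illustrative -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.multicloud</groupId>
  <artifactId>multicloud-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>core</module>          <!-- cloud-agnostic business logic -->
    <module>aws-lambda</module>    <!-- AWS entry point, mapper, DynamoDB DAO -->
    <module>gcp-function</module>  <!-- GCP entry point, mapper, Firestore DAO -->
  </modules>
</project>
```

Each cloud-aware sub-module declares a dependency on `core`, so a single `mvn clean install` at the root builds every deployable artifact.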
The core module defines a cloud-agnostic request object, GenericRequest. So, each implementation of GenericRequestMapper is responsible for mapping a cloud provider’s proprietary object to a GenericRequest so that the business logic in the core module can act on the data it encapsulates (see the following Deployments section for more details).
```java
public interface GenericRequestMapper<T> {
    GenericRequest map(T httpRequest) throws IOException;
}
```
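The article never shows GenericRequest itself. Based on how the mappers construct it, a minimal sketch might look like the following; the field names are inferred from the constructor calls shown later, and `Object` stands in for Gson's `JsonObject` body so the sketch stays dependency-free.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the cloud-agnostic request type; field names are
// inferred from the mapper constructor calls in this article. The real class
// uses Gson's JsonObject for the body; Object is used here to avoid the
// dependency.
public class GenericRequest {
    private final String httpMethod;
    private final String path;
    private final Object body;
    private final Map<String, List<String>> queryParameters;

    public GenericRequest(String httpMethod, String path, Object body,
                          Map<String, List<String>> queryParameters) {
        this.httpMethod = httpMethod;
        this.path = path;
        this.body = body;
        this.queryParameters = queryParameters;
    }

    public String getHttpMethod() { return httpMethod; }
    public String getPath() { return path; }
    public Object getBody() { return body; }
    public Map<String, List<String>> getQueryParameters() { return queryParameters; }
}
```

Because only the core module depends on this type, the cloud-aware modules never leak provider-specific request shapes into the business logic.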
```java
public interface PersonDao {
    Optional<Person> getById(UUID id);

    void save(Person person);
}
```
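The Person domain object is also not shown in the article. A minimal sketch, with fields inferred from the DAO implementations below, might look like this:

```java
import java.time.OffsetDateTime;
import java.util.UUID;

// Hypothetical sketch of the Person domain object; the fields are inferred
// from the DynamoDB and Firestore DAO implementations in this article.
public class Person {
    private final UUID id;
    private final String firstName;
    private final String lastName;
    private final OffsetDateTime dateOfBirth;

    public Person(UUID id, String firstName, String lastName, OffsetDateTime dateOfBirth) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.dateOfBirth = dateOfBirth;
    }

    public UUID getId() { return id; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public OffsetDateTime getDateOfBirth() { return dateOfBirth; }
}
```

Keeping Person free of any persistence annotations is what lets the same class be stored in DynamoDB on AWS and Firestore on GCP.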
- A dependency on the core artifact and any required CSP SDKs
- Implementation of the core mapper interface
- Implementation of the DAO abstract class or interface
- An entry point (if the serverless service requires one)
The following AWS implementation maps the Lambda event, which arrives as a Map&lt;String, Object&gt;, to a GenericRequest.
```java
public class MapToGenericRequestMapper implements GenericRequestMapper<Map<String, Object>> {
    private static final Gson gson = new Gson();

    public GenericRequest map(Map<String, Object> event) {
        JsonObject bodyJson = gson.fromJson(gson.toJson(event), JsonObject.class);
        // Lambda delivers each query string parameter as a single comma-separated
        // value, so split the values into lists after deserializing them.
        JsonObject unsplitQueryParametersJson = bodyJson.getAsJsonObject("queryStringParameters");
        Map<String, String> unsplitQueryParameters = gson.fromJson(unsplitQueryParametersJson, Map.class);
        Map<String, List<String>> splitQueryParameters = unsplitQueryParameters.entrySet()
                .stream()
                .collect(
                        Collectors.toMap(
                                Map.Entry::getKey,
                                entry -> List.of(entry.getValue().split(","))
                        )
                );
        return new GenericRequest(
                bodyJson.get("httpMethod").getAsString(),
                bodyJson.get("path").getAsString(),
                gson.fromJson(bodyJson.getAsJsonPrimitive("body").getAsString(), JsonObject.class),
                splitQueryParameters
        );
    }
}
```
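The query-parameter splitting is the only non-trivial step in this mapper, and it can be exercised in isolation with plain `java.util` types:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Standalone illustration of the splitting done in MapToGenericRequestMapper:
// Lambda delivers "ids=a,b,c" as one comma-separated string, which is split
// into a list of values per key.
public class QuerySplitDemo {
    static Map<String, List<String>> split(Map<String, String> unsplit) {
        return unsplit.entrySet()
                .stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        entry -> List.of(entry.getValue().split(","))
                ));
    }

    public static void main(String[] args) {
        Map<String, List<String>> result = split(Map.of("ids", "a,b,c", "limit", "10"));
        System.out.println(result.get("ids"));   // [a, b, c]
        System.out.println(result.get("limit")); // [10]
    }
}
```

Note that the GCP mapper below needs no such step, because GCP's HttpRequest already exposes query parameters as `Map<String, List<String>>`.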
```java
public class HttpRequestMapper implements GenericRequestMapper<HttpRequest> {
    private final Gson gson = new Gson();

    public GenericRequest map(HttpRequest httpRequest) throws IOException {
        return new GenericRequest(
                httpRequest.getMethod(),
                httpRequest.getPath(),
                this.gson.fromJson(httpRequest.getReader(), JsonObject.class),
                httpRequest.getQueryParameters()
        );
    }
}
```
```java
/**
 * This class provides the DynamoDB-specific implementation of the PersonDao interface.
 * It instantiates and configures the DynamoDB client in the constructor and overrides the
 * {@link PersonDao#getById} and {@link PersonDao#save} methods with DynamoDB-specific logic.
 */
public class DynamoDbPersonDaoImpl implements PersonDao {
    private static final String PRIMARY_KEY = "id";
    private final Table table;

    public DynamoDbPersonDaoImpl() {
        String dynamoTableName = Optional.ofNullable(System.getenv("DYNAMO_TABLE_NAME"))
                .orElseThrow(() -> new IllegalArgumentException("DYNAMO_TABLE_NAME should not be null"));
        final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        final DynamoDB dynamoDB = new DynamoDB(client);
        this.table = dynamoDB.getTable(dynamoTableName);
    }

    public Optional<Person> getById(UUID id) {
        log.info("Getting person by id: {}", id);
        return Optional.ofNullable(this.table.getItem(PRIMARY_KEY, id.toString()))
                .map(item -> new Person(
                        UUID.fromString(item.getString(PRIMARY_KEY)),
                        item.getString("firstName"),
                        item.getString("lastName"),
                        OffsetDateTime.parse(item.getString("dateOfBirth"))
                ));
    }

    public void save(Person person) {
        log.info("Saving person: {}", person);
        Item item = new Item()
                .withPrimaryKey(PRIMARY_KEY, person.getId().toString())
                .withString("firstName", person.getFirstName())
                .withString("lastName", person.getLastName())
                .withString("dateOfBirth", person.getDateOfBirth().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
        this.table.putItem(item);
    }
}
```
```java
/**
 * This class provides the Firestore-specific implementation of the PersonDao interface.
 * It instantiates and configures the Firestore client in the constructor and overrides the
 * {@link PersonDao#getById} and {@link PersonDao#save} methods with Firestore-specific logic.
 */
public class FirestorePersonDaoImpl implements PersonDao {
    private static final String PERSON_TABLE_NAME = "person";
    private static final String ID_FIELD_NAME = "id";
    private final Firestore firestoreDatabase;

    public FirestorePersonDaoImpl() throws RuntimeIOException {
        this.firestoreDatabase = FirestoreOptions.getDefaultInstance().getService();
    }

    public Optional<Person> getById(UUID id) {
        try {
            ApiFuture<DocumentSnapshot> documentSnapshotFuture = this.firestoreDatabase.collection(PERSON_TABLE_NAME)
                    .document(id.toString())
                    .get();
            DocumentSnapshot documentSnapshot = documentSnapshotFuture.get();
            if (documentSnapshot.exists()) {
                Long dateOfBirthEpochSeconds = documentSnapshot.getLong("dateOfBirth");
                Objects.requireNonNull(dateOfBirthEpochSeconds);
                Person person = new Person(
                        UUID.fromString(documentSnapshot.getId()),
                        documentSnapshot.getString("firstName"),
                        documentSnapshot.getString("lastName"),
                        OffsetDateTime.of(
                                LocalDateTime.ofEpochSecond(dateOfBirthEpochSeconds, 0, ZoneOffset.UTC),
                                ZoneOffset.UTC
                        )
                );
                return Optional.of(person);
            }
            return Optional.empty();
        }
        catch (ExecutionException | InterruptedException e) {
            log.error("", e);
            throw new RuntimeException(e);
        }
    }

    public void save(Person person) {
        try {
            ApiFuture<WriteResult> writeResultFuture = this.firestoreDatabase.collection(PERSON_TABLE_NAME)
                    .document(person.getId().toString())
                    .set(
                            Map.of(
                                    "firstName", person.getFirstName(),
                                    "lastName", person.getLastName(),
                                    "dateOfBirth", person.getDateOfBirth().toEpochSecond()
                            )
                    );
            // Block on the set/write operation with the get() method.
            WriteResult writeResult = writeResultFuture.get();
            log.info("Successfully wrote person to database at {}: {}", writeResult.getUpdateTime(), person);
        }
        catch (ExecutionException | InterruptedException e) {
            log.error("", e);
            throw new RuntimeException(e);
        }
    }
}
```
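Notice that the two DAOs persist `dateOfBirth` differently: the DynamoDB implementation stores an ISO-8601 string, while the Firestore implementation stores epoch seconds. Both representations round-trip through `java.time`, so the core module only ever sees an `OffsetDateTime`:

```java
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Demonstrates that both storage representations used by the DAOs recover
// the same point in time.
public class DateRoundTrip {
    public static void main(String[] args) {
        OffsetDateTime original = OffsetDateTime.parse("1990-05-01T00:00:00Z");

        // DynamoDB-style round trip: ISO-8601 offset date-time string.
        String iso = original.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
        OffsetDateTime fromIso = OffsetDateTime.parse(iso);

        // Firestore-style round trip: epoch seconds, reconstructed at UTC.
        long epochSeconds = original.toEpochSecond();
        OffsetDateTime fromEpoch = OffsetDateTime.of(
                LocalDateTime.ofEpochSecond(epochSeconds, 0, ZoneOffset.UTC),
                ZoneOffset.UTC);

        System.out.println(fromIso.isEqual(original));   // true
        System.out.println(fromEpoch.isEqual(original)); // true
    }
}
```

One caveat of the epoch-seconds form: it discards the original offset and any sub-second precision, so values read back from Firestore are always normalized to UTC.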
Each CSP module also provides an entry point for that provider's serverless service. The entry point must 1) map the CSP's proprietary request object to a GenericRequest and 2) pass that GenericRequest to the core entry point. The following is an entry point for AWS Lambda:
```java
public class StreamLambdaHandler implements RequestHandler<Map<String, Object>, GenericResponse> {
    private static final MapToGenericRequestMapper mapToGenericRequestMapper = new MapToGenericRequestMapper();
    private static final PersonDao personDao;
    private static final PersonController personController;

    static {
        log.info("Inside static block. Starting to initialize dependency tree");
        personDao = new DynamoDbPersonDaoImpl();
        personController = new PersonController(personDao);
    }

    public GenericResponse handleRequest(Map<String, Object> event, Context context) {
        log.info("Inside handleRequest method");
        final GenericResponse response = new GenericResponse();
        try {
            final GenericRequest request = mapToGenericRequestMapper.map(event);
            personController.handleRequest(request, response);
            return response;
        }
        catch (Exception e) {
            log.error("", e);
            response.setStatus(500);
            return response;
        }
    }
}
```
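Like GenericRequest, the GenericResponse class is not shown in the article. A minimal sketch, inferred from how the entry points use it (`setStatus`/`getStatus`, `getBody`, `getHeaders`), might look like this; the real class may differ:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the cloud-agnostic response type; the accessors are
// inferred from the entry-point code in this article.
public class GenericResponse {
    private int status = 200;
    private String body;
    private final Map<String, String> headers = new HashMap<>();

    public int getStatus() { return status; }
    public void setStatus(int status) { this.status = status; }
    public String getBody() { return body; }
    public void setBody(String body) { this.body = body; }
    public Map<String, String> getHeaders() { return headers; }
}
```

The core controller populates this object, and each entry point translates it back into the provider's native response type.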
```java
public class Main implements HttpFunction {
    private static final PersonDao personDao;
    private static final PersonController personController;
    private static final GenericRequestMapper<HttpRequest> requestMapper = new HttpRequestMapper();

    static {
        personDao = new FirestorePersonDaoImpl();
        personController = new PersonController(personDao);
    }

    /**
     * Public default constructor that is required by GCP Cloud Functions.
     */
    public Main() {}

    public void service(HttpRequest httpRequest, HttpResponse httpResponse) throws Exception {
        log.info("Inside service method");
        try {
            GenericRequest request = requestMapper.map(httpRequest);
            GenericResponse response = new GenericResponse();
            personController.handleRequest(request, response);
            // Map the generic response back to GCP's HttpResponse.
            if (response.getBody() != null) {
                httpResponse.getWriter().write(response.getBody());
            }
            httpResponse.setStatusCode(response.getStatus());
            // Check for null before iterating the headers.
            if (response.getHeaders() != null) {
                response.getHeaders().forEach(httpResponse::appendHeader);
                if (response.getHeaders().get("Content-Type") != null) {
                    httpResponse.setContentType(response.getHeaders().get("Content-Type"));
                }
            }
        }
        catch (Exception e) {
            log.error("", e);
            httpResponse.setStatusCode(500);
        }
    }
}
```
To build the application, run `mvn clean install` from the project’s root directory. Maven will walk the project structure and build a JAR artifact for each sub-module that specifies `jar` as the packaging in its `pom.xml`. The JARs will be written to the `/target` directory of each sub-module.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.