
Execute Lambda triggered by DB record updates

Explains how to use Kinesis Data Streams to synchronize data by triggering Lambda on DB record updates

Published Sep 16, 2024
What architecture would you use if you wanted to synchronize data across multiple subsystems?
Importantly, since cross-database transactions are not available, whoever issues the SQL must take care to maintain data consistency.
For example, if the SQL is executed by a Lambda function, it must write to database A and database B separately, and the application logic tends to become complex when multiple records are updated.
With this method, Lambda is triggered by actual writes via the audit log, so the implementation stays simple and data synchronization can be retrofitted to an existing system. The example above uses RDS for MySQL (or MariaDB), but as explained at the beginning of this article, Aurora MySQL works as well.
However, with this architecture, Lambda is invoked via a CloudWatch Logs subscription filter, so there are cases where the invocation fails. If you want retries when an invocation error occurs, for example because the account's Lambda concurrency limit has been reached, write to Kinesis Data Streams instead of invoking Lambda directly from the subscription filter, and use an event source mapping.
By having the Lambda function process multiple records per batch, the number of concurrent executions is reduced, and when a function execution fails, the batch is retried automatically, so processing completes with a minimal number of invocations.
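As a sketch of this batch-plus-retry pattern (a Python example; the handler name and the "apply to the other database" step are placeholders, and reporting individual failed records also requires ReportBatchItemFailures to be enabled on the event source mapping):

```python
import base64
import gzip
import json

def handler(event, context):
    """Process a batch of Kinesis records carrying CloudWatch Logs payloads."""
    failures = []
    for record in event["Records"]:
        try:
            # Kinesis delivers the data Base64-encoded; CloudWatch Logs
            # subscription payloads are additionally gzip-compressed.
            raw = base64.b64decode(record["kinesis"]["data"])
            payload = json.loads(gzip.decompress(raw))
            for log_event in payload.get("logEvents", []):
                # Placeholder: apply the audited SQL to the other database here.
                print(log_event["message"])
        except Exception:
            # Report only this record as failed so Lambda retries it
            # (with ReportBatchItemFailures) instead of the whole batch.
            failures.append({"itemIdentifier": record["kinesis"]["sequenceNumber"]})
    return {"batchItemFailures": failures}
```

Returning `batchItemFailures` lets a failed record be retried without reprocessing records that already succeeded.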
Note that the Aurora MySQL audit log is written in UTF-8, not in the character encoding specified in the parameter group (character_set_database), and that data forwarded from CloudWatch Logs to Kinesis Data Streams is compressed in gzip format.
As a result, the data displays correctly in CloudWatch Logs but appears garbled in the Kinesis Data Streams data viewer. Also, in the kinesis-get-records Lambda test template, the sample data "Hello, this is a test 123." is Base64-encoded but not gzip-compressed, so for testing you must prepare the data in advance in the following way.
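For example, test data that mimics what CloudWatch Logs actually writes to the stream can be prepared like this (a Python sketch; the payload shape follows the CloudWatch Logs subscription format, and the message text is just an example):

```python
import base64
import gzip
import json

# A minimal payload in the CloudWatch Logs subscription format.
payload = {
    "messageType": "DATA_MESSAGE",
    "logEvents": [{"id": "0", "timestamp": 0, "message": "Hello, this is a test 123."}],
}

# gzip-compress first, then Base64-encode: the reverse of what the
# Lambda function does when it reads the record.
encoded = base64.b64encode(
    gzip.compress(json.dumps(payload).encode("utf-8"))
).decode("ascii")
print(encoded)
```

Paste the resulting string into the `data` field of the kinesis-get-records test template in place of the default value.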
Incidentally, Base64 decoding can be done as follows.
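A minimal Python sketch, using the sample value from the kinesis-get-records test template:

```python
import base64

# Data field from the kinesis-get-records test template: Base64-encoded
# but NOT gzip-compressed, so a plain Base64 decode is enough here.
encoded = "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4="
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # Hello, this is a test 123.
```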
While there are examples of code for reading Kinesis Data Streams data in other languages, I could find very little in C#. Referring to the official AWS documentation, I implemented it by trial and error in the following way.
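The original implementation was in C#; the same flow can be sketched in Python as follows (the stream and shard names are hypothetical, and the Kinesis client is passed in so it can be a boto3 client or a test stub):

```python
import gzip
import json

def read_log_payloads(kinesis, stream_name, shard_id, limit=100):
    """Read records from one shard and decompress CloudWatch Logs payloads.

    `kinesis` is a boto3 Kinesis client (or any object exposing the same
    get_shard_iterator / get_records methods).
    """
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    response = kinesis.get_records(ShardIterator=iterator, Limit=limit)
    payloads = []
    for record in response["Records"]:
        # The SDK has already Base64-decoded Data; what remains is the
        # gzip-compressed JSON written by CloudWatch Logs.
        payloads.append(json.loads(gzip.decompress(record["Data"])))
    return payloads
```

Note that the SDK returns `Data` as raw bytes, so unlike the Lambda event (where the data is Base64 text), only the gunzip step is needed here.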
We hope you enjoy your life with Kinesis Data Streams!
 
