
IoT Environmental Monitoring with AWS & Amazon Q
Create a comprehensive monitoring system using LoRaWAN sensors, AWS IoT Core, Lambda with EMF metrics, and CloudWatch dashboards—all with Amazon Q's help.
Gonzalo Vásquez
Amazon Employee
Published Jun 1, 2025
In this post, I'll share my journey of building an environmental monitoring system using LoRaWAN sensors, AWS IoT Core, Lambda functions, and CloudWatch dashboards. This project evolved from a simple temperature and humidity monitoring setup to a comprehensive system that tracks environmental conditions across multiple locations on my rural property and beyond.
The entire development process was facilitated by Amazon Q Developer CLI using its chat feature, which provided guidance, generated code, and helped troubleshoot issues throughout the project.
The system monitors indoor conditions within my house and outdoor conditions in various microenvironments: under the house in shaded areas, within two small Miyawaki forests (dense, multi-layered native forest plantings of 100 m² each), and even at a neighbor's property approximately 1 km away without line of sight.
The monitoring system uses two types of Dragino sensors:
• LHT65: Temperature, humidity, and battery status sensors (outdoor locations)
• LHT52: Temperature and humidity sensors (indoor locations)
Data flows through AWS services in this sequence:
1. Sensors transmit readings via LoRaWAN
2. AWS IoT Core receives the data
3. IoT Rules route data to Lambda functions
4. Lambda functions process, validate, and publish metrics using CloudWatch Embedded Metric Format (EMF)
5. CloudWatch dashboards visualize the data
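To make the flow concrete, here is a minimal sketch of the step-4 Lambda. The event fields follow AWS IoT Core for LoRaWAN uplinks, but the decoder, byte offsets, and names are illustrative, not my production code:
javascript
// Illustrative pipeline handler: decode -> validate -> publish via EMF
const { createMetricsLogger, Unit } = require('aws-embedded-metrics');

// Placeholder decoder -- the real sensor-specific versions appear later
function decodePayload(bytes) {
  return {
    temperature: ((bytes[2] << 8) | bytes[3]) / 100, // example byte offsets
    humidity: ((bytes[4] << 8) | bytes[5]) / 10,
  };
}

exports.handler = async (event) => {
  // 1. Decode the base64 payload that AWS IoT Core for LoRaWAN delivers
  const bytes = Buffer.from(event.PayloadData, 'base64');
  const decoded = decodePayload(bytes);

  // 2. Validate before publishing
  if (decoded.humidity > 100) {
    return { statusCode: 200, message: 'Discarded invalid reading' };
  }

  // 3. Emit metrics as EMF log entries -- no CloudWatch API call needed
  const metrics = createMetricsLogger();
  metrics.setNamespace('Sensors');
  metrics.setDimensions({ DeviceId: event.WirelessDeviceId });
  metrics.putMetric('Temperature', decoded.temperature, Unit.None);
  metrics.putMetric('Humidity', decoded.humidity, Unit.Percent);
  await metrics.flush();

  return { statusCode: 200 };
};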
Throughout this project, I used Amazon Q Developer CLI's chat feature to guide the development process. This experience highlighted both the strengths and limitations of AI-assisted development:
• Strengths: Q Developer quickly generated boilerplate code, suggested AWS best practices, and helped troubleshoot specific issues.
• Challenges: Sometimes Q Developer misunderstood requirements or made erroneous assumptions about my existing infrastructure. For example, it initially suggested IAM permissions that didn't match my actual Lambda role names.
I also contributed to some confusion by occasionally providing incomplete or incorrect information. For instance, I initially described the battery percentage calculation incorrectly, which led Q Developer to implement a flawed algorithm that needed later correction.
The iterative nature of our conversation allowed us to refine solutions over time, with each round of feedback improving the system's functionality.
I started with a few LHT65 sensors placed outdoors. Setting up the initial AWS infrastructure involved:
1. Configuring AWS IoT Core to receive LoRaWAN messages
2. Creating a Lambda function to decode the binary payloads
3. Building a basic CloudWatch dashboard
The first iteration worked, but I quickly encountered issues:
javascript
// Initial payload decoding function with issues
function decodeLHT65Payload(bytes) {
  // Extract battery voltage from bytes 0-1
  const batteryRaw = ((bytes[0] << 8) | bytes[1]) & 0x3FFF;
  const batteryV = batteryRaw / 1000;
  const batteryPercent = ((batteryV - 2.2) / (3.6 - 2.2)) * 100; // Incorrect calculation
  // Temperature and humidity extraction
  // ...
}
The battery percentage calculation was incorrect, leading to confusing readings on the dashboard. After analyzing the sensor documentation more carefully, I realized the LHT65's battery field carries a two-bit status code alongside the raw voltage, and there is no documented voltage-to-percentage conversion, so the status code is what should drive the dashboard.
When I added the first LHT52 sensor for indoor monitoring, I discovered it wasn't showing up in my dashboard. The issue? I had configured the system only for LHT65 sensors. This required:
1. Creating a new Lambda function for LHT52 sensors
2. Setting up IoT rules to route data to the appropriate Lambda
3. Building a separate dashboard for indoor readings
bash
# Creating a new IoT rule for the Parent's Suite sensor
aws iot create-topic-rule \
  --rule-name parents_suite_to_lht52 \
  --topic-rule-payload '{
    "sql": "SELECT * FROM \"dragino/devices/a84041c37189bf4b/up\"",
    "actions": [{
      "lambda": {
        "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:lht52-processor"
      }
    }]
  }'
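One detail that is easy to miss: a rule created this way can only trigger the function if the Lambda's resource policy lets AWS IoT invoke it. A sketch of that grant (the account ID is the same placeholder as above):
bash
# Allow AWS IoT to invoke the LHT52 processor from this rule
aws lambda add-permission \
  --function-name lht52-processor \
  --statement-id iot-rule-invoke \
  --action lambda:InvokeFunction \
  --principal iot.amazonaws.com \
  --source-arn arn:aws:iot:us-east-1:123456789012:rule/parents_suite_to_lht52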
A key improvement was implementing CloudWatch Embedded Metric Format (EMF) in both Lambda functions. EMF allows metrics to be embedded directly within log events, eliminating the need for separate API calls to publish metrics:
javascript
// AWS Embedded Metrics for CloudWatch
const { createMetricsLogger, Unit } = require('aws-embedded-metrics');

// In the handler function
const metrics = createMetricsLogger();
metrics.setNamespace('LHT65Sensors');

// Set dimensions for metrics
metrics.setDimensions({
  DeviceId: deviceId,
  DevEUI: devEui,
  DeviceName: deviceName
});

// Publish metrics
metrics.putMetric("Temperature", decodedData.temperature, Unit.None);
metrics.putMetric("Humidity", decodedData.humidity, Unit.Percent);
metrics.putMetric("DewPoint", parseFloat(dewPointC.toFixed(2)), Unit.None);
metrics.putMetric("BatteryVoltage", decodedData.battery, Unit.None);

// Flush metrics to CloudWatch
await metrics.flush();
Using EMF significantly simplified the metrics publishing process and reduced Lambda execution time. It also eliminated the need for direct CloudWatch API calls, a fallback path that the code path analysis described later confirmed was never used.
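For contrast, the unused fallback looked roughly like the following direct PutMetricData call. This is a sketch using AWS SDK v2; the function name and parameters are illustrative, not the exact code that was removed:
javascript
// Sketch of the direct CloudWatch call that EMF made unnecessary (AWS SDK v2)
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

async function publishDirect(deviceId, temperature) {
  // One PutMetricData API call per metric batch -- slower and rate-limited,
  // unlike EMF, which rides along with the existing log write
  await cloudwatch.putMetricData({
    Namespace: 'LHT65Sensors',
    MetricData: [{
      MetricName: 'Temperature',
      Dimensions: [{ Name: 'DeviceId', Value: deviceId }],
      Value: temperature,
    }],
  }).promise();
}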
As I added more sensors, I needed to make the Lambda functions more robust. Key improvements included:
1. Humidity validation: Filtering out readings with humidity > 100%
javascript
// Check for humidity > 100% and discard if found
if (decodedData.humidity > 100) {
  trackPath(CODE_PATHS.HUMIDITY_FILTER);
  console.log(`Discarding payload with invalid humidity value: ${decodedData.humidity}%`);
  return {
    statusCode: 200,
    message: "Payload discarded due to invalid humidity value > 100%",
    humidity: decodedData.humidity
  };
}
2. Better battery status handling: Replacing percentage with status codes
javascript
// Extract battery status from the top two bits of byte 0 (bits 15:14 of the BAT field)
const batteryStatus = (bytes[0] >> 6) & 0x03;
let batteryStatusText = "";
switch (batteryStatus) {
  case 0: // 00
    batteryStatusText = "Ultra Low (≤ 2.50V)";
    break;
  case 1: // 01
    batteryStatusText = "Low (2.50V-2.55V)";
    break;
  // ...
}
3. Dew point calculation: Adding derived metrics
javascript
// Calculate dew point using the Magnus approximation
const a = 17.625;
const b = 243.04; // °C
const alpha = Math.log(decodedData.humidity / 100) + (a * decodedData.temperature) / (b + decodedData.temperature);
const dewPointC = (b * alpha) / (a - alpha);
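As a sanity check, a reading of 25 °C at 60% relative humidity works out to a dew point of roughly 16.7 °C.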
The initial dashboards were functional but basic. I improved them by:
1. Increasing the period for battery status widgets from 5 minutes to 1 hour to account for the 20-minute transmission intervals
2. Changing the statistic from "Average" to "Maximum" for battery status metrics
3. Adding individual battery status gauge widgets for each sensor
4. Creating separate panels for each location type (indoor, outdoor, under-house, Miyawaki forests)
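These widget settings can also be scripted. As a sketch, a single battery gauge with the 1-hour period and Maximum statistic might be pushed like this (the dashboard name, device name, and axis range are illustrative):
bash
# Sketch: gauge widget with 1-hour period and Maximum statistic
aws cloudwatch put-dashboard \
  --dashboard-name sensor-battery \
  --dashboard-body '{
    "widgets": [{
      "type": "metric",
      "properties": {
        "view": "gauge",
        "metrics": [["LHT65Sensors", "BatteryVoltage", "DeviceName", "miyawaki-1"]],
        "period": 3600,
        "stat": "Maximum",
        "region": "us-east-1",
        "yAxis": {"left": {"min": 2.5, "max": 3.6}}
      }
    }]
  }'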


After running the system for a while, I wanted to identify unused code paths to streamline the Lambda functions. I implemented a code path tracking system:
javascript
// Setup code path tracking
const CODE_PATHS = {
  DEVICE_MAP: 'DEVICE_MAP',
  DECODE_PAYLOAD: 'DECODE_PAYLOAD',
  BATTERY_STATUS_0: 'BATTERY_STATUS_0',
  // ...
};

// Track executed code paths
const executedPaths = new Set();
function trackPath(path) {
  executedPaths.add(path);
  console.log(`[CODE_PATH] ${path}`);
}
With Amazon Q Developer's help, I created an analysis script to process the CloudWatch logs and identify which code paths were actually being executed:
bash
#!/bin/bash
# Script to analyze code path logs from Lambda functions
LOG_GROUP=$1
HOURS=${2:-24} # Default to 24 hours if not specified
# ... script logic ...
# Extract the code paths and count occurrences
cat "$TEMP_FILE" | grep -o '\[CODE_PATH\] [A-Z_]*' | sed 's/\[CODE_PATH\] //' | sort | uniq -c | sort -nr
The analysis revealed several interesting insights:
1. Only the FORMAT_SIMPLE event format was being used - the other 4 formats were never executed
2. The direct CloudWatch metrics fallback code was never used (EMF was working perfectly)
3. For LHT65 sensors, only one battery status code (BATTERY_STATUS_3) was encountered
4. The humidity filter was triggered only once in the LHT52 function
Based on these findings, Amazon Q Developer helped me streamline both Lambda functions by:
1. Removing the unused event format handlers
2. Eliminating the direct CloudWatch metrics fallback code
3. Simplifying the battery status handling for LHT65 sensors
The result was cleaner, more efficient code that focused only on the actual execution paths used in production.

This project taught me several valuable lessons about IoT systems on AWS and working with Amazon Q Developer:
1. Start simple, then expand: Beginning with a minimal viable system allowed me to identify issues early before scaling up.
2. Implement robust error handling: IoT devices in the real world produce unexpected data patterns that your system must handle gracefully.
3. Use code path tracking: Understanding which parts of your code are actually being executed helps eliminate unnecessary complexity.
4. Consider environmental factors: Physical placement of sensors significantly impacts both data quality and transmission reliability.
5. Leverage CloudWatch EMF: Embedded metrics significantly simplify the process of collecting and visualizing metrics from Lambda functions.
6. Be precise with AI tools: When working with Amazon Q Developer, clear, accurate information leads to better results. When I provided incomplete or incorrect details, the resulting solutions needed later refinement.
7. Iterate and verify: Amazon Q Developer sometimes made incorrect assumptions or generated code that didn't quite fit my use case. Regular verification and testing were essential to catch these issues.

Building this environmental monitoring system was an iterative process that evolved from a simple proof of concept to a comprehensive solution. By leveraging AWS IoT Core, Lambda with CloudWatch EMF, and CloudWatch Dashboards, I created a reliable system that provides valuable insights into the microenvironments across my rural property.
Amazon Q Developer CLI proved to be an invaluable assistant throughout this process, despite occasional misunderstandings or incorrect assumptions. The conversational nature of the tool allowed me to refine requirements and correct course when needed. This highlights both the power and the current limitations of AI-assisted development: it accelerates many aspects of the process but still requires human oversight and domain expertise.
The most rewarding aspect has been watching the data patterns emerge over time, particularly in the Miyawaki forests where the dense vegetation creates distinct temperature and humidity profiles compared to the surrounding areas. These insights help inform my land management decisions and provide a fascinating window into the environmental conditions that support different ecosystems.
For anyone looking to build a similar system, I recommend starting small, focusing on data quality, and being prepared to iterate as you learn from both the successes and challenges along the way. And if you're using Amazon Q Developer, remember that clear communication and verification of generated solutions are key to successful outcomes.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.