How to detect Forest fires using Kinesis Video Streams and Amazon Rekognition
Published Jun 5, 2024
Last Modified Jun 6, 2024
On a hot summer night, while we were enjoying our food and drinks, the dogs suddenly began barking and staring in a certain direction. We went outside to get a better look and noticed that the sky had started to turn orange. We immediately knew what was happening: there was a huge fire in a beautiful forest a few miles away. This was happening almost every summer, in different places, wiping out forests and destroying homes, with a massive impact on the environment and people's lives.
Having seen the aftermath and the years it took for the burnt areas and people to recover, I decided to build something to detect smoke and fire and help reduce the destructive impact. After all, early detection plays a crucial role when it comes to forest fires.
Waiting for a real-time scenario like the one described above was neither practical nor desirable for testing my solution. To overcome this challenge, I decided to simulate the required conditions.
I used my laptop to play YouTube videos of forest fires as the source. This allowed me to consistently recreate the visual characteristics of forest fires and replay specific scenes, ensuring that the solution was tested thoroughly under different conditions. It provided a reliable and efficient way to validate the solution and demonstrate how it could handle similar real-time scenarios.
Here is a brief overview of the AWS services and components used in the solution:
An IP/CCTV camera
A Raspberry Pi. It acts as a local gateway that connects the camera and manages the video stream up to Amazon Kinesis Video Streams, using certificates generated by AWS IoT Core to authenticate itself securely to AWS services.
An AWS IoT Thing to represent the IP camera. This involved configuring the certificates and policies for secure communication between the camera and AWS IoT, and it is an important component in creating a secure, manageable architecture for streaming video from an RTSP camera through a Raspberry Pi to Kinesis Video Streams.
A Kinesis Video Stream to ingest live video from the RTSP camera (with a name matching the IoT Thing).
An Amazon Rekognition Custom Labels model trained to detect smoke and fire in images. Training takes some time, depending on the size of the dataset. (The model's ARN is used in the Lambda function.)
An S3 bucket to store the images extracted from the video stream, with the appropriate bucket policies to allow read/write access from the AWS services used.
A Lambda function that processes images stored in S3, detects smoke and fire using Rekognition, and triggers an SNS notification.
An SNS topic that sends email notifications when the Lambda function detects smoke or fire.
The required IAM roles and policies for Kinesis Video Streams, Rekognition, Lambda, IoT, S3, and SNS. As per best practices, least-privilege principles were applied.
The GStreamer plugin for Kinesis Video Streams is a component that integrates GStreamer with Amazon Kinesis Video Streams.
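The IoT pieces above can be provisioned with the AWS CLI. A minimal sketch, assuming the thing name FireDetection; the IAM role ARN and account ID are placeholders for your own values:

```shell
# Create the IoT Thing that represents the camera
# (the name must match the Kinesis video stream)
aws iot create-thing --thing-name FireDetection

# Generate the certificate and keys the Raspberry Pi will use to authenticate
aws iot create-keys-and-certificate --set-as-active \
  --certificate-pem-outfile certificate.pem.crt \
  --public-key-outfile public.pem.key \
  --private-key-outfile private.pem.key

# Create a role alias that lets the certificate assume an IAM role
# with Kinesis Video Streams permissions (role ARN is a placeholder)
aws iot create-role-alias --role-alias CameraIoTRoleAlias \
  --role-arn arn:aws:iam::123456789012:role/KVSCameraStreamingRole
```

The certificate also needs an IoT policy attached that allows it to assume the role alias; the exact policy depends on your account setup.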
Here is a brief overview of how the solution works.
The first thing to do is to start the Amazon Rekognition Model that we trained.
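Starting the model can be done from the AWS CLI. The project and project-version ARNs below are placeholders; use the ARNs from your own Custom Labels project:

```shell
# Start the trained Custom Labels model; inference-hour billing runs while it is up
aws rekognition start-project-version \
  --project-version-arn "arn:aws:rekognition:eu-west-1:123456789012:project/FireDetection/version/FireDetection.2024/1" \
  --min-inference-units 1

# Wait until the model reports RUNNING before sending images to it
aws rekognition describe-project-versions \
  --project-arn "arn:aws:rekognition:eu-west-1:123456789012:project/FireDetection/1234567890123" \
  --query "ProjectVersionDescriptions[0].Status"
```

Remember to stop the model with stop-project-version when you are done testing, as it is billed for as long as it runs.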
Next, we need to set up the RTSP camera and test the stream using VLC. Then we move on and configure the GStreamer plugin on the Raspberry Pi.
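One way to get the GStreamer plugin onto the Raspberry Pi is to build the Kinesis Video Streams C++ Producer SDK from source, which also produces the kvs_gstreamer_sample binary used later. This is a sketch of the standard build steps, not the only way to install it:

```shell
# Fetch the producer SDK, which includes the kvssink GStreamer plugin and samples
git clone --recursive https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp.git
mkdir -p amazon-kinesis-video-streams-producer-sdk-cpp/build
cd amazon-kinesis-video-streams-producer-sdk-cpp/build

# Build the SDK together with the GStreamer plugin
cmake .. -DBUILD_GSTREAMER_PLUGIN=ON
make

# Make the freshly built plugin visible to GStreamer
export GST_PLUGIN_PATH="$(pwd)"
```

Building on a Raspberry Pi can take a while; the GStreamer development packages must be installed first.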
We have to transfer the certificates to the Raspberry Pi and place them in a specific directory.
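Copying the certificates over could look like this; the hostname and target directory are assumptions, so adapt them to your setup:

```shell
# Copy the IoT certificates from the workstation to the Pi's certs directory
ssh pi@raspberrypi.local "mkdir -p ~/certs"
scp certificate.pem.crt private.pem.key AmazonRootCA1.pem pi@raspberrypi.local:~/certs/
```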
Obtain the IoT credential endpoint using AWS CloudShell or awscli:
aws iot describe-endpoint --endpoint-type iot:CredentialProvider
The next step is to set the environment variables for the region, certificate paths, and role alias:
export AWS_DEFAULT_REGION=eu-west-1
export CERT_PATH=certs/certificate.pem.crt
export PRIVATE_KEY_PATH=certs/private.pem.key
export CA_CERT_PATH=certs/AmazonRootCA1.pem
export ROLE_ALIAS=CameraIoTRoleAlias
export IOT_GET_CREDENTIAL_ENDPOINT=cxxxxxxxxxxs.credentials.iot.eu-west-1.amazonaws.com
Now we can execute the GStreamer command and start streaming to Kinesis Video Streams:
./kvs_gstreamer_sample FireDetection rtsp://username:password@192.168.1.100/stream1
With the video feed successfully streaming to Kinesis Video Streams, it's time to start extracting the images from the stream.
Kinesis Video Streams simplifies this process by automatically transcoding and delivering images. It extracts images from video data in real-time based on tags and delivers them to a specified S3 bucket.
To use that feature, we need to create a JSON file named update-image-generation-input.json with the required config.
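A sketch of what that file might contain; the bucket name, sampling interval, and image dimensions are assumptions to adapt:

```json
{
    "StreamName": "FireDetection",
    "ImageGenerationConfiguration": {
        "Status": "ENABLED",
        "ImageSelectorType": "PRODUCER_TIMESTAMP",
        "DestinationConfig": {
            "Uri": "s3://fire-detection-frames",
            "DestinationRegion": "eu-west-1"
        },
        "SamplingInterval": 3000,
        "Format": "JPEG",
        "FormatConfig": {
            "JPEGQuality": "80"
        },
        "WidthPixels": 1280,
        "HeightPixels": 720
    }
}
```

SamplingInterval is in milliseconds, so this configuration delivers roughly one image every three seconds.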
and run the following command with the AWS CLI.
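The command referenced here is presumably the Kinesis Video Streams update call that points at the JSON file:

```shell
# Apply the image generation configuration to the stream
aws kinesisvideo update-image-generation-configuration \
  --cli-input-json file://update-image-generation-input.json
```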
If we check our S3 bucket, we can see the extracted images.
Our Lambda function is now triggered and starts processing them with Amazon Rekognition, identifying smoke and fire within the images and sending notifications for detected objects.
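A minimal sketch of such a Lambda function, assuming the model ARN and SNS topic ARN are passed in as environment variables and the custom labels are named Smoke and Fire; all names here are illustrative, not the exact code used:

```python
import json
import os

# Labels we alert on and the minimum confidence (percent) to accept
ALERT_LABELS = {"Smoke", "Fire"}
MIN_CONFIDENCE = 70.0

def alerting_labels(custom_labels, threshold=MIN_CONFIDENCE):
    """Return the smoke/fire label names that exceed the confidence threshold."""
    return [
        label["Name"]
        for label in custom_labels
        if label["Name"] in ALERT_LABELS and label["Confidence"] >= threshold
    ]

def lambda_handler(event, context):
    # boto3 is imported lazily so the pure logic above is testable without AWS
    import boto3

    rekognition = boto3.client("rekognition")
    sns = boto3.client("sns")

    for record in event["Records"]:  # one record per uploaded S3 object
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Run the Custom Labels model against the image in S3
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=os.environ["MODEL_ARN"],
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=MIN_CONFIDENCE,
        )

        detected = alerting_labels(response["CustomLabels"])
        if detected:
            # Publish an alert with the image location and the labels found
            sns.publish(
                TopicArn=os.environ["SNS_TOPIC_ARN"],
                Subject="Possible forest fire detected",
                Message=json.dumps(
                    {"image": f"s3://{bucket}/{key}", "labels": detected}
                ),
            )
    return {"statusCode": 200}
```

The function is wired to the bucket via an S3 event notification, so each extracted frame is analyzed as soon as it lands.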
We now have a solution where our IP camera streams video to a Kinesis Video Stream, AWS Lambda processes frames from this stream using Amazon Rekognition Custom Labels to detect smoke and fire, and detected events trigger SNS notifications.
By integrating Amazon Rekognition with custom labels, Kinesis Video Streams, S3, and AWS IoT, we can create a powerful image recognition system for many use cases.
For a more detailed walkthrough, feel free to contact me.