Incident Response in the Age of LLMs - New Opportunities for Security Teams

Enhancing Cybersecurity Incident Response with Generative AI: How Amazon Bedrock Transforms Threat Detection and Mitigation

Roberto Catalano
Amazon Employee
Published Sep 18, 2024
This post is co-authored by Markus Rollwagen and Luca Perrozzi.

Introduction

In today's ever-evolving cybersecurity landscape, organizations face an increasing number of complex cyber threats, ranging from sophisticated malware and advanced persistent threats (APTs) to insider threats and supply chain attacks. Incident response is crucial to mitigate the impact of these threats and protect critical systems, data, and intellectual property. Industry-standard frameworks like the NIST Computer Security Incident Handling Guide and the MITRE ATT&CK Framework provide structured approaches to incident response, emphasizing the importance of timely detection, analysis, containment, eradication, and recovery.
When it comes to incident management on your infrastructure, services like Amazon GuardDuty, AWS Security Hub, AWS CloudTrail, and AWS Config play vital roles in threat detection, monitoring, and compliance, as outlined in the AWS Security Incident Response Guide. AWS also maintains a collection of blog posts on incident response best practices, available here.
However, the true challenge lies in minimizing the time spent on tasks such as writing automation scripts, de-obfuscating malicious code, analyzing network traffic and logs, correlating information from multiple sources, and preparing comprehensive incident reports. These tasks are often time-consuming, resource-intensive, and prone to human error, especially when dealing with large volumes of data and complex threat scenarios.
This is where Generative AI and Amazon Bedrock can play a transformative role in enhancing incident response processes.
Amazon Bedrock is a fully managed service that gives organizations access to a range of foundation models, including Large Language Models (LLMs), and lets them customize and deploy these models for their own use cases. These LLMs can augment human efforts in incident response by providing real-time insights, correlating data from multiple sources, and automating tasks.
By leveraging the power of Generative AI and Amazon Bedrock, security teams can enhance their incident response capabilities, leading to faster identification, containment, and remediation of cyber threats.

Solution Overview

The flexibility of Amazon Bedrock allows security teams to start from generally available pre-trained LLMs and fine-tune them on their proprietary cybersecurity data, such as security logs, threat intelligence, vulnerability databases, and incident response procedures. These customized models can then be deployed and integrated with existing security tools and workflows, providing real-time insights, data correlation, and task automation capabilities.
We prepared three use cases to showcase how LLMs can assist security professionals in various scenarios. In the first case, the LLM explains Amazon GuardDuty findings to a security analyst, adding context and recommended actions. The second case involves the LLM assisting a Security Operations Center analyst in triaging and investigating security incidents by analyzing and correlating data from multiple sources. The third case deals with the LLM's ability to create SURICATA rules for AWS Network Firewall to block malicious traffic, based on attack data provided by the analyst.

Use Case 1: LLM Explains GuardDuty Findings

Imagine a security analyst at a large e-commerce company monitoring their AWS environment. Suddenly, Amazon GuardDuty triggers an alert indicating a potential "UnauthorizedAccess" event from an IP address outside the corporate network. The analyst needs to quickly understand the nature of this threat and determine the appropriate course of action.
Without the assistance of an LLM, the analyst would need to manually review the GuardDuty finding details, cross-reference threat intelligence databases, and potentially consult with subject matter experts (SME) to fully comprehend the implications of the alert. This process can be time-consuming and may delay critical response actions.
However, with Amazon Bedrock, the analyst can feed the GuardDuty finding details into the LLM and receive an explanation in plain language: the specific type of unauthorized access attempted, its potential impact, and recommended next steps for investigation or remediation.
This process empowers the analyst to better understand the threat and take appropriate actions together with an SME, providing a quicker way to minimize the potential damage from the unauthorized access attempt.
Figure 1: Output of the LLM using Amazon Bedrock for use case 1. We used a sample GuardDuty finding from the Stratus Red Team project as the input for this example. The full code can be found in the example notebook.
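The notebook contains the full implementation; the sketch below only illustrates the general pattern, under a few assumptions: a GuardDuty detector and finding ID you already know (shown as placeholders), an Anthropic Claude model enabled for Bedrock in your account, and boto3 credentials with the relevant permissions.

```python
import json
import boto3

# Placeholders -- replace with your own detector ID, finding ID, and model.
DETECTOR_ID = "<your-guardduty-detector-id>"
FINDING_ID = "<finding-id-to-explain>"
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

guardduty = boto3.client("guardduty")
bedrock = boto3.client("bedrock-runtime")

# Pull the raw GuardDuty finding as JSON.
finding = guardduty.get_findings(
    DetectorId=DETECTOR_ID, FindingIds=[FINDING_ID]
)["Findings"][0]

prompt = (
    "You are assisting a security analyst. Explain the following Amazon GuardDuty "
    "finding in plain language: the type of activity detected, its potential impact, "
    "and recommended next steps for investigation or remediation.\n\n"
    + json.dumps(finding, default=str, indent=2)
)

# Send the finding to the model through the Bedrock Converse API.
response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

A low temperature is used here on purpose: for explanatory summaries of security findings, you generally want concise, repeatable output rather than creative variation.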

Use Case 2: Assisting a Security Operations Center Analyst with Triage

Consider an analyst in the Security Operations Center (SOC) of a financial institution. Their primary responsibility is to triage and investigate security incidents detected by various monitoring tools, such as Security Information and Event Management (SIEM) systems, Intrusion Detection Systems (IDS), and other security solutions.
During a typical day, the analyst may encounter numerous alerts and incidents, each requiring careful analysis and correlation of data from multiple sources, including security logs, network traffic, endpoint data, threat intelligence feeds, and vulnerability databases.
Without the assistance of an LLM, the analyst would need to manually review and correlate this vast amount of data, potentially missing critical connections or taking longer to reach a conclusion. This can lead to delays in incident response and potentially allow threats to persist or escalate.
However, by integrating Amazon Bedrock with the SOC tools, the analyst can leverage LLM capabilities to streamline the triage process. The LLM can analyze and correlate data from various sources, providing the analyst with actionable insights and recommendations for further investigation or remediation steps. It is worth noting that, especially as task complexity increases, LLM evaluations should not replace human judgment and should always be reviewed by cybersecurity SMEs.
Figure 2: Output of the LLM using Amazon Bedrock for use case 2. We used static information (JSON) of APT1 samples from VirusShare as the input for this example. The full code can be found in the example notebook.
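As a rough illustration of the triage pattern (not the notebook code), the sketch below bundles artifacts from several hypothetical sources into a single prompt and asks the model to correlate them. The artifact contents, model ID, and prompt wording are all assumptions; in a real SOC these inputs would come from your SIEM, EDR, and threat intelligence tooling.

```python
import json
import boto3

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumes this model is enabled in your account
bedrock = boto3.client("bedrock-runtime")

# Illustrative artifacts only -- in practice these would be pulled from your
# SIEM, endpoint agents, and threat intelligence feeds.
artifacts = {
    "siem_alert": {
        "rule": "Multiple failed logins followed by success",
        "user": "svc-backup",
        "source_ip": "203.0.113.50",
    },
    "edr_event": {
        "host": "fin-app-01",
        "process": "powershell.exe",
        "command_line": "-enc JABjAGwAaQ...",
    },
    "threat_intel": {
        "indicator": "203.0.113.50",
        "reputation": "known scanning infrastructure",
    },
}

system = [{
    "text": "You are a SOC triage assistant. Correlate the provided artifacts, "
            "assess severity, and recommend investigation and containment steps. "
            "Flag any conclusion that needs human confirmation."
}]

response = bedrock.converse(
    modelId=MODEL_ID,
    system=system,
    messages=[{"role": "user", "content": [{"text": json.dumps(artifacts, indent=2)}]}],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Keeping the system prompt explicit about flagging uncertain conclusions is one simple way to reinforce the human-in-the-loop review mentioned above.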

Use Case 3: Creating Firewall Rules in SURICATA format for AWS Network Firewall to block malicious traffic

Consider a scenario where a security analyst at a large enterprise has detected a targeted attack attempting to exploit a known vulnerability in one of their web applications. The analyst needs to rapidly create rules, known as SURICATA rules, for the organization's AWS Network Firewall to block this malicious traffic and prevent further exploitation or data breaches. Without the assistance of an LLM, the analyst would need to manually analyze the attack traffic, identify the relevant signatures or patterns, and craft the appropriate SURICATA rules. This process can be time-consuming and prone to errors, especially in complex or rapidly evolving attack scenarios.
Once more, the analyst can leverage Amazon Bedrock and the LLM capabilities to streamline the rule creation process. The analyst can provide the LLM with details of the detected attack, such as packet captures, malware samples, or exploit code, and the LLM can analyze this data to generate appropriate SURICATA rules for the AWS Network Firewall.
The LLM can identify relevant signatures, patterns, and indicators of compromise (IOCs) from the attack data and translate them into SURICATA rule syntax. Additionally, the LLM can provide explanations and context for the generated rules, helping the analyst understand their logic and potential impact. The security analyst can therefore focus on reviewing the generated rules, shortening the reaction time.
Figure 3: Output of the LLM using Amazon Bedrock for use case 3. We used the potentially malicious IP identified in the previous use case as the input. The full example can be found in the example notebook.
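The notebook contains the full example; the sketch below shows one way the rule-generation step could look. It assumes Bedrock access to a Claude model and, for the optional deployment step shown in the trailing comment, an existing AWS Network Firewall stateful rule group; SUSPECT_IP, RULE_GROUP_ARN, and reviewed_rules are placeholders.

```python
import boto3

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumed model; other Bedrock text models work similarly
SUSPECT_IP = "203.0.113.50"  # illustrative indicator; use the IP from your own investigation

bedrock = boto3.client("bedrock-runtime")

prompt = (
    f"Write SURICATA rules compatible with AWS Network Firewall that drop all traffic "
    f"to and from the IP address {SUSPECT_IP}. Use unique signature IDs above 1000000, "
    f"add a descriptive msg field, and briefly explain each rule after the rules."
)

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.1},
)

generated_rules = response["output"]["message"]["content"][0]["text"]
print(generated_rules)

# Only after an analyst has reviewed the output and extracted the vetted rule
# lines (reviewed_rules) would they be pushed to an existing stateful rule group:
#
# nfw = boto3.client("network-firewall")
# token = nfw.describe_rule_group(RuleGroupArn=RULE_GROUP_ARN, Type="STATEFUL")["UpdateToken"]
# nfw.update_rule_group(UpdateToken=token, RuleGroupArn=RULE_GROUP_ARN,
#                       Rules=reviewed_rules, Type="STATEFUL")
```

Keeping the deployment step manual (or behind an approval workflow) preserves the review gate: the LLM drafts the rules, but a human decides what actually reaches the firewall.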

Conclusion

Amazon Bedrock offers a powerful solution for enhancing incident response capabilities in cybersecurity. By leveraging the power of LLMs, security teams can streamline processes, gain real-time insights, and automate tasks, ultimately enabling faster identification, containment, and remediation of cyber threats.
The use cases presented in this blog post showcase how Amazon Bedrock can be integrated with existing security tools and workflows to augment incident response efforts. From explaining GuardDuty findings in plain language to assisting SOC analysts in triaging and investigating incidents, the possibilities are vast.

Limitation of LLMs

Even though LLMs can reduce toil and improve and accelerate outcomes, they are known to be susceptible to hallucinations and can therefore produce content that is incorrect or nonsensical. Use LLM responses as inputs to your process, not as authoritative guidance, and keep a human security expert in the workflow.

Authors

Roberto is a Solutions Architect at Amazon Web Services (AWS), based in Switzerland. With over 6 years of expertise in consulting, cloud computing, solutions architecture, and cyber security, he is an ardent technology enthusiast. His practical knowledge spans various domains, encompassing cyber security, networking, and IoT deployments.

Luca is a Solutions Architect at Amazon Web Services (AWS), based in Switzerland. He focuses on innovation topics at AWS, especially in the areas of Data Analytics and Artificial Intelligence. Luca holds a PhD in particle physics and has 15 years of hands-on experience as a research scientist and software engineer.

Markus Rollwagen is a Senior Solutions Architect at Amazon Web Services (AWS), based in Switzerland. He enjoys deep dive technical discussions, while keeping an eye on the big picture and the customer goals. With a software engineering background he embraces Infrastructure-as-code and is passionate about all things security.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.