New cloud users or individuals with limited cloud experience may encounter challenges in comprehensively understanding their architecture due to various factors. For instance, frequent staff turnover can disrupt knowledge continuity, leading to gaps in understanding about deployed services and configurations. Other complexities, such as the abstract nature of virtualized resources and the vast array of available services, can also make it difficult to gain a complete picture of the architecture. These challenges can hinder effective management and optimization of cloud resources.
AWS serves customers of all sizes and levels of cloud experience. Customers that are new to the cloud often rely on their AWS Solutions Architect to provide clarity and technical guidance for solving different problems. To provide sound advice and build customer trust, the Solutions Architect must perform “discovery” to understand the customer’s pain points and gather a comprehensive understanding of the current state of their technology.
As artificial intelligence and machine learning capabilities become increasingly robust, technical stakeholders can utilize powerful, flexible tools to draw inferences from existing data. Amazon Bedrock is a fully managed service that offers serverless access to various foundation models (FMs) through an API, allowing you to quickly integrate and customize AI capabilities without managing infrastructure.
In this post, we show how to use Amazon Bedrock to analyze audit log files and generate high-level architectures. The purpose of this solution is to provide AWS users with a self-service way to better understand how their infrastructure is interconnected and what’s happening in their account.
Discovery with AWS
AWS CloudTrail logs can be invaluable for customers and users who are new to the cloud environment, especially those in the greenfield or inbound stage with limited log data. Amazon Bedrock can help you understand and interpret activities within your AWS environment, providing crucial insights into what is happening. By using CloudTrail logs, you can gain visibility into your AWS actions, monitor resource usage, and identify any unexpected or unauthorized activities. This understanding is essential for maintaining security, optimizing resource utilization, and maintaining compliance within the AWS environment.
The following are some common questions that may be asked when a new end user gets started on AWS:
What services are running in your environment?
What are the different instance types?
What AWS Regions are the services running in?
How are the services interacting with one another?
What issues or roadblocks have come up with the current service utilization?
Are there any flags in the AWS account?
What does the service utilization look like in the context of an AWS architecture?
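Several of these questions can be answered directly from raw CloudTrail records, which is what makes the logs a natural input for the model. As a rough illustration (the sample records below are synthetic and trimmed to three fields), a few lines of Python can tally the services, Regions, and API actions present in a log file:

```python
import json
from collections import Counter

def summarize_records(cloudtrail_json: str) -> dict:
    """Tally services, Regions, and event names from a raw CloudTrail log file.

    CloudTrail log files are JSON documents with a top-level "Records" list;
    each record carries fields such as eventSource, awsRegion, and eventName.
    """
    records = json.loads(cloudtrail_json).get("Records", [])
    return {
        "services": Counter(r.get("eventSource", "unknown") for r in records),
        "regions": Counter(r.get("awsRegion", "unknown") for r in records),
        "events": Counter(r.get("eventName", "unknown") for r in records),
    }

# Synthetic sample; real records contain many more fields.
sample = json.dumps({"Records": [
    {"eventSource": "ec2.amazonaws.com", "awsRegion": "us-east-1",
     "eventName": "RunInstances"},
    {"eventSource": "s3.amazonaws.com", "awsRegion": "us-east-1",
     "eventName": "CreateBucket"},
]})
summary = summarize_records(sample)
```

A summary like this answers the “what services, where” questions mechanically; the model adds value on the interpretive questions about interactions and issues.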
Although this initial discovery is important and necessary for understanding the customer’s environment, it can take many repetitive calls before actual building and problem-solving can occur.
You can save time during initial discovery and customer calls by using Amazon Bedrock to get an insightful view of what’s occurring in your environment, so you can get to building and troubleshooting immediately.
Solution overview
Anthropic Claude is a large language model (LLM) capable of analyzing large files, and with Retrieval Augmented Generation (RAG) it can generate insights based on submitted queries. In this case, Anthropic Claude can look through the files stored in your Amazon Simple Storage Service (Amazon S3) bucket and draw insights from the data, thanks to its built-in log analysis capabilities.
The output from Amazon Bedrock is continuously improving as LLMs evolve. This output can answer the preceding questions and describe what is happening within the log files. The model can also generate a high-level diagram that shows one-to-one relationship mappings based on the information in the log file. A minimum amount of data is required for the results to be accurate and to prevent hallucinations.
Amazon Bedrock streamlines log analysis by providing rapid, native insights from your AWS account data without complex setup or third-party integrations, while maintaining clear traceability to source logs.
The following diagram illustrates an example solution architecture. The end user logs in to their AWS account (either through single sign-on or directly) and creates an S3 bucket to serve as the repository for their knowledge base, then uploads the relevant log files to be analyzed. After the bucket is created, the user navigates to Amazon Bedrock and creates a knowledge base backed by that S3 bucket. When the knowledge base is ready, the user can either get log insights directly from the Amazon Bedrock chat interface or build a RAG chatbot application for external use with AWS Lambda and a chatbot framework of their choice.
Architecture
The user uses Amazon Bedrock Knowledge Bases to obtain log insights from log files that are already in their AWS account. In the following sections, we demonstrate how to experiment with a model in Amazon Bedrock. After you have set up the knowledge base with the appropriate log files in an S3 bucket, you can start retrieving log insights.
Prerequisites
To set up this solution, complete the following prerequisites:
Make sure AWS CloudTrail logs are enabled and delivered to a centralized location for the accounts and Regions you wish to retrieve logs from.
Enable access to Amazon Bedrock models. Amazon Bedrock users need to request access to FMs before they are available for use. This is a one-time action and takes less than a minute. For this solution, we use the Amazon Titan Embeddings G1 – Text and Anthropic Claude 3.5 Sonnet models on Amazon Bedrock. For more information, refer to Manage access to Amazon Bedrock foundation models.
Choose a Region from the Amazon Bedrock supported Regions.
Download the appropriate log files to your local machine from your AWS environment (CloudTrail logs).
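If your trail delivers logs to S3, this download step can be scripted. The following is a minimal sketch, assuming boto3 is installed, your credentials can read the trail bucket, and the bucket name shown is a placeholder; the key prefix follows the standard layout CloudTrail uses when writing to S3:

```python
import datetime

def cloudtrail_prefix(account_id: str, region: str, day: datetime.date) -> str:
    """Build the standard S3 key prefix CloudTrail uses for a day's log files."""
    return (f"AWSLogs/{account_id}/CloudTrail/{region}/"
            f"{day.year}/{day.month:02d}/{day.day:02d}/")

def download_logs(bucket: str, prefix: str, dest_dir: str = ".") -> list[str]:
    """Download and gunzip every CloudTrail log object under the prefix.

    Requires boto3 and credentials with s3:ListBucket and s3:GetObject
    on the trail bucket.
    """
    import gzip
    import pathlib
    import boto3  # imported here so the pure helper above stays dependency-free

    s3 = boto3.client("s3")
    saved = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            data = gzip.decompress(body) if obj["Key"].endswith(".gz") else body
            out = pathlib.Path(dest_dir) / pathlib.Path(obj["Key"]).name.removesuffix(".gz")
            out.write_bytes(data)
            saved.append(str(out))
    return saved

# Hypothetical bucket name -- replace with your own trail bucket:
# download_logs("my-cloudtrail-bucket",
#               cloudtrail_prefix("111122223333", "us-east-1", datetime.date(2024, 5, 1)))
```

Decompressing the `.json.gz` objects up front matters because the knowledge base data source accepts plain-text formats, not gzip archives.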
Query the model with an Amazon Bedrock playground
Amazon Bedrock playgrounds provide you a console environment to experiment with running inference on different models and with different configurations, before deciding to use them in an application. On the Amazon Bedrock console, you can access the playgrounds by choosing Playgrounds in the navigation pane. You can also navigate directly to the playground when you choose a model from a model details page or the examples page. There are playgrounds for text, chat, and image models. Within each playground, you can enter prompts and experiment with inference parameters. Prompts are usually one or more sentences of text that set up a scenario, question, or task for a model.
To start using Amazon Bedrock playgrounds, complete the following steps:
On the Amazon Bedrock console, choose Chat under Playgrounds in the navigation pane.
Choose your model (for this post, we use Anthropic Claude 3.5 Sonnet) and type of throughput.
Choose Apply.
Insert log files with up to a 200,000-token context limit.
Query the LLM to draw an AWS architecture and gain insight with the following prompts:
“Explain the AWS architecture in the log file.”
“Draw the AWS architecture from the log file.”
“Explain AWS service interactions in the log file.”
The results will populate with a visualization, as shown in the following screenshot.
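If you prefer to script these queries instead of using the console playground, the Amazon Bedrock Converse API offers a programmatic path. The following is a minimal sketch, assuming boto3 is installed, credentials are configured, and the Claude 3.5 Sonnet model ID shown is available in your chosen Region (model IDs vary by Region and over time):

```python
def build_messages(prompt: str) -> list[dict]:
    """Shape a single-turn user prompt for the Bedrock Converse API."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_claude(prompt: str,
               model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0") -> str:
    """Send the prompt to Claude through Amazon Bedrock.

    Requires boto3, AWS credentials, and model access granted in the Region.
    """
    import boto3  # imported here so build_messages stays dependency-free

    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 1024},
    )
    return resp["output"]["message"]["content"][0]["text"]

# ask_claude("Explain the AWS architecture in the log file: ...")
```

For log files, you would paste or read the log text into the prompt, subject to the same context limit noted above.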
Create and use a knowledge base
Amazon Bedrock Knowledge Bases enables you to amass data sources into a repository of information. With knowledge bases, you can build an application that takes advantage of RAG, a technique in which the retrieval of information from data sources augments the generation of model responses.
A knowledge base can be used not only to answer user queries and analyze documents, but also to augment prompts provided to FMs by providing context to the prompt. Knowledge base responses also come with citations, so you can find further information by looking up the exact text that a response is based on and check that the response makes sense and is factually correct.
A prompt template describes how the agent should evaluate and use the prompt that it receives at the step for which you’re editing the template. Templates will differ depending on the model that you’re using.
We ask the following questions:
“What is the structure of a typical CloudTrail log entry?”
“Can you describe the components involved in the architecture inferred from these CloudTrail logs?”
“Can you outline the flow of a typical request through the architecture depicted in the CloudTrail logs?”
“Is there evidence of networking configurations such as VPC, subnets, or security groups in these CloudTrail logs?”
“How are the different AWS services (EC2, RDS, ELB) interconnected based on the events recorded?”
Complete the following steps to create your knowledge base:
On the Amazon Bedrock console, choose Knowledge bases under Builder tools in the navigation pane.
Choose Create knowledge base.
On the Provide knowledge base details page, provide a name, description, and permissions for your knowledge base, then choose Next.
On the Configure data source page, provide the details of your Amazon S3 data source, then choose Next.
Log files are stored in an S3 bucket in a format type that is readable to Amazon Bedrock, which includes .txt, .md, .html, .doc/.docx, .csv, .xls/.xlsx, and .pdf. File size can’t exceed 50 MB each. The S3 bucket is used as a knowledge base for Amazon Bedrock to make queries against using only the relevant data (in this case, the log files). For more information about data sources, see Set up a data source connector for your knowledge base.
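Before syncing the data source, it can help to screen candidate files against these limits. The following is a small sketch of such a check (the extension set and size cap simply mirror the constraints listed above):

```python
from pathlib import Path

# Formats accepted by the knowledge base data source, per the limits above.
ALLOWED = {".txt", ".md", ".html", ".doc", ".docx", ".csv", ".xls", ".xlsx", ".pdf"}
MAX_BYTES = 50 * 1024 * 1024  # 50 MB per-file limit

def is_ingestible(path: Path, size_bytes: int) -> bool:
    """Check a candidate file against the extension and size limits."""
    return path.suffix.lower() in ALLOWED and size_bytes <= MAX_BYTES
```

Running this over your local log directory before upload avoids discovering ingestion failures only after the sync job runs.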
On the Select embeddings model and configure vector store page, choose your embeddings model and vector database creation method, then choose Next.
For this post, we create the knowledge base using the Amazon Titan Text embeddings model. The embeddings are stored in an Amazon OpenSearch Serverless vector database that is automatically generated—no additional configuration is required.
On the Review and create page, review the final configurations and choose Create knowledge base.
To test the knowledge base, choose Select model in the Test knowledge base pane.
Choose the Anthropic Claude 3 Sonnet model and choose Apply.
You can submit chat queries to Amazon Bedrock using Amazon Bedrock Agents to obtain log insights.
The following screenshot shows the results in the chat window of the Amazon Bedrock agent.
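The same knowledge base queries can also be scripted with the RetrieveAndGenerate API. The following is a minimal sketch, assuming boto3, valid credentials, and a knowledge base ID and model ARN taken from your own account (both values below are placeholders):

```python
def rag_config(kb_id: str, model_arn: str) -> dict:
    """Build the RetrieveAndGenerate configuration for a knowledge base query."""
    return {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": kb_id,
            "modelArn": model_arn,
        },
    }

def query_knowledge_base(question: str, kb_id: str, model_arn: str) -> dict:
    """Query the knowledge base and return the answer with its citations."""
    import boto3  # imported here so rag_config stays dependency-free

    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration=rag_config(kb_id, model_arn),
    )
    return {"answer": resp["output"]["text"], "citations": resp.get("citations", [])}

# query_knowledge_base("How are EC2, RDS, and ELB interconnected in these logs?",
#                      "KB1234ABCD", "arn:aws:bedrock:us-east-1::foundation-model/...")
```

The citations returned alongside the answer serve the same traceability purpose as the source links shown in the console chat window.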
Clean up
Failing to delete resources such as the S3 bucket, OpenSearch Serverless collection, and knowledge base will incur charges. To clean up these resources, complete the following steps:
On the Amazon S3 console, empty and delete the S3 bucket you created.
On the OpenSearch Service console, delete the collection that was created as part of setting up the knowledge base.
On the Amazon Bedrock console, delete the knowledge base you created.
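These cleanup steps can also be scripted. The following is a minimal sketch of emptying and deleting the S3 bucket with boto3; the knowledge base and OpenSearch Serverless collection deletions follow the same pattern with the bedrock-agent and opensearchserverless clients. The bucket name and required permissions are assumptions:

```python
def batches(items: list, size: int = 1000):
    """Yield chunks of at most `size` items (S3 DeleteObjects accepts 1,000 keys per call)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def empty_and_delete_bucket(bucket: str) -> None:
    """Delete every object in the bucket, then the bucket itself.

    Requires boto3 and s3:ListBucket, s3:DeleteObject, and s3:DeleteBucket
    permissions on the bucket.
    """
    import boto3  # imported here so the batches helper stays dependency-free

    s3 = boto3.client("s3")
    keys = [obj["Key"]
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket)
            for obj in page.get("Contents", [])]
    for chunk in batches(keys):
        s3.delete_objects(Bucket=bucket,
                          Delete={"Objects": [{"Key": k} for k in chunk]})
    s3.delete_bucket(Bucket=bucket)

# empty_and_delete_bucket("my-log-knowledge-base-bucket")  # hypothetical name
```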
Conclusion
Log files can be tedious to go through, despite containing insightful and accurate information about what’s occurring in your AWS landscape. With Amazon Bedrock, you can seamlessly integrate the log files in your AWS account to gain valuable log insights within minutes.
In this post, we showed how to use an Amazon Bedrock playground to experiment with a model. We also demonstrated how to set up a knowledge base with files stored in an Amazon S3 bucket, and get started with retrieving log insights.
Through the interactive features of Amazon Bedrock Agents and the built-in capabilities of Amazon Bedrock Knowledge Bases, the speed for gaining log insights is significantly reduced and provides an additional view of granularity all in one location. Although this post focuses specifically on CloudTrail log integrations, you can use Amazon Bedrock with log insights across not only AWS services but third-party applications as well.
To get started generating valuable log insights with Amazon Bedrock, explore the Amazon Bedrock User Guide to see how Amazon Bedrock can help your organization reduce overhead. Take the first step in discovering one of the many ways generative AI can reduce build time with your cloud infrastructure.