
Hands on guide to build & deploy a project management assistant using Generative AI (Part 2)

Learn how to build an efficient GenAI solution using Amazon Bedrock - Knowledge base and agent

Kowsalya Jaganathan
Amazon Employee
Published Nov 22, 2024
Last Modified Dec 3, 2024
Part 1 - A complete guide to design a project management assistant using Generative AI
This is the second article of the blog series on building a project management assistant powered by Generative AI. In the previous article, the groundwork was laid by exploring the solution's design considerations and providing an in-depth look at the architecture and process flows for each feature. Now, we can dive into the practical aspects of bringing the project management assistant to life.
In this article, we'll focus on the nitty-gritty details of implementation. I'll walk you through the step-by-step process of constructing the project management assistant, offering code samples, best practices, and practical tips to help you replicate and adapt this solution for your own needs. Whether you're a seasoned developer or just starting your journey with Generative AI, this guide will provide valuable insights into creating a powerful tool for optimizing resource allocation and management. Let's get started with the hands-on implementation of our AI-driven project management assistant.

Prerequisites

In this section, we will look at the prerequisites for building the core components of the project management assistant.
A. AWS Account
Refer to the AWS documentation on how to create an AWS account if you do not already have one. This solution requires an IAM user with the Administrator access policy attached.
B. Model evaluation
This solution uses a Large Language Model (LLM) within Agents for Amazon Bedrock and an embeddings model to ingest the data into the knowledge base. At the time of writing, Anthropic Claude 3 Sonnet and Amazon Titan Embeddings are the most efficient models for this use case. You can use Bedrock model evaluation jobs to evaluate the foundation models and choose the appropriate models for your applications.
C. Bedrock Model Access
To access Amazon Bedrock foundation models, you need to request access, as it is not granted by default. You can request access to foundation models through the Amazon Bedrock console. The Bedrock model access documentation provides step-by-step instructions for this process.
D. Sample Code
Clone the GitHub repository that contains the sample code for this project.
Explore the project folders and navigate to the ‘DataToUpload’ folder, which contains the following files and folders:
  • artifacts/project_management_asisstant.json - OpenAPI schema that will be used by the agent to invoke the Lambda function
  • artifacts/pma_db – SQLite database file that acts as a temporary database
  • data/Contract_* - Sample data files for the knowledge base
E. Data Source
Create an S3 bucket to act as the data source for this project. Ensure the bucket is created in the same region where you want to deploy the solution. Note the S3 bucket name, as it will be required when configuring the knowledge base. Upload the contents of the ‘DataToUpload’ folder from the sample code (refer to Prerequisite D above) to the newly created S3 bucket.
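If you prefer to script this step rather than use the console, a minimal boto3 sketch could look like the following; the region, bucket name, and file path are assumptions to adapt to your setup.

import boto3

REGION = "us-east-1"               # assumption: use your deployment region
BUCKET = "my-pma-data-source"      # hypothetical name; bucket names must be globally unique

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 rejects a LocationConstraint; every other region requires one.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

# Upload a sample file from the cloned repo, preserving the folder structure.
s3.upload_file("DataToUpload/artifacts/pma_db", BUCKET, "artifacts/pma_db")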
F. Lambda layer
The opensearch-py library is used to access the Amazon OpenSearch database from the Lambda function. Download the opensearch-py library and package it into a zip file, as sketched below.
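The original snippet is not reproduced here; the following is a minimal Python sketch that installs opensearch-py into the python/ folder structure that Lambda layers expect and packages it as layer.zip (it assumes pip is available in your local Python environment).

import shutil
import subprocess
import sys

# Lambda layers resolve Python dependencies from a top-level "python/" directory.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "opensearch-py", "--target", "layer/python",
])

# Creates layer.zip with the python/ folder at the root of the archive.
shutil.make_archive("layer", "zip", root_dir="layer")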
Upload the created 'layer.zip' file to the newly created S3 bucket (refer to Prerequisite E). This zip file will be used to create a Lambda layer, which is a .zip file archive that contains supplementary code. Refer to the AWS documentation to create a Lambda layer. The images below show the folder structure of the S3 bucket and a created Lambda layer.
Image: S3 Folder Structure
Image: Lambda Layer

Project Management Assistant

With all the prerequisites taken care of, we can now start building the core components of the Project Management Assistant. You can develop the solution either in the AWS Console (Option 1) or create and deploy the services through a CloudFormation template (Option 2). In this article, we will explore both options; choose whichever you prefer to build the solution.

Option 1: Building the solution through the AWS Console

In this section, you will build the Project Management Assistant from scratch in the AWS Console. Below is the order in which we will build this solution:
Interactive search functionality
  • Step 1: Create Amazon Bedrock knowledge base
  • Step 2: Data Ingestion
Resource allocation functionality
  • Step 3: Create Lambda function with business logic for resource allocation
  • Step 4: Create Agents for Bedrock
  • Step 5: Create an Action group to invoke the Lambda function
  • Step 6: Integrate Knowledge Base

Step 1: Create Amazon Bedrock knowledge base

In the AWS console, navigate to Amazon Bedrock and select Knowledge Bases from the left menu. Click the Create knowledge base button. Refer to the images below to guide you through this process.
Image: KnowledgeBase Creation Step 1
Amazon Bedrock knowledge base creation is a four-step process. First, enter the knowledge base details and select the IAM role and data source. For our use case, an S3 data source will be used, so select S3 as the data source, as shown in the image below.
Image: KnowledgeBase Creation Step 2
Second, configure the S3 data source. Provide the URI of the bucket created in Prerequisite E, as shown in the image below. In this blog, we will use the default chunking and parsing configuration; you can explore the chunking options available in the knowledge base and configure your preferred option. Note that you cannot modify the chunking strategy after the data source is created.
Image: KnowledgeBase Creation Step 3
Third, choose the embeddings model and vector store. For this application, Titan Text Embeddings V2 is chosen as the embeddings model and a new OpenSearch vector store will be created. Alternatively, you can choose any supported embeddings model and vector store from the available options. You can use Bedrock model evaluation jobs to evaluate the foundation models and choose the appropriate models for your applications.
Image: KnowledgeBase Creation Step 4
Lastly, review the knowledge base configuration and click the ‘Create’ button.

Step 2: Data Ingestion

The accuracy and relevance of the interactive search functionality depend on the quality of the data used for the RAG approach. Ensure that you have defined a data strategy that helps you maintain clean, valid, and relevant data in your data source.
Once the knowledge base is created, navigate to it and synchronize the data source by clicking the ‘Sync’ button under the data source section. The data in the S3 bucket is converted into embeddings using the embeddings model and stored in the vector store configured in the previous step. Keep in mind that every time you modify the contents of your data source, you must sync it so that the changes are re-indexed into the knowledge base. Syncing is incremental, so Amazon Bedrock only processes documents that were added, modified, or deleted since the last sync.
Image: KnowledgeBase Data Sync
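If you later want to trigger syncs programmatically, for example after uploading new contract documents, a boto3 sketch like the one below can start an ingestion job; the knowledge base and data source IDs are placeholders that you can look up in the Bedrock console.

import boto3

KNOWLEDGE_BASE_ID = "KBXXXXXXXX"   # placeholder
DATA_SOURCE_ID = "DSXXXXXXXX"      # placeholder

bedrock_agent = boto3.client("bedrock-agent")

# Starts an incremental sync of the S3 data source into the knowledge base.
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    dataSourceId=DATA_SOURCE_ID,
)
print(job["ingestionJob"]["status"])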

Step 3: Create a Lambda function

In the AWS console, navigate to AWS Lambda and create a function. Note that the runtime used for this sample code is Python 3.12 and the architecture is x86_64.
Image: Lambda Function Creation
Sample Lambda function code is available in the GitHub repo (under the Lambda folder). This code can be used as a starting point for the resource allocation functionality. Copy the code from the cloned repo into the newly created Lambda function and deploy it.
Image: Lambda Sample Code
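For context, a Lambda function behind an action group defined by an OpenAPI schema receives the requested API path and parameters in the event and must return its result in a specific response envelope. The sketch below only illustrates that general shape; the repo's handler contains the actual resource allocation logic.

import json

def lambda_handler(event, context):
    # The agent tells the function which OpenAPI operation it is invoking.
    api_path = event.get("apiPath")
    http_method = event.get("httpMethod")

    # Placeholder result; the sample code in the repo queries the SQLite
    # database to produce real resource allocation answers.
    result = {"message": f"Handled {http_method} {api_path}"}

    # Response envelope expected by Agents for Amazon Bedrock.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(result)}
            },
        },
    }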
Next, create a resource-based policy to allow Bedrock to invoke the Lambda function. Refer to the detailed documentation and the image below to set up the permissions appropriately.
Image: Lambda Function Permission
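If you want to add this permission with code instead of the console, a boto3 sketch along these lines should work; the function name, statement ID, and agent ARN are placeholders for your own values.

import boto3

lambda_client = boto3.client("lambda")

# Allow the Bedrock agent (identified by its ARN) to invoke the function.
lambda_client.add_permission(
    FunctionName="pma-resource-allocation",        # hypothetical function name
    StatementId="allow-bedrock-agent-invoke",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:us-east-1:123456789012:agent/AGENTID123",  # placeholder
)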

Step 4: Create Agents for Bedrock

In the AWS console, navigate to Amazon Bedrock and select Agents from the left menu. Click the Create Agent button. This creates an agent and then takes you to further configuration screens.
Image: Agents for Bedrock
Edit the newly created agent to configure it for our use case by clicking the ‘Edit in Agent Builder’ button. On this screen, you can set the instruction for the foundation model, create an action group, and integrate the knowledge base that we created earlier. Follow the images below for the configuration details.
Image: Agent Instruction
Sample Instruction:
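The exact instruction text is available in the GitHub repo; an illustrative instruction along these lines conveys the intent:

You are a project management assistant. Help project managers allocate resources to projects based on skills and availability by using the available actions, and answer questions about project contracts using the knowledge base. If you do not have enough information to answer, say so.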

Step 5: Create an Action group to invoke the Lambda function

An action group defines actions that the agent can help the user perform. Create an action group to invoke the Lambda function created earlier; the OpenAPI schema JSON file for the action group is provided in the GitHub repo. Refer to the image below to create the action group.
Image: Action Group Creation

Step 6: Integrate Knowledge Base

Next, integrate the knowledge base created in Step 1 with the Bedrock agent. With that, you have successfully configured all the required backend services for the project management assistant! Note the Agent Alias ID, Agent ID, and Region, which are needed to integrate the Bedrock agent with the front-end Streamlit application.
Image: KnowledgeBase Agent

Option 2: Create & deploy the services through a CloudFormation template

This CloudFormation template automates the creation and configuration of the AWS services required to build the solution. The CloudFormation template for our solution is available in the GitHub repo. The template takes two parameters: StackVersion and S3BucketName. StackVersion has to be a unique version number, as it is used to differentiate the resources created for each CloudFormation deployment, and S3BucketName is the name of the S3 bucket created in Prerequisite E.
The template performs the following tasks for this solution:
  • Create IAM roles
  • Create Amazon OpenSearch Serverless collection
  • Create Amazon Bedrock knowledge base
  • Initiate data ingestion for the S3 data source
  • Create Lambda function to implement resource allocation business logic
  • Create Agents for Bedrock
  • Integrate Knowledge Base with Agent
  • Create Action Group to invoke the Lambda function
  • Output the details of the key resources created in the stack
In the AWS Console, navigate to CloudFormation and create a stack using the CloudFormation template from the GitHub repo. Provide the name of the S3 bucket created in Prerequisite E as the input for the parameter ‘S3BucketName’ and a unique number as the input for the parameter ‘StackVersion’.
Image: CloudFormation
While creating the stack, ensure you select the “Delete all newly created resources” option in the stack configuration step so that resources are cleaned up automatically if the stack fails to deploy. Once the deployment completes, the stack displays the IDs of the key resources created under the 'Outputs' tab. Note the ‘AgentId’, ‘region’, and ‘AgentAliasId’, which will be used in the next step.
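If you prefer to launch the stack from code rather than the console, a boto3 sketch like the following does the equivalent; the stack name and local template path are assumptions, and OnFailure="DELETE" mirrors the “Delete all newly created resources” option.

import boto3

cfn = boto3.client("cloudformation")

# Read the template downloaded from the GitHub repo (local path is an assumption).
with open("project_management_assistant.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="pma-stack-1",                      # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "S3BucketName", "ParameterValue": "my-pma-data-source"},
        {"ParameterKey": "StackVersion", "ParameterValue": "1"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],        # the template creates IAM roles
    OnFailure="DELETE",                           # clean up if deployment fails
)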

Streamlit application integration

Now we are ready to integrate the Bedrock agent with a sample Streamlit application. Open the cloned project repository in your local IDE (VS Code or a similar IDE). Navigate to the ‘FrontEnd’ folder, which contains two files: app.py and agent.py. The Streamlit-based UI is implemented in app.py and the agent integration is handled in agent.py.
Open the “agent.py” and "app.py" files and update the values of the “AGENT_ID”, “REGION”, and “AGENT_ALIAS_ID” variables with the details from the CloudFormation stack outputs (Option 2) or from the AWS Console – Bedrock (Option 1). The UI is now integrated with the Bedrock agent and is ready to serve as the Project Management Assistant.
Image: Streamlit Sample Code
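For reference, the agent call in agent.py typically boils down to an invoke_agent request against the bedrock-agent-runtime API. The sketch below shows the general pattern rather than the repo's exact code; the ID values and the example question are placeholders.

import uuid
import boto3

AGENT_ID = "AGENTID123"        # placeholder
AGENT_ALIAS_ID = "ALIASID123"  # placeholder
REGION = "us-east-1"           # placeholder

client = boto3.client("bedrock-agent-runtime", region_name=REGION)

response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),   # one session per conversation
    inputText="Which developers are available for a new project next month?",
)

# The response is a stream of events; concatenate the returned text chunks.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)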
Ensure you have programmatic access to your AWS account before running the application. Refer to the AWS documentation to configure this access, if needed.
In the VS Code terminal, execute the following command to start the Streamlit application.
streamlit run app.py
The image below shows the Project Management Assistant UI.
Image: Project Management Assistant UI
You have successfully deployed the solution and now you can test the Project Management Assistant functionalities.

Conclusion

This blog series focused on guiding you through designing and building a GenAI Project Management Assistant to improve the efficiency of project managers. You have learned the key design and build concepts for a GenAI solution using Agents for Amazon Bedrock and knowledge bases. Keep learning!
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
