Streamline Generative AI with bedrock-genai-builder for AWS


bedrock-genai-builder: Simplify generative AI app development on AWS Bedrock with a structured framework, prompt service flows, and seamless model integration.

Published Jun 3, 2024
Last Modified Jun 16, 2024

Introduction

Generative AI has revolutionized the way we build intelligent applications, enabling the creation of highly interactive and personalized user experiences. However, developing and deploying generative AI models can be complex and time-consuming. This is where the bedrock-genai-builder Python package comes in, offering a streamlined solution for building generative AI applications using AWS Bedrock.
In this article, we’ll explore the key features and benefits of the bedrock-genai-builder package and how it simplifies the development process for generative AI applications.

What is bedrock-genai-builder?

bedrock-genai-builder is a Python package designed to facilitate the development and deployment of generative AI applications using AWS Bedrock. It provides a well-framed, lightweight structure that encapsulates generative AI operations, offering a structured and efficient approach to building and managing generative AI models.
The main goal of bedrock-genai-builder is to enhance generative AI development using AWS Bedrock, making it easier for developers to integrate generative AI capabilities into their applications.

Key Features of bedrock-genai-builder

Project Structure Generation

bedrock-genai-builder simplifies the setup process by generating an optimized project structure tailored for different application types. By running a simple command, developers can create a well-organized and modular directory structure that adheres to best practices.

Prompt Service Framework

The package includes a robust prompt service framework that allows developers to define and execute predefined prompt flows for generating text completions. Developers can configure prompt templates, input variables, and allowed foundation model providers in the prompt_store.yaml file. To execute a prompt service flow, developers can use the run_service function.

Direct Model Invocation Utility

bedrock-genai-builder also provides a utility function for directly invoking foundation models and generating text completions based on a provided prompt. Developers can use the generate_text_completion function to quickly generate text completions without additional configuration.

Getting Started with bedrock-genai-builder

To start using the bedrock-genai-builder package (https://pypi.org/project/bedrock-genai-builder/), follow these step-by-step instructions:
Installation: Begin by installing the bedrock-genai-builder package using pip. Open your terminal and run the following command:
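Install the package from PyPI (the package name matches the PyPI listing linked above):

```shell
pip install bedrock-genai-builder
```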
 Project Structure Generation: Navigate to your desired project folder (root folder) in the terminal. Run one of the following commands based on your application type:
  • For AWS Lambda applications:
  • For non-Lambda applications:
Either command generates the necessary files and folders for your generative AI application, including:
  • bedrock_util/: A directory containing dependencies and utilities for prompt service and generative AI API operations.
  • Additional folders related to boto3 and other dependencies.
  • prompt_store.yaml: A configuration file for storing prompt templates and service flows.
  • lambda_function.py (for AWS Lambda applications) or bedrock_app.py (for non-Lambda applications) as the main entry point for your application.

Prompt Service Framework

The bedrock-genai-builder package includes a robust prompt service framework that allows developers to define and execute predefined prompt flows for generating text completions. This framework provides a structured approach to configuring and managing prompt templates, input variables, and allowed foundation model providers.

Configuring Prompt Service Flows

The prompt_store.yaml file serves as a blueprint for defining prompt service flows. Each prompt service flow is defined under the PromptServices key and includes the following fields:
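Based on the field descriptions that follow, a prompt service flow entry takes roughly this shape (angle brackets mark placeholders; this is an illustrative sketch, not the package's exact schema):

```yaml
PromptServices:
  <serviceID>:
    prompt: |
      <prompt template text, with input variables in curly braces, e.g. {input}>
    inputVariables:
      - <variable name>
    guardrailIdentifier: <optional guardrail ID>
    guardrailVersion: "<required if guardrailIdentifier is set>"
    allowedFoundationModelProviders:
      - Amazon
      - Anthropic
```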
Let’s go through each field in detail:
  • <serviceID>: A unique identifier for the prompt service flow. It should be a meaningful name that describes the purpose of the service.
  • prompt: The prompt template for the service. It defines the structure and content of the prompt that will be sent to the foundation model for text completion. You can include input variables within the prompt using curly braces (e.g., {input}).
  • inputVariables: A list of input variable names required by the prompt. These variables will be provided when executing the prompt service flow.
  • guardrailIdentifier (optional): The guardrail identifier (string data type) created in AWS Bedrock to filter and secure prompt input and model responses. Guardrails help ensure that the generated text adheres to specific guidelines and constraints.
  • guardrailVersion (optional but required if guardrailIdentifier is mentioned): The version of the guardrail (string data type). It allows you to specify a specific version of the guardrail to be used.
  • allowedFoundationModelProviders: A list of allowed foundation model providers for the service. It specifies which providers can be used to generate text completions for this prompt service flow. Allowed values are "Amazon", "Meta", "Anthropic", "Mistral AI", and "Cohere".
Here are a few examples of prompt service flows defined in the prompt_store.yaml file:
Example 1: Math Assistance
Example 2: Product Description Generator
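A sketch of what these two flows could look like (the service IDs, prompt wording, input variables, and provider lists are illustrative assumptions, not the article's original listings):

```yaml
PromptServices:
  mathService:
    prompt: |
      You are a helpful math assistant. Solve the following problem
      step by step: {input}
    inputVariables:
      - input
    allowedFoundationModelProviders:
      - Amazon
      - Anthropic
  productDescriptionService:
    prompt: |
      Write a compelling product description for {productName}.
      Key features: {features}
    inputVariables:
      - productName
      - features
    allowedFoundationModelProviders:
      - Amazon
      - Meta
      - Anthropic
```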

Executing Prompt Service Flows

To execute a prompt service flow, use the run_service function from the bedrock_util.bedrock_genai_util.prompt_service module. Here are two examples:
Calling the math assistance service:
Calling the product description service:
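A minimal sketch of both calls. The service IDs, input variables, and keyword-argument call style are assumptions based on the parameter descriptions that follow; the Bedrock runtime client comes from boto3.

```python
import boto3
from bedrock_util.bedrock_genai_util.prompt_service import run_service

# Bedrock runtime client used for all calls (requires AWS credentials)
bedrock_client = boto3.client("bedrock-runtime")

# Math assistance service (service ID and input variable are illustrative)
math_result = run_service(
    bedrock_client,
    service_id="mathService",
    model_id="amazon.titan-text-premier-v1:0",
    prompt_input_variables={"input": "What is 15% of 240?"},
)
print(math_result)

# Product description service (service ID and variables are illustrative)
description = run_service(
    bedrock_client,
    service_id="productDescriptionService",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    prompt_input_variables={
        "productName": "Trailblazer hiking boots",
        "features": "waterproof, lightweight, ankle support",
    },
)
print(description)
```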
The run_service function has the following method signature:
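Reconstructed from the parameter descriptions below (the exact signature is not reproduced in this article), it is roughly:

```python
def run_service(bedrock_client, service_id, model_id,
                prompt_input_variables=None, **model_kwargs):
    ...
```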
Let's look at each parameter in detail:
  • bedrock_client: The Bedrock runtime client used for interacting with AWS Bedrock. You need to initialize the Bedrock client before calling the run_service function. The client is responsible for making API calls to AWS Bedrock to generate text completions.
  • service_id: The ID of the prompt service flow to run. This ID should match the <serviceID> defined in the prompt_store.yaml file for the desired prompt service flow. It identifies which prompt template and configuration to use for generating the text completion.
  • model_id: The ID of the foundation model to use for text completion generation. This ID specifies the specific model to be used for generating the text completion. It should be a valid model ID supported by AWS Bedrock, such as "amazon.titan-text-premier-v1:0" for Amazon's Titan model.
  • prompt_input_variables (optional): A dictionary containing the input variables required by the prompt template. The keys of the dictionary should match the input variable names defined in the inputVariables field of the prompt service flow in the prompt_store.yaml file. The corresponding values should be the actual values for those input variables. If the prompt template doesn't require any input variables, you can omit this parameter or pass an empty dictionary.
  • **model_kwargs (optional): Additional keyword arguments specific to the foundation model provider. These arguments are passed directly to the underlying API call for generating the text completion. The available keyword arguments may vary depending on the foundation model provider. You can refer to the documentation of the specific provider for more information on supported keyword arguments.
The run_service function performs the following steps:
  1. It retrieves the prompt service configuration from the prompt_store.yaml file based on the provided service_id.
  2. It validates the model_id against the allowedFoundationModelProviders list defined in the prompt service configuration. If the model ID is not allowed for the specified service, an exception is raised.
  3. It formats the prompt template by replacing the input variable placeholders with the corresponding values from the prompt_input_variables dictionary.
  4. It constructs the API request payload based on the formatted prompt, guardrail identifier, guardrail version, and any additional model-specific keyword arguments.
  5. It makes an API call to AWS Bedrock using the Bedrock runtime client to generate the text completion.
  6. It returns the generated text completion as the result.
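Conceptually, the lookup, validation, and formatting steps can be sketched in a few lines of plain Python. This is an illustration of the steps above, not the package's actual implementation; the store contents and provider mapping are assumptions.

```python
# Illustrative in-memory stand-in for prompt_store.yaml
PROMPT_STORE = {
    "mathService": {
        "prompt": "Solve step by step: {input}",
        "inputVariables": ["input"],
        "allowedFoundationModelProviders": ["Amazon", "Anthropic"],
    }
}

# Maps a model ID prefix to its provider name (simplified assumption)
PROVIDER_PREFIXES = {"amazon": "Amazon", "anthropic": "Anthropic", "meta": "Meta"}

def build_prompt(service_id, model_id, prompt_input_variables):
    config = PROMPT_STORE[service_id]                      # step 1: look up config
    provider = PROVIDER_PREFIXES[model_id.split(".")[0]]
    if provider not in config["allowedFoundationModelProviders"]:
        raise ValueError(f"{model_id} not allowed for {service_id}")  # step 2: validate
    return config["prompt"].format(**prompt_input_variables)          # step 3: format

print(build_prompt("mathService", "amazon.titan-text-premier-v1:0",
                   {"input": "What is 15% of 240?"}))
# Solve step by step: What is 15% of 240?
```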
By using the run_service function, you can easily execute prompt service flows defined in the prompt_store.yaml file and generate text completions based on the provided input variables and selected foundation model.

Direct Model Invocation

In addition to the prompt service framework, the bedrock-genai-builder package provides a utility function called generate_text_completion for directly invoking foundation models and generating text completions based on a provided prompt. This function allows you to bypass the prompt service configuration and directly interact with the foundation models.
To use the generate_text_completion function, you need to import it from the bedrock_util.bedrock_genai_util.TextCompletionUtil module. Here's the import statement:
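Using the module path given above, the import statement is:

```python
from bedrock_util.bedrock_genai_util.TextCompletionUtil import generate_text_completion
```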
This import statement allows you to access the generate_text_completion function in your code.
The generate_text_completion function has the following method signature:
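Reconstructed from the parameter descriptions below (the exact signature is not reproduced in this article), it is roughly:

```python
def generate_text_completion(bedrock_client, model, prompt,
                             guardrail_identifier=None,
                             guardrail_version=None,
                             **model_kwargs):
    ...
```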
Let's look at each parameter in detail:
  • bedrock_client: The Bedrock runtime client used for interacting with AWS Bedrock. You need to initialize the Bedrock client before calling the generate_text_completion function. The client is responsible for making API calls to AWS Bedrock to generate text completions.
  • model: The ID of the foundation model to use for text completion generation. This ID specifies the specific model to be used for generating the text completion. It should be a valid model ID supported by AWS Bedrock, such as "amazon.titan-text-premier-v1:0" for Amazon's Titan model.
  • prompt: The input prompt for generating the text completion. This is the text that will be provided to the foundation model as the starting point for generating the completion. It can be a string or a list of strings, depending on the requirements of the specific foundation model.
  • guardrail_identifier (optional): The guardrail identifier (string data type) created in AWS Bedrock to filter and secure prompt input and model responses. Guardrails help ensure that the generated text adheres to specific guidelines and constraints. If you don't want to apply any guardrails, you can omit this parameter or set it to None.
  • guardrail_version (optional): The version of the guardrail (string data type). It allows you to specify a specific version of the guardrail to be used. If you provide a guardrail_identifier, you must also provide the corresponding guardrail_version. If no guardrails are used, you can omit this parameter or set it to None.
  • **model_kwargs (optional): Additional keyword arguments specific to the foundation model provider. These arguments are passed directly to the underlying API call for generating the text completion. The available keyword arguments may vary depending on the foundation model provider. You can refer to the documentation of the specific provider for more information on supported keyword arguments.
The generate_text_completion function performs the following steps:
  1. It determines the foundation model provider based on the provided model ID.
  2. It constructs the API request payload based on the provided prompt, guardrail_identifier, guardrail_version, and any additional model-specific keyword arguments.
  3. It makes an API call to AWS Bedrock using the Bedrock runtime client to generate the text completion.
  4. It returns the generated text completion as the result.
Here’s an example of how to use the generate_text_completion function:
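A sketch of the call described below, using the Titan model ID and the "abcdefg"/"1" guardrail mentioned in the text; the prompt string itself is an illustrative assumption.

```python
import boto3
from bedrock_util.bedrock_genai_util.TextCompletionUtil import generate_text_completion

# Bedrock runtime client (requires AWS credentials)
bedrock_client = boto3.client("bedrock-runtime")

result = generate_text_completion(
    bedrock_client,
    model="amazon.titan-text-premier-v1:0",
    prompt="Explain machine learning in simple terms.",  # illustrative prompt
    guardrail_identifier="abcdefg",
    guardrail_version="1",
)
print(result)
```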
In this example, we directly invoke the “amazon.titan-text-premier-v1:0” foundation model to generate a text completion for the provided prompt. We also apply a guardrail with the identifier “abcdefg” and version “1” to filter and secure the generated text.
The generate_text_completion function takes care of making the API call to AWS Bedrock and returns the generated text completion.
By using the generate_text_completion function, you can directly interact with foundation models and generate text completions without the need for a predefined prompt service configuration. This provides flexibility when you want to use custom prompts or when you don't require the structure and input variable handling provided by the prompt service framework.

Agent Service

The bedrock-genai-builder package includes an Agent Service feature that allows you to define and execute agent-based operations using user-defined functions. The package automatically generates the necessary configuration files, agent_store.yaml and tool_spec.json, to facilitate the creation and maintenance of agent services.

Running an Agent Service

To run an Agent Service, you can use the run_agent function provided by the bedrock-genai-builder package. Here's the function signature:
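Reconstructed from the parameter descriptions below (the exact signature is not reproduced in this article), it is roughly:

```python
def run_agent(bedrock_client, model_id, agent_service_id,
              function_list, prompt, inference_config=None):
    ...
```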
  • bedrock_client: The Bedrock runtime client used for interacting with AWS Bedrock.
  • model_id: The ID of the foundation model to use for the agent operation.
  • agent_service_id: The ID of the agent service to run. It should match the <agent service id> defined in the agent_store.yaml file.
  • function_list: A list of user-defined functions that can be used as tools in the agent operation. These functions should be defined in the tool_spec.json file and listed in the allowedTools field of the corresponding agent service in the agent_store.yaml file.
  • prompt: The input prompt for the agent operation.
  • inference_config (optional): A dictionary containing inference configuration parameters. The available parameters are:
    • maxTokens: The maximum number of tokens to allow in the generated response.
    • stopSequences: A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
    • temperature: The likelihood of the model selecting higher-probability options while generating a response.
    • topP: The percentage of most-likely candidates that the model considers for the next token.

Agent Service Configuration

The bedrock-genai-builder package automatically generates two configuration files, agent_store.yaml and tool_spec.json, to define and manage agent services.
AGENT_STORE.YAML
The agent_store.yaml file defines the agent services and their associated instructions and allowed tools. It follows this format:
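Based on the key descriptions that follow, the file takes roughly this shape (angle brackets mark placeholders; this is an illustrative sketch, not the package's exact schema):

```yaml
AgentServices:
  <agent service id>:
    agentInstruction: |
      <multi-line instruction for the agent>
    allowedTools:
      - <function name>
      - <function name>
```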
  • AgentServices: This is the root key for the YAML file. It contains one or more agent services.
  • <agent service id>: This is the key for a specific agent service. It is used to identify and reference the service.
  • agentInstruction: This is the instruction or description for the agent service. It can be a multi-line string using the pipe (|) character.
  • allowedTools: This is the list of allowed tools (functions) for the agent service. Each tool is listed as an item in the list.
Example agent_store.yaml file:
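A sketch of such a file, using the weatherService example discussed later in this article; the instruction text is an illustrative assumption.

```yaml
AgentServices:
  weatherService:
    agentInstruction: |
      You are a weather assistant. Use the available tools to look up
      current conditions and forecasts for the requested location.
    allowedTools:
      - getWeather
      - getForecast
```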
TOOL_SPEC.JSON
The tool_spec.json file defines the agent functions and their configurations. It follows this format:
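Based on the key descriptions that follow, the file takes roughly this shape (angle brackets mark placeholders; the exact property schema is an assumption):

```json
{
  "agentFunctions": {
    "<function name>": {
      "description": "<what the function does>",
      "functionProperties": {
        "<parameter name>": { "type": "<data type>" }
      },
      "requiredProperties": ["<parameter name>"]
    }
  }
}
```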
  • agentFunctions: This is the root key for the JSON file. It contains one or more agent functions.
  • <function name>: This is the key for a specific agent function. It should match the function name used in the allowedTools list in the agent_store.yaml file.
  • description: This is a brief description of the agent function.
  • functionProperties: This is an object that defines the parameters of the agent function and their data types.
  • requiredProperties: This is an array that lists the required parameters for the agent function.
Example tool_spec.json file:
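A sketch of such a file for the getWeather and getForecast functions used in the weatherService example later in this article; the descriptions and parameter names are illustrative assumptions.

```json
{
  "agentFunctions": {
    "getWeather": {
      "description": "Returns the current weather for a location.",
      "functionProperties": {
        "location": { "type": "string" }
      },
      "requiredProperties": ["location"]
    },
    "getForecast": {
      "description": "Returns a multi-day forecast for a location.",
      "functionProperties": {
        "location": { "type": "string" },
        "days": { "type": "integer" }
      },
      "requiredProperties": ["location"]
    }
  }
}
```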

How run_agent Works

When you call the run_agent function, it performs the following steps:
  1. It retrieves the agent service configuration from the agent_store.yaml file based on the provided agent_service_id.
  2. It validates the function_list against the allowedTools defined in the agent service configuration and the function definitions in the tool_spec.json file. Only the functions that are allowed and properly defined will be used as tools in the agent operation.
  3. It constructs the agent prompt by combining the agentInstruction from the agent service configuration and the provided prompt.
  4. It invokes the specified foundation model (model_id) using the Bedrock runtime client (bedrock_client) and passes the constructed agent prompt, the validated function_list, and any additional inference_config parameters.
  5. The foundation model processes the agent prompt and generates a response based on the available tools and the provided prompt.
  6. The generated response is returned as the result of the run_agent function.

Example Usage

Here's an example of how to use the run_agent function:
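A minimal sketch of the scenario described below. The run_agent import path is an assumption (the article does not give it), and the tool functions are stubbed for illustration.

```python
import boto3
from bedrock_util.bedrock_genai_util.agent_service import run_agent  # module path is an assumption

def getWeather(location: str) -> str:
    """Return the current weather for a location (stubbed for illustration)."""
    return f"Sunny, 22 degrees C in {location}"

def getForecast(location: str, days: int = 3) -> str:
    """Return a multi-day forecast for a location (stubbed for illustration)."""
    return f"{days}-day forecast for {location}: mild and dry"

# Bedrock runtime client (requires AWS credentials)
bedrock_client = boto3.client("bedrock-runtime")

result = run_agent(
    bedrock_client,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    agent_service_id="weatherService",
    function_list=[getWeather, getForecast],
    prompt="What is the current weather in New York, and what is the 3-day forecast?",
    inference_config={"maxTokens": 512, "temperature": 0.2},  # illustrative values
)
print(result)
```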
In this example:
  • We define two user-defined functions, getWeather and getForecast, that retrieve weather information for a given location.
  • We initialize the Bedrock runtime client (bedrock_client) and specify the foundation model ID (model_id) as "anthropic.claude-3-sonnet-20240229-v1:0".
  • We set the agent_service_id to "weatherService", which corresponds to the agent service defined in the agent_store.yaml file.
  • We provide the function_list containing the getWeather and getForecast functions, which are allowed tools for the "weatherService" agent service.
  • We specify the prompt that asks for the current weather and a 3-day forecast for New York.
  • We call the run_agent function with the provided parameters and store the result in the result variable.
  • Finally, we print the generated response.

Benefits of Agent Service

The Agent Service feature in bedrock-genai-builder offers several benefits:
  1. Modularity and Reusability: By defining agent services and their allowed tools in the agent_store.yaml file, you can create modular and reusable agent configurations. These configurations can be easily shared and reused across different projects or teams.
  2. Separation of Concerns: The agent_store.yaml file focuses on defining the agent services and their instructions, while the tool_spec.json file defines the actual implementation of the agent functions. This separation of concerns allows for better organization and maintainability of the agent service configurations and function implementations.
  3. Flexibility and Extensibility: The Agent Service feature provides flexibility in defining custom agent services with specific instructions and allowed tools. You can easily extend the functionality of agent services by adding new functions to the tool_spec.json file and updating the allowedTools list in the agent_store.yaml file.
  4. Integration with Foundation Models: The run_agent function seamlessly integrates with foundation models provided by AWS Bedrock. It allows you to leverage the power of these models while providing a structured approach to define and execute agent-based operations.
  5. Scalability and Maintainability: The configuration-driven approach of the Agent Service feature enables scalability and maintainability. As your agent services grow in complexity, you can easily manage and update the configurations in the agent_store.yaml and tool_spec.json files without modifying the underlying code.
By utilizing the Agent Service feature in bedrock-genai-builder, you can create powerful and flexible agent-based applications that combine the capabilities of foundation models with custom-defined agent services and tools.

Deploying to AWS

The bedrock-genai-builder package provides support for deploying both Lambda and non-Lambda applications in AWS. For Lambda applications, the package generates a lambda_function.py file as the main entry point. This file contains the necessary code to handle Lambda function invocations and integrate with the bedrock-genai-builder package. To deploy a Lambda application, you can package the generated files and dependencies into a ZIP file and upload it to AWS Lambda. You can then configure the Lambda function with the appropriate runtime, handler, and other settings. The bedrock-genai-builder package takes care of the integration with AWS Bedrock and provides a seamless way to generate text completions within the Lambda function.
For non-Lambda applications, the bedrock-genai-builder package generates a bedrock_app.py file as the main entry point. This file serves as the starting point for your application and can be run on various compute services in AWS, such as EC2 instances, ECS tasks, or EKS pods. To deploy a non-Lambda application, you can package the generated files and dependencies into a suitable format (e.g., Docker container) and deploy it to the desired compute service. The bedrock_app.py file contains the necessary code to initialize the Bedrock runtime client and interact with the bedrock-genai-builder package. You can extend and customize this file based on your application's specific requirements. The bedrock-genai-builder package provides the necessary utilities and frameworks to generate text completions and integrate with AWS Bedrock within your non-Lambda application.

Benefits of Using bedrock-genai-builder

The bedrock-genai-builder package offers several key benefits that make it a valuable tool for developing generative AI applications using AWS Bedrock:
  1. Streamlined Development Process: bedrock-genai-builder simplifies the development process by providing a well-structured project setup and a set of tools and utilities specifically designed for generative AI applications. It abstracts away the complexities of interacting with different foundation model providers and provides a consistent and intuitive interface for generating text completions. This allows developers to focus on the core logic of their applications rather than worrying about the low-level details of integrating with AWS Bedrock.
  2. Rapid Project Setup: With the project structure generation feature of bedrock-genai-builder, developers can quickly set up a new generative AI project with just a single command. The package automatically creates the necessary files and directories based on the specified application type (Lambda or non-Lambda), following best practices and conventions. This saves time and effort in manually setting up the project structure and ensures a consistent and organized codebase.
  3. Prompt Service Framework: The prompt service framework provided by bedrock-genai-builder enables developers to define and execute predefined prompt flows for generating text completions. It allows developers to configure prompt templates, input variables, and allowed foundation model providers in a declarative manner using the prompt_store.yaml file. This framework promotes code reusability, maintainability, and modularity by separating the prompt configuration from the application logic.
  4. Agent Service Framework: The Agent Service framework offers significant benefits for building agent-based AI applications. It lets developers define and execute agent services through a configuration-driven approach, promoting modularity, reusability, and scalability. Separating the agent service configuration (agent_store.yaml) from the function implementations (tool_spec.json) keeps configurations and code easier to organize and maintain, while the run_agent function integrates the resulting services seamlessly with foundation models. Together, these pieces let developers combine the capabilities of foundation models with custom-defined agent services and tools.
  5. Flexibility and Customization: bedrock-genai-builder provides flexibility and customization options to cater to different application requirements. Developers can easily configure prompt service flows, specify input variables, and select the appropriate foundation models for their use cases. The package also supports direct model invocation, allowing developers to generate text completions without the need for a predefined prompt service configuration. This flexibility enables developers to adapt the package to their specific needs and leverage the full potential of AWS Bedrock.
  6. Integration with AWS Bedrock: bedrock-genai-builder seamlessly integrates with AWS Bedrock, providing a high-level interface to interact with foundation models and generate text completions. It abstracts away the complexities of making API calls to AWS Bedrock and handles the necessary request and response formatting. This integration allows developers to leverage the power of AWS Bedrock without having to deal with the low-level details of the API.
  7. Guardrail Support: The package supports the use of guardrails to filter and secure prompt input and model responses. Guardrails help ensure that the generated text adheres to specific guidelines and constraints, promoting responsible and safe usage of generative AI. bedrock-genai-builder allows developers to easily specify guardrail identifiers and versions in the prompt service configuration or during direct model invocation, providing an additional layer of control and security.
By leveraging the bedrock-genai-builder package, developers can accelerate the development and deployment of generative AI applications using AWS Bedrock. The package provides a structured and efficient approach to building and managing generative AI models, enabling developers to focus on creating innovative and impactful applications while abstracting away the complexities of integrating with AWS Bedrock.

Conclusion

The bedrock-genai-builder Python package offers a comprehensive solution for developing and deploying generative AI applications using AWS Bedrock. It provides a well-structured and efficient approach to building and managing generative AI models, enabling developers to focus on creating innovative and impactful applications.
The package offers several key features that streamline the development process and enhance productivity:
  1. Project Structure Generation: The bedrock-genai-builder package simplifies the setup process by generating an optimized project structure tailored for different application types. This ensures best practices and provides a solid foundation for developing generative AI applications.
  2. Prompt Service Framework: The prompt service framework allows developers to define and execute predefined prompt flows for generating text completions. It provides a structured approach to configuring and managing prompt templates, input variables, and allowed foundation model providers. The run_service function enables easy execution of prompt service flows, making it convenient to generate text completions based on predefined templates and input variables.
  3. Direct Model Invocation Utility: The package includes a utility function, generate_text_completion, for directly invoking foundation models and generating text completions based on a provided prompt. This utility offers flexibility when custom prompts are needed or when the prompt service framework is not required. It simplifies the process of interacting with different foundation model providers and provides a unified interface for generating text completions.
  4. Agent Service: The Agent Service feature allows developers to define and execute agent-based operations using user-defined functions. It automatically generates configuration files (agent_store.yaml and tool_spec.json) to facilitate the creation and maintenance of agent services. The run_agent function enables the execution of agent services, combining the capabilities of foundation models with custom-defined agent services and tools. This feature promotes modularity, reusability, and scalability in building agent-based AI applications.
By leveraging the bedrock-genai-builder package, developers can accelerate the development and deployment of generative AI applications using AWS Bedrock. The package abstracts away the complexities of integrating with AWS Bedrock and provides a high-level interface for generating text completions and executing agent-based operations.
The configuration-driven approach of the package, through files like prompt_store.yaml, agent_store.yaml, and tool_spec.json, enables scalability and maintainability. Developers can easily manage and update configurations without modifying the underlying code, making it convenient to adapt to evolving requirements and scale their applications.
Overall, the bedrock-genai-builder Python package empowers developers to build powerful and flexible generative AI applications using AWS Bedrock. It offers a structured and efficient approach to developing and deploying AI-powered solutions, enabling developers to focus on creating innovative and impactful applications while abstracting away the complexities of integrating with AWS Bedrock.
 
