AI-powered video summarizer with Amazon Bedrock
Explore how to use Amazon Bedrock with Anthropic's Claude to build a YouTube video summarizer
Published Jan 4, 2024
Last Modified May 10, 2024
At times, I find myself wanting to quickly get a summary of a video or capture the key points of a tech talk. Thanks to the capabilities of generative AI, achieving this is entirely possible with minimal effort.
In this article, I’ll walk you through the process of creating a service that summarizes YouTube videos based on their transcripts and generates audio from these summaries.
We’ll leverage Anthropic’s Claude 2.1 foundation model through Amazon Bedrock for summary generation, and Amazon Polly to synthesize speech from these summaries.
I will use AWS Step Functions to orchestrate the different steps involved in the summary and audio generation:
🔍 Let’s break this down:
- The ‘Get Video Transcript’ function retrieves the transcript from a specified YouTube video URL. Upon successful retrieval, the transcript is stored in an S3 bucket, ready for processing in the next step.
- The ‘Generate Model Parameters’ function retrieves the transcript from the bucket and generates the prompt and inference parameters specific to Anthropic’s Claude v2 model. These parameters are then stored in the bucket for use by the Bedrock API in the subsequent step.
- Invoking the Bedrock API is achieved through Step Functions’ AWS SDK integration, enabling the execution of the model inference with inputs stored in the bucket. This step generates a structured JSON containing the summary.
- ‘Generate audio from summary’ relies on Amazon Polly to perform speech synthesis from the summary produced in the previous step. This step returns the final output containing the video summary in text format, as well as a presigned URL for the generated audio file.
- The bucket serves as state storage shared across all the steps of the state machine. In fact, we don’t know the size of the generated video transcript upfront; for lengthy videos it might exceed the Step Functions payload size limit of 256 KB.
At the time of writing, the Claude 2.1 model supports a 200K-token context window, roughly 150K words. It also provides good accuracy over long documents, making it well suited for summarizing lengthy video transcripts.
You will find the complete source code here 👇
I will use Node.js, TypeScript, and the AWS CDK for IaC.
Amazon Bedrock offers a range of foundation models, including Amazon Titan, Anthropic’s Claude, Meta’s Llama 2, and others, which are accessible through the Bedrock APIs. By default, these foundation models are not enabled; they must be enabled through the console before use.
We’ll request access to Anthropic’s Claude models. But first, we’ll need to submit use case details:
I will rely on this lib for the video transcript extraction (it feels like a cheat code 😉); in fact, this library makes use of an unofficial YouTube API without relying on a headless Chrome solution. For now, it yields good results on several YouTube videos, but I might explore more robust solutions in the future:
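To give a feel for the extraction step, here is a minimal sketch; I’m using the youtube-transcript npm package as a stand-in, so treat the exact package and API as assumptions rather than the library linked above.

```typescript
// Minimal sketch of the transcript extraction; the youtube-transcript package is a stand-in,
// not necessarily the library linked above.
import { YoutubeTranscript } from 'youtube-transcript';

export const getTranscript = async (videoUrl: string): Promise<string> => {
  // The library returns the transcript as a list of timed text fragments.
  const items = await YoutubeTranscript.fetchTranscript(videoUrl);
  return items.map((item) => item.text).join(' ');
};
```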
The extracted transcript is then stored in the S3 bucket using `${requestId}/transcript` as a key. You can find the code for this Lambda function here.
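As a rough sketch, the handler could look like the following, assuming the getTranscript helper from the previous snippet, a BUCKET_NAME environment variable, and an illustrative event shape; the real code is in the linked repository.

```typescript
// Sketch of the 'Get Video Transcript' Lambda handler: fetch the transcript and store it
// in the bucket under `${requestId}/transcript`. Event shape and BUCKET_NAME are assumptions.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

export const handler = async (event: { requestId: string; videoUrl: string }) => {
  const transcript = await getTranscript(event.videoUrl);
  const key = `${event.requestId}/transcript`;

  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.BUCKET_NAME!,
      Key: key,
      Body: transcript,
    })
  );

  return { requestId: event.requestId, transcriptKey: key };
};
```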
At the time of writing, Bedrock only supports Claude’s Text Completions API. Prompts must be wrapped in `\n\nHuman:` and `\n\nAssistant:` markers to let Claude understand the conversation context. Here is the prompt; I find that it produces good results for our use case:
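The exact wording lives in the repository; what follows is a rough reconstruction of its structure, and the JSON schema fields are illustrative.

```typescript
// Rough reconstruction of the prompt. {{transcript}} is a placeholder substituted with the
// actual video transcript, and the prompt deliberately ends with a trailing '{' (see below).
const PROMPT = `

Human: Here is the transcript of a YouTube video:

<transcript>
{{transcript}}
</transcript>

Summarize this video. Return only a JSON object that follows this schema, without any
preamble and without any text after the JSON:

{
  "title": "a short title for the video",
  "summary": "a concise summary of the video",
  "keyPoints": ["the main points covered in the video"]
}

Assistant: {`;

const buildPrompt = (transcript: string) => PROMPT.replace('{{transcript}}', transcript);
```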
🤖 Helping Claude produce good results:
- To clearly mark the transcript to summarize, we use `<transcript/>` XML tags. Claude will specifically focus on the content encapsulated by these XML tags. I will be substituting the `{{transcript}}` placeholder with the actual video transcript.
- To assist Claude in generating a reliable JSON output format, I include in the prompt the JSON schema that needs to be adhered to.
- Finally, I also need to inform Claude that I want only a concise JSON response without unnecessary chattiness, meaning without a preamble or postscript around the JSON payload.
Note that the full prompt ends with a trailing `{`.
As mentioned in the section above, we will store this generated prompt, as well as the model parameters, in the bucket so that they can be used as input to the Bedrock API:
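Here is a minimal sketch of what that step might look like, assuming the buildPrompt helper from the prompt snippet above; the object keys and parameter values are illustrative, and the actual implementation is linked below.

```typescript
// Sketch of the 'Generate Model Parameters' handler: read the transcript from the bucket,
// build the Claude Text Completions request body, and store it back for the next state.
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const bucket = process.env.BUCKET_NAME!;

export const handler = async (event: { requestId: string; transcriptKey: string }) => {
  const object = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: event.transcriptKey }));
  const transcript = await object.Body!.transformToString();

  // Inference parameters for Anthropic Claude's Text Completions API on Bedrock.
  const modelParameters = {
    prompt: buildPrompt(transcript),
    max_tokens_to_sample: 1000,
    temperature: 0.5,
  };

  const key = `${event.requestId}/model-parameters`;
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: JSON.stringify(modelParameters),
      ContentType: 'application/json',
    })
  );

  // The Bedrock state reads the request body directly from this S3 location.
  return { requestId: event.requestId, modelParametersUri: `s3://${bucket}/${key}` };
};
```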
You can follow this link for the full code of the `generate-model-parameters` Lambda function.

In this step, we’ll avoid writing a custom Lambda function to invoke the Bedrock API. Instead, we’ll use Step Functions’ direct SDK integration. This state loads from the bucket the model inference parameters that were generated in the previous step:
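Sketched in CDK, the state could look something like this; I’m assuming a bedrock:invokeModel service integration that reads its request body from an S3 URI, and the model ARN, region, and field names are illustrative.

```typescript
// Sketch of the Bedrock invocation as a Step Functions state wrapped in a CDK CustomState.
// The model ARN, region, and input/result field names are assumptions.
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import { Construct } from 'constructs';

export const invokeBedrockState = (scope: Construct) =>
  new sfn.CustomState(scope, 'Invoke Bedrock', {
    stateJson: {
      Type: 'Task',
      Resource: 'arn:aws:states:::bedrock:invokeModel',
      Parameters: {
        ModelId: 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1',
        ContentType: 'application/json',
        // The request body (prompt + inference parameters) is read straight from the bucket.
        Input: { 'S3Uri.$': '$.modelParametersUri' },
      },
      // Claude's completion starts right after the prompt's trailing '{'; the ResultSelector
      // re-adds it and parses the completion into a proper JSON object (see the note below).
      ResultSelector: {
        'summary.$': "States.StringToJson(States.Format('\\{{}', $.Body.completion))",
      },
      // Keep the rest of the state (e.g. requestId) alongside the parsed summary.
      ResultPath: '$.result',
    },
  });
```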
☝️ Note: As we instructed Claude to generate the response in JSON format, the completion API response is missing a leading `{`, since Claude outputs the rest of the requested JSON schema. We use intrinsic functions in the state’s `ResultSelector` to add the missing opening curly brace and format the state output into a well-formed JSON payload. I have to admit it is not ideal, but it helps us get by without writing a custom Lambda function.
This step is heavily inspired by this previous blog post. Amazon Polly generates the audio from the video summary:
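As a sketch, the Lambda handler for this step could look like the following; the event shape is an assumption based on the previous states’ outputs, and it delegates to the `synthesize` helper detailed just below.

```typescript
// Sketch of the 'Generate audio from summary' handler; event shape is an assumption.
export const handler = async (event: {
  requestId: string;
  result: { summary: { title: string; summary: string; keyPoints: string[] } };
}) => {
  const audioUrl = await synthesize(event.result.summary.summary, event.requestId);
  // Final output of the state machine: the text summary plus a presigned URL for the audio.
  return { summary: event.result.summary, audioUrl };
};
```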
Here are the details of the `synthesize` function: once the audio is generated, we store it in the S3 bucket and generate a presigned URL so it can be downloaded afterwards.
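A minimal sketch of that helper, assuming the AWS SDK v3 clients for Polly and S3; the voice, engine, and object key are illustrative choices.

```typescript
// Sketch of the synthesize helper: Polly speech synthesis, S3 upload, presigned URL.
import { PollyClient, SynthesizeSpeechCommand } from '@aws-sdk/client-polly';
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const polly = new PollyClient({});
const s3 = new S3Client({});
const bucket = process.env.BUCKET_NAME!;

export const synthesize = async (text: string, requestId: string): Promise<string> => {
  // Generate the speech audio from the summary text (English is assumed, see the note below).
  const { AudioStream } = await polly.send(
    new SynthesizeSpeechCommand({
      Text: text,
      OutputFormat: 'mp3',
      VoiceId: 'Joanna',
      Engine: 'neural',
      LanguageCode: 'en-US',
    })
  );

  // Store the audio in the bucket...
  const key = `${requestId}/summary.mp3`;
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: await AudioStream!.transformToByteArray(),
      ContentType: 'audio/mpeg',
    })
  );

  // ...and return a presigned URL so the audio can be downloaded afterwards.
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: bucket, Key: key }), {
    expiresIn: 3600,
  });
};
```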
☝️ On language detection: In this example, I am not performing language detection; by default, I assume the video is in English. My previous article shows how to perform such a process for speech synthesis. Alternatively, we can also leverage Claude’s capabilities to detect the language of the transcript.
Alright, let’s put it all together and take a look at the CDK definition of the state machine:
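Here is a condensed sketch of that definition; construct IDs, function entry paths, and the bucket are illustrative, `invokeBedrockState` is the CustomState factory shown earlier, and the complete version lives in the linked repository.

```typescript
// Condensed sketch of the stack wiring the workflow together.
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';

export class VideoSummarizerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The bucket used as state storage across the workflow.
    const bucket = new s3.Bucket(this, 'StateBucket');
    const environment = { BUCKET_NAME: bucket.bucketName };

    const getTranscriptFn = new NodejsFunction(this, 'GetVideoTranscriptFn', {
      entry: 'functions/get-video-transcript.ts',
      environment,
    });
    const modelParamsFn = new NodejsFunction(this, 'GenerateModelParametersFn', {
      entry: 'functions/generate-model-parameters.ts',
      environment,
    });
    const audioFn = new NodejsFunction(this, 'GenerateAudioFn', {
      entry: 'functions/generate-audio-from-summary.ts',
      environment,
    });

    // Chain the steps: transcript -> model parameters -> Bedrock -> audio.
    const definition = new tasks.LambdaInvoke(this, 'Get Video Transcript', {
      lambdaFunction: getTranscriptFn,
      payloadResponseOnly: true,
    })
      .next(
        new tasks.LambdaInvoke(this, 'Generate Model Parameters', {
          lambdaFunction: modelParamsFn,
          payloadResponseOnly: true,
        })
      )
      .next(invokeBedrockState(this))
      .next(
        new tasks.LambdaInvoke(this, 'Generate audio from summary', {
          lambdaFunction: audioFn,
          payloadResponseOnly: true,
        })
      );

    // Permissions are added right after this (see below).
    const stateMachine = new sfn.StateMachine(this, 'VideoSummarizer', {
      definitionBody: sfn.DefinitionBody.fromChainable(definition),
    });
  }
}
```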
In order to be able to invoke the Bedrock API, we’ll need to add this policy to the workflow’s role (and it’s important to remember to grant the state machine read and write permissions on the S3 bucket):
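Continuing the sketch above (inside the same constructor), that could look like the following; the model ARN and region are examples.

```typescript
// Allow the workflow role to call Bedrock, and grant bucket access to the state machine
// and the Lambda functions. The model ARN is an example.
import * as iam from 'aws-cdk-lib/aws-iam';

stateMachine.addToRolePolicy(
  new iam.PolicyStatement({
    actions: ['bedrock:InvokeModel'],
    resources: ['arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1'],
  })
);

// The S3 read & write permissions mentioned above.
bucket.grantReadWrite(stateMachine);
bucket.grantReadWrite(getTranscriptFn);
bucket.grantReadWrite(modelParamsFn);
bucket.grantReadWrite(audioFn);
```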
I find creating generative AI-based applications to be a fun exercise; I am always impressed by how quickly we can develop such applications by combining serverless and generative AI.
Certainly, there is room for improvement to make this solution production-grade. This workflow could be integrated into a larger process, allowing the video summary to be sent asynchronously to a client. And let’s not forget robust error handling.
Follow this link to get the source code for this article.
Thanks for reading, and I hope you enjoyed it!