Working with Meta Llama3 | Short, Sweet Summarization using Serverless Event-Driven Architecture

This blog post walks you through the steps needed to summarize meeting transcripts using a serverless event-driven architecture.

Published May 2, 2024
Hello Folks 👋👋,
This is Raghul Gopal, an AWS Community Builder (ML & GenAI) 🥷 and a research enthusiast in AI & AGI 🔍📈.

A Quick Look Behind Llama3

Llama3 is a text-based large language model with an emphasis on efficient language encoding and decoding. It uses a decoder-only architecture, meaning it generates text autoregressively rather than running its input through a separate encoder. As for the dataset, Llama3 is trained on a huge corpus of around 15 trillion tokens obtained from publicly available data. These tokens cover a vast variety of content from many sources, including books, essays, webpages, and other textual material.
Want to know more about Meta Llama3? Follow this link: https://llama.meta.com/llama3/

Serverless Event-Driven Architecture

In this blog, we will access Bedrock to call the Llama3 model and create a summary of the conversation between Dhoni and Mandira Bedi at Priceless Moments. Thanks to @Mike Chambers - https://www.linkedin.com/in/mikegchambers/ for his beautiful course on Bedrock. Here is the course link: https://learn.deeplearning.ai/courses/serverless-llm-apps-amazon-bedrock/lesson/1/introduction
Here is the architecture:
[Figure: Serverless Event-Driven Architecture]
First, transcribe the audio file using Amazon Transcribe. Write a Lambda function that takes the object from the S3 event and transcribes the audio file with Amazon Transcribe. Note that, for the Lambda function to receive the object, it must have an incoming trigger from S3 Event Notifications. Inside the Lambda function, create a transcription job using the boto3 client, as in the following snippet.
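What follows is a minimal sketch of that Lambda function. The handler name, the job-name scheme, the transcripts/ output prefix, and the .mp3 media format are my assumptions for illustration, not necessarily the post's exact code:

```python
import boto3
import uuid

s3_client = boto3.client("s3")
transcribe_client = boto3.client("transcribe")

def lambda_handler(event, context):
    # Pull the bucket and object key out of the S3 event notification
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    # Give every run a unique job name so retries don't collide
    job_name = f"transcription-job-{uuid.uuid4()}"

    transcribe_client.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp3",                       # assumption: uploads are .mp3 files
        LanguageCode="en-US",
        OutputBucketName=bucket,                 # write the transcript back to the same bucket
        OutputKey=f"transcripts/{job_name}.json",
    )
    return {"jobName": job_name}
```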
The transcription job takes a little while to complete, depending on the length of the audio file. Hence, I used time.sleep(5) to pause the program for 5 seconds between status checks until the transcription job ended. Once it is done, the job stores the transcript file in a specific JSON format.
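A simple polling loop along these lines waits for the job to finish and then reads the transcript; the 5-second sleep mirrors the time.sleep(5) mentioned above, and the transcripts/ key layout is the assumption carried over from the sketch above:

```python
import json
import time

# Poll the job status, sleeping 5 seconds between checks
while True:
    job = transcribe_client.get_transcription_job(TranscriptionJobName=job_name)
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(5)

# On success, read the transcript JSON the job wrote back to S3.
# Amazon Transcribe puts the plain text under results.transcripts[0].transcript.
if status == "COMPLETED":
    obj = s3_client.get_object(Bucket=bucket, Key=f"transcripts/{job_name}.json")
    transcript_json = json.loads(obj["Body"].read())
    transcript_text = transcript_json["results"]["transcripts"][0]["transcript"]
```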
Now, it's time for Bedrock to call the Llama3 model. A couple of important notes: when I started implementing GenAI models by calling the Bedrock Runtime API, I ran into these issues.
  1. Make sure the model you want to call is available to you in Bedrock. If it is not, request it from the Model Access section of the Bedrock console.
  2. Follow the body template associated with the model: each model expects its own request format, and sending the right parameters is what makes it respond correctly.
Here is the prompt template I used to summarize the conversation. It is simple and straightforward: just ask the model to do the job.
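The exact wording below is illustrative (my paraphrase of a simple summarization prompt), built on the transcript_text produced in the sketch above:

```python
# Hypothetical prompt template: plain instructions plus the transcript
prompt_template = """Summarize the following conversation in a few short sentences,
capturing the key points and any actions discussed.

<conversation>
{transcript}
</conversation>

Summary:"""

prompt = prompt_template.format(transcript=transcript_text)
```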
Here is the body template for the Llama3 70B Instruct model.
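Here is a sketch of that body and the invoke_model call. The special tokens are Llama3's instruct chat format, and prompt, max_gen_len, temperature, and top_p are the parameters Llama models take on Bedrock; the specific values are my assumptions:

```python
import json

bedrock_runtime = boto3.client("bedrock-runtime")

# Llama3 on Bedrock expects the instruct chat format wrapped around the prompt,
# plus the Llama parameters: prompt, max_gen_len, temperature, top_p
body = json.dumps({
    "prompt": (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    ),
    "max_gen_len": 512,     # assumption: cap on generated tokens
    "temperature": 0.5,     # assumption: moderate randomness
    "top_p": 0.9,
})

response = bedrock_runtime.invoke_model(
    modelId="meta.llama3-70b-instruct-v1:0",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The generated summary comes back in the "generation" field of the response body
response_body = json.loads(response["body"].read())
summary = response_body["generation"]
```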
After execution, the results are short but concise, capturing the points of action covered in the conversation. The summary is read from the response_body as shown above.
To access the full code, use my GitHub Repository: https://github.com/Raghul-G2002/bedrock-llama3-summarization.git
That's it for now. Happy AI, Happy Coding.
See you in the next installment of this Generative AI series. 👨‍💻👨‍💻👨‍💻
Stay Connected with me
🔗 Raghul Gopal LinkedIn: https://www.linkedin.com/in/raghulgopaltech/
🔗 Raghul Gopal YouTube: https://www.youtube.com/@rahulg2980
📒 Subscribe to my Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7183725729254158336