Advanced Text Summarization with Amazon Bedrock | S02 E12 | Build On Generative AI

Summarizing text is not hard, right? We just paste something into an LLM and we are good to go? Well, you can do much better, so let's look at some more advanced text summarization practices with Justin.

AWS Admin
Amazon Employee
Published Nov 6, 2023
Last Modified Jun 25, 2024
On today's edition of Build On Generative AI, Darko is joined by Justin Muller. Justin is a Senior Solutions Architect, Generative AI aficionado, and an all-around language enjoyer! He is here today to teach us some advanced techniques for text summarization with Generative AI.
Summarizing text is not all that easy. Think of it as compression: it is easy to compress an image, because the human brain does not notice the small details that are lost and the image still looks normal. But when "compressing" text (by "compressing" I mean summarizing), the little details that get lost can change the story dramatically. An excellent example given by Justin goes:
"A man broke the window and took the little child from its bed"
Wow, that sounds horrible. Well, the little detail that was left out is that the man is a firefighter and the house was on fire, so he saved the child. That is why summarization can be hard.
My favorite part of today's stream was a little function Justin showed that automatically refines summaries: the LLM asks itself questions about its own summary, then revises the summary based on the answers.
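Justin's actual code hasn't been released yet (see below), but the loop he described can be sketched roughly like this. Everything here is my own assumption: the function name, prompt wording, and number of rounds are illustrative, and `ask` stands in for any callable that sends a prompt to an LLM; a Bedrock-backed version could wrap the `converse` API of boto3's `bedrock-runtime` client, as shown in the commented-out snippet.

```python
# Hypothetical sketch of the self-refinement loop described in the episode.
# Names and prompts are illustrative, not the released code.

def refine_summary(ask, document, rounds=2):
    """Summarize `document`, then let the model question and improve
    its own summary for a fixed number of refinement rounds."""
    summary = ask(f"Summarize the following text concisely:\n\n{document}")
    for _ in range(rounds):
        # Ask the model what a careful reader would still want to know.
        questions = ask(
            "List questions a careful reader would ask about details "
            f"this summary may have lost:\n\n{summary}"
        )
        # Revise the summary so it addresses those questions,
        # grounded in the original source text.
        summary = ask(
            "Rewrite the summary so it answers the questions, using only "
            f"facts from the source.\n\nSource:\n{document}\n\n"
            f"Summary:\n{summary}\n\nQuestions:\n{questions}"
        )
    return summary

# A Bedrock-backed `ask` might look like this (untested sketch,
# model ID is an example):
#
# import boto3
# client = boto3.client("bedrock-runtime")
#
# def ask(prompt):
#     resp = client.converse(
#         modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#         messages=[{"role": "user", "content": [{"text": prompt}]}],
#     )
#     return resp["output"]["message"]["content"][0]["text"]
```

With `rounds=2` the model is called five times in total: one initial summary, then a question pass and a rewrite pass per round.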
WE PROMISED YOU THE CODE, SO MAKE SURE TO BOOKMARK THIS PAGE AS WE WILL BE RELEASING IT HERE AS SOON AS IT IS AVAILABLE 👏
For all this and more, make sure to check out the recording here:

Links from today's episode

Reach out to the hosts and guests:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
