State of AI/ML in American Sign Language | S02E10 | Let's Talk About Data Show

In this Twitch show, we talk about the known players in the industry working to translate sign language into spoken or written language, and about how AI/ML is affecting access for better or worse.

Ibrahim Emara
Amazon Employee
Published Mar 22, 2024
This "Let's Talk About Data" show focused on using AI and machine learning to interpret sign language. Rob Koch, a data engineer from Slalom, and Suresh Pupandi, a senior solutions architect from AWS, demonstrated an ASL video production avatar generation application they built and discussed the technology behind it.
Key Highlights:
  • Machine learning models need large datasets to process and classify signs effectively, but only limited sign language data is available.
  • Newer large language models can instead be instructed with a handful of examples (few-shot prompting) rather than requiring a huge training dataset; see the prompting sketch after this list.
  • Context is key - models can now understand the intended meaning of signs based on surrounding words.
  • The application architecture uses Amplify, Step Functions, and Lambda to orchestrate text- and audio-to-sign-language video translation; a minimal orchestration sketch follows this list.
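To make the few-shot idea concrete, here is a minimal sketch of instructing a model with a few English-to-ASL-gloss examples in the prompt instead of training on a large dataset. It assumes Amazon Bedrock called via boto3; the model ID, the gloss pairs, and the prompt format are illustrative assumptions, not details from the show.

```python
import boto3

# Sketch of few-shot prompting: a handful of English -> ASL gloss pairs
# in the prompt stand in for a large training dataset. The gloss pairs
# below are rough illustrations, not validated translations.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

few_shot_prompt = """Translate English sentences into ASL gloss.

English: Are you going to the store?
Gloss: STORE YOU GO-TO YOU?

English: I finished my homework yesterday.
Gloss: YESTERDAY HOMEWORK I FINISH

English: Where is the nearest hospital?
Gloss:"""

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
    messages=[{"role": "user", "content": [{"text": few_shot_prompt}]}],
)

# The model continues the pattern, producing a gloss for the new sentence.
print(response["output"]["message"]["content"][0]["text"])
```

Because the examples also show word order and question marking, the model can pick up on context, as the third bullet notes, without any fine-tuning.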
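And here is a rough sketch of the kind of orchestration the last bullet describes: a Step Functions state machine chaining a translation Lambda and an avatar-rendering Lambda, started from Python. The state names, function ARNs, and payload are hypothetical; the actual application's definition was not shown in this post.

```python
import json
import boto3

# Illustrative Amazon States Language definition: translate the input text
# to ASL gloss, then render the avatar video. The ARNs are placeholders.
definition = {
    "Comment": "Text/audio -> ASL gloss -> avatar video (illustrative)",
    "StartAt": "TranslateToGloss",
    "States": {
        "TranslateToGloss": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:translate-to-gloss",
            "Next": "RenderAvatarVideo",
        },
        "RenderAvatarVideo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:render-avatar-video",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))

# The definition would be deployed separately (e.g., via infrastructure as
# code or sfn.create_state_machine, which also needs an IAM role). An
# Amplify-hosted front end would then call an API that starts an execution:
sfn = boto3.client("stepfunctions")
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:asl-avatar",  # assumed ARN
    input=json.dumps({"text": "Welcome to the show"}),
)
print(response["executionArn"])
```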
Check out the recording here:

Host of the show 🎤

Ibrahim Emara, RDS Specialist Solutions Architect @ AWS

Guests

Suresh Poopandi, Sr. Solutions Architect @ AWS
Rob Koch, Principal @ Slalom Build
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
