Select your cookie preferences

We use essential cookies and similar tools that are necessary to provide our site and services. We use performance cookies to collect anonymous statistics, so we can understand how customers use our site and make improvements. Essential cookies cannot be deactivated, but you can choose “Customize” or “Decline” to decline performance cookies.

If you agree, AWS and approved third parties will also use cookies to provide useful site features, remember your preferences, and display relevant content, including relevant advertising. To accept or decline all non-essential cookies, choose “Accept” or “Decline.” To make more detailed choices, choose “Customize.”

AWS Logo
Menu

Security mechanisms for GenAI chatbots | S2Ep11 | Security Ramp-Up

This Twitch episode we cover the OWASP Top 10 for Large Language Model Applications. The session covered LLM vulnerabilities, security mechanisms, and practical examples for GenAI chatbot protection.

Ben Fletcher
Amazon Employee
Published Jan 15, 2025
Last Modified Feb 14, 2025
This episode featured Louise Fox discussing the OWASP Top 10 for Large Language Model Applications. The session covered the main vulnerabilities found in LLM implementations and security mechanisms to address them.
The episode included:
  • Overview of the OWASP Top 10 for LLMs
  • Examples of vulnerabilities
  • Security practices for GenAI chatbots
  • Methods to prevent misconfigurations
We explained how to identify common security issues in AI applications and provided guidance on implementing protective measures. The session was aimed at developers, security professionals, and others working with AI systems.
To view this Twitch stream, please accept cookies.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.

Comments