Creating Safer Online Communities using AI | Build On Live | Open Source & Machine Learning
Today we are looking at a solution built by Todd that detects potentially sensitive content on a live stream. We look at some code, see it in action, and add new features to it.
Live streaming is massively popular due to its interactive and unpredictable nature. But that unpredictability means that less than desirable events can sometimes be live streamed, and that offensive or insensitive messages can be posted in chat during a stream. Manual admin intervention can be used to prevent or stop offensive live streams and to moderate inappropriate chat messages, but manual intervention isn't always 100% reliable. Automated moderation can help, but it lacks the contextual awareness of the manual approach. The best approach is a healthy mix of manual and automatic moderation. In this session, we'll look at the various approaches to moderating a live stream and how they can be improved with the help of AI and ML. We'll look at how to analyze portions of a live stream for offensive or inappropriate content and use that analysis as a prompt for moderator intervention.
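The hybrid approach described above, automation for clear-cut cases and a human prompt for borderline ones, can be sketched as a simple triage step. All names, thresholds, and the response shape below are illustrative assumptions for this sketch, not Todd's actual implementation:

```python
# Hypothetical sketch of a hybrid moderation triage step.
# The label format mimics a typical image-moderation API response:
# a list of {"Name": str, "Confidence": float} entries per analyzed frame.
# Thresholds are illustrative, not taken from the solution in the video.

AUTO_BLOCK_THRESHOLD = 90.0   # above this confidence: block automatically
REVIEW_THRESHOLD = 50.0       # above this: queue for a human moderator

def triage(labels):
    """Route a frame's moderation labels to 'block', 'review', or 'allow'."""
    top = max((label["Confidence"] for label in labels), default=0.0)
    if top >= AUTO_BLOCK_THRESHOLD:
        return "block"    # automated moderation handles clear-cut cases
    if top >= REVIEW_THRESHOLD:
        return "review"   # borderline: prompt a moderator for contextual judgment
    return "allow"        # nothing detected above the review threshold
```

This keeps the automated path fast for obvious violations while preserving the contextual awareness of manual moderation for ambiguous content.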
To get the full guide on how to deploy this yourself, as well as the code, make sure to check out Todd's blog post in the link section below.
Check out the recording here:
🐦 Reach out to the hosts and guests: