Over 51% of tweets found to be in violation of the site’s terms of service are automatically flagged by AI systems, Twitter CEO Jack Dorsey told Fast Company Thursday. Those tweets are passed to human moderators who ultimately decide their fate.
Dorsey said his goal is to get Twitter to a 90% automatic flagging rate. Just a couple of years ago, the service was at 0%: users or moderators flagged all the TOS-violating tweets themselves.
- Twitter has come a long way with machine learning in a short time, but finding and flagging the next 39% of violations will be harder.
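To make the flag-then-review flow concrete, here's a minimal sketch of how such a pipeline can be wired up. The classifier, threshold, and function names are assumptions for illustration, not Twitter's actual system: the key point is that the model only flags content, while a human queue holds the final call.

```python
from dataclasses import dataclass, field
from typing import Callable

AUTO_FLAG_THRESHOLD = 0.8  # assumed cutoff: scores above this get routed to review


@dataclass
class ReviewQueue:
    """Holds AI-flagged tweets until a human moderator decides their fate."""
    items: list = field(default_factory=list)

    def enqueue(self, tweet_id: str, score: float) -> None:
        self.items.append({"tweet_id": tweet_id, "score": score})


def moderate(tweet_id: str, text: str,
             classifier: Callable[[str], float],
             queue: ReviewQueue) -> str:
    """Score a tweet and route likely TOS violations to human review."""
    score = classifier(text)  # model's estimated probability of a violation
    if score >= AUTO_FLAG_THRESHOLD:
        queue.enqueue(tweet_id, score)  # the AI flags; it never removes content itself
        return "flagged_for_human_review"
    return "no_action"


# Usage with a toy stand-in classifier:
queue = ReviewQueue()
toy_classifier = lambda text: 0.93 if "spam-link" in text else 0.05
print(moderate("t1", "check out this spam-link now!!!", toy_classifier, queue))  # flagged_for_human_review
print(moderate("t2", "good morning, tech twitter", toy_classifier, queue))       # no_action
```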
Hybrid systems that pair machine learning with human moderators are par for the course in Silicon Valley.
- In February, the EU said those hybrids at Facebook, Google, and Twitter were getting speedier at removing hate speech.
- Facebook says its AI proactively detected 89% of the hate speech content removed in Q1, up from about 80% in Q4 2019.
Zoom out: The platforms have embraced more algorithmic moderation out of necessity during the pandemic, making false positives and negatives more common.
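Why more automation means more errors in both directions: the more decisions an algorithm makes without a human check, the more its threshold choice shows up as mistakes. A purely illustrative sketch with made-up scores shows the trade-off.

```python
# Illustrative only: how moving the auto-flag threshold trades false negatives
# for false positives. Scores and ground-truth labels below are invented.
scored = [  # (model score, actually_violates)
    (0.95, True), (0.85, True), (0.70, True), (0.60, False),
    (0.55, True), (0.40, False), (0.30, False), (0.10, False),
]


def error_counts(threshold: float) -> tuple[int, int]:
    false_pos = sum(1 for s, bad in scored if s >= threshold and not bad)
    false_neg = sum(1 for s, bad in scored if s < threshold and bad)
    return false_pos, false_neg


for t in (0.9, 0.65, 0.35):
    fp, fn = error_counts(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# Lower thresholds (more aggressive automation) cut missed violations but
# sweep up more benign posts; higher thresholds do the reverse.
```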
🚀 Want to learn more? Check out The Human’s Handbook to Computers that Think, where we break down the key concepts, players, and data surrounding AI.