Yesterday, Twitter announced the winners of a brand-new bounty competition. But unlike the “bug bounties” commonly offered by tech companies—which reward those who spot security flaws and site vulnerabilities—this challenge was focused on something entirely different.
It was billed as the industry’s first-ever algorithmic bias bounty competition. Kicking off July 30, it was spearheaded by Rumman Chowdhury, who leads Twitter’s Machine Learning Ethics, Transparency and Accountability (META) team, and Jutta Williams, a product lead on that team.
How it worked
Participants were given access to the code underlying Twitter’s saliency algorithm for image cropping, which predicts the ideal way to crop and display a picture on Twitter (a simplified sketch of the idea follows the bullet below).
- In May, the company found that this saliency model had gender and racial biases and moved away from using the technique.
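To make the concept concrete: a saliency cropper scores every pixel for “interestingness” and then crops around the highest-scoring region. Twitter’s actual model is a trained neural network released to participants; the toy sketch below assumes the saliency map has already been computed and just shows the cropping step.

```python
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size window centered on the most salient pixel.

    image: an H x W x C pixel array.
    saliency: an H x W map of per-pixel importance scores (in Twitter's
        case, produced by a trained model; here it is just an input).
    Assumes crop_h <= H and crop_w <= W.
    """
    # Locate the single highest-scoring pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Center the crop window on that pixel, clamped to the image bounds.
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Any systematic skew in where the model assigns high saliency (faces of one demographic over another, for instance) translates directly into who gets cropped out, which is the kind of bias entrants were asked to surface.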
To pull the competition off, Chowdhury told us META needed to work across multiple teams at Twitter. Overall, she said the toughest task was building a rubric for algorithmic harms and biases.
- “There’s a lot of great research and a lot of great work done on taxonomy of harms and biases, but we could find very little in specifically breaking them down into itemized tasks or itemized harms, and being able to enumerate what that might look like—and also providing a value to it,” Chowdhury said.
- The rubric was ultimately based on a range of factors, including harm types (e.g., stereotyping, psychological harm), the damage or impact of a harm, and the number of affected users (a toy example of how such scoring might combine follows below).
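One plausible way to turn those factors into a single submission score is to multiply a base value for the harm type by multipliers for severity and reach. The names and weights below are entirely hypothetical, purely to illustrate the shape of such a rubric, not META’s published values.

```python
# Hypothetical base scores per harm type -- illustrative only.
HARM_TYPE_BASE = {
    "stereotyping": 20,
    "psychological_harm": 15,
    "erasure": 25,
}

def rubric_score(harm_type: str, damage: float, affected_fraction: float) -> float:
    """Combine harm type, severity, and reach into one comparable number.

    damage: severity multiplier, e.g. in [1, 5].
    affected_fraction: share of users plausibly affected, in (0, 1].
    """
    return HARM_TYPE_BASE[harm_type] * damage * affected_fraction

# Example: a severe stereotyping harm affecting 40% of users.
print(rubric_score("stereotyping", damage=4, affected_fraction=0.4))  # 32.0
```

The hard part Chowdhury describes is exactly what this sketch glosses over: enumerating the harm types and justifying the values assigned to each one.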
Winners, winners: Twitter awarded five prizes, plus a couple of honorable mentions. First place took $3,500, second $1,000, and third $500, while separate $1,000 awards went to the most innovative and most generalizable submissions. A panel of four judges from the AI and infosec worlds graded entries against META’s rubric.
Looking ahead…questions remain about participation from members of affected communities, the type of community the program builds, how well Twitter will translate the program’s individual findings into big-picture changes, and whether the tech industry at large will adopt the practice.
But Camille François, who is co-leading a project on algorithmic harms at the Algorithmic Justice League, told us it’s encouraging to see advancements in the evaluation of how AI systems affect people.
“I think we're quite eager to see how it plays out,” François said. “We want a wider set of communities and affected parties to participate. ...We know that there's appetite out there for people saying, ‘Hey, I feel affected by this. ...Let's disclose this together.’”—HF