Mehul Reuben Das | Jan 06, 2023 11:52:58 IST
After firing most of the employees and contractors who moderated content on Twitter, Elon Musk had his team rely heavily on AI bots to do the job. As a result, several innocuous photos and tweets were accidentally taken down or had their reach restricted. One hilarious consequence of Twitter using AI bots to moderate content, however, has only recently come to the surface.
Speaking to Quartz, a former Twitter employee who left the company recently revealed that the AI bots Twitter is using keep confusing photos of rockets with images of male genitalia. Not only that, accounts that posted photos of rockets were suspended from the platform.
For example, Spaceflight Now and user Michael Baylor, both of whom live stream launches for NASA, were locked out of their accounts after posting content about a SpaceX launch on Tuesday. Similarly, Starbase Watcher, which tracks activity at SpaceX’s Texas facility, was also locked out.
Spaceflight photographer John Kraus said his account was suspended as well, after he shared a video of NASA’s Artemis I launch. All four accounts have since been unlocked.
In response to a tweet by Baylor’s colleague about his account being suspended, Musk replied, “seems like our image recognition needs some work!”
Quartz interviewed an anonymous ex-employee who used to work on Twitter’s content moderation systems. They told the outlet the tools had been known to misidentify appropriate pictures as pornographic content. This could include, for example, a pedicure photo that contains lots of flesh-coloured pixels.
The AI bots have been coded in such a way that they can take down a post or even suspend an account if they are 95 per cent certain that the post has broken the platform’s rules.
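To see how such a rule can misfire, here is a minimal sketch of a confidence-threshold moderation decision. The classifier, the 80 per cent "restrict" band, and all names are hypothetical illustrations, not Twitter's actual system; the only detail taken from the report is the 95 per cent action threshold.

```python
# Hypothetical sketch of threshold-based auto-moderation.
# ACTION_THRESHOLD mirrors the reported 95% rule; everything
# else (the restrict band, function names) is illustrative.

ACTION_THRESHOLD = 0.95    # act only when the model is >= 95% certain
RESTRICT_THRESHOLD = 0.80  # assumed lower band: limit reach instead

def moderate(confidence: float) -> str:
    """Decide what to do with a post, given the classifier's
    confidence that it violates the platform's rules."""
    if confidence >= ACTION_THRESHOLD:
        return "remove"    # take the post down automatically
    if confidence >= RESTRICT_THRESHOLD:
        return "restrict"  # reduce the post's reach
    return "allow"

# A rocket photo the model misreads as 97% likely to be explicit
# clears the threshold and is removed -- a false positive acts
# exactly like a true one once the score is high enough.
print(moderate(0.97))  # -> remove
print(moderate(0.85))  # -> restrict
print(moderate(0.10))  # -> allow
```

The sketch shows why the threshold alone cannot prevent mistakes: the model's confidence, not the post's actual content, is all the rule ever sees.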
Musk and his team cracking down on porn on the platform is undoubtedly a good thing. But as has been the case with everything else Musk has done with Twitter since he took over, the execution has been a disaster.
If Twitter really wants to rely less on human moderators, it should raise the confidence thresholds of the AI bots and machine-learning tools it uses to flag sensitive content, so that fewer harmless posts get marked as inappropriate.