Twitch on Tuesday introduced a new way for streamers to combat accounts that evade channel-level bans. The automated tool, called Suspicious User Detection, can flag users trying to circumvent bans, giving anyone moderating a Twitch channel more options for dealing with potentially disruptive behavior before it starts. The company said the new ban evasion detection tool would begin rolling out in August.
The company says it developed the tool in direct response to community feedback calling for more robust moderation options for dealing with users who return on new accounts after being banned. Once an account has been flagged as a "possible" or "likely" ban evader, a channel's moderators can take action against it manually.
All messages sent from an account flagged as a likely ban evader will be automatically removed from chat pending review by a moderator. Channels that want more aggressive moderation can enable the same setting for accounts flagged as possible ban evaders. Mods can also manually add users to the list of suspicious accounts to keep a closer eye on them.
Twitch notes that, as with any automated moderation tool, false positives are possible, though the system aims to strike a balance between proactive machine learning detection and human intervention. "You're the expert when it comes to your community, and you should make the final call on who can participate," Twitch wrote in a blog post, adding that the system will improve over time as it is trained on input from human moderators.
Twitch positions the new ban evasion detection system as a complement to AutoMod, which lets moderators review potentially harmful chat messages before they appear, and to phone-verified chat, an option Twitch added last month that requires users to verify their accounts by phone before chatting. Twitch users can register up to five accounts with a single phone number, but a channel ban will now apply to all accounts associated with that number, closing one of the easier workarounds for anyone looking to skirt the platform's rules.
Twitch streamers have long pushed the company to do more to protect creators, especially those most vulnerable to online harassment. This year alone, the #ADayOffTwitch and #TwitchDoBetter campaigns have raised the visibility of the harassment marginalized creators face on the platform, prompting the company to respond.
"We've seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators," the company tweeted at the time. "You're asking us to do better, and we know we need to do more to address these issues."
Twitch's long-standing lack of discovery tools already made success on the platform a steep challenge for underrepresented creators, and targeted harassment campaigns made matters much worse. A trove of Twitch payout data leaked last month painted a bleak picture of diversity in the upper echelons of streaming success, where the top creators are almost entirely white men.
In May, Twitch added more than 350 tags to help viewers find streamers based on identifiers such as gender, sexuality, race, and ability. The update was an overdue move to encourage discovery of more diverse creators on the platform, but without adequate moderation tools, many users feared the tags would invite targeted harassment of their communities. In September, Twitch took the unusual step of filing a lawsuit against two users linked to thousands of bots driving mass harassment campaigns.