Twitch using machine learning to catch ban evaders

Twitch will soon be using machine learning to catch users involved in suspicious activities. Called Suspicious User Detection, the new tool is aimed specifically at users trying to evade channel bans.

“When you ban someone from your channel, they should be banned from your community for good,” reads the company’s announcement. “Unfortunately, bad actors often choose to create new accounts, jump back into Chat, and continue their abusive behavior. Suspicious User Detection, powered by machine learning, is here to help you identify those users based on a number of account signals. By detecting and analyzing these signals, this tool will flag suspicious accounts as either ‘likely’ or ‘possible’ channel-ban evaders, so you can take action as needed.”

What happens after a user is flagged depends on the type of flag. Messages from “likely” evaders will not be displayed in general chat, but they will remain visible to mods and the creator, who can then take further action against the user. (Read: Twitch implementing features to stop hate raids)

Meanwhile, “possible” cases will be able to post messages in chat and see them appear, but those messages will be flagged on the creator/mod end “so they can monitor the user and restrict them from chatting if needed.”
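The two flag levels behave differently, so the routing logic is worth spelling out. The sketch below is purely illustrative and is not Twitch's actual implementation; the names, structure, and return format are all assumptions based on the behavior described above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical flag levels, mirroring the article's description.
LIKELY = "likely"
POSSIBLE = "possible"

@dataclass
class Message:
    author: str
    text: str
    flag: Optional[str] = None  # None, "likely", or "possible"

def route(msg: Message) -> dict:
    """Decide who sees a message, per the behavior described above."""
    if msg.flag == LIKELY:
        # "Likely" evaders: message hidden from general chat,
        # but still visible to mods and the creator.
        return {"general_chat": False, "mods": True, "flagged": True}
    if msg.flag == POSSIBLE:
        # "Possible" evaders: message appears in chat as normal,
        # but is flagged on the creator/mod end for monitoring.
        return {"general_chat": True, "mods": True, "flagged": True}
    # Unflagged users chat normally.
    return {"general_chat": True, "mods": True, "flagged": False}
```

The key design point, as Twitch describes it, is that neither flag level silently bans anyone: even a “likely” evader's messages still reach moderators, who make the final call.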

When it goes live, Suspicious User Detection will be enabled by default. However, streamers will be able to tweak it in their settings, for example by increasing the posting restrictions on “possible” cases, or by manually flagging users the machine learning hasn’t detected.

Suspicious User Detection looks to be a powerful tool for streamers to moderate their channels. That said, as a machine learning-powered tool, it also comes with some risks. Twitch acknowledged these in a follow-up statement on their blog, confirming that the technology will not automatically ban suspicious users.

“One thing to prepare for, particularly around launch, is that no machine learning will ever be 100% accurate, which means there is a possibility of false positives and false negatives,” the company states. “That’s why Suspicious User Detection doesn’t automatically ban all possible or likely evaders. You’re the expert when it comes to your community, and you should make the final call on who can participate. The tool will learn from the actions you take and the accuracy of its predictions should improve over time as a result.”




Vee-bot

AI, robot, needs tacos or it will explode.
