
A new tool aims to let video gamers control how much vitriol they receive from fellow players.

What’s new: Intel announced a voice recognition tool called Bleep that the company claims can moderate voice chat automatically, allowing users to silence offensive language. The system is in beta test and scheduled for release later this year.

How it works: Chip maker Intel worked with Spirit AI, which develops technology for content moderation, to let users of voice chat fine-tune how much of specific types of offensive language can reach their ears. Bleep combines speech detection technology with Spirit’s flagship product, which determines whether a phrase constitutes harassment in the context of surrounding chatter. The system classifies offensive speech in nine categories including misogyny, sexually explicit language, and anti-LGBTQ hate speech. Users can opt to filter out none, some, most, or all content in any category. For a tenth category called N-word, the system offers an on/off switch. It runs on Windows PCs and, since it interacts directly with Windows’ audio controls, it can work with a variety of voice-chat apps.

Behind the news: ToxMod also aims to moderate video game voice chat and provides a dashboard for human moderators to track offensive speech across servers. Hive’s system is designed to moderate audio, video, text, and images; its customers include Chatroulette, which uses Hive’s technology to help users avoid unwanted nudity. Two Hat’s text-moderation system detects efforts to subvert moderation by, say, intentionally misspelling slurs and other potentially offensive language.

Why it matters: There’s a clear need for tools that help people enjoy networked communications without being targeted by abuse. Many online gamers have stopped playing certain games after experiencing verbal harassment, according to a survey by the Anti-Defamation League.
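The per-category controls described above — filter none, some, most, or all content in each of nine categories, plus a simple on/off switch for a tenth — can be sketched as a small settings model. Everything here (class names, category labels, severity thresholds) is hypothetical for illustration; it is not Intel’s or Spirit AI’s actual API:

```python
from enum import Enum

class FilterLevel(Enum):
    NONE = 0  # let everything through
    SOME = 1
    MOST = 2
    ALL = 3   # mute every detection in this category

# Hypothetical labels standing in for the nine slider categories.
CATEGORIES = [
    "misogyny",
    "sexually_explicit",
    "anti_lgbtq_hate",
    # ... remaining six categories in the real product ...
]

class BleepSettings:
    """Per-user voice-moderation preferences (illustrative only)."""

    def __init__(self):
        self.levels = {c: FilterLevel.NONE for c in CATEGORIES}
        self.n_word_switch = False  # tenth category: plain on/off

    def set_level(self, category: str, level: FilterLevel) -> None:
        if category not in self.levels:
            raise KeyError(f"unknown category: {category}")
        self.levels[category] = level

    def should_mute(self, category: str, severity: float) -> bool:
        """Decide whether a detected utterance gets bleeped.

        `severity` in [0, 1] stands in for a classifier confidence;
        stricter filter levels mute at lower severities.
        """
        if category == "n_word":
            return self.n_word_switch
        thresholds = {
            FilterLevel.NONE: None,  # never mute
            FilterLevel.SOME: 0.9,
            FilterLevel.MOST: 0.5,
            FilterLevel.ALL: 0.0,    # always mute
        }
        t = thresholds[self.levels[category]]
        return t is not None and severity >= t
```

A user who wants aggressive filtering of one category but nothing else would call, e.g., `settings.set_level("misogyny", FilterLevel.MOST)` and flip `settings.n_word_switch = True`, leaving the other sliders at `NONE`.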
