How Social Media Is Trying To Combat Hate Speech, Terrorism

Facebook Inc FB on Tuesday published an explanation of its policy on “hate speech” that, if anything, shows just how hard it is for the company to accurately police a problem that goes far beyond social media.

It’s the company’s latest attempt to fend off criticism not just of Facebook, but Alphabet Inc GOOG GOOGL's YouTube, Twitter Inc TWTR and other platforms that often are havens for hate-spewing, terror-inciting users.

“Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally,” Richard Allan, Facebook's vice president for public policy, wrote in the report, adding almost indecipherably:

“(This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.)”

Allan said hate speech is defined as anything that attacks people based on race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

Facebook: We’re Not Perfect

The company, which released its guidelines regarding terrorism on Monday, said it makes plenty of mistakes. “Often there are close calls — and too often we get it wrong,” Allan wrote.

The company said there are clear cases of incitement to violence, in which case it contacts law enforcement. Other times, the language is ambiguous enough that it’s hard to tell whether it crosses the line.

It’s still fairly easy to find such warm-and-fuzzy pages as the Anti-Islam Alliance or Jewish Ritual Murder, the latter of which claims there’s a vodka called “Kabbalah” made with “Christian infants.”

“Language also continues to evolve, and a word that was not a slur yesterday may become one today,” Allan wrote.

Allan cited as an example the phrase “burn flags not fags,” which could be a protest exhortation, an anti-gay expletive, an attempt by the offended group to reclaim a slur, or an anti-smoking slogan in Britain.

Regional Differences And Ethnic Divides

The influx of war refugees into Europe and the rise in anti-immigrant fervor in the United States have led to commensurate increases in hate speech, Allan said, but he also described the difficulty of teaching the company’s AI systems to detect nuance — such as distinguishing a debate about hate speech from hate speech itself.

Allan brought up the case of Shaun King, the prominent activist who posted hate mail he had received that included slurs. “We took down Mr. King’s post in error — not recognizing at first that it was shared to condemn the attack.”

He said Facebook is experimenting with technology that will filter “the most obviously toxic language,” but added that “we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.”

He said Facebook will add another 3,000 human content reviewers to the 4,500 currently deployed.

Related Link: Here’s How YouTube’s New Policing Policies Will Impact Content
