Gruesome Jihadi Content Still Flourishes on Facebook and Google+

May 17, 2018

Facebook announced this week that algorithms catch 99.5 percent of the terrorism-related content it deletes before a single user reports it. Thanks to steadily advancing AI tools, that's an improvement from last year, when that figure hovered around 97 percent. But promising as those developments may be, a new report by the internet safety nonprofit Digital Citizens Alliance demonstrates how easy it still is to find grisly images of dead bodies, calls to jihad, and ISIS and Al Qaeda imagery on both Facebook and Instagram.

The report contains dozens of screenshots of beheadings and terrorist recruitment content linked to accounts that, as of this week, remained live on both platforms. It also includes links to even more graphic content on Google+, a platform that has gone largely undiscussed amid parent company Alphabet's pledges to eliminate radical content on both YouTube and Google Search.

"It seems based on everything we know the platforms are stuck in a loop. There’s criticism, promises to fix, and it doesn't go away," says Tom Galvin, executive director of the Digital Citizens Alliance, which has conducted research on topics like the sale of counterfeit goods and illicit drugs online.

Working with researchers at the Global Intellectual Property Enforcement Center, or GIPEC, the Digital Citizens Alliance amassed a trove of evidence documenting terrorist activity on these online platforms. The researchers used a combination of machine learning and human vetting to search for suspicious keywords and hashtags, then scoured the networks connected to those posts to find more. On Instagram and Facebook, they found users sharing copious images of ISIS soldiers posing with the black flag. One Instagram account reviewed by WIRED on Tuesday posted a photo of two men being beheaded by soldiers in black face masks. By Wednesday, that particular photo had disappeared, but the account, which has posted a slew of equally disturbing images including executions and dead bodies strewn on the sidewalk, remained live. It's not clear whether the post was deleted by the user or by Instagram.
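To make that expansion step concrete, here is a minimal, hypothetical sketch of hashtag-seeded traversal in Python. The posts, account names, and hashtags are invented stand-ins; GIPEC's actual tooling, which layers machine-learning classifiers and human vetting on top, is not public.

```python
from collections import deque

# Mock data standing in for posts scraped from a platform. Entirely
# hypothetical; real tooling would query the platform's APIs or pages.
POSTS = [
    {"account": "acct_a", "hashtags": {"#seed_tag", "#benign_tag"}},
    {"account": "acct_b", "hashtags": {"#seed_tag", "#other_tag"}},
    {"account": "acct_c", "hashtags": {"#other_tag"}},
]

def expand_network(seed_tags):
    """Breadth-first expansion: start from suspicious hashtags, collect the
    accounts posting under them, then follow the other hashtags those
    accounts use to find more of the connected network."""
    seen_tags, flagged_accounts = set(), set()
    queue = deque(seed_tags)
    while queue:
        tag = queue.popleft()
        if tag in seen_tags:
            continue
        seen_tags.add(tag)
        for post in POSTS:
            if tag in post["hashtags"]:
                flagged_accounts.add(post["account"])
                # Queue co-occurring hashtags for further expansion and review.
                queue.extend(post["hashtags"] - seen_tags)
    return flagged_accounts

print(expand_network({"#seed_tag"}))  # -> {'acct_a', 'acct_b', 'acct_c'}
```

Starting from a single seed hashtag, the traversal surfaces accounts two hops away that never used the seed tag at all, which is how innocuous-looking co-occurring hashtags lead researchers to whole networks rather than individual posts.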

In many cases, the most hideous photos carried captions with innocuous hashtags in Arabic, including #Dads, #Girls, and #Cooking. Below are some of the researchers' tamer discoveries.

Screenshots taken by WIRED from Instagram accounts flagged by the Digital Citizens Alliance.

On Facebook, the researchers spotted public posts inciting people to violence. One, written in Bangla, urges followers to "kill the unbelievers," complete with tips on how to do it, including attacks carried out by motorbike. It was posted in November 2016 and remained online this week.

In a statement, a Facebook spokesperson told WIRED, “There is no place for terrorists or content that promotes terrorism on Facebook or Instagram, and we remove it as soon as we become aware of it. We take this seriously and are committed to making the environment of our platforms safe. We know we can do more, and we’ve been making major investments to add more technology and human expertise, as well as deepen partnerships to combat this global issue.”

Screenshots taken by WIRED from Facebook accounts flagged by the Digital Citizens Alliance.

The fact that in some cases individual posts were taken down but the accounts remained up suggests to Eric Feinberg, GIPEC's founder, that while Facebook and Instagram may proactively spot millions of terrorism-related posts, they're not adequately dealing with the networks connected to those posts. Chasing down hashtags has become central to Feinberg's work. A hashtag like #Islamic_country, in Arabic, will lead Instagram users down a gruesome and disturbing rabbit hole full of violent imagery. As a result, Feinberg says, "We’re finding stuff they're not."

Facebook does try to automatically detect clusters of terrorist accounts and Pages by analyzing a given account's friend networks. But, the spokesperson acknowledged, this automation effort is only about a year and a half old and still has "a long way to go."
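As a rough illustration of what that kind of cluster detection might look like, the sketch below walks a friend graph outward from one confirmed account. The graph and account names are hypothetical, and Facebook's production signals are certainly richer than raw friendship links.

```python
# Hypothetical friend graph: given one confirmed terrorist account, surface
# every account reachable through friendship links so the whole cluster can
# be reviewed, not just the single flagged profile. Mock data throughout.
FRIENDS = {
    "known_bad": {"acct_1", "acct_2"},
    "acct_1": {"known_bad", "acct_3"},
    "acct_2": {"known_bad"},
    "acct_3": {"acct_1"},
    "unrelated": set(),
}

def related_cluster(seed):
    """Depth-first search for the connected component containing `seed`."""
    cluster, stack = set(), [seed]
    while stack:
        account = stack.pop()
        if account in cluster:
            continue
        cluster.add(account)
        stack.extend(FRIENDS.get(account, set()))
    return cluster

print(related_cluster("known_bad"))  # every account except 'unrelated'
```

The point of clustering is exactly the gap Feinberg identifies: deleting a post removes one node's content, while walking the component flags the surrounding network for review.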

While Facebook is a much larger platform, the researchers found ample evidence of similar jihadi content on Google+ as well, a largely forgotten property that terrorists have been exploiting. One especially graphic series of images included in the report shows a bearded man in orange staring into a camera in what appear to be the last moments of his life. In the next shot, his bloodied, severed head rests on his own body.

In Alphabet's ongoing fight against terrorist groups on its platforms, it rarely mentions Google+. Like Facebook, YouTube has developed technology that automatically deletes terrorist content before users flag it; today, algorithms identify 98 percent of the terrorism-related content YouTube takes down. The company has even been accused of overcorrecting in its quest, removing videos used for academic and research purposes. YouTube CEO Susan Wojcicki has said the company will scale up to 10,000 human moderators by the end of this year. And yet far less attention has been paid to cleaning up Google+. Google did not respond to WIRED's request for comment.

"Google+ feels like an abandoned warehouse that ISIS felt was a great place to work," Galvin says.

These disturbing discoveries should come as no surprise to either tech giant. Congress called both Facebook and YouTube to testify about this very topic in January. Facebook has also said it will employ 20,000 safety and security moderators by the end of the year. Meanwhile, the two companies joined with Microsoft and Twitter in 2016 to form the Global Internet Forum to Counter Terrorism, a joint effort aimed at blocking terrorist content across platforms. The companies submit images and videos along with a unique identifying signature, or hash, that helps the others spot the same content on their own platforms. So far, 80,000 images and 8,000 videos have been hashed this way.

Still, a Facebook spokesperson notes that this system only works if the content posted to another platform is an exact match. The companies don't currently share any information about who's behind those initial posts, either. Galvin views that as a problem. "We’re not seeing inter-platform collaboration, the way the casinos might catch a card cheat," he says. Another notable blind spot: While YouTube is part of the forum, the broader Google family is not.
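Here is a small illustration of that exact-match limitation, using a SHA-256 digest as a stand-in for whatever signature format the forum actually shares, which the companies have not detailed publicly:

```python
import hashlib

# Exact-match hash sharing is brittle because a cryptographic-style digest
# changes completely if even one byte of the file differs. The byte strings
# below are placeholders for real image data.
original  = b"raw image bytes"
reencoded = b"raw image bytes "  # one byte added, e.g. by recompression

def signature(blob: bytes) -> str:
    """Compute a hex digest of the content, analogous to a shared signature."""
    return hashlib.sha256(blob).hexdigest()

exact_reupload = original  # byte-for-byte identical copy
print(signature(original) == signature(exact_reupload))  # True: re-upload caught
print(signature(original) == signature(reencoded))       # False: trivial edit evades the match
```

A cropped, recompressed, or watermarked copy of a flagged video produces an entirely different digest, which is why content that one platform has already hashed can still circulate freely on another.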

Galvin says Facebook recently took an important step toward transparency by publishing its lengthy community standards for the first time, revealing in minute detail the guidelines that shape its content moderators' decisions. The standards clearly prohibit terrorists and terrorist groups, as well as speech that promotes violence and sensational images of graphic violence and human suffering.

"I think it’s great that Facebook put it out, and I think it should provoke a conversation about where that line is that becomes an ongoing conversation," he says.

That doesn't change the fact that the business model behind these platforms is designed to let anyone, anywhere, post whatever they want. And as the platforms grow, so does the offensive content. It's hard to imagine a world where this problem ever really gets fixed. But it's easy to imagine one where companies try a lot harder to fix it.
