Facebook has announced new details of its anti-terrorism strategy: artificial intelligence and trained staff are to identify and remove terrorist content faster and more precisely in the future.
“We want to become a hostile environment for terrorists,” writes Monika Bickert, Facebook's Director of Global Policy Management, in a blog post. In the article, titled Hard Questions: How We Counter Terrorism, the social media company lays out the details of its anti-terrorism strategy. The challenge for online communities, Bickert says, is the same as in the real world: spotting the first warning signals before it is too late.
To turn Facebook into a zone free of terrorist content, the company's stated goal, it relies mainly on three things: artificial intelligence, human expertise, and partnerships with other tech companies and organizations. Facebook confirmed the last of these at the end of last year, when CEO Mark Zuckerberg announced that the company would develop a strategy against terrorism-related content together with Microsoft, Twitter and YouTube.
Although the use of AI is still relatively new, the post states that it has already changed how potential terrorist propaganda and Facebook accounts are identified and removed. Currently, the technology is used against terrorist content related to IS, al-Qaeda and their affiliates. “We are planning to expand to other terrorist organizations soon,” Bickert writes.
The AI has already had success with image matching: once a particular image or video is flagged, the system recognizes it and prevents it from being re-uploaded. In addition, Facebook is working on language-analysis algorithms that scan written text for terrorist phrasing. The AI is also meant to recognize extremist clusters of Facebook profiles, as well as fake accounts, especially those created by repeat offenders.
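The article does not describe Facebook's actual matching technology, but the re-upload blocking it mentions is commonly built on perceptual hashing: each flagged image is reduced to a compact fingerprint, and new uploads are compared against a blocklist of known fingerprints. The sketch below is a hypothetical, simplified illustration of that idea using a toy average hash over raw grayscale pixel values; the function names and the distance threshold are illustrative assumptions, not Facebook's implementation.

```python
# Toy illustration of perceptual-hash matching for re-upload detection.
# A real system would decode actual image files and use a robust hash;
# here an "image" is just a flat list of grayscale pixel values.

def average_hash(pixels):
    """Build a bit-fingerprint: one bit per pixel, set if the pixel
    is at or above the image's mean brightness."""
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p >= avg)

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

BLOCKLIST = set()  # fingerprints of images flagged as terrorist content

def flag_image(pixels):
    """Mark an image as banned by storing its fingerprint."""
    BLOCKLIST.add(average_hash(pixels))

def is_reupload(pixels, max_distance=5):
    """True if the image is close enough to any flagged fingerprint.
    A small distance tolerance catches minor edits (crops, noise)."""
    h = average_hash(pixels)
    return any(hamming_distance(h, known) <= max_distance
               for known in BLOCKLIST)
```

Because near-duplicates produce nearby fingerprints, a small Hamming-distance tolerance lets the check survive light edits, which a byte-exact hash would not.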
Even if the AI can save a lot of time, Facebook appears to know that it cannot fully replace people yet. “Figuring out which content actually supports terrorism and which does not is not always clear, and algorithms still cannot read context as well as people do,” the blog says. A photo of an armed man waving an IS flag could be propaganda material, but it could also be an image from a news report. “We need human expertise to look at such more ambiguous cases,” the post says.
According to the blog post, 150 people at Facebook currently work “exclusively or primarily on countering terrorism”. They include academically trained experts, former prosecutors and police officers, as well as analysts and engineers. “The specialists on this team speak a total of more than 30 different languages,” the post says. Facebook also wants to expand its community operations team, as Mark Zuckerberg promised in early May, from 4,500 to 7,500 staff. That department reviews content reported by the community and removes anything inappropriate.
“We are committed to keeping terrorism off our platform,” the blog post says. With it, Facebook is also responding to the joint anti-terrorism campaign by France and Great Britain. Earlier this week, Theresa May and Emmanuel Macron announced that they would seek fines for tech companies that fail to take part in the fight against terrorist propaganda.