Published: Fri, June 16, 2017
Hi-Tech | By Ted Wilson

Facebook Using AI to Wipe Out Terrorist Accounts


In a Thursday blog post, Facebook's director of global policy management, Monika Bickert, and counterterrorism policy manager Brian Fishman offered a look into the social media giant's plan to combat terrorist content on its platform.

"This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too".

The post comes in response to public concerns over the role of tech companies in fighting terrorism online following recent terror attacks. "We want Facebook to be a hostile place for terrorists," the blog post reads.



Bickert and Fishman described how the social network is automating the process of identifying and removing jihadist content linked to the Islamic State group, al-Qaeda and their affiliates, and said it intends to add other extremist organizations over time. Following attacks in London and Manchester in the past four months, UK Prime Minister Theresa May pressed other leaders from the Group of Seven nations to consider further regulation of social media companies to compel them to take additional steps against extremist content. Some experts have criticized Facebook's effort as a drop in the bucket considering how much content its users share. "But we know we can do better at using technology," the post concedes.

"This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site".

The Facebook executives said they remove terrorists and posts that support terrorism once they become aware of them.

New measures include image-matching, language analysis and deeper forms of artificial intelligence to weed out threatening activity. "What we see is terrorist actors and their supporters start to understand the kind of things that we're doing, and they try to change what they do, and we have to be reactive to that." The company said it uses algorithms to identify related material that may also support terrorism, but it does not use the technology to screen new content for policy violations, saying computers lack the nuance to determine whether a previously uncategorized video is extremist.
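Facebook has not published the implementation behind its image-matching, but matching of this kind is commonly built on perceptual hashing: a compact fingerprint is computed for each removed image or video frame, and new uploads are compared against that set. The sketch below is a hypothetical illustration in Python using the open-source Pillow and ImageHash libraries; the stored hash value, the distance threshold and the function name are assumptions, not Facebook's actual system.

```python
# Hypothetical sketch of hash-based image matching; not Facebook's actual code.
from PIL import Image
import imagehash

# Perceptual hashes of images previously removed as terrorist propaganda.
# In practice this would be a large, continuously updated database;
# the value below is a made-up placeholder.
known_bad_hashes = {
    imagehash.hex_to_hash("f0e4c2d7a1b89634"),
}

def matches_known_content(path, max_distance=5):
    """Return True if an uploaded image is a near-duplicate of removed content."""
    upload_hash = imagehash.phash(Image.open(path))
    # Hamming distance tolerates small edits such as re-encoding or resizing.
    return any(upload_hash - bad <= max_distance for bad in known_bad_hashes)
```

Matching against known material is the easier case, which is consistent with the executives' point that previously unseen content still requires human judgment.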


Facebook says it has grown its team of specialists to 150 people working specifically on counterterrorism, including academic experts, former prosecutors, former law enforcement agents and analysts, and engineers. Some government agencies, including the U.S. Federal Bureau of Investigation and the U.K. Home Office, have called on tech companies to ensure that law enforcement can access encrypted messages. Facebook, Twitter, Google and Microsoft have said they would begin sharing unique digital fingerprints of flagged images and videos to keep them from resurfacing on different online platforms.
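The companies have described those shared "digital fingerprints" as hashes of flagged media, exchanged between platforms instead of the media itself. Below is a minimal sketch of how an upload might be checked against such a shared list; the JSON file format and field names are assumptions made for illustration, not the consortium's actual schema.

```python
# Hypothetical sketch of checking an upload against a shared industry hash list.
# The file format and field names are illustrative assumptions.
import hashlib
import json

def load_shared_fingerprints(path):
    """Load a shared list of SHA-256 fingerprints of flagged images and videos."""
    with open(path) as f:
        return {entry["sha256"] for entry in json.load(f)}

def fingerprint(path):
    """Compute the SHA-256 digest of an uploaded media file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(upload_path, shared_hashes):
    """Flag an upload whose bytes exactly match previously shared content."""
    return fingerprint(upload_path) in shared_hashes
```

An exact cryptographic hash like this only catches byte-identical copies; the perceptual hashing sketched earlier is what handles re-encoded or lightly edited versions.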

The team has "significantly grown" over the past year, according to the blog post, which details the company's efforts to crack down on terrorists and their posts.

In the post, Bickert and Fishman admit that "AI can't catch everything."

