With brand safety in mind, YouTube steps up efforts to ‘fight online terror’

The challenges YouTube has long faced in policing hateful, extremist and inflammatory content came into full view this spring when advertisements from major brands were found running alongside extremist propaganda videos. Advertisers on both sides of the Atlantic pulled or threatened to pull their ads from the platform.

Google has announced several steps to address advertiser concerns. On Sunday, Kent Walker, Google’s general counsel, outlined four steps Google is taking to address extremist-related content on YouTube. The blog post also appeared as an op-ed in the Financial Times.

“There should be no place for terrorist content on our services,” wrote Walker, while acknowledging that Google, and the industry as a whole, need to accelerate efforts to address it. “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”

YouTube will be applying more machine learning technology, more people and more discretion to its policing of extremist content going forward. The four steps Walker put forth are as follows:

  1. Devote more engineering resources. Machine learning models are being used to train new “content classifiers” that help identify and remove extremist and terrorism-related content faster. Google says these models have been used to assess more than half of the terrorism-related content removed over the past six months, determining, for example, whether a video was posted by an extremist group or was a news broadcast about a terrorist attack. That is the type of nuance that makes policing YouTube’s huge and ever-growing body of content so challenging. (A toy sketch of what such a classifier might look like appears after this list.)
  2. Add more independent human reviewers. Anyone can…
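
For readers curious what a “content classifier” of this sort might involve, here is a heavily simplified sketch in Python. Google has not published its models, features or training data, so every example string, label and modeling choice below is a hypothetical stand-in for illustration only; real systems work on video, audio and metadata at vastly larger scale, with human reviewers confirming decisions.

```python
# Toy illustration only: not Google's system. A hypothetical text classifier
# that scores video metadata or transcript text for likely policy violations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = likely extremist propaganda,
# 0 = allowed (e.g., news coverage of a terrorist attack).
texts = [
    "join our fighters and take up arms against the enemy",
    "breaking news: police respond to terrorist attack in city center",
    "recruitment video urges viewers to travel abroad and fight",
    "documentary examines how extremist groups radicalize young recruits",
]
labels = [1, 0, 1, 0]

# TF-IDF n-gram features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new piece of text; in practice a flagged video would still go to
# human reviewers before any removal decision.
score = model.predict_proba(["live report from the scene of the attack"])[0][1]
print(f"estimated probability of policy violation: {score:.2f}")
```

The point of the sketch is the workflow (label examples, extract features, train a model, score new uploads), not the particular model, which in Google’s case is undisclosed and certainly far more sophisticated.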
